| title | url | markdown | html | crawlDate |
|---|---|---|---|---|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_downloads/4640004148fe54855750b60c95066e8c/trace_bert_neuron.py | ```
import torch
import torch_neuron
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Build tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=False)
# Set up some example inputs
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "HuggingFace's headquarters are situated in Manhattan"
max_length = 128
batch_size = 6
paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors="pt")
example_inputs_paraphrase = (
torch.cat([paraphrase['input_ids']] * batch_size, 0),
torch.cat([paraphrase['attention_mask']] * batch_size, 0),
torch.cat([paraphrase['token_type_ids']] * batch_size, 0)
)
# Run torch_neuron.trace to generate a TorchScript that is optimized by AWS Neuron
model_neuron_batch = torch_neuron.trace(model, example_inputs_paraphrase)
# Save the batched model
model_neuron_batch.save('bert_neuron_b{}.pt'.format(batch_size))
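# To run inference later, the compiled TorchScript can be reloaded with
# torch.jit.load (a sketch; assumes an inf1 instance with the Neuron
# runtime available and that the compilation above succeeded):
#
#   model_neuron_batch = torch.jit.load('bert_neuron_b{}.pt'.format(batch_size))
#   paraphrase_logits = model_neuron_batch(*example_inputs_paraphrase)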
``` |  | 2023-09-29T20:55:25.625Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/yolo_v4.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Evaluate YOLO v4 on Inferentia"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This tutorial walks through compiling and evaluating a YOLO v4 model implemented in PyTorch on Inferentia.\n",
"\n",
"The tutorial has five main sections:\n",
"\n",
"1. Define YOLO v4 model in PyTorch\n",
"2. Download the COCO 2017 evaluation dataset and define the data loader function\n",
"3. Build, Compile, and Save Neuron-Optimized YOLO v4 TorchScript\n",
"4. Evaluate Accuracy on the COCO 2017 Dataset\n",
"5. Benchmark COCO Dataset Performance of the Neuron-Optimized TorchScript\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Dependencies:\n",
"This tutorial requires the following pip packages:\n",
"\n",
"- `torch-neuron`\n",
"- `torchvision`\n",
"- `pillow`\n",
"- `pycocotools`\n",
"- `neuron-cc[tensorflow]`\n",
"\n",
"Most of these packages are installed when you configure your environment using the Neuron PyTorch setup guide; the cell below installs the remaining dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install --upgrade pillow pycocotools "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Define YOLO v4 model in PyTorch \n",
"The following PyTorch model definition is from https://github.com/Tianxiaomo/pytorch-YOLOv4/."
]
},
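{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The backbone below uses the Mish activation throughout. As a scalar\n",
"# preview using only the math module (illustrative only, not part of the\n",
"# model definition): mish(x) = x * tanh(softplus(x)).\n",
"import math\n",
"\n",
"def mish_scalar(x):\n",
"    # softplus(x) = ln(1 + exp(x))\n",
"    return x * math.tanh(math.log1p(math.exp(x)))\n",
"\n",
"print(mish_scalar(0.0), mish_scalar(1.0))"
]
},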
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"import torch.neuron\n",
"from torch import nn\n",
"import torch.nn.functional as F\n",
"import os\n",
"import sys\n",
"import warnings\n",
"\n",
"# Setting up NeuronCore groups for inf1.6xlarge with 16 cores\n",
"n_cores = 16 # This value should be 4 on inf1.xlarge and inf1.2xlarge\n",
"os.environ['NEURON_RT_NUM_CORES'] = str(n_cores)\n",
"\n",
"\n",
"class Mish(torch.nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
"\n",
" def forward(self, x):\n",
" x = x * (torch.tanh(torch.nn.functional.softplus(x)))\n",
" return x\n",
"\n",
"\n",
"class Upsample(nn.Module):\n",
" def __init__(self):\n",
" super(Upsample, self).__init__()\n",
"\n",
" def forward(self, x, target_size, inference=False):\n",
" assert (x.data.dim() == 4)\n",
"\n",
" if inference:\n",
"\n",
" return x.view(x.size(0), x.size(1), x.size(2), 1, x.size(3), 1).\\\n",
" expand(x.size(0), x.size(1), x.size(2), target_size[2] // x.size(2), x.size(3), target_size[3] // x.size(3)).\\\n",
" contiguous().view(x.size(0), x.size(1), target_size[2], target_size[3])\n",
" else:\n",
" return F.interpolate(x, size=(target_size[2], target_size[3]), mode='nearest')\n",
"\n",
"\n",
"class Conv_Bn_Activation(nn.Module):\n",
" def __init__(self, in_channels, out_channels, kernel_size, stride, activation, bn=True, bias=False):\n",
" super().__init__()\n",
" pad = (kernel_size - 1) // 2\n",
"\n",
" self.conv = nn.ModuleList()\n",
" if bias:\n",
" self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad))\n",
" else:\n",
" self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad, bias=False))\n",
" if bn:\n",
" self.conv.append(nn.BatchNorm2d(out_channels))\n",
" if activation == \"mish\":\n",
" self.conv.append(Mish())\n",
" elif activation == \"relu\":\n",
" self.conv.append(nn.ReLU(inplace=True))\n",
" elif activation == \"leaky\":\n",
" self.conv.append(nn.LeakyReLU(0.1, inplace=True))\n",
" elif activation == \"linear\":\n",
" pass\n",
" else:\n",
" print(\"activate error !!! {} {} {}\".format(sys._getframe().f_code.co_filename,\n",
" sys._getframe().f_code.co_name, sys._getframe().f_lineno))\n",
"\n",
" def forward(self, x):\n",
" for l in self.conv:\n",
" x = l(x)\n",
" return x\n",
"\n",
"\n",
"class ResBlock(nn.Module):\n",
" \"\"\"\n",
" Sequential residual blocks each of which consists of \\\n",
" two convolution layers.\n",
" Args:\n",
" ch (int): number of input and output channels.\n",
" nblocks (int): number of residual blocks.\n",
" shortcut (bool): if True, residual tensor addition is enabled.\n",
" \"\"\"\n",
"\n",
" def __init__(self, ch, nblocks=1, shortcut=True):\n",
" super().__init__()\n",
" self.shortcut = shortcut\n",
" self.module_list = nn.ModuleList()\n",
" for i in range(nblocks):\n",
" resblock_one = nn.ModuleList()\n",
" resblock_one.append(Conv_Bn_Activation(ch, ch, 1, 1, 'mish'))\n",
" resblock_one.append(Conv_Bn_Activation(ch, ch, 3, 1, 'mish'))\n",
" self.module_list.append(resblock_one)\n",
"\n",
" def forward(self, x):\n",
" for module in self.module_list:\n",
" h = x\n",
" for res in module:\n",
" h = res(h)\n",
" x = x + h if self.shortcut else h\n",
" return x\n",
"\n",
"\n",
"class DownSample1(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(3, 32, 3, 1, 'mish')\n",
"\n",
" self.conv2 = Conv_Bn_Activation(32, 64, 3, 2, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # [route]\n",
" # layers = -2\n",
" self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
"\n",
" self.conv5 = Conv_Bn_Activation(64, 32, 1, 1, 'mish')\n",
" self.conv6 = Conv_Bn_Activation(32, 64, 3, 1, 'mish')\n",
" # [shortcut]\n",
" # from=-3\n",
" # activation = linear\n",
"\n",
" self.conv7 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # [route]\n",
" # layers = -1, -7\n",
" self.conv8 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x2)\n",
" # route -2\n",
" x4 = self.conv4(x2)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" # shortcut -3\n",
" x6 = x6 + x4\n",
"\n",
" x7 = self.conv7(x6)\n",
" # [route]\n",
" # layers = -1, -7\n",
" x7 = torch.cat([x7, x3], dim=1)\n",
" x8 = self.conv8(x7)\n",
" return x8\n",
"\n",
"\n",
"class DownSample2(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(64, 128, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
" # r -2\n",
" self.conv3 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=64, nblocks=2)\n",
"\n",
" # s -3\n",
" self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # r -1 -10\n",
" self.conv5 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample3(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(128, 256, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=128, nblocks=8)\n",
" self.conv4 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample4(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(256, 512, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=256, nblocks=8)\n",
" self.conv4 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample5(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(512, 1024, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=512, nblocks=4)\n",
" self.conv4 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(1024, 1024, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class Neck(nn.Module):\n",
" def __init__(self, inference=False):\n",
" super().__init__()\n",
" self.inference = inference\n",
"\n",
" self.conv1 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv2 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" # SPP\n",
" self.maxpool1 = nn.MaxPool2d(kernel_size=5, stride=1, padding=5 // 2)\n",
" self.maxpool2 = nn.MaxPool2d(kernel_size=9, stride=1, padding=9 // 2)\n",
" self.maxpool3 = nn.MaxPool2d(kernel_size=13, stride=1, padding=13 // 2)\n",
"\n",
" # R -1 -3 -5 -6\n",
" # SPP\n",
" self.conv4 = Conv_Bn_Activation(2048, 512, 1, 1, 'leaky')\n",
" self.conv5 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv6 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv7 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" # UP\n",
" self.upsample1 = Upsample()\n",
" # R 85\n",
" self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" # R -1 -3\n",
" self.conv9 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv10 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv11 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv12 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv13 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv14 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" # UP\n",
" self.upsample2 = Upsample()\n",
" # R 54\n",
" self.conv15 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" # R -1 -3\n",
" self.conv16 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" self.conv17 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv18 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" self.conv19 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv20 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
"\n",
" def forward(self, input, downsample4, downsample3, inference=False):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x2)\n",
" # SPP\n",
" m1 = self.maxpool1(x3)\n",
" m2 = self.maxpool2(x3)\n",
" m3 = self.maxpool3(x3)\n",
" spp = torch.cat([m3, m2, m1, x3], dim=1)\n",
" # SPP end\n",
" x4 = self.conv4(spp)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" x7 = self.conv7(x6)\n",
" # UP\n",
" up = self.upsample1(x7, downsample4.size(), self.inference)\n",
" # R 85\n",
" x8 = self.conv8(downsample4)\n",
" # R -1 -3\n",
" x8 = torch.cat([x8, up], dim=1)\n",
"\n",
" x9 = self.conv9(x8)\n",
" x10 = self.conv10(x9)\n",
" x11 = self.conv11(x10)\n",
" x12 = self.conv12(x11)\n",
" x13 = self.conv13(x12)\n",
" x14 = self.conv14(x13)\n",
"\n",
" # UP\n",
" up = self.upsample2(x14, downsample3.size(), self.inference)\n",
" # R 54\n",
" x15 = self.conv15(downsample3)\n",
" # R -1 -3\n",
" x15 = torch.cat([x15, up], dim=1)\n",
"\n",
" x16 = self.conv16(x15)\n",
" x17 = self.conv17(x16)\n",
" x18 = self.conv18(x17)\n",
" x19 = self.conv19(x18)\n",
" x20 = self.conv20(x19)\n",
" return x20, x13, x6\n",
"\n",
"\n",
"class Yolov4Head(nn.Module):\n",
" def __init__(self, output_ch, n_classes, inference=False):\n",
" super().__init__()\n",
" self.inference = inference\n",
"\n",
" self.conv1 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv2 = Conv_Bn_Activation(256, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
"\n",
" self.yolo1 = YoloLayer(\n",
" anchor_mask=[0, 1, 2], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=8)\n",
"\n",
" # R -4\n",
" self.conv3 = Conv_Bn_Activation(128, 256, 3, 2, 'leaky')\n",
"\n",
" # R -1 -16\n",
" self.conv4 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv5 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv6 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv7 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv9 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv10 = Conv_Bn_Activation(512, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
" \n",
" self.yolo2 = YoloLayer(\n",
" anchor_mask=[3, 4, 5], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=16)\n",
"\n",
" # R -4\n",
" self.conv11 = Conv_Bn_Activation(256, 512, 3, 2, 'leaky')\n",
"\n",
" # R -1 -37\n",
" self.conv12 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv13 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv14 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv15 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv16 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv17 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv18 = Conv_Bn_Activation(1024, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
" \n",
" self.yolo3 = YoloLayer(\n",
" anchor_mask=[6, 7, 8], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=32)\n",
"\n",
" def forward(self, input1, input2, input3):\n",
" x1 = self.conv1(input1)\n",
" x2 = self.conv2(x1)\n",
"\n",
" x3 = self.conv3(input1)\n",
" # R -1 -16\n",
" x3 = torch.cat([x3, input2], dim=1)\n",
" x4 = self.conv4(x3)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" x7 = self.conv7(x6)\n",
" x8 = self.conv8(x7)\n",
" x9 = self.conv9(x8)\n",
" x10 = self.conv10(x9)\n",
"\n",
" # R -4\n",
" x11 = self.conv11(x8)\n",
" # R -1 -37\n",
" x11 = torch.cat([x11, input3], dim=1)\n",
"\n",
" x12 = self.conv12(x11)\n",
" x13 = self.conv13(x12)\n",
" x14 = self.conv14(x13)\n",
" x15 = self.conv15(x14)\n",
" x16 = self.conv16(x15)\n",
" x17 = self.conv17(x16)\n",
" x18 = self.conv18(x17)\n",
" \n",
" if self.inference:\n",
" y1 = self.yolo1(x2)\n",
" y2 = self.yolo2(x10)\n",
" y3 = self.yolo3(x18)\n",
"\n",
" return get_region_boxes([y1, y2, y3])\n",
" \n",
" else:\n",
" return [x2, x10, x18]\n",
"\n",
"\n",
"class Yolov4(nn.Module):\n",
" def __init__(self, yolov4conv137weight=None, n_classes=80, inference=False):\n",
" super().__init__()\n",
"\n",
" output_ch = (4 + 1 + n_classes) * 3\n",
"\n",
" # backbone\n",
" self.down1 = DownSample1()\n",
" self.down2 = DownSample2()\n",
" self.down3 = DownSample3()\n",
" self.down4 = DownSample4()\n",
" self.down5 = DownSample5()\n",
" # neck\n",
" self.neek = Neck(inference)\n",
" # yolov4conv137\n",
" if yolov4conv137weight:\n",
" _model = nn.Sequential(self.down1, self.down2, self.down3, self.down4, self.down5, self.neek)\n",
" pretrained_dict = torch.load(yolov4conv137weight)\n",
"\n",
" model_dict = _model.state_dict()\n",
" # 1. remap the pretrained keys onto this model's state dict keys\n",
" pretrained_dict = {k1: v for (k, v), k1 in zip(pretrained_dict.items(), model_dict)}\n",
" # 2. overwrite entries in the existing state dict\n",
" model_dict.update(pretrained_dict)\n",
" _model.load_state_dict(model_dict)\n",
" \n",
" # head\n",
" self.head = Yolov4Head(output_ch, n_classes, inference)\n",
"\n",
"\n",
" def forward(self, input):\n",
" d1 = self.down1(input)\n",
" d2 = self.down2(d1)\n",
" d3 = self.down3(d2)\n",
" d4 = self.down4(d3)\n",
" d5 = self.down5(d4)\n",
"\n",
" x20, x13, x6 = self.neek(d5, d4, d3)\n",
"\n",
" output = self.head(x20, x13, x6)\n",
" return output\n",
"\n",
"\n",
"def yolo_forward_dynamic(output, conf_thresh, num_classes, anchors, num_anchors, scale_x_y, only_objectness=1,\n",
" validation=False):\n",
" # Output would be invalid if it does not satisfy this assert\n",
" # assert (output.size(1) == (5 + num_classes) * num_anchors)\n",
"\n",
" # print(output.size())\n",
"\n",
" # Slice the second dimension (channel) of output into:\n",
" # [ 2, 2, 1, num_classes, 2, 2, 1, num_classes, 2, 2, 1, num_classes ]\n",
" # And then into\n",
" # bxy = [ 6 ] bwh = [ 6 ] det_conf = [ 3 ] cls_conf = [ num_classes * 3 ]\n",
" # batch = output.size(0)\n",
" # H = output.size(2)\n",
" # W = output.size(3)\n",
"\n",
" bxy_list = []\n",
" bwh_list = []\n",
" det_confs_list = []\n",
" cls_confs_list = []\n",
"\n",
" for i in range(num_anchors):\n",
" begin = i * (5 + num_classes)\n",
" end = (i + 1) * (5 + num_classes)\n",
" \n",
" bxy_list.append(output[:, begin : begin + 2])\n",
" bwh_list.append(output[:, begin + 2 : begin + 4])\n",
" det_confs_list.append(output[:, begin + 4 : begin + 5])\n",
" cls_confs_list.append(output[:, begin + 5 : end])\n",
"\n",
" # Shape: [batch, num_anchors * 2, H, W]\n",
" bxy = torch.cat(bxy_list, dim=1)\n",
" # Shape: [batch, num_anchors * 2, H, W]\n",
" bwh = torch.cat(bwh_list, dim=1)\n",
"\n",
" # Shape: [batch, num_anchors, H, W]\n",
" det_confs = torch.cat(det_confs_list, dim=1)\n",
" # Shape: [batch, num_anchors * H * W]\n",
" det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3))\n",
"\n",
" # Shape: [batch, num_anchors * num_classes, H, W]\n",
" cls_confs = torch.cat(cls_confs_list, dim=1)\n",
" # Shape: [batch, num_anchors, num_classes, H * W]\n",
" cls_confs = cls_confs.view(output.size(0), num_anchors, num_classes, output.size(2) * output.size(3))\n",
" # Shape: [batch, num_anchors, num_classes, H * W] --> [batch, num_anchors * H * W, num_classes] \n",
" cls_confs = cls_confs.permute(0, 1, 3, 2).reshape(output.size(0), num_anchors * output.size(2) * output.size(3), num_classes)\n",
"\n",
" # Apply sigmoid(), exp() and softmax() to slices\n",
" #\n",
" bxy = torch.sigmoid(bxy) * scale_x_y - 0.5 * (scale_x_y - 1)\n",
" bwh = torch.exp(bwh)\n",
" det_confs = torch.sigmoid(det_confs)\n",
" cls_confs = torch.sigmoid(cls_confs)\n",
"\n",
" # Prepare C-x, C-y, P-w, P-h (None of them are torch related)\n",
" grid_x = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(3) - 1, output.size(3)), axis=0).repeat(output.size(2), 0), axis=0), axis=0)\n",
" grid_y = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(2) - 1, output.size(2)), axis=1).repeat(output.size(3), 1), axis=0), axis=0)\n",
" # grid_x = torch.linspace(0, W - 1, W).reshape(1, 1, 1, W).repeat(1, 1, H, 1)\n",
" # grid_y = torch.linspace(0, H - 1, H).reshape(1, 1, H, 1).repeat(1, 1, 1, W)\n",
"\n",
" anchor_w = []\n",
" anchor_h = []\n",
" for i in range(num_anchors):\n",
" anchor_w.append(anchors[i * 2])\n",
" anchor_h.append(anchors[i * 2 + 1])\n",
"\n",
" device = None\n",
" cuda_check = output.is_cuda\n",
" if cuda_check:\n",
" device = output.get_device()\n",
"\n",
" bx_list = []\n",
" by_list = []\n",
" bw_list = []\n",
" bh_list = []\n",
"\n",
" # Apply C-x, C-y, P-w, P-h\n",
" for i in range(num_anchors):\n",
" ii = i * 2\n",
" # Shape: [batch, 1, H, W]\n",
" bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32) # grid_x.to(device=device, dtype=torch.float32)\n",
" # Shape: [batch, 1, H, W]\n",
" by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32) # grid_y.to(device=device, dtype=torch.float32)\n",
" # Shape: [batch, 1, H, W]\n",
" bw = bwh[:, ii : ii + 1] * anchor_w[i]\n",
" # Shape: [batch, 1, H, W]\n",
" bh = bwh[:, ii + 1 : ii + 2] * anchor_h[i]\n",
"\n",
" bx_list.append(bx)\n",
" by_list.append(by)\n",
" bw_list.append(bw)\n",
" bh_list.append(bh)\n",
"\n",
"\n",
" ########################################\n",
" # Figure out bboxes from slices #\n",
" ########################################\n",
" \n",
" # Shape: [batch, num_anchors, H, W]\n",
" bx = torch.cat(bx_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" by = torch.cat(by_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" bw = torch.cat(bw_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" bh = torch.cat(bh_list, dim=1)\n",
"\n",
" # Shape: [batch, 2 * num_anchors, H, W]\n",
" bx_bw = torch.cat((bx, bw), dim=1)\n",
" # Shape: [batch, 2 * num_anchors, H, W]\n",
" by_bh = torch.cat((by, bh), dim=1)\n",
"\n",
" # normalize coordinates to [0, 1]\n",
" bx_bw /= output.size(3)\n",
" by_bh /= output.size(2)\n",
"\n",
" # Shape: [batch, num_anchors * H * W, 1]\n",
" bx = bx_bw[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" by = by_bh[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" bw = bx_bw[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" bh = by_bh[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
"\n",
" bx1 = bx - bw * 0.5\n",
" by1 = by - bh * 0.5\n",
" bx2 = bx1 + bw\n",
" by2 = by1 + bh\n",
"\n",
" # Shape: [batch, num_anchors * h * w, 4] -> [batch, num_anchors * h * w, 1, 4]\n",
" boxes = torch.cat((bx1, by1, bx2, by2), dim=2).view(output.size(0), num_anchors * output.size(2) * output.size(3), 1, 4)\n",
" # boxes = boxes.repeat(1, 1, num_classes, 1)\n",
"\n",
" # boxes: [batch, num_anchors * H * W, 1, 4]\n",
" # cls_confs: [batch, num_anchors * H * W, num_classes]\n",
" # det_confs: [batch, num_anchors * H * W]\n",
"\n",
" det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" confs = cls_confs * det_confs\n",
"\n",
" # boxes: [batch, num_anchors * H * W, 1, 4]\n",
" # confs: [batch, num_anchors * H * W, num_classes]\n",
"\n",
" return boxes, confs\n",
"\n",
"class YoloLayer(nn.Module):\n",
" \"\"\"\n",
" Yolo layer\n",
" model_out (bool): during inference, whether post-processing happens outside the model (True: outside)\n",
" \"\"\"\n",
" def __init__(self, anchor_mask=[], num_classes=0, anchors=[], num_anchors=1, stride=32, model_out=False):\n",
" super(YoloLayer, self).__init__()\n",
" self.anchor_mask = anchor_mask\n",
" self.num_classes = num_classes\n",
" self.anchors = anchors\n",
" self.num_anchors = num_anchors\n",
" self.anchor_step = len(anchors) // num_anchors\n",
" self.coord_scale = 1\n",
" self.noobject_scale = 1\n",
" self.object_scale = 5\n",
" self.class_scale = 1\n",
" self.thresh = 0.6\n",
" self.stride = stride\n",
" self.seen = 0\n",
" self.scale_x_y = 1\n",
"\n",
" self.model_out = model_out\n",
"\n",
" def forward(self, output, target=None):\n",
" if self.training:\n",
" return output\n",
" masked_anchors = []\n",
" for m in self.anchor_mask:\n",
" masked_anchors += self.anchors[m * self.anchor_step:(m + 1) * self.anchor_step]\n",
" masked_anchors = [anchor / self.stride for anchor in masked_anchors]\n",
"\n",
" return yolo_forward_dynamic(output, self.thresh, self.num_classes, masked_anchors, len(self.anchor_mask),scale_x_y=self.scale_x_y)\n",
"\n",
"\n",
"def get_region_boxes(boxes_and_confs):\n",
"\n",
" # print('Getting boxes from boxes and confs ...')\n",
"\n",
" boxes_list = []\n",
" confs_list = []\n",
"\n",
" for item in boxes_and_confs:\n",
" boxes_list.append(item[0])\n",
" confs_list.append(item[1])\n",
"\n",
" # boxes: [batch, num1 + num2 + num3, 1, 4]\n",
" # confs: [batch, num1 + num2 + num3, num_classes]\n",
" boxes = torch.cat(boxes_list, dim=1)\n",
" confs = torch.cat(confs_list, dim=1)\n",
" \n",
" return boxes, confs"
]
},
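{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity-check the detection head's channel layout (plain Python, no\n",
"# Neuron required): each of the 3 anchors per scale predicts 4 box\n",
"# coordinates, 1 objectness score, and n_classes class scores, so the\n",
"# head emits (4 + 1 + n_classes) * 3 channels per scale.\n",
"n_classes = 80  # COCO\n",
"output_ch = (4 + 1 + n_classes) * 3\n",
"print(output_ch)"
]
},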
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Download the COCO 2017 evaluation dataset and define the data loader function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!curl -LO http://images.cocodataset.org/zips/val2017.zip\n",
"!curl -LO http://images.cocodataset.org/annotations/annotations_trainval2017.zip\n",
"!unzip -q val2017.zip\n",
"!unzip -q annotations_trainval2017.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define data loader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"import time\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"import torchvision.datasets as dset\n",
"from pycocotools.coco import COCO\n",
"\n",
"\n",
"def get_image_filenames(root=os.getcwd()):\n",
" \"\"\"\n",
" Generate paths to the coco dataset image files.\n",
" \n",
" Args:\n",
" root (str): The root folder that contains the val2017 images.\n",
" \n",
" Yields:\n",
" filename (str): The path to an image file.\n",
" \"\"\"\n",
" image_path = os.path.join(root, 'val2017')\n",
" for root, dirs, files in os.walk(image_path):\n",
" for filename in files:\n",
" yield os.path.join(image_path, filename)\n",
"\n",
" \n",
"def get_coco_dataloader(coco2017_root, transform, subset_indices=None):\n",
" \"\"\"\n",
" Create the dataset loader and ground truth coco dataset.\n",
" \n",
" Arguments:\n",
" coco2017_root (str): The root directory to load the data/labels from.\n",
" transform (torchvision.Transform): A transform to apply to the images.\n",
" subset_indices (list): Indices used to create a subset of the dataset.\n",
"\n",
" Returns: \n",
" loader (iterable): Produces transformed images and labels.\n",
" cocoGt (pycocotools.coco.COCO): Contains the ground truth in coco \n",
" format.\n",
" label_info (dict): A mapping from label id to the human-readable name.\n",
" \"\"\"\n",
"\n",
" # Create the dataset\n",
" coco2017_img_path = os.path.join(coco2017_root, 'val2017')\n",
" coco2017_ann_path = os.path.join(\n",
" coco2017_root, 'annotations/instances_val2017.json')\n",
"\n",
" # check the number of images in val2017 - Should be 5000\n",
" num_files = len(list(get_image_filenames(coco2017_root)))\n",
" print('\\nNumber of images in val2017 = {}\\n'.format(num_files))\n",
"\n",
" # load annotations to decode classification results\n",
" with open(coco2017_ann_path) as f:\n",
" annotate_json = json.load(f)\n",
" label_info = {label[\"id\"]: label[\"name\"]\n",
" for label in annotate_json['categories']}\n",
"\n",
" # initialize COCO ground truth dataset\n",
" cocoGt = COCO(coco2017_ann_path)\n",
"\n",
" # create the dataset using torchvision's coco detection dataset\n",
" coco_val_data = dset.CocoDetection(\n",
" root=coco2017_img_path, \n",
" annFile=coco2017_ann_path, \n",
" transform=transform\n",
" )\n",
"\n",
" if subset_indices is not None:\n",
" # Create a smaller subset of the data for testing - e.g. to pinpoint error at image 516\n",
" coco_val_data = torch.utils.data.Subset(coco_val_data, subset_indices)\n",
"\n",
" # create the dataloader using torch dataloader\n",
" loader = torch.utils.data.DataLoader(coco_val_data, batch_size=1, shuffle=False)\n",
"\n",
" return loader, cocoGt, label_info\n"
]
},
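{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A quick, self-contained check of the generator pattern used by\n",
"# get_image_filenames, run against a temporary directory instead of the\n",
"# real val2017 folder (illustrative only; 'fake.jpg' is a placeholder):\n",
"import tempfile\n",
"\n",
"with tempfile.TemporaryDirectory() as tmp_root:\n",
"    os.mkdir(os.path.join(tmp_root, 'val2017'))\n",
"    open(os.path.join(tmp_root, 'val2017', 'fake.jpg'), 'w').close()\n",
"    print(list(get_image_filenames(tmp_root)))"
]
},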
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load dataset\n",
"Here two dataset loaders are created and the resulting data is displayed:\n",
"- `orig_coco_val_data_loader`: Contains the original, unmodified images\n",
"- `coco_val_data_loader`: Contains images resized to a standard 608x608 pixels"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"coco2017_root = './'\n",
"orig_coco_val_data_loader, *_ = get_coco_dataloader(coco2017_root, transforms.ToTensor())\n",
"transform = transforms.Compose([transforms.Resize([608, 608]), transforms.ToTensor()])\n",
"coco_val_data_loader, cocoGt, label_info = get_coco_dataloader(coco2017_root, transform)\n",
"image_orig, _ = next(iter(orig_coco_val_data_loader))\n",
"print(image_orig.shape)\n",
"image, image_info = next(iter(coco_val_data_loader))\n",
"image_id = image_info[0][\"image_id\"].item()\n",
"print(image.shape)"
]
},
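{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The model returns boxes in normalized corner format (x1, y1, x2, y2).\n",
"# A plain-Python sketch of the center-to-corner conversion performed\n",
"# inside yolo_forward_dynamic (the values below are illustrative):\n",
"bx, by, bw, bh = 0.5, 0.5, 0.2, 0.4  # normalized center-format box\n",
"bx1, by1 = bx - bw * 0.5, by - bh * 0.5\n",
"bx2, by2 = bx1 + bw, by1 + bh\n",
"print(bx1, by1, bx2, by2)"
]
},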
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define some helper functions for deployment (inference)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def postprocess(boxes, scores, score_threshold=0.05, iou_threshold=0.5):\n",
" \"\"\"\n",
" Classifies and filters bounding boxes from Yolo V4 output.\n",
" \n",
" Performs classification, filtering, and non-maximum suppression to remove\n",
" boxes that are irrelevant. The result is the filtered set of boxes, the \n",
" associated label confidence score, and the predicted label.\n",
" \n",
" See: https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.nms\n",
" \n",
" Args:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The categories scores for each box.\n",
" score_threshold (float): Ignore boxes with scores below threshold.\n",
" iou_threshold (float): Discards boxes with intersection above threshold. \n",
" \n",
" Returns:\n",
" boxes (torch.Tensor): The filtered Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box.\n",
" \"\"\"\n",
" \n",
" # shape: [n_batch, n_boxes, 1, 4] => [n_boxes, 4] # Assumes n_batch size is 1\n",
" boxes = boxes.squeeze()\n",
"\n",
" # shape: [n_batch, n_boxes, 80] => [n_boxes, 80] # Assumes n_batch size is 1\n",
" scores = scores.squeeze()\n",
"\n",
" # Classify each box according to the maximum category score\n",
" score, column = torch.max(scores, dim=1)\n",
"\n",
" # Filter out rows for scores which are below threshold\n",
" mask = score > score_threshold\n",
"\n",
" # Filter model output data\n",
" boxes = boxes[mask]\n",
" score = score[mask]\n",
" idxs = column[mask]\n",
"\n",
" # Perform non-max suppression on all categories at once. shape: [n_keep,]\n",
" keep = torchvision.ops.batched_nms(\n",
" boxes=boxes, \n",
" scores=score, \n",
" idxs=idxs,\n",
" iou_threshold=iou_threshold,\n",
" )\n",
"\n",
" # The image category id associated with each column\n",
" categories = torch.tensor([\n",
" 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16,\n",
" 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31,\n",
" 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,\n",
" 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56,\n",
" 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 72,\n",
" 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85,\n",
" 86, 87, 88, 89, 90\n",
" ])\n",
" \n",
" boxes = boxes[keep] # shape: [n_keep, 4]\n",
" score = score[keep] # shape: [n_keep,]\n",
" idxs = idxs[keep]\n",
" label = categories[idxs] # shape: [n_keep,]\n",
" \n",
" return boxes, score, label\n",
"\n",
"\n",
"def get_results_as_dict(boxes, scores, labels, image_orig):\n",
" \"\"\"\n",
" Transforms post-processed output into dictionary output.\n",
" \n",
" This translates the model coordinate bounding boxes (x1, y1, x2, y2) \n",
" into a rectangular description (x, y, width, height) scaled to the \n",
" original image size.\n",
" \n",
" Args:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box.\n",
" image_orig (torch.Tensor): The image to scale the bounding boxes to.\n",
" \n",
" Returns:\n",
" output (dict): The dictionary of rectangle bounding boxes.\n",
" \"\"\"\n",
" h_size, w_size = image_orig.shape[-2:]\n",
"\n",
" x1 = boxes[:, 0] * w_size\n",
" y1 = boxes[:, 1] * h_size\n",
" x2 = boxes[:, 2] * w_size\n",
" y2 = boxes[:, 3] * h_size\n",
"\n",
" width = x2 - x1\n",
" height = y2 - y1\n",
"\n",
" boxes = torch.stack([x1, y1, width, height]).T\n",
" return {\n",
" 'boxes': boxes.detach().numpy(),\n",
" 'labels': labels.detach().numpy(),\n",
" 'scores': scores.detach().numpy(),\n",
" }\n",
"\n",
"\n",
"def prepare_for_coco_detection(predictions):\n",
" \"\"\"\n",
" Convert dictionary model predictions into an expected COCO dataset format.\n",
" \n",
" Args:\n",
" predictions (dict): The list of box coordinates, scores, and labels.\n",
" \n",
" Returns:\n",
" output (list[dict]): The list of bounding boxes.\n",
" \"\"\"\n",
" coco_results = []\n",
" for original_id, prediction in predictions.items():\n",
" if len(prediction) == 0:\n",
" continue\n",
"\n",
" boxes = prediction[\"boxes\"].tolist()\n",
" scores = prediction[\"scores\"].tolist()\n",
" labels = prediction[\"labels\"].tolist()\n",
"\n",
" coco_results.extend(\n",
" [\n",
" {\n",
" \"image_id\": original_id,\n",
" \"category_id\": labels[k],\n",
" \"bbox\": box,\n",
" \"score\": scores[k],\n",
" }\n",
" for k, box in enumerate(boxes)\n",
" ]\n",
" )\n",
" return coco_results"
]
},
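{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the coordinate conversion performed by `get_results_as_dict`, the following cell (an illustrative aside, not part of the original pipeline) scales a single normalized `(x1, y1, x2, y2)` box to a hypothetical 640x480 image and converts it to the `(x, y, width, height)` format expected by COCO."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative example: convert one normalized xyxy box to pixel xywh\n",
"x1, y1, x2, y2 = 0.25, 0.25, 0.75, 0.5 # normalized model coordinates\n",
"w_size, h_size = 640, 480 # hypothetical original image size\n",
"x, y = x1 * w_size, y1 * h_size\n",
"width, height = (x2 - x1) * w_size, (y2 - y1) * h_size\n",
"print(x, y, width, height) # 160.0 120.0 320.0 120.0\n"
]
},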
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download pretrained checkpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"def download_file_from_google_drive(id, destination):\n",
" response = requests.post('https://drive.google.com/uc?id='+id+'&confirm=t')\n",
" save_response_content(response, destination)\n",
"\n",
"def save_response_content(response, destination):\n",
" CHUNK_SIZE = 32768\n",
" with open(destination, \"wb\") as f:\n",
" for chunk in response.iter_content(CHUNK_SIZE):\n",
" if chunk: # filter out keep-alive new chunks\n",
" f.write(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"download_file_from_google_drive('1wv_LiFeCRYwtpkqREPeI13-gPELBDwuJ', './yolo_v4.pth')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Build, Compile, and Save Neuron-Optimized YOLO v4 TorchScript\n",
"### Construct model and load pretrained checkpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model = Yolov4(yolov4conv137weight=None, n_classes=80, inference=True)\n",
"weightfile = \"./yolo_v4.pth\"\n",
"pretrained_dict = torch.load(weightfile, map_location=torch.device('cpu'))\n",
"model.load_state_dict(pretrained_dict)\n",
"model.eval()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Execute inference for a single image and display output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as patches\n",
"\n",
"image_orig, _ = next(iter(orig_coco_val_data_loader))\n",
"image, _ = next(iter(coco_val_data_loader))\n",
"boxes, scores = model(image)\n",
"boxes, scores, labels = postprocess(boxes, scores)\n",
"result_dict = get_results_as_dict(boxes, scores, labels, image_orig)\n",
"\n",
"fig, ax = plt.subplots(figsize=(10, 10))\n",
"ax.imshow(image_orig.numpy().squeeze(0).transpose(1, 2, 0))\n",
"for xywh, _ in zip(result_dict['boxes'], result_dict['labels']):\n",
" x, y, w, h = xywh\n",
" rect = patches.Rectangle((x, y), w, h, linewidth=1, edgecolor='g', facecolor='none')\n",
" ax.add_patch(rect)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Run compilation with manually specified device placement\n",
"\n",
"First, inspect the model without running compilation by adding the `skip_compiler=True` argument to the `torch.neuron.trace` call."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model_neuron_for_inspection = torch.neuron.trace(model, image, skip_compiler=True)\n",
"print(model_neuron_for_inspection)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Inspecting the model, we discover that there are many `aten::slice` operations in some submodules called `YoloLayer`. Although these operations are supported by the neuron-cc compiler, they are not going to run efficiently on the Inferentia hardware. To work it around, we recommend to manually place these operators on CPU."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To manually place `YoloLayer` on CPU, we may make use of the `subgraph_builder_function` argument in `torch.neuron.trace`. It is a callback function that returns `True` or `False` based on information available in `node`. The typical use is a condition based on either `node.name` or `node.type_string`."
]
},
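{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, the same kind of predicate can be written against `node.type_string` instead of `node.name`. The following cell is a standalone illustration (it is not used by the compilation below) that exercises such a predicate with a hypothetical stub node rather than a real torch.neuron graph node."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import namedtuple\n",
"\n",
"# Hypothetical stand-in for the node object passed to subgraph_builder_function\n",
"StubNode = namedtuple('StubNode', ['type_string'])\n",
"\n",
"def slice_free_subgraph_builder(node):\n",
" # Keep every operator on Neuron except aten::slice\n",
" return node.type_string != 'aten::slice'\n",
"\n",
"print(slice_free_subgraph_builder(StubNode('aten::conv2d'))) # True\n",
"print(slice_free_subgraph_builder(StubNode('aten::slice'))) # False\n"
]
},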
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def subgraph_builder_function(node):\n",
" return 'YoloLayer' not in node.name\n",
"\n",
"model_neuron = torch.neuron.trace(model, image, subgraph_builder_function=subgraph_builder_function)\n",
"model_neuron.save('yolo_v4_neuron.pt')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compilation is now finished and the compiled model has been saved to a local file called 'yolo_v4_neuron.pt'. Saving is important due to the slow compilation process."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4: Evaluate Accuracy on the COCO 2017 Dataset\n",
"### Load compiled model and run inference\n",
"To validate accuracy of the compiled model, lets run inference on the COCO 2017 validation dataset. We start by defining a helper function `run_inference`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def run_inference(dataloader, dataloader_orig, model, convert=True, modelName=''):\n",
" \"\"\"\n",
" Run Yolo V4 inference on the COCO dataset.\n",
" \n",
" Args:\n",
" dataloader (iterable): Data loader of input processed images and labels.\n",
" dataloader_orig (iterable): Data loader with original images.\n",
" model (torch.nn.Module): The torch model to run inference against.\n",
" convert (bool): Set to False when using a vanilla torchvision model that \n",
" does not need to be transformed into coco format.\n",
" \n",
" Returns: \n",
" imgIds (list): The list of images with predictions.\n",
" cocoDt (pycocotools.coco.COCO): Contains the predictions from the model \n",
" in coco format.\n",
" \"\"\"\n",
" print('\\n================ Starting Inference on {} Images using {} model ================\\n'.format(\n",
" len(dataloader), modelName))\n",
"\n",
" modelName = str(modelName).replace(\" \", \"_\")\n",
"\n",
" # convert predicition to cocoDt\n",
" # code from def evaluate in https://github.com/pytorch/vision/blob/master/references/detection/engine.py\n",
" imgIds = []\n",
" results = []\n",
" skippedImages = []\n",
"\n",
" # time inference\n",
" inference_time = 0.0\n",
" for idx, ((image, targets), (image_orig, _)) in enumerate(zip(dataloader, dataloader_orig)):\n",
" # if target is empty, skip the image because it breaks the scripted model\n",
" if not targets:\n",
" skippedImages.append(idx)\n",
" continue\n",
"\n",
" # get the predictions\n",
" start_time = time.time()\n",
" boxes, scores = model(image)\n",
" delta = time.time() - start_time\n",
" inference_time += delta\n",
" boxes, scores, labels = postprocess(boxes, scores)\n",
" outputs = get_results_as_dict(boxes, scores, labels, image_orig)\n",
"\n",
" res = {target[\"image_id\"].item(): output for target,\n",
" output in zip(targets, [outputs])}\n",
"\n",
" # add the image id to imgIds\n",
" image_id = targets[0][\"image_id\"].item()\n",
" imgIds.append(image_id)\n",
"\n",
" # convert the predicition into cocoDt results\n",
" pred = prepare_for_coco_detection(res)\n",
" results.extend(pred)\n",
"\n",
" print('\\n==================== Performance Measurement ====================')\n",
" print('Finished inference on {} images in {:.2f} seconds'.format(\n",
" len(dataloader), inference_time))\n",
" print('=================================================================\\n')\n",
"\n",
" # create bbox detections file\n",
" # following code in https://github.com/aws/aws-neuron-sdk/blob/master/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb\n",
" resultsfile = modelName + '_bbox_detections.json'\n",
" print('Generating json file...')\n",
" with open(resultsfile, 'w') as f:\n",
" json.dump(results, f)\n",
"\n",
" # return COCO api object with loadRes\n",
" cocoDt = cocoGt.loadRes(resultsfile)\n",
"\n",
" return imgIds, cocoDt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to simply load the compiled model from disk and then run inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_neuron = torch.jit.load('yolo_v4_neuron.pt')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgIds, cocoDt = run_inference(coco_val_data_loader, orig_coco_val_data_loader, model_neuron)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then use the standard `pycocotools` routines to generate a report of bounding box precision/recall."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pycocotools.cocoeval import COCOeval\n",
"\n",
"cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')\n",
"cocoEval.params.imgIds = imgIds\n",
"cocoEval.evaluate()\n",
"cocoEval.accumulate()\n",
"cocoEval.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, we may perform the same evaluation on the CPU model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgIdsRef, cocoDtRef = run_inference(coco_val_data_loader, orig_coco_val_data_loader, model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cocoEval = COCOeval(cocoGt, cocoDtRef, 'bbox')\n",
"cocoEval.params.imgIds = imgIdsRef\n",
"cocoEval.evaluate()\n",
"cocoEval.accumulate()\n",
"cocoEval.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 5: Benchmark COCO Dataset Performance of the Neuron-Optimized TorchScript\n",
"The following code snippet sets up data parallel on 16 NeuronCores and runs saturated multi-threaded inference on the Inferentia accelerator. Note that the number of cores (`n_cores`) should be set to the number of available NeuronCores on the current instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.neuron\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"import torchvision.datasets as dset\n",
"import multiprocessing as mp\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import PIL\n",
"import os\n",
"import time\n",
"\n",
"n_threads = 16\n",
"\n",
"def get_image_filenames(root=os.getcwd()):\n",
" \"\"\"\n",
" Generate paths to the coco dataset image files.\n",
" \n",
" Args:\n",
" root (str): The root folder contains.\n",
" \n",
" Yields:\n",
" filename (str): The path to an image file.\n",
" \"\"\"\n",
" image_path = os.path.join(root, 'val2017')\n",
" for root, dirs, files in os.walk(image_path):\n",
" for filename in files:\n",
" yield os.path.join(image_path, filename)\n",
"\n",
"def preprocess(path):\n",
" \"\"\"\n",
" Load an image and convert to the expected Yolo V4 tensor format.\n",
" \n",
" Args:\n",
" path (str): The image file to load from disk. \n",
" \n",
" Returns:\n",
" result (torch.Tensor): The image for prediction. Shape: [1, 3, 608, 608]\n",
" \"\"\"\n",
" image = PIL.Image.open(path).convert('RGB')\n",
" resized = torchvision.transforms.functional.resize(image, [608, 608])\n",
" tensor = torchvision.transforms.functional.to_tensor(resized)\n",
" return tensor.unsqueeze(0).to(torch.float32)\n",
"\n",
"\n",
"def load_model(filename='yolo_v4_neuron.pt'):\n",
" \"\"\"\n",
" Load and pre-warm the Yolo V4 model.\n",
" \n",
" Args:\n",
" filename (str): The location to load the model from.\n",
" \n",
" Returns:\n",
" model (torch.nn.Module): The torch model.\n",
" \"\"\"\n",
" \n",
" # Load model from disk\n",
" model = torch.jit.load(filename)\n",
"\n",
" # Warm up model on neuron by running a single example image\n",
" filename = next(iter(get_image_filenames()))\n",
" image = preprocess(filename)\n",
" model(image)\n",
"\n",
" return model\n",
"\n",
"\n",
"def task(model, filename):\n",
" \"\"\"\n",
" The thread task to perform prediction.\n",
" \n",
" This does the full end-to-end processing of an image from loading from disk\n",
" all the way to classifying and filtering bounding boxes.\n",
" \n",
" Args:\n",
" model (torch.nn.Module): The model to run processing with\n",
" filename (str): The image file to load from disk. \n",
" \n",
" Returns:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box. \n",
" \"\"\"\n",
" image = preprocess(filename)\n",
" begin = time.time()\n",
" boxes, scores = model(image)\n",
" delta = time.time() - begin\n",
" return postprocess(boxes, scores), delta\n",
"\n",
"\n",
"def benchmark():\n",
" \"\"\"\n",
" Run a benchmark on the entire COCO dataset against the neuron model.\n",
" \"\"\"\n",
" \n",
" # Load a model into each NeuronCore\n",
" models = [load_model() for _ in range(n_cores)]\n",
" \n",
" # Create input/output lists\n",
" filenames = list(get_image_filenames())\n",
" results = list()\n",
" latency = list()\n",
" \n",
" # We want to keep track of average completion time per thread\n",
" sum_time = 0.0\n",
" \n",
" # Submit all tasks and wait for them to finish\n",
" with ThreadPoolExecutor(n_threads) as pool:\n",
" for i, filename in enumerate(filenames):\n",
" result = pool.submit(task, models[i % len(models)], filename)\n",
" results.append(result)\n",
" for result in results:\n",
" results, times = result.result() # Note: Outputs unused for benchmark\n",
" latency.append(times)\n",
" sum_time += times\n",
" \n",
" print('Duration: ', sum_time / n_threads)\n",
" print('Images Per Second:', len(filenames) / (sum_time / n_threads))\n",
" print(\"Latency P50: {:.1f}\".format(np.percentile(latency[1000:], 50)*1000.0))\n",
" print(\"Latency P90: {:.1f}\".format(np.percentile(latency[1000:], 90)*1000.0))\n",
" print(\"Latency P95: {:.1f}\".format(np.percentile(latency[1000:], 95)*1000.0))\n",
" print(\"Latency P99: {:.1f}\".format(np.percentile(latency[1000:], 99)*1000.0))\n",
"\n",
"benchmark()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Evaluate YOLO v4 on Inferentia"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This tutorial walks through compiling and evaluating YOLO v4 model implemented in PyTorch on Inferentia. \n",
"\n",
"The tutorial has five main sections:\n",
"\n",
"1. Define YOLO v4 model in PyTorch\n",
"2. Download the COCO 2017 evaluation dataset and define the data loader function\n",
"3. Build, Compile, and Save Neuron-Optimized YOLO v4 TorchScript\n",
"4. Evaluate Accuracy on the COCO 2017 Dataset\n",
"5. Benchmark COCO Dataset Performance of the Neuron-Optimized TorchScript\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Dependencies:\n",
"This tutorial requires the following pip packages:\n",
"\n",
"- `torch-neuron`\n",
"- `torchvision`\n",
"- `pillow`\n",
"- `pycocotools`\n",
"- `neuron-cc[tensorflow]`\n",
"\n",
"Many of these packages will be installed by default when configuring your environment using the Neuron PyTorch setup guide. The additional dependencies must be installed here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install --upgrade pillow pycocotools "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Define YOLO v4 model in PyTorch \n",
"The following PyTorch model definition is from https://github.com/Tianxiaomo/pytorch-YOLOv4/."
]
},
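{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model definition below includes the Mish activation, mish(x) = x * tanh(softplus(x)) with softplus(x) = ln(1 + e^x). As a quick numeric sanity check (an illustrative aside, independent of the model code), the next cell evaluates it with plain Python math: mish(0) = 0 and mish(1) is approximately 0.865."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"def mish(x):\n",
" # mish(x) = x * tanh(softplus(x)), softplus(x) = ln(1 + e^x)\n",
" return x * math.tanh(math.log1p(math.exp(x)))\n",
"\n",
"print(mish(0.0), round(mish(1.0), 4)) # 0.0 0.8651\n"
]
},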
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"import torch.neuron\n",
"from torch import nn\n",
"import torch.nn.functional as F\n",
"import os\n",
"import warnings\n",
"\n",
"# Setting up NeuronCore groups for inf1.6xlarge with 16 cores\n",
"n_cores = 16 # This value should be 4 on inf1.xlarge and inf1.2xlarge\n",
"os.environ['NEURON_RT_NUM_CORES'] = str(n_cores)\n",
"\n",
"\n",
"class Mish(torch.nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
"\n",
" def forward(self, x):\n",
" x = x * (torch.tanh(torch.nn.functional.softplus(x)))\n",
" return x\n",
"\n",
"\n",
"class Upsample(nn.Module):\n",
" def __init__(self):\n",
" super(Upsample, self).__init__()\n",
"\n",
" def forward(self, x, target_size, inference=False):\n",
" assert (x.data.dim() == 4)\n",
"\n",
" if inference:\n",
"\n",
" return x.view(x.size(0), x.size(1), x.size(2), 1, x.size(3), 1).\\\n",
" expand(x.size(0), x.size(1), x.size(2), target_size[2] // x.size(2), x.size(3), target_size[3] // x.size(3)).\\\n",
" contiguous().view(x.size(0), x.size(1), target_size[2], target_size[3])\n",
" else:\n",
" return F.interpolate(x, size=(target_size[2], target_size[3]), mode='nearest')\n",
"\n",
"\n",
"class Conv_Bn_Activation(nn.Module):\n",
" def __init__(self, in_channels, out_channels, kernel_size, stride, activation, bn=True, bias=False):\n",
" super().__init__()\n",
" pad = (kernel_size - 1) // 2\n",
"\n",
" self.conv = nn.ModuleList()\n",
" if bias:\n",
" self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad))\n",
" else:\n",
" self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad, bias=False))\n",
" if bn:\n",
" self.conv.append(nn.BatchNorm2d(out_channels))\n",
" if activation == \"mish\":\n",
" self.conv.append(Mish())\n",
" elif activation == \"relu\":\n",
" self.conv.append(nn.ReLU(inplace=True))\n",
" elif activation == \"leaky\":\n",
" self.conv.append(nn.LeakyReLU(0.1, inplace=True))\n",
" elif activation == \"linear\":\n",
" pass\n",
" else:\n",
" print(\"activate error !!! {} {} {}\".format(sys._getframe().f_code.co_filename,\n",
" sys._getframe().f_code.co_name, sys._getframe().f_lineno))\n",
"\n",
" def forward(self, x):\n",
" for l in self.conv:\n",
" x = l(x)\n",
" return x\n",
"\n",
"\n",
"class ResBlock(nn.Module):\n",
" \"\"\"\n",
" Sequential residual blocks each of which consists of \\\n",
" two convolution layers.\n",
" Args:\n",
" ch (int): number of input and output channels.\n",
" nblocks (int): number of residual blocks.\n",
" shortcut (bool): if True, residual tensor addition is enabled.\n",
" \"\"\"\n",
"\n",
" def __init__(self, ch, nblocks=1, shortcut=True):\n",
" super().__init__()\n",
" self.shortcut = shortcut\n",
" self.module_list = nn.ModuleList()\n",
" for i in range(nblocks):\n",
" resblock_one = nn.ModuleList()\n",
" resblock_one.append(Conv_Bn_Activation(ch, ch, 1, 1, 'mish'))\n",
" resblock_one.append(Conv_Bn_Activation(ch, ch, 3, 1, 'mish'))\n",
" self.module_list.append(resblock_one)\n",
"\n",
" def forward(self, x):\n",
" for module in self.module_list:\n",
" h = x\n",
" for res in module:\n",
" h = res(h)\n",
" x = x + h if self.shortcut else h\n",
" return x\n",
"\n",
"\n",
"class DownSample1(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(3, 32, 3, 1, 'mish')\n",
"\n",
" self.conv2 = Conv_Bn_Activation(32, 64, 3, 2, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # [route]\n",
" # layers = -2\n",
" self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
"\n",
" self.conv5 = Conv_Bn_Activation(64, 32, 1, 1, 'mish')\n",
" self.conv6 = Conv_Bn_Activation(32, 64, 3, 1, 'mish')\n",
" # [shortcut]\n",
" # from=-3\n",
" # activation = linear\n",
"\n",
" self.conv7 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # [route]\n",
" # layers = -1, -7\n",
" self.conv8 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x2)\n",
" # route -2\n",
" x4 = self.conv4(x2)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" # shortcut -3\n",
" x6 = x6 + x4\n",
"\n",
" x7 = self.conv7(x6)\n",
" # [route]\n",
" # layers = -1, -7\n",
" x7 = torch.cat([x7, x3], dim=1)\n",
" x8 = self.conv8(x7)\n",
" return x8\n",
"\n",
"\n",
"class DownSample2(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(64, 128, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
" # r -2\n",
" self.conv3 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=64, nblocks=2)\n",
"\n",
" # s -3\n",
" self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')\n",
" # r -1 -10\n",
" self.conv5 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample3(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(128, 256, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=128, nblocks=8)\n",
" self.conv4 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample4(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(256, 512, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=256, nblocks=8)\n",
" self.conv4 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class DownSample5(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.conv1 = Conv_Bn_Activation(512, 1024, 3, 2, 'mish')\n",
" self.conv2 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')\n",
" self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')\n",
"\n",
" self.resblock = ResBlock(ch=512, nblocks=4)\n",
" self.conv4 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')\n",
" self.conv5 = Conv_Bn_Activation(1024, 1024, 1, 1, 'mish')\n",
"\n",
" def forward(self, input):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x1)\n",
"\n",
" r = self.resblock(x3)\n",
" x4 = self.conv4(r)\n",
"\n",
" x4 = torch.cat([x4, x2], dim=1)\n",
" x5 = self.conv5(x4)\n",
" return x5\n",
"\n",
"\n",
"class Neck(nn.Module):\n",
" def __init__(self, inference=False):\n",
" super().__init__()\n",
" self.inference = inference\n",
"\n",
" self.conv1 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv2 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" # SPP\n",
" self.maxpool1 = nn.MaxPool2d(kernel_size=5, stride=1, padding=5 // 2)\n",
" self.maxpool2 = nn.MaxPool2d(kernel_size=9, stride=1, padding=9 // 2)\n",
" self.maxpool3 = nn.MaxPool2d(kernel_size=13, stride=1, padding=13 // 2)\n",
"\n",
" # R -1 -3 -5 -6\n",
" # SPP\n",
" self.conv4 = Conv_Bn_Activation(2048, 512, 1, 1, 'leaky')\n",
" self.conv5 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv6 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv7 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" # UP\n",
" self.upsample1 = Upsample()\n",
" # R 85\n",
" self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" # R -1 -3\n",
" self.conv9 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv10 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv11 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv12 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv13 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv14 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" # UP\n",
" self.upsample2 = Upsample()\n",
" # R 54\n",
" self.conv15 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" # R -1 -3\n",
" self.conv16 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" self.conv17 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv18 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
" self.conv19 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv20 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')\n",
"\n",
" def forward(self, input, downsample4, downsample3, inference=False):\n",
" x1 = self.conv1(input)\n",
" x2 = self.conv2(x1)\n",
" x3 = self.conv3(x2)\n",
" # SPP\n",
" m1 = self.maxpool1(x3)\n",
" m2 = self.maxpool2(x3)\n",
" m3 = self.maxpool3(x3)\n",
" spp = torch.cat([m3, m2, m1, x3], dim=1)\n",
" # SPP end\n",
" x4 = self.conv4(spp)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" x7 = self.conv7(x6)\n",
" # UP\n",
" up = self.upsample1(x7, downsample4.size(), self.inference)\n",
" # R 85\n",
" x8 = self.conv8(downsample4)\n",
" # R -1 -3\n",
" x8 = torch.cat([x8, up], dim=1)\n",
"\n",
" x9 = self.conv9(x8)\n",
" x10 = self.conv10(x9)\n",
" x11 = self.conv11(x10)\n",
" x12 = self.conv12(x11)\n",
" x13 = self.conv13(x12)\n",
" x14 = self.conv14(x13)\n",
"\n",
" # UP\n",
" up = self.upsample2(x14, downsample3.size(), self.inference)\n",
" # R 54\n",
" x15 = self.conv15(downsample3)\n",
" # R -1 -3\n",
" x15 = torch.cat([x15, up], dim=1)\n",
"\n",
" x16 = self.conv16(x15)\n",
" x17 = self.conv17(x16)\n",
" x18 = self.conv18(x17)\n",
" x19 = self.conv19(x18)\n",
" x20 = self.conv20(x19)\n",
" return x20, x13, x6\n",
"\n",
"\n",
"class Yolov4Head(nn.Module):\n",
" def __init__(self, output_ch, n_classes, inference=False):\n",
" super().__init__()\n",
" self.inference = inference\n",
"\n",
" self.conv1 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')\n",
" self.conv2 = Conv_Bn_Activation(256, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
"\n",
" self.yolo1 = YoloLayer(\n",
" anchor_mask=[0, 1, 2], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=8)\n",
"\n",
" # R -4\n",
" self.conv3 = Conv_Bn_Activation(128, 256, 3, 2, 'leaky')\n",
"\n",
" # R -1 -16\n",
" self.conv4 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv5 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv6 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv7 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')\n",
" self.conv9 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')\n",
" self.conv10 = Conv_Bn_Activation(512, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
" \n",
" self.yolo2 = YoloLayer(\n",
" anchor_mask=[3, 4, 5], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=16)\n",
"\n",
" # R -4\n",
" self.conv11 = Conv_Bn_Activation(256, 512, 3, 2, 'leaky')\n",
"\n",
" # R -1 -37\n",
" self.conv12 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv13 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv14 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv15 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv16 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')\n",
" self.conv17 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')\n",
" self.conv18 = Conv_Bn_Activation(1024, output_ch, 1, 1, 'linear', bn=False, bias=True)\n",
" \n",
" self.yolo3 = YoloLayer(\n",
" anchor_mask=[6, 7, 8], num_classes=n_classes,\n",
" anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],\n",
" num_anchors=9, stride=32)\n",
"\n",
" def forward(self, input1, input2, input3):\n",
" x1 = self.conv1(input1)\n",
" x2 = self.conv2(x1)\n",
"\n",
" x3 = self.conv3(input1)\n",
" # R -1 -16\n",
" x3 = torch.cat([x3, input2], dim=1)\n",
" x4 = self.conv4(x3)\n",
" x5 = self.conv5(x4)\n",
" x6 = self.conv6(x5)\n",
" x7 = self.conv7(x6)\n",
" x8 = self.conv8(x7)\n",
" x9 = self.conv9(x8)\n",
" x10 = self.conv10(x9)\n",
"\n",
" # R -4\n",
" x11 = self.conv11(x8)\n",
" # R -1 -37\n",
" x11 = torch.cat([x11, input3], dim=1)\n",
"\n",
" x12 = self.conv12(x11)\n",
" x13 = self.conv13(x12)\n",
" x14 = self.conv14(x13)\n",
" x15 = self.conv15(x14)\n",
" x16 = self.conv16(x15)\n",
" x17 = self.conv17(x16)\n",
" x18 = self.conv18(x17)\n",
" \n",
" if self.inference:\n",
" y1 = self.yolo1(x2)\n",
" y2 = self.yolo2(x10)\n",
" y3 = self.yolo3(x18)\n",
"\n",
" return get_region_boxes([y1, y2, y3])\n",
" \n",
" else:\n",
" return [x2, x10, x18]\n",
"\n",
"\n",
"class Yolov4(nn.Module):\n",
" def __init__(self, yolov4conv137weight=None, n_classes=80, inference=False):\n",
" super().__init__()\n",
"\n",
" output_ch = (4 + 1 + n_classes) * 3\n",
"\n",
" # backbone\n",
" self.down1 = DownSample1()\n",
" self.down2 = DownSample2()\n",
" self.down3 = DownSample3()\n",
" self.down4 = DownSample4()\n",
" self.down5 = DownSample5()\n",
" # neck\n",
" self.neek = Neck(inference)\n",
" # yolov4conv137\n",
" if yolov4conv137weight:\n",
" _model = nn.Sequential(self.down1, self.down2, self.down3, self.down4, self.down5, self.neek)\n",
" pretrained_dict = torch.load(yolov4conv137weight)\n",
"\n",
" model_dict = _model.state_dict()\n",
" # 1. filter out unnecessary keys\n",
" pretrained_dict = {k1: v for (k, v), k1 in zip(pretrained_dict.items(), model_dict)}\n",
" # 2. overwrite entries in the existing state dict\n",
" model_dict.update(pretrained_dict)\n",
" _model.load_state_dict(model_dict)\n",
" \n",
" # head\n",
" self.head = Yolov4Head(output_ch, n_classes, inference)\n",
"\n",
"\n",
" def forward(self, input):\n",
" d1 = self.down1(input)\n",
" d2 = self.down2(d1)\n",
" d3 = self.down3(d2)\n",
" d4 = self.down4(d3)\n",
" d5 = self.down5(d4)\n",
"\n",
" x20, x13, x6 = self.neek(d5, d4, d3)\n",
"\n",
" output = self.head(x20, x13, x6)\n",
" return output\n",
"\n",
"\n",
"def yolo_forward_dynamic(output, conf_thresh, num_classes, anchors, num_anchors, scale_x_y, only_objectness=1,\n",
" validation=False):\n",
" # Output would be invalid if it does not satisfy this assert\n",
" # assert (output.size(1) == (5 + num_classes) * num_anchors)\n",
"\n",
" # print(output.size())\n",
"\n",
" # Slice the second dimension (channel) of output into:\n",
" # [ 2, 2, 1, num_classes, 2, 2, 1, num_classes, 2, 2, 1, num_classes ]\n",
" # And then into\n",
" # bxy = [ 6 ] bwh = [ 6 ] det_conf = [ 3 ] cls_conf = [ num_classes * 3 ]\n",
" # batch = output.size(0)\n",
" # H = output.size(2)\n",
" # W = output.size(3)\n",
"\n",
" bxy_list = []\n",
" bwh_list = []\n",
" det_confs_list = []\n",
" cls_confs_list = []\n",
"\n",
" for i in range(num_anchors):\n",
" begin = i * (5 + num_classes)\n",
" end = (i + 1) * (5 + num_classes)\n",
" \n",
" bxy_list.append(output[:, begin : begin + 2])\n",
" bwh_list.append(output[:, begin + 2 : begin + 4])\n",
" det_confs_list.append(output[:, begin + 4 : begin + 5])\n",
" cls_confs_list.append(output[:, begin + 5 : end])\n",
"\n",
" # Shape: [batch, num_anchors * 2, H, W]\n",
" bxy = torch.cat(bxy_list, dim=1)\n",
" # Shape: [batch, num_anchors * 2, H, W]\n",
" bwh = torch.cat(bwh_list, dim=1)\n",
"\n",
" # Shape: [batch, num_anchors, H, W]\n",
" det_confs = torch.cat(det_confs_list, dim=1)\n",
" # Shape: [batch, num_anchors * H * W]\n",
" det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3))\n",
"\n",
" # Shape: [batch, num_anchors * num_classes, H, W]\n",
" cls_confs = torch.cat(cls_confs_list, dim=1)\n",
" # Shape: [batch, num_anchors, num_classes, H * W]\n",
" cls_confs = cls_confs.view(output.size(0), num_anchors, num_classes, output.size(2) * output.size(3))\n",
" # Shape: [batch, num_anchors, num_classes, H * W] --> [batch, num_anchors * H * W, num_classes] \n",
" cls_confs = cls_confs.permute(0, 1, 3, 2).reshape(output.size(0), num_anchors * output.size(2) * output.size(3), num_classes)\n",
"\n",
" # Apply sigmoid(), exp() and softmax() to slices\n",
" #\n",
" bxy = torch.sigmoid(bxy) * scale_x_y - 0.5 * (scale_x_y - 1)\n",
" bwh = torch.exp(bwh)\n",
" det_confs = torch.sigmoid(det_confs)\n",
" cls_confs = torch.sigmoid(cls_confs)\n",
"\n",
" # Prepare C-x, C-y, P-w, P-h (None of them are torch related)\n",
" grid_x = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(3) - 1, output.size(3)), axis=0).repeat(output.size(2), 0), axis=0), axis=0)\n",
" grid_y = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(2) - 1, output.size(2)), axis=1).repeat(output.size(3), 1), axis=0), axis=0)\n",
" # grid_x = torch.linspace(0, W - 1, W).reshape(1, 1, 1, W).repeat(1, 1, H, 1)\n",
" # grid_y = torch.linspace(0, H - 1, H).reshape(1, 1, H, 1).repeat(1, 1, 1, W)\n",
"\n",
" anchor_w = []\n",
" anchor_h = []\n",
" for i in range(num_anchors):\n",
" anchor_w.append(anchors[i * 2])\n",
" anchor_h.append(anchors[i * 2 + 1])\n",
"\n",
" device = None\n",
" cuda_check = output.is_cuda\n",
" if cuda_check:\n",
" device = output.get_device()\n",
"\n",
" bx_list = []\n",
" by_list = []\n",
" bw_list = []\n",
" bh_list = []\n",
"\n",
" # Apply C-x, C-y, P-w, P-h\n",
" for i in range(num_anchors):\n",
" ii = i * 2\n",
" # Shape: [batch, 1, H, W]\n",
" bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32) # grid_x.to(device=device, dtype=torch.float32)\n",
" # Shape: [batch, 1, H, W]\n",
" by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32) # grid_y.to(device=device, dtype=torch.float32)\n",
" # Shape: [batch, 1, H, W]\n",
" bw = bwh[:, ii : ii + 1] * anchor_w[i]\n",
" # Shape: [batch, 1, H, W]\n",
" bh = bwh[:, ii + 1 : ii + 2] * anchor_h[i]\n",
"\n",
" bx_list.append(bx)\n",
" by_list.append(by)\n",
" bw_list.append(bw)\n",
" bh_list.append(bh)\n",
"\n",
"\n",
" ########################################\n",
" # Figure out bboxes from slices #\n",
" ########################################\n",
" \n",
" # Shape: [batch, num_anchors, H, W]\n",
" bx = torch.cat(bx_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" by = torch.cat(by_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" bw = torch.cat(bw_list, dim=1)\n",
" # Shape: [batch, num_anchors, H, W]\n",
" bh = torch.cat(bh_list, dim=1)\n",
"\n",
" # Shape: [batch, 2 * num_anchors, H, W]\n",
" bx_bw = torch.cat((bx, bw), dim=1)\n",
" # Shape: [batch, 2 * num_anchors, H, W]\n",
" by_bh = torch.cat((by, bh), dim=1)\n",
"\n",
" # normalize coordinates to [0, 1]\n",
" bx_bw /= output.size(3)\n",
" by_bh /= output.size(2)\n",
"\n",
" # Shape: [batch, num_anchors * H * W, 1]\n",
" bx = bx_bw[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" by = by_bh[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" bw = bx_bw[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" bh = by_bh[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
"\n",
" bx1 = bx - bw * 0.5\n",
" by1 = by - bh * 0.5\n",
" bx2 = bx1 + bw\n",
" by2 = by1 + bh\n",
"\n",
" # Shape: [batch, num_anchors * h * w, 4] -> [batch, num_anchors * h * w, 1, 4]\n",
" boxes = torch.cat((bx1, by1, bx2, by2), dim=2).view(output.size(0), num_anchors * output.size(2) * output.size(3), 1, 4)\n",
" # boxes = boxes.repeat(1, 1, num_classes, 1)\n",
"\n",
" # boxes: [batch, num_anchors * H * W, 1, 4]\n",
" # cls_confs: [batch, num_anchors * H * W, num_classes]\n",
" # det_confs: [batch, num_anchors * H * W]\n",
"\n",
" det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)\n",
" confs = cls_confs * det_confs\n",
"\n",
" # boxes: [batch, num_anchors * H * W, 1, 4]\n",
" # confs: [batch, num_anchors * H * W, num_classes]\n",
"\n",
" return boxes, confs\n",
"\n",
"class YoloLayer(nn.Module):\n",
" \"\"\"\n",
" Yolo layer\n",
" model_out: while inference,is post-processing inside or outside the model\n",
" true:outside\n",
" \"\"\"\n",
" def __init__(self, anchor_mask=[], num_classes=0, anchors=[], num_anchors=1, stride=32, model_out=False):\n",
" super(YoloLayer, self).__init__()\n",
" self.anchor_mask = anchor_mask\n",
" self.num_classes = num_classes\n",
" self.anchors = anchors\n",
" self.num_anchors = num_anchors\n",
" self.anchor_step = len(anchors) // num_anchors\n",
" self.coord_scale = 1\n",
" self.noobject_scale = 1\n",
" self.object_scale = 5\n",
" self.class_scale = 1\n",
" self.thresh = 0.6\n",
" self.stride = stride\n",
" self.seen = 0\n",
" self.scale_x_y = 1\n",
"\n",
" self.model_out = model_out\n",
"\n",
" def forward(self, output, target=None):\n",
" if self.training:\n",
" return output\n",
" masked_anchors = []\n",
" for m in self.anchor_mask:\n",
" masked_anchors += self.anchors[m * self.anchor_step:(m + 1) * self.anchor_step]\n",
" masked_anchors = [anchor / self.stride for anchor in masked_anchors]\n",
"\n",
" return yolo_forward_dynamic(output, self.thresh, self.num_classes, masked_anchors, len(self.anchor_mask),scale_x_y=self.scale_x_y)\n",
"\n",
"\n",
"def get_region_boxes(boxes_and_confs):\n",
"\n",
" # print('Getting boxes from boxes and confs ...')\n",
"\n",
" boxes_list = []\n",
" confs_list = []\n",
"\n",
" for item in boxes_and_confs:\n",
" boxes_list.append(item[0])\n",
" confs_list.append(item[1])\n",
"\n",
" # boxes: [batch, num1 + num2 + num3, 1, 4]\n",
" # confs: [batch, num1 + num2 + num3, num_classes]\n",
" boxes = torch.cat(boxes_list, dim=1)\n",
" confs = torch.cat(confs_list, dim=1)\n",
" \n",
" return boxes, confs"
]
},
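As a sanity check of the anchor bookkeeping in `YoloLayer.forward` above, the following standalone sketch reproduces how `yolo1` (anchor_mask `[0, 1, 2]`, stride 8) selects and rescales its anchors; all values are taken from the constructor calls in `Yolov4Head` above.

```python
# Reproduces the masked-anchor computation from YoloLayer.forward.
anchors = [12, 16, 19, 36, 40, 28, 36, 75, 76, 55,
           72, 146, 142, 110, 192, 243, 459, 401]
num_anchors = 9
anchor_step = len(anchors) // num_anchors  # 2 values (w, h) per anchor
anchor_mask = [0, 1, 2]                    # yolo1 uses the three smallest anchors
stride = 8

masked_anchors = []
for m in anchor_mask:
    masked_anchors += anchors[m * anchor_step:(m + 1) * anchor_step]

# Anchors are expressed in grid units, i.e. divided by the layer stride.
masked_anchors = [a / stride for a in masked_anchors]
print(masked_anchors)  # [1.5, 2.0, 2.375, 4.5, 5.0, 3.5]
```

Dividing by the stride is what lets `yolo_forward_dynamic` multiply `bwh` by these anchors in the same grid-cell coordinate system as `bxy`.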
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Download the COCO 2017 evaluation dataset and define the data loader function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!curl -LO http://images.cocodataset.org/zips/val2017.zip\n",
"!curl -LO http://images.cocodataset.org/annotations/annotations_trainval2017.zip\n",
"!unzip -q val2017.zip\n",
"!unzip annotations_trainval2017.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define data loader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"import time\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"import torchvision.datasets as dset\n",
"from pycocotools.coco import COCO\n",
"\n",
"\n",
"def get_image_filenames(root=os.getcwd()):\n",
" \"\"\"\n",
" Generate paths to the coco dataset image files.\n",
" \n",
" Args:\n",
" root (str): The root folder contains.\n",
" \n",
" Yields:\n",
" filename (str): The path to an image file.\n",
" \"\"\"\n",
" image_path = os.path.join(root, 'val2017')\n",
" for root, dirs, files in os.walk(image_path):\n",
" for filename in files:\n",
" yield os.path.join(image_path, filename)\n",
"\n",
" \n",
"def get_coco_dataloader(coco2017_root, transform, subset_indices=None):\n",
" \"\"\"\n",
" Create the dataset loader and ground truth coco dataset.\n",
" \n",
" Arguments:\n",
" coco2017_root (str): The root directory to load the data/labels from.\n",
" transform (torchvision.Transform): A transform to apply to the images.\n",
" subset_indices (list): Indices used to create a subset of the dataset.\n",
"\n",
" Returns: \n",
" loader (iterable): Produces transformed images and labels.\n",
" cocoGt (pycocotools.coco.COCO): Contains the ground truth in coco \n",
" format.\n",
" label_info (dict): A mapping from label id to the human-readable name.\n",
" \"\"\"\n",
"\n",
" # Create the dataset\n",
" coco2017_img_path = os.path.join(coco2017_root, 'val2017')\n",
" coco2017_ann_path = os.path.join(\n",
" coco2017_root, 'annotations/instances_val2017.json')\n",
"\n",
" # check the number of images in val2017 - Should be 5000\n",
" num_files = len(list(get_image_filenames(coco2017_root)))\n",
" print('\\nNumber of images in val2017 = {}\\n'.format(num_files))\n",
"\n",
" # load annotations to decode classification results\n",
" with open(coco2017_ann_path) as f:\n",
" annotate_json = json.load(f)\n",
" label_info = {label[\"id\"]: label[\"name\"]\n",
" for label in annotate_json['categories']}\n",
"\n",
" # initialize COCO ground truth dataset\n",
" cocoGt = COCO(coco2017_ann_path)\n",
"\n",
" # create the dataset using torchvision's coco detection dataset\n",
" coco_val_data = dset.CocoDetection(\n",
" root=coco2017_img_path, \n",
" annFile=coco2017_ann_path, \n",
" transform=transform\n",
" )\n",
"\n",
" if subset_indices is not None:\n",
" # Create a smaller subset of the data for testing - e.g. to pinpoint error at image 516\n",
" coco_val_data = torch.utils.data.Subset(coco_val_data, subset_indices)\n",
"\n",
" # create the dataloader using torch dataloader\n",
" loader = torch.utils.data.DataLoader(coco_val_data, batch_size=1, shuffle=False)\n",
"\n",
" return loader, cocoGt, label_info\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load dataset\n",
"Here 2 dataset loaders are created and the resulting data is displayed\n",
"- `orig_coco_val_data_loader`: Contains the original unmodified image\n",
"- `coco_val_data_loader`: Contains images of a standardized size of 608x608 pixels "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"coco2017_root = './'\n",
"orig_coco_val_data_loader, *_ = get_coco_dataloader(coco2017_root, transforms.ToTensor())\n",
"transform = transforms.Compose([transforms.Resize([608, 608]), transforms.ToTensor()])\n",
"coco_val_data_loader, cocoGt, label_info = get_coco_dataloader(coco2017_root, transform)\n",
"image_orig, _ = next(iter(orig_coco_val_data_loader))\n",
"print(image_orig.shape)\n",
"image, image_info = next(iter(coco_val_data_loader))\n",
"image_id = image_info[0][\"image_id\"].item()\n",
"print(image.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define some helper functions for deployment (inference)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def postprocess(boxes, scores, score_threshold=0.05, iou_threshold=0.5):\n",
" \"\"\"\n",
" Classifies and filters bounding boxes from Yolo V4 output.\n",
" \n",
" Performs classification, filtering, and non-maximum suppression to remove\n",
" boxes that are irrelevant. The result is the filtered set of boxes, the \n",
" associated label confidence score, and the predicted label.\n",
" \n",
" See: https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.nms\n",
" \n",
" Args:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The categories scores for each box.\n",
" score_threshold (float): Ignore boxes with scores below threshold.\n",
" iou_threshold (float): Discards boxes with intersection above threshold. \n",
" \n",
" Returns:\n",
" boxes (torch.Tensor): The filtered Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box.\n",
" \"\"\"\n",
" \n",
" # shape: [n_batch, n_boxes, 1, 4] => [n_boxes, 4] # Assumes n_batch size is 1\n",
" boxes = boxes.squeeze()\n",
"\n",
" # shape: [n_batch, n_boxes, 80] => [n_boxes, 80] # Assumes n_batch size is 1\n",
" scores = scores.squeeze()\n",
"\n",
" # Classify each box according to the maximum category score\n",
" score, column = torch.max(scores, dim=1)\n",
"\n",
" # Filter out rows for scores which are below threshold\n",
" mask = score > score_threshold\n",
"\n",
" # Filter model output data\n",
" boxes = boxes[mask]\n",
" score = score[mask]\n",
" idxs = column[mask]\n",
"\n",
" # Perform non-max suppression on all categories at once. shape: [n_keep,]\n",
" keep = torchvision.ops.batched_nms(\n",
" boxes=boxes, \n",
" scores=score, \n",
" idxs=idxs,\n",
" iou_threshold=iou_threshold,\n",
" )\n",
"\n",
" # The image category id associated with each column\n",
" categories = torch.tensor([\n",
" 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16,\n",
" 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31,\n",
" 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,\n",
" 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56,\n",
" 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 72,\n",
" 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85,\n",
" 86, 87, 88, 89, 90\n",
" ])\n",
" \n",
" boxes = boxes[keep] # shape: [n_keep, 4]\n",
" score = score[keep] # shape: [n_keep,]\n",
" idxs = idxs[keep]\n",
" label = categories[idxs] # shape: [n_keep,]\n",
" \n",
" return boxes, score, label\n",
"\n",
"\n",
"def get_results_as_dict(boxes, scores, labels, image_orig):\n",
" \"\"\"\n",
" Transforms post-processed output into dictionary output.\n",
" \n",
" This translates the model coordinate bounding boxes (x1, y1, x2, y2) \n",
" into a rectangular description (x, y, width, height) scaled to the \n",
" original image size.\n",
" \n",
" Args:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box.\n",
" image_orig (torch.Tensor): The image to scale the bounding boxes to.\n",
" \n",
" Returns:\n",
" output (dict): The dictionary of rectangle bounding boxes.\n",
" \"\"\"\n",
" h_size, w_size = image_orig.shape[-2:]\n",
"\n",
" x1 = boxes[:, 0] * w_size\n",
" y1 = boxes[:, 1] * h_size\n",
" x2 = boxes[:, 2] * w_size\n",
" y2 = boxes[:, 3] * h_size\n",
"\n",
" width = x2 - x1\n",
" height = y2 - y1\n",
"\n",
" boxes = torch.stack([x1, y1, width, height]).T\n",
" return {\n",
" 'boxes': boxes.detach().numpy(),\n",
" 'labels': labels.detach().numpy(),\n",
" 'scores': scores.detach().numpy(),\n",
" }\n",
"\n",
"\n",
"def prepare_for_coco_detection(predictions):\n",
" \"\"\"\n",
" Convert dictionary model predictions into an expected COCO dataset format.\n",
" \n",
" Args:\n",
" predictions (dict): The list of box coordinates, scores, and labels.\n",
" \n",
" Returns:\n",
" output (list[dict]): The list of bounding boxes.\n",
" \"\"\"\n",
" coco_results = []\n",
" for original_id, prediction in predictions.items():\n",
" if len(prediction) == 0:\n",
" continue\n",
"\n",
" boxes = prediction[\"boxes\"].tolist()\n",
" scores = prediction[\"scores\"].tolist()\n",
" labels = prediction[\"labels\"].tolist()\n",
"\n",
" coco_results.extend(\n",
" [\n",
" {\n",
" \"image_id\": original_id,\n",
" \"category_id\": labels[k],\n",
" \"bbox\": box,\n",
" \"score\": scores[k],\n",
" }\n",
" for k, box in enumerate(boxes)\n",
" ]\n",
" )\n",
" return coco_results"
]
},
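The rescaling performed by `get_results_as_dict` above converts a normalized corner box (x1, y1, x2, y2) into a pixel-space rectangle (x, y, width, height). A one-box worked example with made-up numbers:

```python
# One normalized (x1, y1, x2, y2) box on a 480x640 (height, width) original image.
h_size, w_size = 480, 640
x1n, y1n, x2n, y2n = 0.25, 0.5, 0.75, 0.75

# Scale corners to pixels, then convert to (x, y, width, height).
x1, y1 = x1n * w_size, y1n * h_size
x2, y2 = x2n * w_size, y2n * h_size
box_xywh = (x1, y1, x2 - x1, y2 - y1)
print(box_xywh)  # (160.0, 240.0, 320.0, 120.0)
```

This (x, y, width, height) layout is the `bbox` format that the COCO evaluation tools expect.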
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download pretrained checkpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"def download_file_from_google_drive(id, destination):\n",
" response = requests.post('https://drive.google.com/uc?id='+id+'&confirm=t')\n",
" save_response_content(response, destination)\n",
"\n",
"def save_response_content(response, destination):\n",
" CHUNK_SIZE = 32768\n",
" with open(destination, \"wb\") as f:\n",
" for chunk in response.iter_content(CHUNK_SIZE):\n",
" if chunk: # filter out keep-alive new chunks\n",
" f.write(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"download_file_from_google_drive('1wv_LiFeCRYwtpkqREPeI13-gPELBDwuJ', './yolo_v4.pth')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Build, Compile, and Save Neuron-Optimized YOLO v4 TorchScript\n",
"### Construct model and load pretrained checkpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model = Yolov4(yolov4conv137weight=None, n_classes=80, inference=True)\n",
"weightfile = \"./yolo_v4.pth\"\n",
"pretrained_dict = torch.load(weightfile, map_location=torch.device('cpu'))\n",
"model.load_state_dict(pretrained_dict)\n",
"model.eval()"
]
},
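The `yolov4conv137weight` branch of `Yolov4.__init__` earlier re-keys a pretrained state dict by position rather than by name. A minimal stand-in (with hypothetical key names) shows what that zip-based comprehension does, and why it silently relies on both dicts listing parameters in the same order:

```python
# Toy stand-in for a pretrained state dict and the target model's key names.
# The key names here are hypothetical; only the positional pairing matters.
pretrained_dict = {
    "models.0.conv.weight": [1.0, 2.0],
    "models.0.conv.bias": [0.5],
}
model_dict_keys = ["down1.conv1.conv.0.weight", "down1.conv1.conv.0.bias"]

# Same trick as in Yolov4.__init__: pair the i-th pretrained value with the
# i-th model key, discarding the pretrained key names entirely.
remapped = {k1: v for (k, v), k1 in zip(pretrained_dict.items(), model_dict_keys)}
print(remapped)
```

Because the pairing is purely positional, a mismatch in parameter ordering or count would assign weights to the wrong layers without raising an error; `load_state_dict` would only catch shape mismatches.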
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Execute inference for a single image and display output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as patches\n",
"\n",
"image_orig, _ = next(iter(orig_coco_val_data_loader))\n",
"image, _ = next(iter(coco_val_data_loader))\n",
"boxes, scores = model(image)\n",
"boxes, scores, labels = postprocess(boxes, scores)\n",
"result_dict = get_results_as_dict(boxes, scores, labels, image_orig)\n",
"\n",
"fig, ax = plt.subplots(figsize=(10, 10))\n",
"ax.imshow(image_orig.numpy().squeeze(0).transpose(1, 2, 0))\n",
"for xywh, _ in zip(result_dict['boxes'], result_dict['labels']):\n",
" x, y, w, h = xywh\n",
" rect = patches.Rectangle((x, y), w, h, linewidth=1, edgecolor='g', facecolor='none')\n",
" ax.add_patch(rect)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Run compilation with manually specified device placement\n",
"\n",
"First, inspect the model without running compilation by adding the `skip_compiler=True` argument to the `torch.neuron.trace` call."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model_neuron_for_inspection = torch.neuron.trace(model, image, skip_compiler=True)\n",
"print(model_neuron_for_inspection)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Inspecting the model, we discover that there are many `aten::slice` operations in some submodules called `YoloLayer`. Although these operations are supported by the neuron-cc compiler, they are not going to run efficiently on the Inferentia hardware. To work it around, we recommend to manually place these operators on CPU."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To manually place `YoloLayer` on CPU, we may make use of the `subgraph_builder_function` argument in `torch.neuron.trace`. It is a callback function that returns `True` or `False` based on information available in `node`. The typical use is a condition based on either `node.name` or `node.type_string`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def subgraph_builder_function(node):\n",
" return 'YoloLayer' not in node.name\n",
"\n",
"model_neuron = torch.neuron.trace(model, image, subgraph_builder_function=subgraph_builder_function)\n",
"model_neuron.save('yolo_v4_neuron.pt')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compilation is now finished and the compiled model has been saved to a local file called 'yolo_v4_neuron.pt'. Saving is important due to the slow compilation process."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4: Evaluate Accuracy on the COCO 2017 Dataset\n",
"### Load compiled model and run inference\n",
"To validate accuracy of the compiled model, lets run inference on the COCO 2017 validation dataset. We start by defining a helper function `run_inference`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def run_inference(dataloader, dataloader_orig, model, convert=True, modelName=''):\n",
" \"\"\"\n",
" Run Yolo V4 inference on the COCO dataset.\n",
" \n",
" Args:\n",
" dataloader (iterable): Data loader of input processed images and labels.\n",
" dataloader_orig (iterable): Data loader with original images.\n",
" model (torch.nn.Module): The torch model to run inference against.\n",
" convert (bool): Set to False when using a vanilla torchvision model that \n",
" does not need to be transformed into coco format.\n",
" \n",
" Returns: \n",
" imgIds (list): The list of images with predictions.\n",
" cocoDt (pycocotools.coco.COCO): Contains the predictions from the model \n",
" in coco format.\n",
" \"\"\"\n",
" print('\\n================ Starting Inference on {} Images using {} model ================\\n'.format(\n",
" len(dataloader), modelName))\n",
"\n",
" modelName = str(modelName).replace(\" \", \"_\")\n",
"\n",
" # convert predicition to cocoDt\n",
" # code from def evaluate in https://github.com/pytorch/vision/blob/master/references/detection/engine.py\n",
" imgIds = []\n",
" results = []\n",
" skippedImages = []\n",
"\n",
" # time inference\n",
" inference_time = 0.0\n",
" for idx, ((image, targets), (image_orig, _)) in enumerate(zip(dataloader, dataloader_orig)):\n",
" # if target is empty, skip the image because it breaks the scripted model\n",
" if not targets:\n",
" skippedImages.append(idx)\n",
" continue\n",
"\n",
" # get the predictions\n",
" start_time = time.time()\n",
" boxes, scores = model(image)\n",
" delta = time.time() - start_time\n",
" inference_time += delta\n",
" boxes, scores, labels = postprocess(boxes, scores)\n",
" outputs = get_results_as_dict(boxes, scores, labels, image_orig)\n",
"\n",
" res = {target[\"image_id\"].item(): output for target,\n",
" output in zip(targets, [outputs])}\n",
"\n",
" # add the image id to imgIds\n",
" image_id = targets[0][\"image_id\"].item()\n",
" imgIds.append(image_id)\n",
"\n",
" # convert the predicition into cocoDt results\n",
" pred = prepare_for_coco_detection(res)\n",
" results.extend(pred)\n",
"\n",
" print('\\n==================== Performance Measurement ====================')\n",
" print('Finished inference on {} images in {:.2f} seconds'.format(\n",
" len(dataloader), inference_time))\n",
" print('=================================================================\\n')\n",
"\n",
" # create bbox detections file\n",
" # following code in https://github.com/aws/aws-neuron-sdk/blob/master/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb\n",
" resultsfile = modelName + '_bbox_detections.json'\n",
" print('Generating json file...')\n",
" with open(resultsfile, 'w') as f:\n",
" json.dump(results, f)\n",
"\n",
" # return COCO api object with loadRes\n",
" cocoDt = cocoGt.loadRes(resultsfile)\n",
"\n",
" return imgIds, cocoDt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to simply load the compiled model from disk and then run inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_neuron = torch.jit.load('yolo_v4_neuron.pt')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgIds, cocoDt = run_inference(coco_val_data_loader, orig_coco_val_data_loader, model_neuron)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then use the standard `pycocotools` routines to generate a report of bounding box precision/recall."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pycocotools.cocoeval import COCOeval\n",
"\n",
"cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')\n",
"cocoEval.params.imgIds = imgIds\n",
"cocoEval.evaluate()\n",
"cocoEval.accumulate()\n",
"cocoEval.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, we may perform the same evaluation on the CPU model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgIdsRef, cocoDtRef = run_inference(coco_val_data_loader, orig_coco_val_data_loader, model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cocoEval = COCOeval(cocoGt, cocoDtRef, 'bbox')\n",
"cocoEval.params.imgIds = imgIdsRef\n",
"cocoEval.evaluate()\n",
"cocoEval.accumulate()\n",
"cocoEval.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 5: Benchmark COCO Dataset Performance of the Neuron-Optimized TorchScript\n",
"The following code snippet sets up data parallel on 16 NeuronCores and runs saturated multi-threaded inference on the Inferentia accelerator. Note that the number of cores (`n_cores`) should be set to the number of available NeuronCores on the current instance."
]
},
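Two bookkeeping steps in the benchmark below are easy to get wrong: work is distributed round-robin over the model replicas (`models[i % len(models)]`), and tail latency is reported over post-warmup samples only (`latency[1000:]`). A minimal sketch of both, with made-up numbers:

```python
import numpy as np

# Round-robin assignment: task i runs on replica i % n_cores.
n_cores = 4  # hypothetical replica count for this sketch
assignment = [i % n_cores for i in range(10)]
print(assignment)  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]

# Tail-latency report over steady-state samples: the first measurements are
# warm-up and are dropped (mirroring the latency[1000:] slice below).
latency = [0.300, 0.250, 0.020, 0.021, 0.022, 0.023, 0.100]
steady = latency[2:]
p50_ms = float(np.percentile(steady, 50)) * 1000.0
print(p50_ms)
```

Including the warm-up samples would badly inflate the reported percentiles, since the first invocations pay one-time model-load and caching costs.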
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.neuron\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"import torchvision.datasets as dset\n",
"import multiprocessing as mp\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import PIL\n",
"import os\n",
"import time\n",
"\n",
"n_threads = 16\n",
"\n",
"def get_image_filenames(root=os.getcwd()):\n",
" \"\"\"\n",
" Generate paths to the coco dataset image files.\n",
" \n",
" Args:\n",
" root (str): The root folder contains.\n",
" \n",
" Yields:\n",
" filename (str): The path to an image file.\n",
" \"\"\"\n",
" image_path = os.path.join(root, 'val2017')\n",
" for root, dirs, files in os.walk(image_path):\n",
" for filename in files:\n",
" yield os.path.join(image_path, filename)\n",
"\n",
"def preprocess(path):\n",
" \"\"\"\n",
" Load an image and convert to the expected Yolo V4 tensor format.\n",
" \n",
" Args:\n",
" path (str): The image file to load from disk. \n",
" \n",
" Returns:\n",
" result (torch.Tensor): The image for prediction. Shape: [1, 3, 608, 608]\n",
" \"\"\"\n",
" image = PIL.Image.open(path).convert('RGB')\n",
" resized = torchvision.transforms.functional.resize(image, [608, 608])\n",
" tensor = torchvision.transforms.functional.to_tensor(resized)\n",
" return tensor.unsqueeze(0).to(torch.float32)\n",
"\n",
"\n",
"def load_model(filename='yolo_v4_neuron.pt'):\n",
" \"\"\"\n",
" Load and pre-warm the Yolo V4 model.\n",
" \n",
" Args:\n",
" filename (str): The location to load the model from.\n",
" \n",
" Returns:\n",
" model (torch.nn.Module): The torch model.\n",
" \"\"\"\n",
" \n",
" # Load model from disk\n",
" model = torch.jit.load(filename)\n",
"\n",
" # Warm up model on neuron by running a single example image\n",
" filename = next(iter(get_image_filenames()))\n",
" image = preprocess(filename)\n",
" model(image)\n",
"\n",
" return model\n",
"\n",
"\n",
"def task(model, filename):\n",
" \"\"\"\n",
" The thread task to perform prediction.\n",
" \n",
" This does the full end-to-end processing of an image from loading from disk\n",
" all the way to classifying and filtering bounding boxes.\n",
" \n",
" Args:\n",
" model (torch.nn.Module): The model to run processing with\n",
" filename (str): The image file to load from disk. \n",
" \n",
" Returns:\n",
" boxes (torch.Tensor): The Yolo V4 bounding boxes.\n",
" scores (torch.Tensor): The label score for each box.\n",
" labels (torch.Tensor): The label for each box. \n",
" \"\"\"\n",
" image = preprocess(filename)\n",
" begin = time.time()\n",
" boxes, scores = model(image)\n",
" delta = time.time() - begin\n",
" return postprocess(boxes, scores), delta\n",
"\n",
"\n",
"def benchmark():\n",
" \"\"\"\n",
" Run a benchmark on the entire COCO dataset against the neuron model.\n",
" \"\"\"\n",
" \n",
" # Load a model into each NeuronCore\n",
" models = [load_model() for _ in range(n_cores)]\n",
" \n",
" # Create input/output lists\n",
" filenames = list(get_image_filenames())\n",
" results = list()\n",
" latency = list()\n",
" \n",
" # We want to keep track of average completion time per thread\n",
" sum_time = 0.0\n",
" \n",
" # Submit all tasks and wait for them to finish\n",
" with ThreadPoolExecutor(n_threads) as pool:\n",
" for i, filename in enumerate(filenames):\n",
" result = pool.submit(task, models[i % len(models)], filename)\n",
" results.append(result)\n",
" for result in results:\n",
    "            output, times = result.result() # Note: outputs unused for benchmark\n",
" latency.append(times)\n",
" sum_time += times\n",
" \n",
" print('Duration: ', sum_time / n_threads)\n",
" print('Images Per Second:', len(filenames) / (sum_time / n_threads))\n",
" print(\"Latency P50: {:.1f}\".format(np.percentile(latency[1000:], 50)*1000.0))\n",
" print(\"Latency P90: {:.1f}\".format(np.percentile(latency[1000:], 90)*1000.0))\n",
" print(\"Latency P95: {:.1f}\".format(np.percentile(latency[1000:], 95)*1000.0))\n",
" print(\"Latency P99: {:.1f}\".format(np.percentile(latency[1000:], 99)*1000.0))\n",
"\n",
"benchmark()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
</pre></body></html> | 2023-09-29T20:55:25.738Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/github-changes.rst.txt | ```
.. post:: Oct 10, 2022 02:00
:language: en
:tags: github
.. _announce-aws-neuron-github-org:
Introducing New Neuron GitHub Repositories
------------------------------------------
Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, Neuron GitHub repositories will be migrated
to the new `AWS Neuron GitHub Organization <https://github.com/aws-neuron>`_.
In addition to the `Neuron SDK GitHub <https://github.com/aws-neuron/aws-neuron-sdk>`_ repository, the organization will include the following new GitHub repositories:
.. list-table:: AWS Neuron GitHub Organization
:widths: auto
:header-rows: 1
:align: left
:class: table-smaller-font-size
* - New GitHub repository
- Description
* - `AWS Neuron Samples <https://github.com/aws-neuron/aws-neuron-samples>`_
- Repository that hosts examples and scripts used in the Neuron documentation tutorials
* - `AWS Neuron Reference for Megatron-LM <https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm>`_
- Repository that hosts Neuron support for Megatron-LM
* - `AWS Neuron Samples for AWS ParallelCluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`_
- Repository that hosts Neuron support for AWS ParallelCluster
``` |  | 2023-09-29T20:55:25.746Z |
TensorFlow-Model-Server-Neuron 2.x Release Notes — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron-v2.html#tensorflow-modelserver-rn-v2 | # TensorFlow-Model-Server-Neuron 2.x Release Notes — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## TensorFlow-Model-Server-Neuron 2.x Release Notes[#](#tensorflow-model-server-neuron-2-x-release-notes "Permalink to this headline")
Table of contents
- [TensorFlow Model Server Neuron 2.x release \[2.4.0.0\]](#tensorflow-model-server-neuron-2-x-release-2-4-0-0)
- [TensorFlow Model Server Neuron 2.x release \[2.3.0.0\]](#tensorflow-model-server-neuron-2-x-release-2-3-0-0)
- [TensorFlow Model Server Neuron 2.x release \[2.2.0.0\]](#tensorflow-model-server-neuron-2-x-release-2-2-0-0)
- [New in this release](#new-in-this-release)
- [Summary](#summary)
This document lists the release notes for the TensorFlow-Model-Server-Neuron package.
## [TensorFlow Model Server Neuron 2.x release \[2.4.0.0\]](#id2)[#](#tensorflow-model-server-neuron-2-x-release-2-4-0-0 "Permalink to this headline")
Date: 11/23/2022
- Deprecated the NEURONCORE\_GROUP\_SIZES environment variable.
- Minor bug fixes.
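The role of the deprecated variable — restricting a process to a subset of NeuronCores — is handled in Neuron Runtime 2.x by environment variables such as `NEURON_RT_VISIBLE_CORES`. A minimal sketch (the core range `0-3` is an arbitrary example, not a value from these release notes):

```python
import os

# Sketch: in Neuron Runtime 2.x, NEURON_RT_VISIBLE_CORES restricts a process
# to a range of NeuronCores, replacing the deprecated NEURONCORE_GROUP_SIZES.
# It must be set before the runtime is loaded (i.e. before starting the model
# server or importing the framework); "0-3" here is an arbitrary example.
os.environ["NEURON_RT_VISIBLE_CORES"] = "0-3"

print(os.environ["NEURON_RT_VISIBLE_CORES"])
```

In practice this would be exported in the shell or service definition that launches the model server, so the runtime sees it at load time.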
## [TensorFlow Model Server Neuron 2.x release \[2.3.0.0\]](#id3)[#](#tensorflow-model-server-neuron-2-x-release-2-3-0-0 "Permalink to this headline")
Date: 04/29/2022
- Added support for tensorflow-model-serving 2.8.0.
## [TensorFlow Model Server Neuron 2.x release \[2.2.0.0\]](#id4)[#](#tensorflow-model-server-neuron-2-x-release-2-2-0-0 "Permalink to this headline")
Date: 03/25/2022
- Updated tensorflow-serving 2.5 to 2.5.4.
- Added support for tensorflow-model-serving 2.6 and 2.7.
### TensorFlow Model Server Neuron 2.x release \[2.1.6.0\][#](#tensorflow-model-server-neuron-2-x-release-2-1-6-0 "Permalink to this headline")
Date: 01/20/2022
- Updated tensorflow-model-server 2.5 to version 2.5.3.
### TensorFlow Model Server Neuron 2.x release \[2.0.4.0\][#](#tensorflow-model-server-neuron-2-x-release-2-0-4-0 "Permalink to this headline")
Date: 11/05/2021
- Updated Neuron Runtime (which is integrated within this package) to `libnrt 2.2.18.0` to fix an issue that prevented the use of containers when /dev/neuron0 was not present. See the Neuron Runtime release notes for details.
### TensorFlow Model Server Neuron 2.x release \[2.0.3.0\][#](#tensorflow-model-server-neuron-2-x-release-2-0-3-0 "Permalink to this headline")
Date: 10/27/2021
## [New in this release](#id5)[#](#new-in-this-release "Permalink to this headline")
- TensorFlow Model Server Neuron 2.x now supports only Neuron Runtime 2.x (the `libnrt.so` shared library).
> Important
>
> - You must update to the latest Neuron Driver (`aws-neuron-dkms` version 2.1 or newer) for proper functionality of the new runtime library.
>
> - Read the [Introducing Neuron Runtime 2.x (libnrt.so)](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt) application note, which describes [why we are making this change](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-why) and how [this change will affect the Neuron SDK](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-how-sdk) in detail.
>
> - Read [Migrate your application to Neuron Runtime 2.x (libnrt.so)](../../../general/appnotes/neuron1x/introducing-libnrt.html#neuron-migrating-apps-neuron-to-libnrt) for detailed information on how to migrate your application.
>
### TensorFlow Model Server Neuron 2.x release \[1.6.8.0\][#](#tensorflow-model-server-neuron-2-x-release-1-6-8-0 "Permalink to this headline")
Date: 08/12/2021
## [Summary](#id6)[#](#summary "Permalink to this headline")
TensorFlow 2.x - tensorflow-model-server-neuron now supports TensorFlow 2.x; package versions 2.1.4, 2.2.2, 2.3.0, 2.4.1, and 2.5.1 support TensorFlow 2.x.
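Since tensorflow-model-server-neuron is built on TensorFlow Serving, a served model answers the standard TF Serving REST predict API. A minimal request-building sketch (the model name `bert`, host, and port `8501` are illustrative assumptions, not values from this document):

```python
import json

def build_predict_request(instances, model="bert", host="localhost", port=8501):
    """Build the URL and JSON body for TF Serving's REST predict endpoint."""
    # Standard TF Serving REST shape: POST /v1/models/<name>:predict
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request([[0.1, 0.2, 0.3]])
print(url)  # http://localhost:8501/v1/models/bert:predict
```

The returned URL and body can be sent with any HTTP client; the `instances` list must match the input signature of the exported SavedModel.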
 | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TensorFlow-Model-Server-Neuron 2.x Release Notes — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron-v2", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
</div>
</div>
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__right">
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron-v2.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-4-0-0">
TensorFlow Model Server Neuron 2.x release [2.4.0.0]
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-3-0-0">
TensorFlow Model Server Neuron 2.x release [2.3.0.0]
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-2-0-0">
TensorFlow Model Server Neuron 2.x release [2.2.0.0]
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-1-6-0">
TensorFlow Model Server Neuron 2.x release [2.1.6.0]
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-0-4-0">
TensorFlow Model Server Neuron 2.x release [2.0.4.0]
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-0-3-0">
TensorFlow Model Server Neuron 2.x release [2.0.3.0]
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#new-in-this-release">
New in this release
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-1-6-8-0">
TensorFlow Model Server Neuron 2.x release [1.6.8.0]
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#summary">
Summary
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>TensorFlow-Model-Server-Neuron 2.x Release Notes</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-4-0-0">
TensorFlow Model Server Neuron 2.x release [2.4.0.0]
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-3-0-0">
TensorFlow Model Server Neuron 2.x release [2.3.0.0]
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-2-0-0">
TensorFlow Model Server Neuron 2.x release [2.2.0.0]
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-1-6-0">
TensorFlow Model Server Neuron 2.x release [2.1.6.0]
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-0-4-0">
TensorFlow Model Server Neuron 2.x release [2.0.4.0]
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-2-0-3-0">
TensorFlow Model Server Neuron 2.x release [2.0.3.0]
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#new-in-this-release">
New in this release
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#tensorflow-model-server-neuron-2-x-release-1-6-8-0">
TensorFlow Model Server Neuron 2.x release [1.6.8.0]
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#summary">
Summary
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-notes">
<span id="tensorflow-modelserver-rn-v2"></span><h1>TensorFlow-Model-Server-Neuron 2.x Release Notes<a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-notes" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-2-x-release-2-4-0-0" id="id2">TensorFlow Model Server Neuron 2.x release [2.4.0.0]</a></p></li>
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-2-x-release-2-3-0-0" id="id3">TensorFlow Model Server Neuron 2.x release [2.3.0.0]</a></p></li>
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-2-x-release-2-2-0-0" id="id4">TensorFlow Model Server Neuron 2.x release [2.2.0.0]</a></p></li>
<li><p><a class="reference internal" href="#new-in-this-release" id="id5">New in this release</a></p></li>
<li><p><a class="reference internal" href="#summary" id="id6">Summary</a></p></li>
</ul>
</div>
<p>This document lists the release notes for the
TensorFlow-Model-Server-Neuron package.</p>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-4-0-0">
<h2><a class="toc-backref" href="#id2">TensorFlow Model Server Neuron 2.x release [2.4.0.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-4-0-0" title="Permalink to this headline">#</a></h2>
<p>Date: 11/23/2022</p>
<ul class="simple">
<li><p>Deprecated the NEURONCORE_GROUP_SIZES environment variable.</p></li>
<li><p>Minor bug fixes.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-3-0-0">
<h2><a class="toc-backref" href="#id3">TensorFlow Model Server Neuron 2.x release [2.3.0.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-3-0-0" title="Permalink to this headline">#</a></h2>
<p>Date: 04/29/2022</p>
<ul class="simple">
<li><p>Added support for tensorflow-model-serving 2.8.0.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-2-0-0">
<h2><a class="toc-backref" href="#id4">TensorFlow Model Server Neuron 2.x release [2.2.0.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-2-0-0" title="Permalink to this headline">#</a></h2>
<p>Date: 03/25/2022</p>
<ul class="simple">
<li><p>Updated tensorflow-serving 2.5 to 2.5.4.</p></li>
<li><p>Added support for tensorflow-model-serving 2.6 and 2.7.</p></li>
</ul>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-1-6-0">
<h3>TensorFlow Model Server Neuron 2.x release [2.1.6.0]<a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-1-6-0" title="Permalink to this headline">#</a></h3>
<p>Date: 01/20/2022</p>
<ul class="simple">
<li><p>Updated tensorflow-model-server 2.5 to version 2.5.3</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-0-4-0">
<h3>TensorFlow Model Server Neuron 2.x release [2.0.4.0]<a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-0-4-0" title="Permalink to this headline">#</a></h3>
<p>Date: 11/05/2021</p>
<ul class="simple">
<li><p>Updated Neuron Runtime (which is integrated within this package) to <code class="docutils literal notranslate"><span class="pre">libnrt</span> <span class="pre">2.2.18.0</span></code> to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details in <span class="xref std std-ref">neuron-runtime-release-notes</span>.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-2-0-3-0">
<h3>TensorFlow Model Server Neuron 2.x release [2.0.3.0]<a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-2-0-3-0" title="Permalink to this headline">#</a></h3>
<p>Date: 10/27/2021</p>
</div>
</div>
<div class="section" id="new-in-this-release">
<h2><a class="toc-backref" href="#id5">New in this release</a><a class="headerlink" href="#new-in-this-release" title="Permalink to this headline">#</a></h2>
<ul>
<li><p>TensorFlow Model Server Neuron 2.x now supports Neuron Runtime 2.x (<code class="docutils literal notranslate"><span class="pre">libnrt.so</span></code> shared library) only.</p>
<blockquote>
<div><div class="admonition important">
<p class="admonition-title">Important</p>
<ul class="simple">
<li><p>You must update to the latest Neuron Driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> version 2.1 or newer)
for proper functionality of the new runtime library.</p></li>
<li><p>Read the <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt"><span class="std std-ref">Introducing Neuron Runtime 2.x (libnrt.so)</span></a>
application note, which describes <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-why"><span class="std std-ref">why we are making this change</span></a>
and how <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-how-sdk"><span class="std std-ref">this change will affect the Neuron SDK</span></a> in detail.</p></li>
<li><p>Read <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#neuron-migrating-apps-neuron-to-libnrt"><span class="std std-ref">Migrate your application to Neuron Runtime 2.x (libnrt.so)</span></a> for detailed information on how to
migrate your application.</p></li>
</ul>
</div>
</div></blockquote>
</li>
</ul>
<div class="section" id="tensorflow-model-server-neuron-2-x-release-1-6-8-0">
<span id="id1"></span><h3>TensorFlow Model Server Neuron 2.x release [1.6.8.0]<a class="headerlink" href="#tensorflow-model-server-neuron-2-x-release-1-6-8-0" title="Permalink to this headline">#</a></h3>
<p>Date: 08/12/2021</p>
</div>
</div>
<div class="section" id="summary">
<h2><a class="toc-backref" href="#id6">Summary</a><a class="headerlink" href="#summary" title="Permalink to this headline">#</a></h2>
<p>tensorflow-model-server-neuron now supports TensorFlow 2.x; package versions 2.1.4, 2.2.2, 2.3.0, 2.4.1, and 2.5.1 support TensorFlow 2.x.</p>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html> | 2023-09-29T20:55:25.823Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/support.rst.txt | ```
.. _neuron_support:
Support
=======
.. toctree::
:maxdepth: 1
SDK Maintenance Policy </general/sdk-policy>
Security Disclosures </general/security>
Contact Us </general/contact>
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron_support:
Support
=======
.. toctree::
:maxdepth: 1
SDK Maintenance Policy </general/sdk-policy>
Security Disclosures </general/security>
Contact Us </general/contact>
</pre></body></html> | 2023-09-29T20:55:25.834Z | |
TensorFlow-Model-Server-Neuron 1.x Release Notes — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron.html#tensorflow-modelserver-rn | # TensorFlow-Model-Server-Neuron 1.x Release Notes — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## TensorFlow-Model-Server-Neuron 1.x Release Notes[#](#tensorflow-model-server-neuron-1-x-release-notes "Permalink to this headline")
Table of contents
- [TensorFlow Model Server Neuron 1.x release \[2.4.0.0\]](#tensorflow-model-server-neuron-1-x-release-2-4-0-0)
- [TensorFlow Model Server Neuron 1.x release \[2.2.0.0\]](#tensorflow-model-server-neuron-1-x-release-2-2-0-0)
- [TensorFlow Model Server Neuron 1.x release \[2.0.4.0\]](#tensorflow-model-server-neuron-1-x-release-2-0-4-0)
- [TensorFlow Model Server Neuron 1.x release \[2.0.3.0\]](#tensorflow-model-server-neuron-1-x-release-2-0-3-0)
- [\[1.15.0.1.5.1.0\]](#id1)
- [\[1.15.0.1.4.0.0\]](#id3)
- [\[1.15.0.1.3.3.0\]](#id6)
- [\[1.15.0.1.2.9.0\]](#id9)
- [\[1.15.0.1.2.8.0\]](#id12)
- [\[1.15.0.1.2.2.0\]](#id15)
- [\[1.15.0.1.1.3.0\]](#id18)
- [\[1.15.0.1.0.2168.0\]](#id21)
- [\[1.15.0.1.0.2043.0\]](#id24)
- [\[1.15.0.1.0.1965.0\]](#id27)
- [\[1.15.0.1.0.1953.0\]](#id30)
- [\[1.15.0.1.0.1891.0\]](#id33)
- [\[1.15.0.1.0.1796.0\]](#id36)
- [\[1.15.0.1.0.1572.0\]](#id39)
- [\[1.15.0.1.0.1333.0\]](#id42)
- [\[1.15.0.1.0.1240.0\]](#id45)
- [\[1.15.0.1.0.997.0\]](#id48)
- [\[1.15.0.1.0.803.0\]](#id51)
- [\[1.15.0.1.0.749.0\]](#id54)
- [\[1.15.0.1.0.663.0\]](#id57)
This document lists the release notes for the TensorFlow-Model-Server-Neuron package.
## [TensorFlow Model Server Neuron 1.x release \[2.4.0.0\]](#id60)[#](#tensorflow-model-server-neuron-1-x-release-2-4-0-0 "Permalink to this headline")
Date: 11/23/2022
- Deprecated the NEURONCORE\_GROUP\_SIZES environment variable.
- Minor bug fixes.
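For deployments that still set the deprecated variable, a minimal sketch of the runtime-level replacement is shown below. The `NEURON_RT_*` names come from the Neuron runtime configuration documentation, not from this release note, and the core count is illustrative:

```python
import os

# NEURONCORE_GROUP_SIZES (deprecated above) is superseded by NEURON_RT_* vars.
os.environ.pop("NEURONCORE_GROUP_SIZES", None)   # drop the deprecated variable
os.environ["NEURON_RT_NUM_CORES"] = "2"          # request 2 NeuronCores for this process
# Alternative: pin to explicit core IDs instead of a count:
# os.environ["NEURON_RT_VISIBLE_CORES"] = "0-1"
print(os.environ["NEURON_RT_NUM_CORES"])
```

Normally only one of the two variables is set; per the runtime docs, `NEURON_RT_VISIBLE_CORES` takes precedence when both are present.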
## [TensorFlow Model Server Neuron 1.x release \[2.2.0.0\]](#id61)[#](#tensorflow-model-server-neuron-1-x-release-2-2-0-0 "Permalink to this headline")
Date: 03/25/2022
- Minor bug fixes.
## [TensorFlow Model Server Neuron 1.x release \[2.0.4.0\]](#id62)[#](#tensorflow-model-server-neuron-1-x-release-2-0-4-0 "Permalink to this headline")
Date: 11/05/2021
- Updated Neuron Runtime (which is integrated within this package) to `libnrt 2.2.18.0` to fix a container issue that was preventing the use of containers when /dev/neuron0 was not present. See details in neuron-runtime-release-notes.
## [TensorFlow Model Server Neuron 1.x release \[2.0.3.0\]](#id63)[#](#tensorflow-model-server-neuron-1-x-release-2-0-3-0 "Permalink to this headline")
Date: 10/27/2021
### New in this release[#](#new-in-this-release "Permalink to this headline")
- TensorFlow Model Server Neuron 1.x now supports Neuron Runtime 2.x (`libnrt.so` shared library) only.
> Important
>
> - You must update to the latest Neuron Driver (`aws-neuron-dkms` version 2.1 or newer) for proper functionality of the new runtime library.
>
> - Read the [Introducing Neuron Runtime 2.x (libnrt.so)](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt) application note, which describes [why we are making this change](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-why) and how [this change will affect the Neuron SDK](../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-how-sdk) in detail.
>
> - Read [Migrate your application to Neuron Runtime 2.x (libnrt.so)](../../../general/appnotes/neuron1x/introducing-libnrt.html#neuron-migrating-apps-neuron-to-libnrt) for detailed information on how to migrate your application.
>
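The driver requirement above can be verified before upgrading. The sketch below is a hypothetical pre-flight check; on a real host the `installed` string would come from the package manager (e.g. the version of the `aws-neuron-dkms` package), and the values shown are placeholders:

```python
# Compare an installed Neuron driver version against the 2.1 minimum
# required by the libnrt-based model server (per the note above).
REQUIRED = (2, 1)

def version_tuple(version):
    """Turn a dotted version string like '2.3.26.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_ok(installed, required=REQUIRED):
    # Compare only as many components as the requirement specifies.
    return version_tuple(installed)[:len(required)] >= required

print(driver_ok("2.3.26.0"))  # True: 2.3 >= 2.1
print(driver_ok("1.9.0"))     # False: driver predates the 2.x runtime
```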
## [\[1.15.0.1.5.1.0\]](#id64)[#](#id1 "Permalink to this headline")
Date: 07/02/2021
## [\[1.15.0.1.4.0.0\]](#id65)[#](#id3 "Permalink to this headline")
Date: 05/24/2021
### Summary[#](#id5 "Permalink to this headline")
1. Removed the SIGINT/SIGTERM handler; resource cleanup now relies on mechanisms provided by the Neuron runtime.
2. Uncapped the protobuf message size limit.
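Uncapping matters mostly for large batched tensor requests. A rough size estimate follows; the 4 MiB and 2 GiB figures are the common gRPC/protobuf defaults, not values taken from this release note:

```python
# Approximate the serialized size of a float32 image batch and compare it to
# the default message-size ceilings that an uncapped server no longer imposes.
GRPC_DEFAULT_MAX_MSG = 4 * 1024 * 1024        # common gRPC default (4 MiB)
PROTOBUF_HARD_LIMIT = 2 * 1024 * 1024 * 1024  # protobuf's 2 GiB ceiling

def payload_bytes(batch, height, width, channels, dtype_bytes=4):
    """Raw tensor bytes; ignores the small protobuf framing overhead."""
    return batch * height * width * channels * dtype_bytes

size = payload_bytes(64, 224, 224, 3)      # a 64-image, 224x224 RGB batch
print(size, size > GRPC_DEFAULT_MAX_MSG)   # well past the 4 MiB default
```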
## [\[1.15.0.1.3.3.0\]](#id66)[#](#id6 "Permalink to this headline")
Date: 05/01/2021
## [\[1.15.0.1.2.9.0\]](#id67)[#](#id9 "Permalink to this headline")
Date: 03/04/2021
## [\[1.15.0.1.2.8.0\]](#id68)[#](#id12 "Permalink to this headline")
Date: 02/24/2021
## [\[1.15.0.1.2.2.0\]](#id69)[#](#id15 "Permalink to this headline")
Date: 01/30/2021
## [\[1.15.0.1.1.3.0\]](#id70)[#](#id18 "Permalink to this headline")
Date: 12/23/2020
## [\[1.15.0.1.0.2168.0\]](#id71)[#](#id21 "Permalink to this headline")
Date: 11/17/2020
## [\[1.15.0.1.0.2043.0\]](#id72)[#](#id24 "Permalink to this headline")
Date: 09/22/2020
## [\[1.15.0.1.0.1965.0\]](#id73)[#](#id27 "Permalink to this headline")
Date: 08/08/2020
## [\[1.15.0.1.0.1953.0\]](#id74)[#](#id30 "Permalink to this headline")
Date: 08/05/2020
## [\[1.15.0.1.0.1891.0\]](#id75)[#](#id33 "Permalink to this headline")
Date: 07/16/2020
## [\[1.15.0.1.0.1796.0\]](#id76)[#](#id36 "Permalink to this headline")
Date: 6/11/2020
## [\[1.15.0.1.0.1572.0\]](#id77)[#](#id39 "Permalink to this headline")
Date: 5/11/2020
## [\[1.15.0.1.0.1333.0\]](#id78)[#](#id42 "Permalink to this headline")
Date: 3/26/2020
## [\[1.15.0.1.0.1240.0\]](#id79)[#](#id45 "Permalink to this headline")
Date: 2/27/2020
## [\[1.15.0.1.0.997.0\]](#id80)[#](#id48 "Permalink to this headline")
Date: 1/27/2019
## [\[1.15.0.1.0.803.0\]](#id81)[#](#id51 "Permalink to this headline")
Date: 12/20/2019
## [\[1.15.0.1.0.749.0\]](#id82)[#](#id54 "Permalink to this headline")
Date: 12/1/2019
## [\[1.15.0.1.0.663.0\]](#id83)[#](#id57 "Permalink to this headline")
Date: 11/29/2019
### Summary[#](#summary-11 "Permalink to this headline")
This version is available only in the released DLAMI v26.0. See the TensorFlow-Neuron Release Notes, and please update to the latest version.
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TensorFlow-Model-Server-Neuron 1.x Release Notes — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
</div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
<div class="section" id="tensorflow-model-server-neuron-1-x-release-notes">
<span id="tensorflow-modeslserver-neuron-rn"></span><span id="tensorflow-modelserver-rn"></span><h1>TensorFlow-Model-Server-Neuron 1.x Release Notes<a class="headerlink" href="#tensorflow-model-server-neuron-1-x-release-notes" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-1-x-release-2-4-0-0" id="id60">TensorFlow Model Server Neuron 1.x release [2.4.0.0]</a></p></li>
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-1-x-release-2-2-0-0" id="id61">TensorFlow Model Server Neuron 1.x release [2.2.0.0]</a></p></li>
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-1-x-release-2-0-4-0" id="id62">TensorFlow Model Server Neuron 1.x release [2.0.4.0]</a></p></li>
<li><p><a class="reference internal" href="#tensorflow-model-server-neuron-1-x-release-2-0-3-0" id="id63">TensorFlow Model Server Neuron 1.x release [2.0.3.0]</a></p></li>
<li><p><a class="reference internal" href="#id1" id="id64">[1.15.0.1.5.1.0]</a></p></li>
<li><p><a class="reference internal" href="#id3" id="id65">[1.15.0.1.4.0.0]</a></p></li>
<li><p><a class="reference internal" href="#id6" id="id66">[1.15.0.1.3.3.0]</a></p></li>
<li><p><a class="reference internal" href="#id9" id="id67">[1.15.0.1.2.9.0]</a></p></li>
<li><p><a class="reference internal" href="#id12" id="id68">[1.15.0.1.2.8.0]</a></p></li>
<li><p><a class="reference internal" href="#id15" id="id69">[1.15.0.1.2.2.0]</a></p></li>
<li><p><a class="reference internal" href="#id18" id="id70">[1.15.0.1.1.3.0]</a></p></li>
<li><p><a class="reference internal" href="#id21" id="id71">[1.15.0.1.0.2168.0]</a></p></li>
<li><p><a class="reference internal" href="#id24" id="id72">[1.15.0.1.0.2043.0]</a></p></li>
<li><p><a class="reference internal" href="#id27" id="id73">[1.15.0.1.0.1965.0]</a></p></li>
<li><p><a class="reference internal" href="#id30" id="id74">[1.15.0.1.0.1953.0]</a></p></li>
<li><p><a class="reference internal" href="#id33" id="id75">[1.15.0.1.0.1891.0]</a></p></li>
<li><p><a class="reference internal" href="#id36" id="id76">[1.15.0.1.0.1796.0]</a></p></li>
<li><p><a class="reference internal" href="#id39" id="id77">[1.15.0.1.0.1572.0]</a></p></li>
<li><p><a class="reference internal" href="#id42" id="id78">[1.15.0.1.0.1333.0]</a></p></li>
<li><p><a class="reference internal" href="#id45" id="id79">[1.15.0.1.0.1240.0]</a></p></li>
<li><p><a class="reference internal" href="#id48" id="id80">[1.15.0.1.0.997.0]</a></p></li>
<li><p><a class="reference internal" href="#id51" id="id81">[1.15.0.1.0.803.0]</a></p></li>
<li><p><a class="reference internal" href="#id54" id="id82">[1.15.0.1.0.749.0]</a></p></li>
<li><p><a class="reference internal" href="#id57" id="id83">[1.15.0.1.0.663.0]</a></p></li>
</ul>
</div>
<p>This document lists the release notes for the
TensorFlow-Model-Server-Neuron package.</p>
<div class="section" id="tensorflow-model-server-neuron-1-x-release-2-4-0-0">
<h2><a class="toc-backref" href="#id60">TensorFlow Model Server Neuron 1.x release [2.4.0.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-1-x-release-2-4-0-0" title="Permalink to this headline">#</a></h2>
<p>Date: 11/23/2022</p>
<ul class="simple">
<li><p>Deprecated the NEURONCORE_GROUP_SIZES environment variable.</p></li>
<li><p>Minor bug fixes.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-1-x-release-2-2-0-0">
<h2><a class="toc-backref" href="#id61">TensorFlow Model Server Neuron 1.x release [2.2.0.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-1-x-release-2-2-0-0" title="Permalink to this headline">#</a></h2>
<p>Date: 03/25/2022</p>
<ul class="simple">
<li><p>Minor bug fixes.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-1-x-release-2-0-4-0">
<h2><a class="toc-backref" href="#id62">TensorFlow Model Server Neuron 1.x release [2.0.4.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-1-x-release-2-0-4-0" title="Permalink to this headline">#</a></h2>
<p>Date: 11/05/2021</p>
<ul class="simple">
<li><p>Updated the Neuron Runtime (integrated within this package) to <code class="docutils literal notranslate"><span class="pre">libnrt</span> <span class="pre">2.2.18.0</span></code> to fix an issue that prevented containers from being used when /dev/neuron0 was not present. See details in <span class="xref std std-ref">neuron-runtime-release-notes</span>.</p></li>
</ul>
</div>
<div class="section" id="tensorflow-model-server-neuron-1-x-release-2-0-3-0">
<h2><a class="toc-backref" href="#id63">TensorFlow Model Server Neuron 1.x release [2.0.3.0]</a><a class="headerlink" href="#tensorflow-model-server-neuron-1-x-release-2-0-3-0" title="Permalink to this headline">#</a></h2>
<p>Date: 10/27/2021</p>
<div class="section" id="new-in-this-release">
<h3>New in this release<a class="headerlink" href="#new-in-this-release" title="Permalink to this headline">#</a></h3>
<ul>
<li><p>TensorFlow Model Server Neuron 1.x now supports only Neuron Runtime 2.x (<code class="docutils literal notranslate"><span class="pre">libnrt.so</span></code> shared library).</p>
<blockquote>
<div><div class="admonition important">
<p class="admonition-title">Important</p>
<ul class="simple">
<li><p>You must update to the latest Neuron Driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> version 2.1 or newer)
for proper functionality of the new runtime library.</p></li>
<li><p>Read <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt"><span class="std std-ref">Introducing Neuron Runtime 2.x (libnrt.so)</span></a>
application note that describes <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-why"><span class="std std-ref">why we are making this change</span></a> and
how <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#introduce-libnrt-how-sdk"><span class="std std-ref">this change will affect the Neuron
SDK</span></a> in detail.</p></li>
<li><p>Read <a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html#neuron-migrating-apps-neuron-to-libnrt"><span class="std std-ref">Migrate your application to Neuron Runtime 2.x (libnrt.so)</span></a> for detailed information of how to
migrate your application.</p></li>
</ul>
</div>
</div></blockquote>
</li>
</ul>
</div>
</div>
<div class="section" id="id1">
<span id="id2"></span><h2><a class="toc-backref" href="#id64">[1.15.0.1.5.1.0]</a><a class="headerlink" href="#id1" title="Permalink to this headline">#</a></h2>
<p>Date: 07/02/2021</p>
<div class="section" id="summary">
<h3>Summary<a class="headerlink" href="#summary" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id3">
<span id="id4"></span><h2><a class="toc-backref" href="#id65">[1.15.0.1.4.0.0]</a><a class="headerlink" href="#id3" title="Permalink to this headline">#</a></h2>
<p>Date: 05/24/2021</p>
<div class="section" id="id5">
<h3>Summary<a class="headerlink" href="#id5" title="Permalink to this headline">#</a></h3>
<ol class="arabic simple">
<li><p>Remove SIGINT/SIGTERM handler and rely on mechanisms provided by the Neuron runtime for resource cleanup.</p></li>
<li><p>Uncap protobuf size limit.</p></li>
</ol>
</div>
</div>
<div class="section" id="id6">
<span id="id7"></span><h2><a class="toc-backref" href="#id66">[1.15.0.1.3.3.0]</a><a class="headerlink" href="#id6" title="Permalink to this headline">#</a></h2>
<p>Date: 05/01/2021</p>
<div class="section" id="id8">
<h3>Summary<a class="headerlink" href="#id8" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id9">
<span id="id10"></span><h2><a class="toc-backref" href="#id67">[1.15.0.1.2.9.0]</a><a class="headerlink" href="#id9" title="Permalink to this headline">#</a></h2>
<p>Date: 03/04/2021</p>
<div class="section" id="id11">
<h3>Summary<a class="headerlink" href="#id11" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id12">
<span id="id13"></span><h2><a class="toc-backref" href="#id68">[1.15.0.1.2.8.0]</a><a class="headerlink" href="#id12" title="Permalink to this headline">#</a></h2>
<p>Date: 02/24/2021</p>
<div class="section" id="id14">
<h3>Summary<a class="headerlink" href="#id14" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id15">
<span id="id16"></span><h2><a class="toc-backref" href="#id69">[1.15.0.1.2.2.0]</a><a class="headerlink" href="#id15" title="Permalink to this headline">#</a></h2>
<p>Date: 01/30/2021</p>
<div class="section" id="id17">
<h3>Summary<a class="headerlink" href="#id17" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id18">
<span id="id19"></span><h2><a class="toc-backref" href="#id70">[1.15.0.1.1.3.0]</a><a class="headerlink" href="#id18" title="Permalink to this headline">#</a></h2>
<p>Date: 12/23/2020</p>
<div class="section" id="id20">
<h3>Summary<a class="headerlink" href="#id20" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id21">
<span id="id22"></span><h2><a class="toc-backref" href="#id71">[1.15.0.1.0.2168.0]</a><a class="headerlink" href="#id21" title="Permalink to this headline">#</a></h2>
<p>Date: 11/17/2020</p>
<div class="section" id="id23">
<h3>Summary<a class="headerlink" href="#id23" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id24">
<span id="id25"></span><h2><a class="toc-backref" href="#id72">[1.15.0.1.0.2043.0]</a><a class="headerlink" href="#id24" title="Permalink to this headline">#</a></h2>
<p>Date: 09/22/2020</p>
<div class="section" id="id26">
<h3>Summary<a class="headerlink" href="#id26" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id27">
<span id="id28"></span><h2><a class="toc-backref" href="#id73">[1.15.0.1.0.1965.0]</a><a class="headerlink" href="#id27" title="Permalink to this headline">#</a></h2>
<p>Date: 08/08/2020</p>
<div class="section" id="summary-1">
<span id="id29"></span><h3>Summary<a class="headerlink" href="#summary-1" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id30">
<span id="id31"></span><h2><a class="toc-backref" href="#id74">[1.15.0.1.0.1953.0]</a><a class="headerlink" href="#id30" title="Permalink to this headline">#</a></h2>
<p>Date: 08/05/2020</p>
<div class="section" id="summary-2">
<span id="id32"></span><h3>Summary<a class="headerlink" href="#summary-2" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id33">
<span id="id34"></span><h2><a class="toc-backref" href="#id75">[1.15.0.1.0.1891.0]</a><a class="headerlink" href="#id33" title="Permalink to this headline">#</a></h2>
<p>Date: 07/16/2020</p>
<div class="section" id="summary-3">
<span id="id35"></span><h3>Summary<a class="headerlink" href="#summary-3" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id36">
<span id="id37"></span><h2><a class="toc-backref" href="#id76">[1.15.0.1.0.1796.0]</a><a class="headerlink" href="#id36" title="Permalink to this headline">#</a></h2>
<p>Date 6/11/2020</p>
<div class="section" id="summary-4">
<span id="id38"></span><h3>Summary<a class="headerlink" href="#summary-4" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id39">
<span id="id40"></span><h2><a class="toc-backref" href="#id77">[1.15.0.1.0.1572.0]</a><a class="headerlink" href="#id39" title="Permalink to this headline">#</a></h2>
<p>Date 5/11/2020</p>
<div class="section" id="summary-5">
<span id="id41"></span><h3>Summary<a class="headerlink" href="#summary-5" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id42">
<span id="id43"></span><h2><a class="toc-backref" href="#id78">[1.15.0.1.0.1333.0]</a><a class="headerlink" href="#id42" title="Permalink to this headline">#</a></h2>
<p>Date 3/26/2020</p>
<div class="section" id="summary-6">
<span id="id44"></span><h3>Summary<a class="headerlink" href="#summary-6" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id45">
<span id="id46"></span><h2><a class="toc-backref" href="#id79">[1.15.0.1.0.1240.0]</a><a class="headerlink" href="#id45" title="Permalink to this headline">#</a></h2>
<p>Date 2/27/2020</p>
<div class="section" id="summary-7">
<span id="id47"></span><h3>Summary<a class="headerlink" href="#summary-7" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id48">
<span id="id49"></span><h2><a class="toc-backref" href="#id80">[1.15.0.1.0.997.0]</a><a class="headerlink" href="#id48" title="Permalink to this headline">#</a></h2>
<p>Date 1/27/2020</p>
<div class="section" id="summary-8">
<span id="id50"></span><h3>Summary<a class="headerlink" href="#summary-8" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id51">
<span id="id52"></span><h2><a class="toc-backref" href="#id81">[1.15.0.1.0.803.0]</a><a class="headerlink" href="#id51" title="Permalink to this headline">#</a></h2>
<p>Date 12/20/2019</p>
<div class="section" id="summary-9">
<span id="id53"></span><h3>Summary<a class="headerlink" href="#summary-9" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id54">
<span id="id55"></span><h2><a class="toc-backref" href="#id82">[1.15.0.1.0.749.0]</a><a class="headerlink" href="#id54" title="Permalink to this headline">#</a></h2>
<p>Date 12/1/2019</p>
<div class="section" id="summary-10">
<span id="id56"></span><h3>Summary<a class="headerlink" href="#summary-10" title="Permalink to this headline">#</a></h3>
<p>No change. See <a class="reference internal" href="../tensorflow-neuron/tensorflow-neuron.html#tensorflow-neuron-release-notes"><span class="std std-ref">TensorFlow Neuron (tensorflow-neuron (TF1.x)) Release Notes</span></a> for related TensorFlow-Neuron release
notes.</p>
</div>
</div>
<div class="section" id="id57">
<span id="id58"></span><h2><a class="toc-backref" href="#id83">[1.15.0.1.0.663.0]</a><a class="headerlink" href="#id57" title="Permalink to this headline">#</a></h2>
<p>Date 11/29/2019</p>
<div class="section" id="summary-11">
<span id="id59"></span><h3>Summary<a class="headerlink" href="#summary-11" title="Permalink to this headline">#</a></h3>
<p>This version is available only in released DLAMI v26.0. See
TensorFlow-Neuron Release Notes. Please
<span class="xref std std-ref">update</span> to the latest version.</p>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
</div>
</div>
</div>
</div>
</main>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
</body></html> | 2023-09-29T20:55:25.980Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/transformers-marianmt.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Transformers MarianMT Tutorial\n",
"\n",
"In this tutorial, you will deploy the [HuggingFace MarianMT](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html) model for text translation.\n",
"\n",
"This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page.\n",
"\n",
"To generate text, you will be using the beam search algorithm to incrementally generate token candidates until the full output text has been created. Unlike simple single-pass models, this algorithm divides the work into two distinct phases:\n",
"\n",
"- **Encoder**: Convert the input text into an encoded representation. (Executed once)\n",
"- **Decoder**: Use the encoded representation of the input text and the current output tokens to incrementally generate the set of next best candidate tokens. (Executed many times)\n",
"\n",
"In this tutorial you will perform the following steps:\n",
"\n",
"- **Compile**: Compile both the Encoder and Decoder for Neuron using simplified interfaces for inference.\n",
"- **Infer**: Run on CPU and Neuron and compare results.\n",
"\n",
    "Finally, a completely unrolled decoder will be built, which simplifies the implementation at the cost of performing fixed-length inferences."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Dependencies:\n",
"\n",
"This tutorial has the following dependencies:\n",
"\n",
    "- `transformers==4.26.1`\n",
"- `torch-neuron`\n",
"- `sentencepiece`\n",
"- `neuron-cc[tensorflow]`\n",
"\n",
    "The following will install the required `transformers` version. Note that encoder/decoder API changes across minor versions require you to be specific about the version used. Also note that the `torch-neuron` version is pinned due to `transformers` compatibility issues."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install sentencepiece transformers==4.26.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Parameters\n",
"\n",
"The parameters of a generative model can be tuned for different use-cases. In this example, you'll tailor the parameters to a single inference beam search for an on-demand inference use-case. See the [MarianConfig](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html#marianconfig) for parameter details.\n",
"\n",
"Rather than varying the encoder/decoder token sizes at runtime, you must define these parameters prior to compilation. The encoder/decoder token sizes are important tunable parameters as a large token sequence will offer greater sentence length flexibility but perform worse than a small token sequence.\n",
"\n",
    "To maximize performance on Neuron, the `num_beams`, `max_encoder_length`, and `max_decoder_length` should be made as small as possible for the use-case.\n",
"\n",
    "For this tutorial you will use a model that translates sentences of up to 32 tokens from English to German."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
    "%env TOKENIZERS_PARALLELISM=True # Suppresses tokenizer warnings, making errors easier to detect\n",
"model_name = \"Helsinki-NLP/opus-mt-en-de\" # English -> German model\n",
"num_texts = 1 # Number of input texts to decode\n",
"num_beams = 4 # Number of beams per input text\n",
"max_encoder_length = 32 # Maximum input token length\n",
"max_decoder_length = 32 # Maximum output token length"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CPU Model Inference\n",
"\n",
"Start by executing the model on CPU to test its execution.\n",
"\n",
"The following defines the inference function which will be used to compare the Neuron and CPU output. In this example you will display all beam search sequences that were generated. For a real on-demand use case, set the `num_beams` to `1` to return only the top result."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def infer(model, tokenizer, text):\n",
"\n",
    " # Truncate and pad to the maximum length so that the token size matches the fixed-sized encoder (not necessary for pure CPU execution)\n",
    " batch = tokenizer(text, max_length=max_encoder_length, truncation=True, padding='max_length', return_tensors=\"pt\")\n",
" output = model.generate(**batch, max_length=max_decoder_length, num_beams=num_beams, num_return_sequences=num_beams)\n",
" results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]\n",
"\n",
" print('Texts:')\n",
" for i, summary in enumerate(results):\n",
" print(i + 1, summary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that after loading the model, we also set the maximum length. This will later be used to limit the size of the compiled model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import MarianMTModel, MarianTokenizer\n",
"\n",
"model_cpu = MarianMTModel.from_pretrained(model_name)\n",
"model_cpu.config.max_length = max_decoder_length\n",
"model_cpu.eval()\n",
"\n",
"tokenizer = MarianTokenizer.from_pretrained(model_name)\n",
"\n",
"sample_text = \"I am a small frog.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infer(model_cpu, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Padded Model\n",
    "In order to perform inference on Neuron, the model must be changed so that it supports tracing and fixed-sized inputs. One way to achieve this is to pad the model inputs to the maximum possible tensor sizes. The benefit of a padded model is that it supports variable-length text generation up to a specified length `max_decoder_length`. A consequence of padding is that it can negatively impact performance due to large data transfers.\n",
"\n",
"### PaddedEncoder & PaddedDecoder Modules\n",
"Here you will define wrappers around the encoder and decoder portions of the generation model that are compatible with `torch.jit.trace` as well as fixed-sized inputs.\n",
"\n",
"The following are important features which are distinct from the default configuration:\n",
"\n",
"1. Disabled `return_dict`. When this is enabled, the network uses `dataclass` type outputs which are not compatible with `torch.jit.trace`.\n",
"2. Disabled `use_cache`. When this option is enabled, the network expects a collection of cache tensors which grow upon each iteration. Since Neuron requires fixed sized inputs, this must be disabled.\n",
"3. The `GenerationMixin:beam_search` implementation uses only the logits for the current iteration index from the original decoder layer output. Since inputs must be padded, performance can be improved by selecting only a subset of the hidden state prior to the final linear layer. For efficiency on Neuron, this reduction uses an elementwise-multiply to mask out the unused hidden values and then sums along an axis.\n",
    "4. Since a reduction step is inserted between the decoder output and the final logit calculation, the original `model` attribute is not used. Instead, the `PaddedDecoder` class combines the decoder, reducer, and linear layers into a single forward pass. In the original model there is a clear distinction between the decoder layer and the final linear layer. Here these layers are fused together to obtain one large, fully optimized graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch.nn import functional as F\n",
"\n",
"\n",
"class PaddedEncoder(torch.nn.Module):\n",
"\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.encoder = model.model.encoder\n",
" self.main_input_name = 'input_ids'\n",
" \n",
" def forward(self, input_ids, attention_mask):\n",
" return self.encoder(input_ids, attention_mask=attention_mask, return_dict=False)\n",
"\n",
"\n",
"class PaddedDecoder(torch.nn.Module):\n",
"\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.weight = model.model.shared.weight.clone().detach()\n",
" self.bias = model.final_logits_bias.clone().detach()\n",
" self.decoder = model.model.decoder\n",
"\n",
" def forward(self, input_ids, attention_mask, encoder_outputs, index):\n",
"\n",
" # Invoke the decoder\n",
" hidden, = self.decoder(\n",
" input_ids=input_ids,\n",
" encoder_hidden_states=encoder_outputs,\n",
" encoder_attention_mask=attention_mask,\n",
" return_dict=False,\n",
" use_cache=False,\n",
" )\n",
"\n",
" _, n_length, _ = hidden.shape\n",
"\n",
" # Create selection mask\n",
" mask = torch.arange(n_length, dtype=torch.float32) == index\n",
" mask = mask.view(1, -1, 1)\n",
"\n",
" # Broadcast mask\n",
" masked = torch.multiply(hidden, mask)\n",
"\n",
" # Reduce along 1st dimension\n",
    " hidden = torch.sum(masked, 1, keepdim=True)\n",
"\n",
" # Compute final linear layer for token probabilities\n",
" logits = F.linear(\n",
" hidden,\n",
" self.weight,\n",
" bias=self.bias\n",
" )\n",
" return logits\n"
]
},
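{
"cell_type": "markdown",
"metadata": {},
"source": [
"The mask-and-sum reduction in `PaddedDecoder` above can be sanity-checked in isolation. The following cell is an illustrative sketch (not part of the original tutorial): it verifies that multiplying the hidden states by a one-hot mask and summing along the sequence axis selects the same row as direct indexing.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Toy hidden states: batch=1, sequence length=4, hidden size=6\n",
"hidden = torch.arange(24, dtype=torch.float32).reshape(1, 4, 6)\n",
"index = 2\n",
"\n",
"# One-hot mask over the sequence axis, broadcastable against hidden\n",
"mask = (torch.arange(4, dtype=torch.float32) == index).view(1, -1, 1)\n",
"\n",
"# Mask and reduce: equivalent to selecting hidden[:, index:index + 1, :]\n",
"reduced = torch.sum(hidden * mask, 1, keepdim=True)\n",
"assert torch.equal(reduced, hidden[:, index:index + 1, :])"
]
},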
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PaddedGenerator - GenerationMixin Class\n",
"\n",
"\n",
"On text generation tasks, HuggingFace Transformers defines a [GenerationMixin](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin) base class which provides standard methods and algorithms to generate text. For this tutorial, you will be using the beam search algorithm on encoder/decoder architectures.\n",
"\n",
"To be able to use these methods, you will be defining your own class derived from the GenerationMixin class to run a beam search. This will invoke the encoder and decoder layers in a way that is compatible with fixed sized inputs and traced modules. This means you must import the base class and the output objects ([Seq2SeqLMOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.Seq2SeqLMOutput), [BaseModelOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.BaseModelOutput)) used by the [beam_search](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_search) algorithm.\n",
"\n",
    "The `GenerationMixin:generate` method will use `GenerationMixin:beam_search`, which requires you to define your own class implementation that invokes the `PaddedEncoder` and `PaddedDecoder` modules using padded inputs. The standard generator model implementation will not work by default because it is intended to infer with variable-sized (growing) input tensors. \n",
"\n",
"The `from_model` method is defined to create the `PaddedGenerator` from an existing pretrained generator class.\n",
"\n",
    "To invoke the Encoder and Decoder traced modules in a way that is compatible with the `GenerationMixin:beam_search` implementation, the `get_encoder`, `__call__`, and `prepare_inputs_for_generation` methods are overridden.\n",
"\n",
"Lastly, the class defines methods for serialization so that the model can be easily saved and loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from transformers import GenerationMixin, AutoConfig\n",
"from transformers.modeling_outputs import Seq2SeqLMOutput, BaseModelOutput\n",
"from transformers.modeling_utils import PreTrainedModel\n",
"\n",
"\n",
"class PaddedGenerator(PreTrainedModel, GenerationMixin):\n",
"\n",
" @classmethod\n",
" def from_model(cls, model):\n",
" generator = cls(model.config)\n",
" generator.encoder = PaddedEncoder(model)\n",
" generator.decoder = PaddedDecoder(model)\n",
" return generator\n",
" \n",
" def prepare_inputs_for_generation(\n",
" self,\n",
" input_ids,\n",
" encoder_outputs=None,\n",
" attention_mask=None,\n",
" **kwargs,\n",
" ):\n",
" # Pad the inputs for Neuron\n",
" current_length = input_ids.shape[1]\n",
" pad_size = self.config.max_length - current_length\n",
" return dict(\n",
" input_ids=F.pad(input_ids, (0, pad_size)),\n",
" attention_mask=attention_mask,\n",
" encoder_outputs=encoder_outputs.last_hidden_state,\n",
" current_length=torch.tensor(current_length - 1),\n",
" )\n",
"\n",
" def get_encoder(self):\n",
" def encode(input_ids, attention_mask, **kwargs): \n",
" output, = self.encoder(input_ids, attention_mask)\n",
" return BaseModelOutput(\n",
" last_hidden_state=output,\n",
" )\n",
" return encode\n",
"\n",
" def forward(self, input_ids, attention_mask, encoder_outputs, current_length, **kwargs):\n",
" logits = self.decoder(input_ids, attention_mask, encoder_outputs, current_length)\n",
" return Seq2SeqLMOutput(logits=logits)\n",
"\n",
" @property\n",
" def device(self): # Attribute required by beam search\n",
" return torch.device('cpu')\n",
" \n",
" def save_pretrained(self, directory):\n",
" if os.path.isfile(directory):\n",
" print(f\"Provided path ({directory}) should be a directory, not a file\")\n",
" return\n",
" os.makedirs(directory, exist_ok=True)\n",
" torch.jit.save(self.encoder, os.path.join(directory, 'encoder.pt'))\n",
" torch.jit.save(self.decoder, os.path.join(directory, 'decoder.pt'))\n",
" self.config.save_pretrained(directory)\n",
"\n",
" @classmethod\n",
" def from_pretrained(cls, directory):\n",
" config = AutoConfig.from_pretrained(directory)\n",
" obj = cls(config)\n",
" obj.encoder = torch.jit.load(os.path.join(directory, 'encoder.pt'))\n",
" obj.decoder = torch.jit.load(os.path.join(directory, 'decoder.pt'))\n",
" setattr(obj.encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
" return obj\n"
]
},
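{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of the padding performed in `prepare_inputs_for_generation`, the following illustrative cell (not part of the original tutorial) right-pads a partially generated token tensor to a fixed width of 32, matching `max_decoder_length`:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch.nn import functional as F\n",
"\n",
"ids = torch.tensor([[5, 7, 9]])  # 3 tokens generated so far\n",
"padded = F.pad(ids, (0, 32 - ids.shape[1]))  # right-pad with zeros to the fixed length\n",
"assert padded.shape == (1, 32)\n",
"assert torch.equal(padded[:, :3], ids)\n",
"assert int(padded[:, 3:].sum()) == 0"
]
},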
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded CPU Inference\n",
    "To start, it is important to ensure that the transformations made to the model were successful. Using the classes defined above, we can check that the padded model's execution on CPU produces output identical to that of the original model, also running on CPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_cpu = PaddedGenerator.from_model(model_cpu)\n",
"infer(padded_model_cpu, tokenizer, sample_text)"
]
},
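{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a stricter check than reading the printed text, the decoded outputs of the original and padded models can be compared programmatically. The following cell is a small verification sketch (not part of the original tutorial) that reuses the generation settings defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: assert that the padded model reproduces the original CPU generations exactly\n",
"batch = tokenizer(sample_text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors='pt')\n",
"kwargs = dict(max_length=max_decoder_length, num_beams=num_beams, num_return_sequences=num_beams)\n",
"expected = [tokenizer.decode(t, skip_special_tokens=True) for t in model_cpu.generate(**batch, **kwargs)]\n",
"actual = [tokenizer.decode(t, skip_special_tokens=True) for t in padded_model_cpu.generate(**batch, **kwargs)]\n",
"assert expected == actual, 'Padded CPU outputs diverge from the original model'\n",
"print('Padded CPU outputs match the original model')"
]
},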
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded Neuron Tracing & Inference\n",
"\n",
"Now that the padded version of model is confirmed to produce the same outputs as the non-padded version, the model can be compiled for Neuron."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch_neuron\n",
"\n",
"\n",
"def trace(model, num_texts, num_beams, max_decoder_length, max_encoder_length):\n",
" \"\"\"\n",
" Traces the encoder and decoder modules for use on Neuron.\n",
"\n",
" This function fixes the network to the given sizes. Once the model has been\n",
" compiled to a given size, the inputs to these networks must always be of\n",
" fixed size.\n",
"\n",
" Args:\n",
" model (PaddedGenerator): The padded generator to compile for Neuron\n",
" num_texts (int): The number of input texts to translate at once\n",
" num_beams (int): The number of beams to compute per text\n",
" max_decoder_length (int): The maximum number of tokens to be generated\n",
" max_encoder_length (int): The maximum number of input tokens that will be encoded\n",
" \"\"\"\n",
"\n",
" # Trace the encoder\n",
" inputs = (\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" )\n",
" encoder = torch_neuron.trace(model.encoder, inputs)\n",
"\n",
" # Trace the decoder (with expanded inputs)\n",
" batch_size = num_texts * num_beams\n",
" inputs = (\n",
" torch.ones((batch_size, max_decoder_length), dtype=torch.long),\n",
" torch.ones((batch_size, max_encoder_length), dtype=torch.long),\n",
" torch.ones((batch_size, max_encoder_length, model.config.d_model), dtype=torch.float),\n",
" torch.tensor(0),\n",
" )\n",
" decoder = torch_neuron.trace(model.decoder, inputs)\n",
" \n",
" traced = PaddedGenerator(model.config)\n",
" traced.encoder = encoder\n",
" traced.decoder = decoder\n",
" setattr(encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
" return traced"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_neuron = trace(padded_model_cpu, num_texts, num_beams, max_decoder_length, max_encoder_length)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Comparing the Neuron execution to the original CPU implementation, you will see the exact same generated text.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU execution for comparison\n",
"infer(padded_model_neuron, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded Neuron Serialization\n",
"Finally, we can test that we can serialize and reload the model so that it can be used later in its precompiled format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_neuron.save_pretrained('NeuronPaddedMarianMT')\n",
"padded_model_loaded = PaddedGenerator.from_pretrained('NeuronPaddedMarianMT')\n",
"infer(padded_model_loaded, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Greedy Unrolled Model\n",
"An unrolled version of the model can achieve better performance in some cases since all operations will be executed on the Neuron hardware without returning to CPU. The consequence of this type of model is that since the generation loop execution never returns to CPU, the entire sequence up to `max_decoder_length` is performed in a single forward pass.\n",
"\n",
"The following module performs greedy text generation. Unlike the original beam search text generation, this implementation always selects the most probable token and does not generate multiple result texts.\n",
"\n",
"### GreedyUnrolledGenerator Module"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class GreedyUnrolledGenerator(torch.nn.Module):\n",
" \n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.config = model.config\n",
" self.model = model\n",
" \n",
" def forward(self, input_ids, attention_mask):\n",
" \n",
" # Generate the encoder state for the input tokens. This is only done once and the state is reused.\n",
" encoder_outputs, = self.model.model.encoder(input_ids, attention_mask=attention_mask, return_dict=False)\n",
" \n",
" # Set the intial state for the decode loop. This will grow per decoder iteration\n",
" tokens = torch.full((input_ids.size(0), 2), self.config.decoder_start_token_id)\n",
" \n",
" # Iteratively invoke the decoder on incrementally generated `tokens` to generate a `next_token`.\n",
" # Note that unlike the GeneratorMixin.generate function, there is no early-exit if the stop token \n",
" # has been reached. This will always run a fixed number of iterations.\n",
" for i in range(self.config.max_length):\n",
" \n",
" hidden, = self.model.model.decoder(\n",
" input_ids=tokens,\n",
" encoder_hidden_states=encoder_outputs,\n",
" encoder_attention_mask=attention_mask,\n",
" return_dict=False,\n",
" use_cache=False,\n",
" ) # size: [batch, current_length, vocab_size]\n",
" \n",
" logits = F.linear(\n",
" hidden[:, -1, :],\n",
" self.model.model.shared.weight,\n",
" bias=self.model.final_logits_bias\n",
" )\n",
" next_tokens = torch.argmax(logits, dim=1, keepdims=True)\n",
" tokens = torch.cat([tokens, next_tokens], dim=1)\n",
" \n",
" return tokens"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy CPU Inference\n",
"The inference code must be updated since the `generate` method is no longer used. This is because the entire generative inference loop occurs within the `GreedyUnrolledGenerator.forward` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def infer_greedy(model, tokenizer, text):\n",
" batch = tokenizer(text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors=\"pt\")\n",
" inputs = batch['input_ids'], batch['attention_mask']\n",
" tokens = greedy_cpu(*inputs)\n",
" print('Texts:')\n",
" for i, t in enumerate(tokens):\n",
" result = tokenizer.decode(t, skip_special_tokens=True)\n",
" print(i + 1, result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like in previous section of this tutorial, first the greedy model is executed on CPU to validate that the correct results were produced. In this example, the generated text matches the first result of the original beam search."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_cpu.config.max_length = 8 # This controls the number of decoder loops. Reduced to improve compilation speed.\n",
"greedy_cpu = GreedyUnrolledGenerator(model_cpu)\n",
"infer_greedy(greedy_cpu, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy Neuron Tracing & Inference\n",
"Similarly the tracing is simplified since the now the `GreedyUnrolledGenerator.forward` can be compiled as a single unit. \n",
"\n",
"For compilation efficiency, two changes will be made compared to normal compilaition:\n",
"- `torch.jit.freeze` is used because it can *sometimes* speed up compilation by in the case where a module is re-used multiple times. In this case, it is more efficient because the `self.model.model.decoder` is used in a loop. \n",
"- The `torch_neuron.trace` option `fallback` is set to `False`. This forces all operations to execute on Neuron. Most of the time this is not recommended or efficient. In this case, it is more efficient because it means a single subgraph is produced rather than many. Usually one subgraph would be produced per decoder iteration since `aten::embedding` is executed in a loop. The `aten::embedding` operation is otherwise exected on CPU by default since this is usually more efficient than executing on Neuron.\n",
"\n",
"You may notice that compilation will take significantly longer with the unrolled model since the model inserts new operations into the compute graph for every single decoder iteration. This creates a much larger model graph even though the weights are re-used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"example = (\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
")\n",
"greedy_cpu.eval()\n",
"greedy_trace = torch.jit.trace(greedy_cpu, example)\n",
"greedy_frozen = torch.jit.freeze(greedy_trace)\n",
"greedy_neuron = torch_neuron.trace(greedy_frozen, example, fallback=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infer_greedy(greedy_neuron, tokenizer, sample_text)"
]
},
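{
"cell_type": "markdown",
"metadata": {},
"source": [
"To put a rough number on the performance discussion above, the latency of the unrolled model can be measured directly. The `benchmark` helper below is an illustrative sketch rather than part of the tutorial; absolute timings will vary with the instance type and Neuron runtime version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"def benchmark(fn, iterations=10):\n",
"    fn()  # Warm up so one-time initialization cost is excluded\n",
"    start = time.time()\n",
"    for _ in range(iterations):\n",
"        fn()\n",
"    return (time.time() - start) / iterations\n",
"\n",
"batch = tokenizer(sample_text, max_length=max_encoder_length, truncation=True, padding='max_length', return_tensors='pt')\n",
"inputs = batch['input_ids'], batch['attention_mask']\n",
"latency = benchmark(lambda: greedy_neuron(*inputs))\n",
"print(f'Greedy unrolled model average latency: {latency:.4f} seconds')"
]
},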
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy Neuron Serialization\n",
"Unlike the previous version of the model that used the `GenerationMixin` base class. This greedy version of the model can be serialized using the regular `torch.jit.save` and `torch.jit.load` utilities since it is a pure torchscript module."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"torch.jit.save(greedy_neuron, 'greedy_neuron.pt')\n",
"loaded_greedy_neuron = torch.jit.load('greedy_neuron.pt')\n",
"infer_greedy(loaded_greedy_neuron, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appendix\n",
"### BART (Mask Filling Task)\n",
"\n",
"These `PaddedGenerator` class can be applied to the BART model for the task of filling in mask tokens.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from transformers import BartForConditionalGeneration, BartTokenizer\n",
"bart_name = \"facebook/bart-large\"\n",
"bart_model = BartForConditionalGeneration.from_pretrained(bart_name)\n",
"bart_model.config.max_length = max_decoder_length\n",
"bart_tokenizer = BartTokenizer.from_pretrained(bart_name)\n",
"bart_text = \"UN Chief Says There Is No <mask> in Syria\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU Execution\n",
"infer(bart_model, bart_tokenizer, bart_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Neuron Execution\n",
"paddded_bart = PaddedGenerator.from_model(bart_model)\n",
"bart_neuron = trace(paddded_bart, num_texts, num_beams, max_decoder_length, max_encoder_length)\n",
"infer(bart_neuron, bart_tokenizer, bart_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pegasus (Summarization Task)\n",
"\n",
"These `PaddedGenerator` class can be applied to the Pegasus model for summarization.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from transformers import PegasusForConditionalGeneration, PegasusTokenizer\n",
"pegasus_name = 'google/pegasus-xsum'\n",
"pegasus_model = PegasusForConditionalGeneration.from_pretrained(pegasus_name)\n",
"pegasus_model.config.max_length = max_decoder_length\n",
"pegasus_tokenizer = PegasusTokenizer.from_pretrained(pegasus_name)\n",
"pegasus_text = \"PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU Execution\n",
"infer(pegasus_model, pegasus_tokenizer, pegasus_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Neuron Execution\n",
"paddded_pegasus = PaddedGenerator.from_model(pegasus_model)\n",
"pegasus_neuron = trace(paddded_pegasus, num_texts, num_beams, max_decoder_length, max_encoder_length)\n",
"infer(pegasus_neuron, pegasus_tokenizer, pegasus_text)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Transformers MarianMT Tutorial\n",
"\n",
"In this tutorial, you will deploy the [HuggingFace MarianMT](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html) model for text translation.\n",
"\n",
"This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page.\n",
"\n",
"To generate text, you will be using the beam search algorithm to incrementally generate token candidates until the full output text has been created. Unlike simple single-pass models, this algorithm divides the work into two distinct phases:\n",
"\n",
"- **Encoder**: Convert the input text into an encoded representation. (Executed once)\n",
"- **Decoder**: Use the encoded representation of the input text and the current output tokens to incrementally generate the set of next best candidate tokens. (Executed many times)\n",
"\n",
"In this tutorial you will perform the following steps:\n",
"\n",
"- **Compile**: Compile both the Encoder and Decoder for Neuron using simplified interfaces for inference.\n",
"- **Infer**: Run on CPU and Neuron and compare results.\n",
"\n",
"Finally, a completely unrolled decoder will be built which simplifies the implementation at the cost of performing fixed-length inferences."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Dependencies:\n",
"\n",
"This tutorial has the following dependencies:\n",
"\n",
"- `transformers==4.25.1`\n",
"- `torch-neuron`\n",
"- `sentencepiece`\n",
"- `neuron-cc[tensorflow]`\n",
"\n",
"The following will install the required `transformers` version. Note that encoder/decoder API changes across different minor versions requires that you are specific about the version used. Also note that the `torch-neuron` version is pinned due to `transformer` compatibility issues."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install sentencepiece transformers==4.26.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Parameters\n",
"\n",
"The parameters of a generative model can be tuned for different use-cases. In this example, you'll tailor the parameters to a single inference beam search for an on-demand inference use-case. See the [MarianConfig](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html#marianconfig) for parameter details.\n",
"\n",
"Rather than varying the encoder/decoder token sizes at runtime, you must define these parameters prior to compilation. The encoder/decoder token sizes are important tunable parameters as a large token sequence will offer greater sentence length flexibility but perform worse than a small token sequence.\n",
"\n",
"To maximize performance on Neuron, the `num_beams`, `max_encode_length` and `max_decoder_length` should be made as small as possible for the use-case.\n",
"\n",
"For this tutorial you will use a model that translates sentences of up to 32 token from English to German."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%env TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect\n",
"model_name = \"Helsinki-NLP/opus-mt-en-de\" # English -> German model\n",
"num_texts = 1 # Number of input texts to decode\n",
"num_beams = 4 # Number of beams per input text\n",
"max_encoder_length = 32 # Maximum input token length\n",
"max_decoder_length = 32 # Maximum output token length"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CPU Model Inference\n",
"\n",
"Start by executing the model on CPU to test its execution.\n",
"\n",
"The following defines the inference function which will be used to compare the Neuron and CPU output. In this example you will display all beam search sequences that were generated. For a real on-demand use case, set the `num_beams` to `1` to return only the top result."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def infer(model, tokenizer, text):\n",
"\n",
" # Truncate and pad the max length to ensure that the token size is compatible with fixed-sized encoder (Not necessary for pure CPU execution)\n",
" batch = tokenizer(text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors=\"pt\")\n",
" output = model.generate(**batch, max_length=max_decoder_length, num_beams=num_beams, num_return_sequences=num_beams)\n",
" results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]\n",
"\n",
" print('Texts:')\n",
" for i, summary in enumerate(results):\n",
" print(i + 1, summary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that after loading the model, we also set the maximum length. This will later be used to limit the size of the compiled model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import MarianMTModel, MarianTokenizer\n",
"\n",
"model_cpu = MarianMTModel.from_pretrained(model_name)\n",
"model_cpu.config.max_length = max_decoder_length\n",
"model_cpu.eval()\n",
"\n",
"tokenizer = MarianTokenizer.from_pretrained(model_name)\n",
"\n",
"sample_text = \"I am a small frog.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infer(model_cpu, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Padded Model\n",
"In order to perform inference on Neuron, the model must be changed in a way that it supports tracing and fixed-sized inputs. One way in which this is possible is to use a pad the model inputs to the maximum possible tensor sizes. The benefit of using a padded model is that it supports variable length text generation up to a specified length `max_decoder_length`. A consequence of padding is that it can negatively impact performance due to large data transfers.\n",
"\n",
"### PaddedEncoder & PaddedDecoder Modules\n",
"Here you will define wrappers around the encoder and decoder portions of the generation model that are compatible with `torch.jit.trace` as well as fixed-sized inputs.\n",
"\n",
"The following are important features which are distinct from the default configuration:\n",
"\n",
"1. Disabled `return_dict`. When this is enabled, the network uses `dataclass` type outputs which are not compatible with `torch.jit.trace`.\n",
"2. Disabled `use_cache`. When this option is enabled, the network expects a collection of cache tensors which grow upon each iteration. Since Neuron requires fixed sized inputs, this must be disabled.\n",
"3. The `GenerationMixin:beam_search` implementation uses only the logits for the current iteration index from the original decoder layer output. Since inputs must be padded, performance can be improved by selecting only a subset of the hidden state prior to the final linear layer. For efficiency on Neuron, this reduction uses an elementwise-multiply to mask out the unused hidden values and then sums along an axis.\n",
"4. Since a reduction step is insterted between the decoder output and the final logit calculation, the original `model` attribute is not used. Instead the `PaddedDecoder` class combines the decoder, reducer, and linear layers into a combined forward pass. In the original model there is a clear distinction between the decoder layer and the final linear layer. These layers are fused together to get one large fully optimized graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch.nn import functional as F\n",
"\n",
"\n",
"class PaddedEncoder(torch.nn.Module):\n",
"\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.encoder = model.model.encoder\n",
" self.main_input_name = 'input_ids'\n",
" \n",
" def forward(self, input_ids, attention_mask):\n",
" return self.encoder(input_ids, attention_mask=attention_mask, return_dict=False)\n",
"\n",
"\n",
"class PaddedDecoder(torch.nn.Module):\n",
"\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.weight = model.model.shared.weight.clone().detach()\n",
" self.bias = model.final_logits_bias.clone().detach()\n",
" self.decoder = model.model.decoder\n",
"\n",
" def forward(self, input_ids, attention_mask, encoder_outputs, index):\n",
"\n",
" # Invoke the decoder\n",
" hidden, = self.decoder(\n",
" input_ids=input_ids,\n",
" encoder_hidden_states=encoder_outputs,\n",
" encoder_attention_mask=attention_mask,\n",
" return_dict=False,\n",
" use_cache=False,\n",
" )\n",
"\n",
" _, n_length, _ = hidden.shape\n",
"\n",
" # Create selection mask\n",
" mask = torch.arange(n_length, dtype=torch.float32) == index\n",
" mask = mask.view(1, -1, 1)\n",
"\n",
" # Broadcast mask\n",
" masked = torch.multiply(hidden, mask)\n",
"\n",
" # Reduce along 1st dimension\n",
" hidden = torch.sum(masked, 1, keepdims=True)\n",
"\n",
" # Compute final linear layer for token probabilities\n",
" logits = F.linear(\n",
" hidden,\n",
" self.weight,\n",
" bias=self.bias\n",
" )\n",
" return logits\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PaddedGenerator - GenerationMixin Class\n",
"\n",
"\n",
"On text generation tasks, HuggingFace Transformers defines a [GenerationMixin](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin) base class which provides standard methods and algorithms to generate text. For this tutorial, you will be using the beam search algorithm on encoder/decoder architectures.\n",
"\n",
"To be able to use these methods, you will be defining your own class derived from the GenerationMixin class to run a beam search. This will invoke the encoder and decoder layers in a way that is compatible with fixed sized inputs and traced modules. This means you must import the base class and the output objects ([Seq2SeqLMOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.Seq2SeqLMOutput), [BaseModelOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.BaseModelOutput)) used by the [beam_search](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_search) algorithm.\n",
"\n",
"The `GenerationMixin:generate` method will use `GenerationMixin:beam_search` which requires that you to define your own class implementation that invokes the `PaddedEncoder` and `PaddedDecoder` modules using padded inputs. The standard generator model implementation will not work by default because it is intended to infer with variable-sized (growing) input tensors. \n",
"\n",
"The `from_model` method is defined to create the `PaddedGenerator` from an existing pretrained generator class.\n",
"\n",
"To invoke the Encoder and Decoder traced modules in a way that is compatible with the `GenerationMixin:beam_search` implementation, the `get_encoder`, `__call__`, and `prepare_inputs_for_generation` methods are overriden.\n",
"\n",
"Lastly, the class defines methods for serialization so that the model can be easily saved and loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from transformers import GenerationMixin, AutoConfig\n",
"from transformers.modeling_outputs import Seq2SeqLMOutput, BaseModelOutput\n",
"from transformers.modeling_utils import PreTrainedModel\n",
"\n",
"\n",
"class PaddedGenerator(PreTrainedModel, GenerationMixin):\n",
"\n",
" @classmethod\n",
" def from_model(cls, model):\n",
" generator = cls(model.config)\n",
" generator.encoder = PaddedEncoder(model)\n",
" generator.decoder = PaddedDecoder(model)\n",
" return generator\n",
" \n",
" def prepare_inputs_for_generation(\n",
" self,\n",
" input_ids,\n",
" encoder_outputs=None,\n",
" attention_mask=None,\n",
" **kwargs,\n",
" ):\n",
" # Pad the inputs for Neuron\n",
" current_length = input_ids.shape[1]\n",
" pad_size = self.config.max_length - current_length\n",
" return dict(\n",
" input_ids=F.pad(input_ids, (0, pad_size)),\n",
" attention_mask=attention_mask,\n",
" encoder_outputs=encoder_outputs.last_hidden_state,\n",
" current_length=torch.tensor(current_length - 1),\n",
" )\n",
"\n",
" def get_encoder(self):\n",
" def encode(input_ids, attention_mask, **kwargs): \n",
" output, = self.encoder(input_ids, attention_mask)\n",
" return BaseModelOutput(\n",
" last_hidden_state=output,\n",
" )\n",
" return encode\n",
"\n",
" def forward(self, input_ids, attention_mask, encoder_outputs, current_length, **kwargs):\n",
" logits = self.decoder(input_ids, attention_mask, encoder_outputs, current_length)\n",
" return Seq2SeqLMOutput(logits=logits)\n",
"\n",
" @property\n",
" def device(self): # Attribute required by beam search\n",
" return torch.device('cpu')\n",
" \n",
" def save_pretrained(self, directory):\n",
" if os.path.isfile(directory):\n",
" print(f\"Provided path ({directory}) should be a directory, not a file\")\n",
" return\n",
" os.makedirs(directory, exist_ok=True)\n",
" torch.jit.save(self.encoder, os.path.join(directory, 'encoder.pt'))\n",
" torch.jit.save(self.decoder, os.path.join(directory, 'decoder.pt'))\n",
" self.config.save_pretrained(directory)\n",
"\n",
" @classmethod\n",
" def from_pretrained(cls, directory):\n",
" config = AutoConfig.from_pretrained(directory)\n",
" obj = cls(config)\n",
" obj.encoder = torch.jit.load(os.path.join(directory, 'encoder.pt'))\n",
" obj.decoder = torch.jit.load(os.path.join(directory, 'decoder.pt'))\n",
" setattr(obj.encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
" return obj\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded CPU Inference\n",
"To start, it is important to ensure that the transformations we have made to the model were successful. Using the classes defined above we can test that the padded model execution on CPU is identical to the original output also running on CPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_cpu = PaddedGenerator.from_model(model_cpu)\n",
"infer(padded_model_cpu, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded Neuron Tracing & Inference\n",
"\n",
"Now that the padded version of model is confirmed to produce the same outputs as the non-padded version, the model can be compiled for Neuron."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch_neuron\n",
"\n",
"\n",
"def trace(model, num_texts, num_beams, max_decoder_length, max_encoder_length):\n",
" \"\"\"\n",
" Traces the encoder and decoder modules for use on Neuron.\n",
"\n",
" This function fixes the network to the given sizes. Once the model has been\n",
" compiled to a given size, the inputs to these networks must always be of\n",
" fixed size.\n",
"\n",
" Args:\n",
" model (PaddedGenerator): The padded generator to compile for Neuron\n",
" num_texts (int): The number of input texts to translate at once\n",
" num_beams (int): The number of beams to compute per text\n",
" max_decoder_length (int): The maximum number of tokens to be generated\n",
" max_encoder_length (int): The maximum number of input tokens that will be encoded\n",
" \"\"\"\n",
"\n",
" # Trace the encoder\n",
" inputs = (\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" )\n",
" encoder = torch_neuron.trace(model.encoder, inputs)\n",
"\n",
" # Trace the decoder (with expanded inputs)\n",
" batch_size = num_texts * num_beams\n",
" inputs = (\n",
" torch.ones((batch_size, max_decoder_length), dtype=torch.long),\n",
" torch.ones((batch_size, max_encoder_length), dtype=torch.long),\n",
" torch.ones((batch_size, max_encoder_length, model.config.d_model), dtype=torch.float),\n",
" torch.tensor(0),\n",
" )\n",
" decoder = torch_neuron.trace(model.decoder, inputs)\n",
" \n",
" traced = PaddedGenerator(model.config)\n",
" traced.encoder = encoder\n",
" traced.decoder = decoder\n",
" setattr(encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
" return traced"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_neuron = trace(padded_model_cpu, num_texts, num_beams, max_decoder_length, max_encoder_length)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Comparing the Neuron execution to the original CPU implementation, you will see the exact same generated text.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU execution for comparison\n",
"infer(padded_model_neuron, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Padded Neuron Serialization\n",
"Finally, we can test that we can serialize and reload the model so that it can be used later in its precompiled format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"padded_model_neuron.save_pretrained('NeuronPaddedMarianMT')\n",
"padded_model_loaded = PaddedGenerator.from_pretrained('NeuronPaddedMarianMT')\n",
"infer(padded_model_loaded, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Greedy Unrolled Model\n",
"An unrolled version of the model can achieve better performance in some cases since all operations will be executed on the Neuron hardware without returning to CPU. The consequence of this type of model is that since the generation loop execution never returns to CPU, the entire sequence up to `max_decoder_length` is performed in a single forward pass.\n",
"\n",
"The following module performs greedy text generation. Unlike the original beam search text generation, this implementation always selects the most probable token and does not generate multiple result texts.\n",
"\n",
"### GreedyUnrolledGenerator Module"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class GreedyUnrolledGenerator(torch.nn.Module):\n",
" \n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.config = model.config\n",
" self.model = model\n",
" \n",
" def forward(self, input_ids, attention_mask):\n",
" \n",
" # Generate the encoder state for the input tokens. This is only done once and the state is reused.\n",
" encoder_outputs, = self.model.model.encoder(input_ids, attention_mask=attention_mask, return_dict=False)\n",
" \n",
" # Set the intial state for the decode loop. This will grow per decoder iteration\n",
" tokens = torch.full((input_ids.size(0), 2), self.config.decoder_start_token_id)\n",
" \n",
" # Iteratively invoke the decoder on incrementally generated `tokens` to generate a `next_token`.\n",
" # Note that unlike the GeneratorMixin.generate function, there is no early-exit if the stop token \n",
" # has been reached. This will always run a fixed number of iterations.\n",
" for i in range(self.config.max_length):\n",
" \n",
" hidden, = self.model.model.decoder(\n",
" input_ids=tokens,\n",
" encoder_hidden_states=encoder_outputs,\n",
" encoder_attention_mask=attention_mask,\n",
" return_dict=False,\n",
" use_cache=False,\n",
" ) # size: [batch, current_length, vocab_size]\n",
" \n",
" logits = F.linear(\n",
" hidden[:, -1, :],\n",
" self.model.model.shared.weight,\n",
" bias=self.model.final_logits_bias\n",
" )\n",
" next_tokens = torch.argmax(logits, dim=1, keepdims=True)\n",
" tokens = torch.cat([tokens, next_tokens], dim=1)\n",
" \n",
" return tokens"
]
},
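The fixed-iteration greedy loop above can be sketched independently of the transformer internals. In this minimal stand-in, a toy `decode_step` function (a hypothetical decoder used only for illustration) plays the role of `self.model.model.decoder` plus the final projection:

```python
import numpy as np

def greedy_generate(decode_step, start_token, max_length):
    """Fixed-iteration greedy decoding: always append the argmax token.

    As in GreedyUnrolledGenerator.forward, there is no early exit on a
    stop token; the loop always runs max_length iterations.
    """
    tokens = [start_token]
    for _ in range(max_length):
        logits = decode_step(tokens)         # shape: [vocab_size]
        next_token = int(np.argmax(logits))  # greedy: most probable token
        tokens.append(next_token)
    return tokens

def toy_decode_step(tokens, vocab_size=5):
    # Hypothetical decoder that deterministically prefers (last + 1) % vocab.
    logits = np.zeros(vocab_size)
    logits[(tokens[-1] + 1) % vocab_size] = 1.0
    return logits

print(greedy_generate(toy_decode_step, start_token=0, max_length=4))
# -> [0, 1, 2, 3, 4]
```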
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy CPU Inference\n",
"The inference code must be updated since the `generate` method is no longer used. This is because the entire generative inference loop occurs within the `GreedyUnrolledGenerator.forward` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def infer_greedy(model, tokenizer, text):\n",
" batch = tokenizer(text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors=\"pt\")\n",
" inputs = batch['input_ids'], batch['attention_mask']\n",
" tokens = greedy_cpu(*inputs)\n",
" print('Texts:')\n",
" for i, t in enumerate(tokens):\n",
" result = tokenizer.decode(t, skip_special_tokens=True)\n",
" print(i + 1, result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like in previous section of this tutorial, first the greedy model is executed on CPU to validate that the correct results were produced. In this example, the generated text matches the first result of the original beam search."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_cpu.config.max_length = 8 # This controls the number of decoder loops. Reduced to improve compilation speed.\n",
"greedy_cpu = GreedyUnrolledGenerator(model_cpu)\n",
"infer_greedy(greedy_cpu, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy Neuron Tracing & Inference\n",
"Similarly the tracing is simplified since the now the `GreedyUnrolledGenerator.forward` can be compiled as a single unit. \n",
"\n",
"For compilation efficiency, two changes will be made compared to normal compilaition:\n",
"- `torch.jit.freeze` is used because it can *sometimes* speed up compilation by in the case where a module is re-used multiple times. In this case, it is more efficient because the `self.model.model.decoder` is used in a loop. \n",
"- The `torch_neuron.trace` option `fallback` is set to `False`. This forces all operations to execute on Neuron. Most of the time this is not recommended or efficient. In this case, it is more efficient because it means a single subgraph is produced rather than many. Usually one subgraph would be produced per decoder iteration since `aten::embedding` is executed in a loop. The `aten::embedding` operation is otherwise exected on CPU by default since this is usually more efficient than executing on Neuron.\n",
"\n",
"You may notice that compilation will take significantly longer with the unrolled model since the model inserts new operations into the compute graph for every single decoder iteration. This creates a much larger model graph even though the weights are re-used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"example = (\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
" torch.ones((num_texts, max_encoder_length), dtype=torch.long),\n",
")\n",
"greedy_cpu.eval()\n",
"greedy_trace = torch.jit.trace(greedy_cpu, example)\n",
"greedy_frozen = torch.jit.freeze(greedy_trace)\n",
"greedy_neuron = torch_neuron.trace(greedy_frozen, example, fallback=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infer_greedy(greedy_neuron, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Greedy Neuron Serialization\n",
"Unlike the previous version of the model that used the `GenerationMixin` base class. This greedy version of the model can be serialized using the regular `torch.jit.save` and `torch.jit.load` utilities since it is a pure torchscript module."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"torch.jit.save(greedy_neuron, 'greedy_neuron.pt')\n",
"loaded_greedy_neuron = torch.jit.load('greedy_neuron.pt')\n",
"infer_greedy(loaded_greedy_neuron, tokenizer, sample_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appendix\n",
"### BART (Mask Filling Task)\n",
"\n",
"These `PaddedGenerator` class can be applied to the BART model for the task of filling in mask tokens.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from transformers import BartForConditionalGeneration, BartTokenizer\n",
"bart_name = \"facebook/bart-large\"\n",
"bart_model = BartForConditionalGeneration.from_pretrained(bart_name)\n",
"bart_model.config.max_length = max_decoder_length\n",
"bart_tokenizer = BartTokenizer.from_pretrained(bart_name)\n",
"bart_text = \"UN Chief Says There Is No <mask> in Syria\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU Execution\n",
"infer(bart_model, bart_tokenizer, bart_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Neuron Execution\n",
"paddded_bart = PaddedGenerator.from_model(bart_model)\n",
"bart_neuron = trace(paddded_bart, num_texts, num_beams, max_decoder_length, max_encoder_length)\n",
"infer(bart_neuron, bart_tokenizer, bart_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pegasus (Summarization Task)\n",
"\n",
"These `PaddedGenerator` class can be applied to the Pegasus model for summarization.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from transformers import PegasusForConditionalGeneration, PegasusTokenizer\n",
"pegasus_name = 'google/pegasus-xsum'\n",
"pegasus_model = PegasusForConditionalGeneration.from_pretrained(pegasus_name)\n",
"pegasus_model.config.max_length = max_decoder_length\n",
"pegasus_tokenizer = PegasusTokenizer.from_pretrained(pegasus_name)\n",
"pegasus_text = \"PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# CPU Execution\n",
"infer(pegasus_model, pegasus_tokenizer, pegasus_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Neuron Execution\n",
"paddded_pegasus = PaddedGenerator.from_model(pegasus_model)\n",
"pegasus_neuron = trace(paddded_pegasus, num_texts, num_beams, max_decoder_length, max_encoder_length)\n",
"infer(pegasus_neuron, pegasus_tokenizer, pegasus_text)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
</pre></body></html> | 2023-09-29T20:55:26.001Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/openpose_demo/openpose.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "caff04ba",
"metadata": {},
"source": [
"# Running OpenPose on Inferentia\n"
]
},
{
"cell_type": "markdown",
"id": "09b2919a",
"metadata": {},
"source": [
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"id": "4dcf9bb1",
"metadata": {},
"source": [
"## Introduction:\n",
"\n",
"In this tutorial we will compile and deploy Openpose model for Inferentia. This jupyter notebook should run on an inf1.6xlarge instance for compilation and inference. The inference part of this tutorial requires inf1.6xlarge and not the compilation itself. For simplicity we will run this tutorial on a single instance but in real life scenario the compilation can be done on a compute c5.4xlarge instance and the deployment on the inf1 instance family.\n",
"\n",
"In this tutorial we provide two main sections:\n",
"1. Compile the OpenPose model on inf1x6large.\n",
"2. Infer the same compiled model on inf1x6large.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.\n"
]
},
{
"cell_type": "markdown",
"id": "04ae0838",
"metadata": {},
"source": [
"## Acknowledgement:\n",
"\n",
"Many thanks to https://github.com/ildoonet for providing pretrained model as well as the image preprocessing/pose estimating infrastructure."
]
},
{
"cell_type": "markdown",
"id": "d0d6d08e",
"metadata": {},
"source": [
"## Download tensorflow pose net frozen graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1926d4e3",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"!wget -c --tries=2 $( wget -q -O - http://www.mediafire.com/file/qlzzr20mpocnpa3/graph_opt.pb | grep -o 'http*://download[^\"]*' | tail -n 1 ) -O graph_opt.pb\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "83eb578b",
"metadata": {},
"source": [
"## Compile\n",
"Compile the pose net frozen graph into AWS Neuron compatible form. Network input image resolution is adjustable with argument --net_resolution (e. g., --net_resolution=656x368). The compiled model can accept arbitrary batch size input at runtime."
]
},
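One step performed by the compilation script below is converting the graph from NHWC to NCHW layout. The node rewriting is TensorFlow-specific, but the underlying data movement is a plain axis permutation, sketched here with NumPy (shapes chosen to match the 656x368 resolution used in this tutorial):

```python
import numpy as np

# Dummy NHWC activation: batch=1, height=368, width=656, channels=3.
nhwc = np.random.rand(1, 368, 656, 3).astype(np.float32)

# NHWC -> NCHW is the permutation [0, 3, 1, 2], the same perm constant the
# conversion pass writes into the graph for the input tensor.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)  # -> (1, 3, 368, 656)

# The output side uses the inverse permutation [0, 2, 3, 1] to go back.
roundtrip = np.transpose(nchw, (0, 2, 3, 1))
assert np.array_equal(roundtrip, nhwc)
```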
{
"cell_type": "code",
"execution_count": null,
"id": "362f322e",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Usage: python convert_graph_opt.py /path/to/graph_opt.pb /path/to/graph_opt_neuron.pb\n",
"\"\"\"\n",
"#import argparse\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto\n",
"import tensorflow.neuron as tfn\n",
"\n",
"\n",
"def compile():\n",
" #parser = argparse.ArgumentParser()\n",
" #parser.add_argument('input_pb_path', help='Input serialized GraphDef protobuf')\n",
" #parser.add_argument('output_pb_path', help='Ouput serialized GraphDef protobuf')\n",
" #parser.add_argument('--net_resolution', default='656x368', help='Network resolution in WxH format, e. g., --net_resolution=656x368')\n",
" #parser.add_argument('--debug_verify', action='store_true')\n",
" #args = parser.parse_args()\n",
" \n",
" input_pb_path = './graph_opt.pb'\n",
" net_resolution = '656x368'\n",
" output_pb_path = './graph_opt_neuron_' + net_resolution + '.pb'\n",
" \n",
" debug_verify = 'store_true'\n",
" dim_w, dim_h = net_resolution.split('x')\n",
" dim_w = int(dim_w)\n",
" dim_h = int(dim_h)\n",
" graph_def = tf.GraphDef()\n",
" with open(input_pb_path, 'rb') as f:\n",
" graph_def.ParseFromString(f.read())\n",
"\n",
" if debug_verify:\n",
" np.random.seed(0)\n",
" feed_dict = {'image:0': np.random.rand(1, dim_h, dim_w, 3)}\n",
" output_name = 'Openpose/concat_stage7:0'\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" result_reference = sess.run(output_name, feed_dict)\n",
"\n",
" preprocessing_ops = {'preprocess_divide', 'preprocess_divide/y', 'preprocess_subtract', 'preprocess_subtract/y'}\n",
" graph_def = nhwc_to_nchw(graph_def, preprocessing_ops)\n",
" graph_def = inline_float32_to_float16(graph_def, preprocessing_ops)\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" no_fuse_ops = preprocessing_ops.union({'Openpose/concat_stage7'})\n",
" infer_graph = tfn.graph_util.inference_graph_from_session(\n",
" sess, shape_feed_dict={'image:0': [1, dim_h, dim_w, 3]}, output_tensors=['Openpose/concat_stage7:0'],\n",
" no_fuse_ops=no_fuse_ops, dynamic_batch_size=True,\n",
" )\n",
" with open(output_pb_path, 'wb') as f:\n",
" f.write(infer_graph.as_graph_def().SerializeToString())\n",
"\n",
" if debug_verify:\n",
" with tf.Session(graph=infer_graph) as sess:\n",
" result_compiled = sess.run(output_name, feed_dict)\n",
" np.testing.assert_allclose(result_compiled, result_reference, rtol=1e-2, atol=1e-3)\n",
"\n",
"\n",
"def inline_float32_to_float16(graph_def, preprocessing_ops):\n",
" float32_enum = tf.float32.as_datatype_enum\n",
" float16_enum = tf.float16.as_datatype_enum\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(graph_def, name='')\n",
" graph_def = graph.as_graph_def()\n",
" for node in graph_def.node:\n",
" if node.name in preprocessing_ops or node.op == 'Placeholder':\n",
" cast_input_node_name = node.name\n",
" continue\n",
" if node.op == 'Const':\n",
" if node.attr['dtype'].type == float32_enum:\n",
" node.attr['dtype'].type = float16_enum\n",
" tensor_def = node.attr['value'].tensor\n",
" tensor_def.dtype = float16_enum\n",
" if tensor_def.tensor_content:\n",
" const_np = np.frombuffer(tensor_def.tensor_content, dtype=np.float32).astype(np.float16)\n",
" tensor_def.tensor_content = const_np.tobytes()\n",
" elif len(tensor_def.float_val):\n",
" const_np = np.array(tensor_def.float_val).astype(np.float16).view(np.uint16)\n",
" tensor_def.float_val[:] = []\n",
" tensor_def.half_val[:] = list(const_np)\n",
" else:\n",
" raise NotImplementedError\n",
" elif 'T' in node.attr and node.attr['T'].type == float32_enum:\n",
" node.attr['T'].type = float16_enum\n",
" for node in graph_def.node:\n",
" if node.name == cast_input_node_name:\n",
" node.name = '{}_PreCastFloat32ToFlot16'.format(node.name)\n",
" input_node = node\n",
" break\n",
" cast_input_node = _gen_cast_node_def(cast_input_node_name, tf.float16, input_node)\n",
"\n",
" output_node = graph_def.node[-1]\n",
" cast_output_node_name = output_node.name\n",
" output_node.name = '{}_PreCastFloat16ToFlot32'.format(output_node.name)\n",
" cast_output_node = _gen_cast_node_def(cast_output_node_name, tf.float32, output_node)\n",
"\n",
" preprocessing_ops.add(input_node.name)\n",
" new_graph_def = tf.GraphDef()\n",
" new_graph_def.node.extend(graph_def.node)\n",
" new_graph_def.node.append(cast_input_node)\n",
" new_graph_def.node.append(cast_output_node)\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(new_graph_def, name='')\n",
" return graph.as_graph_def()\n",
"\n",
"\n",
"def nhwc_to_nchw(graph_def, preprocessing_ops):\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(graph_def, name='')\n",
" graph_def = graph.as_graph_def()\n",
" node_name_to_node = {node.name: node for node in graph_def.node}\n",
" for node in graph_def.node:\n",
" if node.name in preprocessing_ops or node.op == 'Placeholder':\n",
" transpose_input_node_name = node.name\n",
" continue\n",
" if node.op == 'Conv2D':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" strides = node.attr['strides'].list.i\n",
" strides[:] = [strides[0], strides[3], strides[1], strides[2]]\n",
" elif node.op == 'BiasAdd':\n",
" if node.name != 'probs/BiasAdd':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" elif node.op == 'MaxPool':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" ksize = node.attr['ksize'].list.i\n",
" ksize[:] = [ksize[0], ksize[3], ksize[1], ksize[2]]\n",
" strides = node.attr['strides'].list.i\n",
" strides[:] = [strides[0], strides[3], strides[1], strides[2]]\n",
" elif node.op in {'Concat', 'ConcatV2'}:\n",
" node_axes = node_name_to_node[node.input[-1]]\n",
" node_axes.attr['value'].tensor.int_val[:] = [1]\n",
" for node in graph_def.node:\n",
" if node.name == transpose_input_node_name:\n",
" node.name = '{}_PreTransposeNHWC2NCHW'.format(node.name)\n",
" input_node = node\n",
" break\n",
" transpose_input_node, transpose_input_perm_node = _gen_transpose_def(transpose_input_node_name, [0, 3, 1, 2], input_node)\n",
"\n",
" output_node = graph_def.node[-1]\n",
" transpose_output_node_name = output_node.name\n",
" output_node.name = '{}_PreTransposeNCHW2NHWC'.format(output_node.name)\n",
" transpose_output_node, transpose_output_perm_node = _gen_transpose_def(transpose_output_node_name, [0, 2, 3, 1], output_node)\n",
"\n",
" preprocessing_ops.add(input_node.name)\n",
" preprocessing_ops.add(transpose_input_perm_node.name)\n",
" new_graph_def = tf.GraphDef()\n",
" new_graph_def.node.extend(graph_def.node)\n",
" new_graph_def.node.append(transpose_input_perm_node)\n",
" new_graph_def.node.append(transpose_input_node)\n",
" new_graph_def.node.append(transpose_output_perm_node)\n",
" new_graph_def.node.append(transpose_output_node)\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(new_graph_def, name='')\n",
" return graph.as_graph_def()\n",
"\n",
"\n",
"def _gen_cast_node_def(name, target_dtype, input_node):\n",
" cast_node = tf.NodeDef(name=name, op='Cast')\n",
" cast_node.input.append(input_node.name)\n",
" cast_node.attr['DstT'].type = target_dtype.as_datatype_enum\n",
" cast_node.attr['SrcT'].type = input_node.attr['T'].type\n",
" cast_node.attr['Truncate'].b = False\n",
" return cast_node\n",
"\n",
"\n",
"def _gen_transpose_def(name, perm, input_node):\n",
" perm_node = tf.NodeDef(name='{}/perm'.format(name), op='Const')\n",
" perm_node.attr['dtype'].type = tf.int32.as_datatype_enum\n",
" tensor_def = perm_node.attr['value'].tensor\n",
" tensor_def.dtype = tf.int32.as_datatype_enum\n",
" tensor_def.tensor_shape.dim.append(TensorShapeProto.Dim(size=4))\n",
" tensor_def.tensor_content = np.array(perm, dtype=np.int32).tobytes()\n",
" transpose_node = tf.NodeDef(name=name, op='Transpose')\n",
" transpose_node.input.append(input_node.name)\n",
" transpose_node.input.append(perm_node.name)\n",
" transpose_node.attr['T'].type = input_node.attr['T'].type\n",
" transpose_node.attr['Tperm'].type = tf.int32.as_datatype_enum\n",
" return transpose_node, perm_node\n"
]
},
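The `inline_float32_to_float16` function above rewrites the raw bytes of each float32 `Const` node in place. The byte-level conversion it applies to `tensor_content` can be reproduced with NumPy alone; this is a sketch of that one step, not the full graph pass:

```python
import numpy as np

# Pretend these bytes are the tensor_content of a float32 Const node.
weights_f32 = np.array([0.5, -1.25, 3.0], dtype=np.float32)
raw_bytes = weights_f32.tobytes()

# Same conversion as in inline_float32_to_float16: reinterpret the bytes as
# float32, cast to float16, and serialize back.
converted = np.frombuffer(raw_bytes, dtype=np.float32).astype(np.float16)
new_bytes = converted.tobytes()

# Half the storage; values exactly representable in fp16 survive unchanged.
assert len(new_bytes) == len(raw_bytes) // 2
assert np.allclose(converted.astype(np.float32), weights_f32)
```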
{
"cell_type": "code",
"execution_count": null,
"id": "88c41e01",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"compile()\n",
"\n",
"# Sample output will look like below:\n",
"# WARNING:tensorflow:From <ipython-input-3-27d3844cd753>:47: inference_graph_from_session (from tensorflow_neuron.python.graph_util) is deprecated and will be removed in a future version.\n",
"# Instructions for updating:\n",
"# Please refer to AWS documentation on Neuron integrated TensorFlow 2.0.\n",
"# INFO:tensorflow:Froze 0 variables.\n",
"# INFO:tensorflow:Converted 0 variables to const ops.\n",
"# INFO:tensorflow:fusing subgraph {subgraph neuron_op_ed41d2deb8c54255 with input tensors [\"<tf.Tensor 'preprocess_subtract0/_0:0' shape=(1, 3, 368, 656) dtype=float16>\"], output tensors [\"<tf.Tensor 'Openpose/concat_stage7_PreCastFloat16ToFlot32:0' shape=(1, 46, 82, 57) dtype=float16>\"]} with neuron-cc\n",
"# INFO:tensorflow:Number of operations in TensorFlow session: 474\n",
"# INFO:tensorflow:Number of operations after tf.neuron optimizations: 474\n",
"# INFO:tensorflow:Number of operations placed on Neuron runtime: 465"
]
},
{
"cell_type": "markdown",
"id": "5a9af0c7",
"metadata": {},
"source": [
"## Deploy\n",
"Using same instance to deploy the model.\n",
"In case of different deployment instance, launch a deployment inf1 instance and copy the AWS Neuron optimized tensorflow frozen graph graph_opt_neuron_656x368.pb to the deployment inf1 instance. The smallest instance type inf1.xlarge is sufficient for this demo.\n",
"\n",
"Your graph_opt_neuron_656x368.pb can now be plugged into https://github.com/ildoonet seemlessly if you have tensorflow-neuron installed. When it is used at runtime, please ensure that the image resolution is the same as compile-time image resolution, i. e., 656x368.\n",
"\n",
"Measure performance on the compiled frozen graph using dummy inputs.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0481d049",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Copyright (C) 2020, Amazon.com. All Rights Reserved\n",
"\"\"\"\n",
"import os\n",
"import atexit\n",
"import time\n",
"import math\n",
"import json\n",
"from collections import OrderedDict, Counter\n",
"from contextlib import contextmanager, ContextDecorator\n",
"from functools import wraps\n",
"from tensorflow.python.client import session\n",
"from tensorflow.python.platform import tf_logging as logging\n",
"\n",
"\n",
"class measure_performance(ContextDecorator):\n",
" \"\"\"Convenient tool for performance measurements.\n",
" Can be apply on tensorflow session.run, tf-serving unary gRPC calls, or a given custom function.\n",
" Usage:\n",
" To generate performance report for the entire Python or gRPC-client process, insert\n",
" the following function call before running inferences:\n",
" `tfn.measure_performance()`\n",
" Then latency/throughput report will be generated when the process terminates.\n",
" Alternatively, it is possible to use `tfn.measure_performance` programmatically\n",
" as a context manager. Performance measurement will be done for all inferences\n",
" happening under this context. Report will be displayed as INFO level log when exiting\n",
" the context. It is also possible to obtain a JSON format report in Python.\n",
" For example:\n",
" ```\n",
" with tfn.measure_performance() as perf:\n",
" ... (run some inferences) ...\n",
" report_json = perf.report()\n",
" report_full_json = perf.report(verbosity=1)\n",
" ```\n",
" \"\"\"\n",
"\n",
" def __init__(self, func=None, window_size=1):\n",
" self.perf_tracker = PerformanceTracker(window_size)\n",
" atexit.register(self.perf_tracker.report)\n",
" self._original_run = session.Session.run\n",
" self._original_grpc_call = None\n",
" if callable(func):\n",
" self.perf_tracker.register_func(self._track_performance(func))\n",
" else:\n",
" session.Session.run = self._track_performance(session.Session.run)\n",
" try:\n",
" import grpc\n",
" from tensorflow_serving.apis import prediction_service_pb2_grpc\n",
" dummy_stub = prediction_service_pb2_grpc.PredictionServiceStub(grpc.insecure_channel(''))\n",
" self._grpc_callable_type = type(dummy_stub.Predict)\n",
" self._original_grpc_call = self._grpc_callable_type.__call__\n",
" except ImportError:\n",
" pass\n",
" if callable(self._original_grpc_call):\n",
" self._grpc_callable_type.__call__ = self._track_performance(\n",
" grpc._channel._UnaryUnaryMultiCallable.__call__\n",
" )\n",
"\n",
" def __enter__(self):\n",
" return self.perf_tracker\n",
"\n",
" def __exit__(self, *exc):\n",
" atexit.unregister(self.perf_tracker.report)\n",
" self.perf_tracker.report()\n",
" session.Session.run = self._original_run\n",
" if self._original_grpc_call is not None:\n",
" self._grpc_callable_type.__call__ = self._original_grpc_call\n",
" return False\n",
"\n",
" def _track_performance(self, func):\n",
" @wraps(func)\n",
" def wrapper(*args, **kwargs):\n",
" start = time.time()\n",
" result = func(*args, **kwargs)\n",
" end = time.time()\n",
" self.perf_tracker.add_timestamps(start, end)\n",
" return result\n",
" return wrapper\n",
"\n",
"\n",
"class PerformanceTracker(ContextDecorator):\n",
"\n",
" description = (\n",
" \"Latency unit: second. Throughput unit: number of batched inferences per second. \"\n",
" \"Reported throughput is a lower bound of the actual throughput as inferences \"\n",
" \"spanning across window boundaries are not counted towards any of the windows. \"\n",
" \"'Quiet' periods (i. e., window buckets where the inference function is not called) \"\n",
" \"are not counted towards the reported average throughput.\"\n",
" )\n",
"\n",
" def __init__(self, window_size):\n",
" self.window_size = window_size\n",
" self.timestamps_list = []\n",
" self._func = None\n",
"\n",
" def __call__(self, *args, **kwargs):\n",
" return self._func(*args, **kwargs)\n",
"\n",
" def register_func(self, func):\n",
" self._func = func\n",
"\n",
" def add_timestamps(self, start, end):\n",
" self.timestamps_list.append([start, end])\n",
"\n",
" def report(self, verbosity=0):\n",
" if self.timestamps_list:\n",
" latency_list = [end - start for start, end in self.timestamps_list]\n",
" latency_json = {\n",
" 'p50': percentile(latency_list, 50),\n",
" 'p90': percentile(latency_list, 90),\n",
" 'p99': percentile(latency_list, 99),\n",
" 'p100': percentile(latency_list, 100),\n",
" }\n",
" bucketed_timestamps = [self._get_bucket(start, end) for start, end in self.timestamps_list]\n",
" counted_buckets = Counter(item for item in bucketed_timestamps if item is not None)\n",
" bucket_throughputs = [(key, value / self.window_size) for key, value in sorted(counted_buckets.items())]\n",
" busy_throughputs = list(OrderedDict((key, value) for key, value in bucket_throughputs).values())\n",
" throughput_json = {\n",
" 'peak': max(busy_throughputs),\n",
" 'median': percentile(busy_throughputs, 50),\n",
" 'average': sum(busy_throughputs) / len(busy_throughputs),\n",
" }\n",
" if verbosity > 0:\n",
" throughput_json['trend'] = busy_throughputs\n",
" report_json = {\n",
" 'pid': os.getpid(),\n",
" 'throughput': throughput_json,\n",
" 'latency': latency_json,\n",
" 'description': PerformanceTracker.description,\n",
" }\n",
" with _logging_show_info():\n",
" logging.info('performance report:\\n{}'.format(json.dumps(report_json, indent=4)))\n",
" return report_json\n",
"\n",
" def _get_bucket(self, start, end):\n",
" bucketed_start = math.floor(start / self.window_size) * self.window_size\n",
" bucketed_end = math.ceil(end / self.window_size) * self.window_size\n",
" if bucketed_end - bucketed_start == self.window_size:\n",
" return bucketed_start\n",
" else:\n",
" return None\n",
"\n",
"\n",
"def percentile(number_list, percent):\n",
" pos_float = len(number_list) * percent / 100\n",
" max_pos = len(number_list) - 1\n",
" pos_floor = min(math.floor(pos_float), max_pos)\n",
" pos_ceil = min(math.ceil(pos_float), max_pos)\n",
" number_list = sorted(number_list)\n",
" return number_list[pos_ceil] if pos_float - pos_floor > 0.5 else number_list[pos_floor]\n",
"\n",
"\n",
"@contextmanager\n",
"def _logging_show_info():\n",
" try:\n",
" verbosity = logging.get_verbosity()\n",
" logging.set_verbosity(logging.INFO)\n",
" yield\n",
" finally:\n",
" logging.set_verbosity(verbosity)"
]
},
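The `percentile` helper and the window bucketing in `PerformanceTracker._get_bucket` above are easy to check in isolation. Below they are restated as standalone functions with made-up sample values:

```python
import math

def percentile(number_list, percent):
    # Nearest-position percentile, as in the notebook's helper above.
    pos_float = len(number_list) * percent / 100
    max_pos = len(number_list) - 1
    pos_floor = min(math.floor(pos_float), max_pos)
    pos_ceil = min(math.ceil(pos_float), max_pos)
    number_list = sorted(number_list)
    return number_list[pos_ceil] if pos_float - pos_floor > 0.5 else number_list[pos_floor]

def get_bucket(start, end, window_size=1):
    # An inference counts toward a window only if it fits entirely inside it;
    # calls spanning a window boundary are dropped, hence the "lower bound"
    # caveat in the report description.
    bucketed_start = math.floor(start / window_size) * window_size
    bucketed_end = math.ceil(end / window_size) * window_size
    if bucketed_end - bucketed_start == window_size:
        return bucketed_start
    return None

latencies = [0.11, 0.10, 0.12, 0.13, 0.09]  # made-up sample latencies
print(percentile(latencies, 50))    # -> 0.11
print(percentile(latencies, 100))   # -> 0.13

print(get_bucket(3.2, 3.9))  # -> 3 (fits inside the [3, 4) window)
print(get_bucket(3.8, 4.1))  # -> None (spans a window boundary)
```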
{
"cell_type": "code",
"execution_count": null,
"id": "960c6aa9",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Below are the inputs for compiled frozen graph \n",
"\n",
"pb_path is a /path/graph_opt_neuron_656x368.pb\n",
"num_thread = 8 ( Number of threads that work on each tensorflow session ) \n",
"batch_size =1 ( batch_size )\n",
"net_resolution ,default=656x368\n",
"num_inferences = 200\n",
"\"\"\"\n",
"import os\n",
"from concurrent import futures\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"import tensorflow.neuron as tfn\n",
"\n",
"def run_with_dummy(sess, dummy_feed_dict, num_inferences):\n",
" for _ in range(num_inferences):\n",
" sess.run('Openpose/concat_stage7:0', dummy_feed_dict)\n",
" \n",
"def main():\n",
" NUM_NEURON_CORES = 16\n",
" pb_path = './graph_opt_neuron_656x368.pb'\n",
" num_thread = 8\n",
" batch_size = 1\n",
" net_resolution = '656x368'\n",
" num_inferences = 200\n",
" dim_w, dim_h = net_resolution.split('x')\n",
" dim_w = int(dim_w)\n",
" dim_h = int(dim_h)\n",
" graph_def = tf.GraphDef()\n",
" with open(pb_path, 'rb') as f:\n",
" graph_def.ParseFromString(f.read())\n",
" \n",
" graph_def = tfn.graph_util.tag_multicore(graph_def, NUM_NEURON_CORES)\n",
" \n",
" with tfn.measure_performance() as perf:\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" input_name = 'image:0'\n",
" input_shape = sess.graph.get_tensor_by_name(input_name).shape.as_list()\n",
" input_shape[0] = batch_size\n",
" input_shape[1] = dim_h\n",
" input_shape[2] = dim_w\n",
" dummy_feed_dict = {input_name: np.zeros(input_shape).astype(np.float32)}\n",
" with futures.ThreadPoolExecutor(max_workers=num_thread) as executor:\n",
" fut_list = [executor.submit(run_with_dummy, sess, dummy_feed_dict, num_inferences) for _ in range(num_thread)]\n",
" res_list = [fut.result() for fut in fut_list] \n",
"\n",
"main()\n",
"\n",
"# Sample output will look like below:\n",
"# INFO:tensorflow:performance report:\n",
"# {\n",
"# \"pid\": 17713,\n",
"# \"throughput\": {\n",
"# \"peak\": 66.0,\n",
"# \"median\": 64.0,\n",
"# \"average\": 61.56521739130435\n",
"# },\n",
"# \"latency\": {\n",
"# \"p50\": 0.1106414794921875,\n",
"# \"p90\": 0.11212301254272461,\n",
"# \"p99\": 0.11337876319885254,\n",
"# \"p100\": 7.08282732963562\n",
"# },\n",
"# \"description\": \"Latency unit: second. Throughput unit: number of batched inferences per second. Reported throughput is a lower bound of the actual throughput as inferences spanning across window boundaries are not counted towards any of the windows. 'Quiet' periods (i. e., window buckets where the inference function is not called) are not counted towards the reported average throughput.\"\n",
"# }"
]
},
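The benchmarking cell above fans out `num_thread` workers that each call `sess.run` in a loop. The same fan-out pattern can be sketched with the standard library alone; `fake_sess_run` below is a stand-in for the TensorFlow session, used only for illustration:

```python
from concurrent import futures
import threading

counter_lock = threading.Lock()
count = 0

def run_with_dummy(sess_run, dummy, num_inferences):
    # Mirrors the notebook's worker: call the inference function in a loop.
    global count
    for _ in range(num_inferences):
        sess_run(dummy)
        with counter_lock:
            count += 1

def fake_sess_run(feed):
    # Stand-in for sess.run('Openpose/concat_stage7:0', feed_dict).
    return feed

num_thread, num_inferences = 8, 25
with futures.ThreadPoolExecutor(max_workers=num_thread) as executor:
    fut_list = [executor.submit(run_with_dummy, fake_sess_run, {}, num_inferences)
                for _ in range(num_thread)]
    for fut in fut_list:
        fut.result()  # propagate any worker exception

print(count)  # -> 200 total inferences across all workers
```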
{
"cell_type": "raw",
"id": "4f15e776",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{
"cells": [
{
"cell_type": "markdown",
"id": "caff04ba",
"metadata": {},
"source": [
"# Running OpenPose on Inferentia\n"
]
},
{
"cell_type": "markdown",
"id": "09b2919a",
"metadata": {},
"source": [
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"id": "4dcf9bb1",
"metadata": {},
"source": [
"## Introduction:\n",
"\n",
"In this tutorial we will compile and deploy Openpose model for Inferentia. This jupyter notebook should run on an inf1.6xlarge instance for compilation and inference. The inference part of this tutorial requires inf1.6xlarge and not the compilation itself. For simplicity we will run this tutorial on a single instance but in real life scenario the compilation can be done on a compute c5.4xlarge instance and the deployment on the inf1 instance family.\n",
"\n",
"In this tutorial we provide two main sections:\n",
"1. Compile the OpenPose model on inf1x6large.\n",
"2. Infer the same compiled model on inf1x6large.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.\n"
]
},
{
"cell_type": "markdown",
"id": "04ae0838",
"metadata": {},
"source": [
"## Acknowledgement:\n",
"\n",
"Many thanks to https://github.com/ildoonet for providing pretrained model as well as the image preprocessing/pose estimating infrastructure."
]
},
{
"cell_type": "markdown",
"id": "d0d6d08e",
"metadata": {},
"source": [
"## Download tensorflow pose net frozen graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1926d4e3",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"!wget -c --tries=2 $( wget -q -O - http://www.mediafire.com/file/qlzzr20mpocnpa3/graph_opt.pb | grep -o 'http*://download[^\"]*' | tail -n 1 ) -O graph_opt.pb\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "83eb578b",
"metadata": {},
"source": [
"## Compile\n",
"Compile the pose net frozen graph into an AWS Neuron compatible form. The network input image resolution is adjustable with the argument --net_resolution (e.g., --net_resolution=656x368). The compiled model can accept input of arbitrary batch size at runtime."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "362f322e",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Usage: python convert_graph_opt.py /path/to/graph_opt.pb /path/to/graph_opt_neuron.pb\n",
"\"\"\"\n",
"#import argparse\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto\n",
"import tensorflow.neuron as tfn\n",
"\n",
"\n",
"def compile():\n",
" #parser = argparse.ArgumentParser()\n",
" #parser.add_argument('input_pb_path', help='Input serialized GraphDef protobuf')\n",
"    #parser.add_argument('output_pb_path', help='Output serialized GraphDef protobuf')\n",
" #parser.add_argument('--net_resolution', default='656x368', help='Network resolution in WxH format, e. g., --net_resolution=656x368')\n",
" #parser.add_argument('--debug_verify', action='store_true')\n",
" #args = parser.parse_args()\n",
" \n",
" input_pb_path = './graph_opt.pb'\n",
" net_resolution = '656x368'\n",
" output_pb_path = './graph_opt_neuron_' + net_resolution + '.pb'\n",
" \n",
"    debug_verify = True  # run a numerical check of the compiled graph against the reference\n",
" dim_w, dim_h = net_resolution.split('x')\n",
" dim_w = int(dim_w)\n",
" dim_h = int(dim_h)\n",
" graph_def = tf.GraphDef()\n",
" with open(input_pb_path, 'rb') as f:\n",
" graph_def.ParseFromString(f.read())\n",
"\n",
" if debug_verify:\n",
" np.random.seed(0)\n",
" feed_dict = {'image:0': np.random.rand(1, dim_h, dim_w, 3)}\n",
" output_name = 'Openpose/concat_stage7:0'\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" result_reference = sess.run(output_name, feed_dict)\n",
"\n",
" preprocessing_ops = {'preprocess_divide', 'preprocess_divide/y', 'preprocess_subtract', 'preprocess_subtract/y'}\n",
" graph_def = nhwc_to_nchw(graph_def, preprocessing_ops)\n",
" graph_def = inline_float32_to_float16(graph_def, preprocessing_ops)\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" no_fuse_ops = preprocessing_ops.union({'Openpose/concat_stage7'})\n",
" infer_graph = tfn.graph_util.inference_graph_from_session(\n",
" sess, shape_feed_dict={'image:0': [1, dim_h, dim_w, 3]}, output_tensors=['Openpose/concat_stage7:0'],\n",
" no_fuse_ops=no_fuse_ops, dynamic_batch_size=True,\n",
" )\n",
" with open(output_pb_path, 'wb') as f:\n",
" f.write(infer_graph.as_graph_def().SerializeToString())\n",
"\n",
" if debug_verify:\n",
" with tf.Session(graph=infer_graph) as sess:\n",
" result_compiled = sess.run(output_name, feed_dict)\n",
" np.testing.assert_allclose(result_compiled, result_reference, rtol=1e-2, atol=1e-3)\n",
"\n",
"\n",
"def inline_float32_to_float16(graph_def, preprocessing_ops):\n",
" float32_enum = tf.float32.as_datatype_enum\n",
" float16_enum = tf.float16.as_datatype_enum\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(graph_def, name='')\n",
" graph_def = graph.as_graph_def()\n",
" for node in graph_def.node:\n",
" if node.name in preprocessing_ops or node.op == 'Placeholder':\n",
" cast_input_node_name = node.name\n",
" continue\n",
" if node.op == 'Const':\n",
" if node.attr['dtype'].type == float32_enum:\n",
" node.attr['dtype'].type = float16_enum\n",
" tensor_def = node.attr['value'].tensor\n",
" tensor_def.dtype = float16_enum\n",
" if tensor_def.tensor_content:\n",
" const_np = np.frombuffer(tensor_def.tensor_content, dtype=np.float32).astype(np.float16)\n",
" tensor_def.tensor_content = const_np.tobytes()\n",
" elif len(tensor_def.float_val):\n",
" const_np = np.array(tensor_def.float_val).astype(np.float16).view(np.uint16)\n",
" tensor_def.float_val[:] = []\n",
" tensor_def.half_val[:] = list(const_np)\n",
" else:\n",
" raise NotImplementedError\n",
" elif 'T' in node.attr and node.attr['T'].type == float32_enum:\n",
" node.attr['T'].type = float16_enum\n",
" for node in graph_def.node:\n",
" if node.name == cast_input_node_name:\n",
"            node.name = '{}_PreCastFloat32ToFloat16'.format(node.name)\n",
" input_node = node\n",
" break\n",
" cast_input_node = _gen_cast_node_def(cast_input_node_name, tf.float16, input_node)\n",
"\n",
" output_node = graph_def.node[-1]\n",
" cast_output_node_name = output_node.name\n",
"    output_node.name = '{}_PreCastFloat16ToFloat32'.format(output_node.name)\n",
" cast_output_node = _gen_cast_node_def(cast_output_node_name, tf.float32, output_node)\n",
"\n",
" preprocessing_ops.add(input_node.name)\n",
" new_graph_def = tf.GraphDef()\n",
" new_graph_def.node.extend(graph_def.node)\n",
" new_graph_def.node.append(cast_input_node)\n",
" new_graph_def.node.append(cast_output_node)\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(new_graph_def, name='')\n",
" return graph.as_graph_def()\n",
"\n",
"\n",
"def nhwc_to_nchw(graph_def, preprocessing_ops):\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(graph_def, name='')\n",
" graph_def = graph.as_graph_def()\n",
" node_name_to_node = {node.name: node for node in graph_def.node}\n",
" for node in graph_def.node:\n",
" if node.name in preprocessing_ops or node.op == 'Placeholder':\n",
" transpose_input_node_name = node.name\n",
" continue\n",
" if node.op == 'Conv2D':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" strides = node.attr['strides'].list.i\n",
" strides[:] = [strides[0], strides[3], strides[1], strides[2]]\n",
" elif node.op == 'BiasAdd':\n",
" if node.name != 'probs/BiasAdd':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" elif node.op == 'MaxPool':\n",
" node.attr['data_format'].s = b'NCHW'\n",
" ksize = node.attr['ksize'].list.i\n",
" ksize[:] = [ksize[0], ksize[3], ksize[1], ksize[2]]\n",
" strides = node.attr['strides'].list.i\n",
" strides[:] = [strides[0], strides[3], strides[1], strides[2]]\n",
" elif node.op in {'Concat', 'ConcatV2'}:\n",
" node_axes = node_name_to_node[node.input[-1]]\n",
" node_axes.attr['value'].tensor.int_val[:] = [1]\n",
" for node in graph_def.node:\n",
" if node.name == transpose_input_node_name:\n",
" node.name = '{}_PreTransposeNHWC2NCHW'.format(node.name)\n",
" input_node = node\n",
" break\n",
" transpose_input_node, transpose_input_perm_node = _gen_transpose_def(transpose_input_node_name, [0, 3, 1, 2], input_node)\n",
"\n",
" output_node = graph_def.node[-1]\n",
" transpose_output_node_name = output_node.name\n",
" output_node.name = '{}_PreTransposeNCHW2NHWC'.format(output_node.name)\n",
" transpose_output_node, transpose_output_perm_node = _gen_transpose_def(transpose_output_node_name, [0, 2, 3, 1], output_node)\n",
"\n",
" preprocessing_ops.add(input_node.name)\n",
" preprocessing_ops.add(transpose_input_perm_node.name)\n",
" new_graph_def = tf.GraphDef()\n",
" new_graph_def.node.extend(graph_def.node)\n",
" new_graph_def.node.append(transpose_input_perm_node)\n",
" new_graph_def.node.append(transpose_input_node)\n",
" new_graph_def.node.append(transpose_output_perm_node)\n",
" new_graph_def.node.append(transpose_output_node)\n",
" graph = tf.Graph()\n",
" with graph.as_default():\n",
" tf.import_graph_def(new_graph_def, name='')\n",
" return graph.as_graph_def()\n",
"\n",
"\n",
"def _gen_cast_node_def(name, target_dtype, input_node):\n",
" cast_node = tf.NodeDef(name=name, op='Cast')\n",
" cast_node.input.append(input_node.name)\n",
" cast_node.attr['DstT'].type = target_dtype.as_datatype_enum\n",
" cast_node.attr['SrcT'].type = input_node.attr['T'].type\n",
" cast_node.attr['Truncate'].b = False\n",
" return cast_node\n",
"\n",
"\n",
"def _gen_transpose_def(name, perm, input_node):\n",
" perm_node = tf.NodeDef(name='{}/perm'.format(name), op='Const')\n",
" perm_node.attr['dtype'].type = tf.int32.as_datatype_enum\n",
" tensor_def = perm_node.attr['value'].tensor\n",
" tensor_def.dtype = tf.int32.as_datatype_enum\n",
" tensor_def.tensor_shape.dim.append(TensorShapeProto.Dim(size=4))\n",
" tensor_def.tensor_content = np.array(perm, dtype=np.int32).tobytes()\n",
" transpose_node = tf.NodeDef(name=name, op='Transpose')\n",
" transpose_node.input.append(input_node.name)\n",
" transpose_node.input.append(perm_node.name)\n",
" transpose_node.attr['T'].type = input_node.attr['T'].type\n",
" transpose_node.attr['Tperm'].type = tf.int32.as_datatype_enum\n",
" return transpose_node, perm_node\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88c41e01",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"compile()\n",
"\n",
"# Sample output will look like below:\n",
"# WARNING:tensorflow:From <ipython-input-3-27d3844cd753>:47: inference_graph_from_session (from tensorflow_neuron.python.graph_util) is deprecated and will be removed in a future version.\n",
"# Instructions for updating:\n",
"# Please refer to AWS documentation on Neuron integrated TensorFlow 2.0.\n",
"# INFO:tensorflow:Froze 0 variables.\n",
"# INFO:tensorflow:Converted 0 variables to const ops.\n",
"# INFO:tensorflow:fusing subgraph {subgraph neuron_op_ed41d2deb8c54255 with input tensors [\"<tf.Tensor 'preprocess_subtract0/_0:0' shape=(1, 3, 368, 656) dtype=float16>\"], output tensors [\"<tf.Tensor 'Openpose/concat_stage7_PreCastFloat16ToFloat32:0' shape=(1, 46, 82, 57) dtype=float16>\"]} with neuron-cc\n",
"# INFO:tensorflow:Number of operations in TensorFlow session: 474\n",
"# INFO:tensorflow:Number of operations after tf.neuron optimizations: 474\n",
"# INFO:tensorflow:Number of operations placed on Neuron runtime: 465"
]
},
{
"cell_type": "markdown",
"id": "5a9af0c7",
"metadata": {},
"source": [
"## Deploy\n",
"We use the same instance to deploy the model.\n",
"If you deploy on a different instance, launch a deployment inf1 instance and copy the AWS Neuron optimized TensorFlow frozen graph graph_opt_neuron_656x368.pb to it. The smallest instance type, inf1.xlarge, is sufficient for this demo.\n",
"\n",
"Your graph_opt_neuron_656x368.pb can now be plugged into https://github.com/ildoonet seamlessly if you have tensorflow-neuron installed. When it is used at runtime, please ensure that the image resolution is the same as the compile-time resolution, i.e., 656x368.\n",
"\n",
"Measure performance on the compiled frozen graph using dummy inputs.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0481d049",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Copyright (C) 2020, Amazon.com. All Rights Reserved\n",
"\"\"\"\n",
"import os\n",
"import atexit\n",
"import time\n",
"import math\n",
"import json\n",
"from collections import OrderedDict, Counter\n",
"from contextlib import contextmanager, ContextDecorator\n",
"from functools import wraps\n",
"from tensorflow.python.client import session\n",
"from tensorflow.python.platform import tf_logging as logging\n",
"\n",
"\n",
"class measure_performance(ContextDecorator):\n",
" \"\"\"Convenient tool for performance measurements.\n",
"    Can be applied to tensorflow session.run, tf-serving unary gRPC calls, or a given custom function.\n",
" Usage:\n",
" To generate performance report for the entire Python or gRPC-client process, insert\n",
" the following function call before running inferences:\n",
" `tfn.measure_performance()`\n",
" Then latency/throughput report will be generated when the process terminates.\n",
" Alternatively, it is possible to use `tfn.measure_performance` programmatically\n",
" as a context manager. Performance measurement will be done for all inferences\n",
"    happening under this context. A report will be displayed as an INFO-level log when exiting\n",
" the context. It is also possible to obtain a JSON format report in Python.\n",
" For example:\n",
" ```\n",
" with tfn.measure_performance() as perf:\n",
" ... (run some inferences) ...\n",
" report_json = perf.report()\n",
" report_full_json = perf.report(verbosity=1)\n",
" ```\n",
" \"\"\"\n",
"\n",
" def __init__(self, func=None, window_size=1):\n",
" self.perf_tracker = PerformanceTracker(window_size)\n",
" atexit.register(self.perf_tracker.report)\n",
" self._original_run = session.Session.run\n",
" self._original_grpc_call = None\n",
" if callable(func):\n",
" self.perf_tracker.register_func(self._track_performance(func))\n",
" else:\n",
" session.Session.run = self._track_performance(session.Session.run)\n",
" try:\n",
" import grpc\n",
" from tensorflow_serving.apis import prediction_service_pb2_grpc\n",
" dummy_stub = prediction_service_pb2_grpc.PredictionServiceStub(grpc.insecure_channel(''))\n",
" self._grpc_callable_type = type(dummy_stub.Predict)\n",
" self._original_grpc_call = self._grpc_callable_type.__call__\n",
" except ImportError:\n",
" pass\n",
" if callable(self._original_grpc_call):\n",
" self._grpc_callable_type.__call__ = self._track_performance(\n",
" grpc._channel._UnaryUnaryMultiCallable.__call__\n",
" )\n",
"\n",
" def __enter__(self):\n",
" return self.perf_tracker\n",
"\n",
" def __exit__(self, *exc):\n",
" atexit.unregister(self.perf_tracker.report)\n",
" self.perf_tracker.report()\n",
" session.Session.run = self._original_run\n",
" if self._original_grpc_call is not None:\n",
" self._grpc_callable_type.__call__ = self._original_grpc_call\n",
" return False\n",
"\n",
" def _track_performance(self, func):\n",
" @wraps(func)\n",
" def wrapper(*args, **kwargs):\n",
" start = time.time()\n",
" result = func(*args, **kwargs)\n",
" end = time.time()\n",
" self.perf_tracker.add_timestamps(start, end)\n",
" return result\n",
" return wrapper\n",
"\n",
"\n",
"class PerformanceTracker(ContextDecorator):\n",
"\n",
" description = (\n",
" \"Latency unit: second. Throughput unit: number of batched inferences per second. \"\n",
" \"Reported throughput is a lower bound of the actual throughput as inferences \"\n",
" \"spanning across window boundaries are not counted towards any of the windows. \"\n",
" \"'Quiet' periods (i. e., window buckets where the inference function is not called) \"\n",
" \"are not counted towards the reported average throughput.\"\n",
" )\n",
"\n",
" def __init__(self, window_size):\n",
" self.window_size = window_size\n",
" self.timestamps_list = []\n",
" self._func = None\n",
"\n",
" def __call__(self, *args, **kwargs):\n",
" return self._func(*args, **kwargs)\n",
"\n",
" def register_func(self, func):\n",
" self._func = func\n",
"\n",
" def add_timestamps(self, start, end):\n",
" self.timestamps_list.append([start, end])\n",
"\n",
" def report(self, verbosity=0):\n",
" if self.timestamps_list:\n",
" latency_list = [end - start for start, end in self.timestamps_list]\n",
" latency_json = {\n",
" 'p50': percentile(latency_list, 50),\n",
" 'p90': percentile(latency_list, 90),\n",
" 'p99': percentile(latency_list, 99),\n",
" 'p100': percentile(latency_list, 100),\n",
" }\n",
" bucketed_timestamps = [self._get_bucket(start, end) for start, end in self.timestamps_list]\n",
" counted_buckets = Counter(item for item in bucketed_timestamps if item is not None)\n",
" bucket_throughputs = [(key, value / self.window_size) for key, value in sorted(counted_buckets.items())]\n",
" busy_throughputs = list(OrderedDict((key, value) for key, value in bucket_throughputs).values())\n",
" throughput_json = {\n",
" 'peak': max(busy_throughputs),\n",
" 'median': percentile(busy_throughputs, 50),\n",
" 'average': sum(busy_throughputs) / len(busy_throughputs),\n",
" }\n",
" if verbosity > 0:\n",
" throughput_json['trend'] = busy_throughputs\n",
" report_json = {\n",
" 'pid': os.getpid(),\n",
" 'throughput': throughput_json,\n",
" 'latency': latency_json,\n",
" 'description': PerformanceTracker.description,\n",
" }\n",
" with _logging_show_info():\n",
" logging.info('performance report:\\n{}'.format(json.dumps(report_json, indent=4)))\n",
" return report_json\n",
"\n",
" def _get_bucket(self, start, end):\n",
" bucketed_start = math.floor(start / self.window_size) * self.window_size\n",
" bucketed_end = math.ceil(end / self.window_size) * self.window_size\n",
" if bucketed_end - bucketed_start == self.window_size:\n",
" return bucketed_start\n",
" else:\n",
" return None\n",
"\n",
"\n",
"def percentile(number_list, percent):\n",
" pos_float = len(number_list) * percent / 100\n",
" max_pos = len(number_list) - 1\n",
" pos_floor = min(math.floor(pos_float), max_pos)\n",
" pos_ceil = min(math.ceil(pos_float), max_pos)\n",
" number_list = sorted(number_list)\n",
" return number_list[pos_ceil] if pos_float - pos_floor > 0.5 else number_list[pos_floor]\n",
"\n",
"\n",
"@contextmanager\n",
"def _logging_show_info():\n",
" try:\n",
" verbosity = logging.get_verbosity()\n",
" logging.set_verbosity(logging.INFO)\n",
" yield\n",
" finally:\n",
" logging.set_verbosity(verbosity)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "960c6aa9",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Below are the inputs for the compiled frozen graph:\n",
"\n",
"pb_path: path to graph_opt_neuron_656x368.pb\n",
"num_thread = 8 (number of threads that drive the shared tensorflow session)\n",
"batch_size = 1\n",
"net_resolution: default 656x368\n",
"num_inferences = 200\n",
"\"\"\"\n",
"import os\n",
"from concurrent import futures\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"import tensorflow.neuron as tfn\n",
"\n",
"def run_with_dummy(sess, dummy_feed_dict, num_inferences):\n",
" for _ in range(num_inferences):\n",
" sess.run('Openpose/concat_stage7:0', dummy_feed_dict)\n",
" \n",
"def main():\n",
" NUM_NEURON_CORES = 16\n",
" pb_path = './graph_opt_neuron_656x368.pb'\n",
" num_thread = 8\n",
" batch_size = 1\n",
" net_resolution = '656x368'\n",
" num_inferences = 200\n",
" dim_w, dim_h = net_resolution.split('x')\n",
" dim_w = int(dim_w)\n",
" dim_h = int(dim_h)\n",
" graph_def = tf.GraphDef()\n",
" with open(pb_path, 'rb') as f:\n",
" graph_def.ParseFromString(f.read())\n",
" \n",
" graph_def = tfn.graph_util.tag_multicore(graph_def, NUM_NEURON_CORES)\n",
" \n",
" with tfn.measure_performance() as perf:\n",
" with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.import_graph_def(graph_def, name='')\n",
" input_name = 'image:0'\n",
" input_shape = sess.graph.get_tensor_by_name(input_name).shape.as_list()\n",
" input_shape[0] = batch_size\n",
" input_shape[1] = dim_h\n",
" input_shape[2] = dim_w\n",
" dummy_feed_dict = {input_name: np.zeros(input_shape).astype(np.float32)}\n",
" with futures.ThreadPoolExecutor(max_workers=num_thread) as executor:\n",
" fut_list = [executor.submit(run_with_dummy, sess, dummy_feed_dict, num_inferences) for _ in range(num_thread)]\n",
" res_list = [fut.result() for fut in fut_list] \n",
"\n",
"main()\n",
"\n",
"# Sample output will look like below:\n",
"# INFO:tensorflow:performance report:\n",
"# {\n",
"# \"pid\": 17713,\n",
"# \"throughput\": {\n",
"# \"peak\": 66.0,\n",
"# \"median\": 64.0,\n",
"# \"average\": 61.56521739130435\n",
"# },\n",
"# \"latency\": {\n",
"# \"p50\": 0.1106414794921875,\n",
"# \"p90\": 0.11212301254272461,\n",
"# \"p99\": 0.11337876319885254,\n",
"# \"p100\": 7.08282732963562\n",
"# },\n",
"# \"description\": \"Latency unit: second. Throughput unit: number of batched inferences per second. Reported throughput is a lower bound of the actual throughput as inferences spanning across window boundaries are not counted towards any of the windows. 'Quiet' periods (i. e., window buckets where the inference function is not called) are not counted towards the reported average throughput.\"\n",
"# }"
]
},
{
"cell_type": "raw",
"id": "4f15e776",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
</pre></body></html> | 2023-09-29T20:55:26.014Z | |
Using NeuronCore Pipeline with PyTorch Tutorial — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/tutorials/neuroncore_pipeline_pytorch.html#pytorch-tutorials-neuroncore-pipeline-pytorch | # Using NeuronCore Pipeline with PyTorch Tutorial — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Using NeuronCore Pipeline with PyTorch Tutorial[#](#using-neuroncore-pipeline-with-pytorch-tutorial "Permalink to this headline")
Table of Contents
- [Overview](#overview)
- [Setup The Environment](#setup-the-environment)
- [Run The Tutorial](#run-the-tutorial)
- [Clean up your instance/s](#clean-up-your-instance-s)
## [Overview](#id1)[#](#overview "Permalink to this headline")
In this tutorial we will benchmark the latency of a Hugging Face Transformers model deployed in model pipeline parallel mode using the NeuronCore Pipeline feature, and compare the results with the usual data parallel (multi-worker) deployment. We compile a pretrained BERT base model and run the benchmark locally.
To keep environment setup simple, we will run both compilation and deployment (inference) on a single inf1.6xlarge instance. You can take similar steps to recreate the benchmark on other instance sizes, such as inf1.xlarge.
If you already have an Inf1 instance environment ready, this tutorial is available as a Jupyter notebook at [neuroncore\_pipeline\_pytorch.ipynb](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb) and instructions can be viewed at:
- [Using NeuronCore Pipeline with PyTorch](../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html)
Instructions on how to set up the environment and run the tutorial are available in the next sections.
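For contrast, the data-parallel (multi-worker) baseline mentioned above can be sketched as several client threads driving a stubbed inference call through a thread pool; the worker counts and the sleep-based stub are assumptions for illustration, not the tutorial's code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def infer(_):
    # Stand-in for one batched inference; the sleep simulates device time
    # and, like a blocking call into a native runtime, releases the GIL.
    time.sleep(0.001)

def throughput(num_workers, num_inferences_per_worker=50):
    # Total completed inferences divided by wall-clock time.
    total = num_workers * num_inferences_per_worker
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for fut in [pool.submit(infer, i) for i in range(total)]:
            fut.result()
    return total / (time.monotonic() - start)

print(f"1 worker : {throughput(1):.0f} inferences/s")
print(f"4 workers: {throughput(4):.0f} inferences/s")
```

In this pattern throughput scales with the number of workers until the available cores are saturated, whereas pipeline mode instead spreads one model's layers across several NeuronCores to shorten each individual call.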
## [Setup The Environment](#id2)[#](#setup-the-environment "Permalink to this headline")
Launch an Inf1 instance by following the steps below; make sure to choose an inf1.6xlarge instance.
- Follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an Inf1 instance. When choosing the instance type in the EC2 console, make sure to select the correct one. For more information about Inf1 instance sizes and pricing, see the [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- When choosing an Amazon Machine Image (AMI), make sure to select a [Deep Learning AMI with Conda Options](https://docs.aws.amazon.com/dlami/latest/devguide/conda.html). Note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance.
## [Run The Tutorial](#id3)[#](#run-the-tutorial "Permalink to this headline")
After connecting to the instance from the terminal, clone the Neuron GitHub repository to the EC2 instance and then change the working directory to the tutorial directory:
```
git clone https://github.com/aws/aws-neuron-sdk.git
cd aws-neuron-sdk/src/examples/pytorch
```
The Jupyter notebook is available as a file named [neuroncore\_pipeline\_pytorch.ipynb](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb); you can either run it from a browser or run it as a script from the terminal:
- **Running tutorial from browser**
- First setup and launch the Jupyter notebook on your local browser by following instructions at [Jupyter Notebook QuickStart](../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#running-jupyter-notebook-browser)
- Open the Jupyter notebook from the menu and follow the instructions
You can also view the Jupyter notebook at:
- [Using NeuronCore Pipeline with PyTorch](../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html)
## [Clean up your instance/s](#id4)[#](#clean-up-your-instance-s "Permalink to this headline")
After you’ve finished with the instance(s) you created for this tutorial, clean up by terminating them; please follow the instructions at [Clean up your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-clean-up-your-instance).
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Using NeuronCore Pipeline with PyTorch Tutorial — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../../_static/pygments.css">
<link rel="stylesheet" href="../../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../../" id="documentation_options" src="../../../../_static/documentation_options.js"></script>
<script src="../../../../_static/jquery.js"></script>
<script src="../../../../_static/underscore.js"></script>
<script src="../../../../_static/doctools.js"></script>
<script src="../../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../../_static/contentui.js"></script>
<script src="../../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../../genindex.html">
<link rel="search" title="Search" href="../../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/torch/torch-neuron/tutorials/neuroncore_pipeline_pytorch", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../torch-setup.html">
PyTorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/tensorflow-setup.html">
TensorFlow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="using-neuroncore-pipeline-with-pytorch-tutorial">
<span id="pytorch-tutorials-neuroncore-pipeline-pytorch"></span><h1>Using NeuronCore Pipeline with PyTorch Tutorial<a class="headerlink" href="#using-neuroncore-pipeline-with-pytorch-tutorial" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of Contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#overview" id="id1">Overview</a></p></li>
<li><p><a class="reference internal" href="#setup-the-environment" id="id2">Setup The Environment</a></p></li>
<li><p><a class="reference internal" href="#run-the-tutorial" id="id3">Run The Tutorial</a></p></li>
<li><p><a class="reference internal" href="#clean-up-your-instance-s" id="id4">Clean up your instance/s</a></p></li>
</ul>
</div>
<div class="section" id="overview">
<h2><a class="toc-backref" href="#id1">Overview</a><a class="headerlink" href="#overview" title="Permalink to this headline">#</a></h2>
<p>In this tutorial we benchmark the latency of a Hugging Face Transformers model deployed in pipeline-parallel mode using the NeuronCore Pipeline feature, and compare the results with the usual data-parallel (multi-worker) deployment. We compile a pretrained BERT base model and run the benchmark locally.</p>
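Concretely, the latency comparison comes down to timing individual inference calls and summarizing the distribution. The helper below is an illustrative, framework-agnostic sketch (the function and names are not from the tutorial notebook); the notebook performs the equivalent bookkeeping around the compiled model:

```python
import time
import statistics

def benchmark_latency(infer_fn, payload, n_requests=100):
    """Call infer_fn(payload) n_requests times and report latency stats in ms."""
    latencies_ms = []
    for _ in range(n_requests):
        start = time.perf_counter()
        infer_fn(payload)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    latencies_ms.sort()
    return {
        "p50_ms": latencies_ms[len(latencies_ms) // 2],
        "p99_ms": latencies_ms[min(len(latencies_ms) - 1,
                                   int(len(latencies_ms) * 0.99))],
        "mean_ms": statistics.mean(latencies_ms),
    }
```

The same helper can time either the pipeline-parallel model or each worker of a data-parallel deployment, which makes the two configurations directly comparable.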
<p>To enable faster environment setup, we run both compilation and deployment (inference) on a single inf1.6xlarge instance. You can take similar steps to recreate the benchmark on other instance sizes, such as inf1.xlarge.</p>
<p>If you already have an Inf1 instance environment ready, this tutorial is available as a Jupyter notebook at <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb">neuroncore_pipeline_pytorch.ipynb</a> and instructions can be viewed at:</p>
<div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">Using NeuronCore Pipeline with PyTorch</a></li>
</ul>
</div>
<p>Instructions on how to set up the environment and run the tutorial are available in the following sections.</p>
</div>
<div class="section" id="setup-the-environment">
<span id="pytorch-neuroncore-pipeline-pytorch-env-setup"></span><h2><a class="toc-backref" href="#id2">Setup The Environment</a><a class="headerlink" href="#setup-the-environment" title="Permalink to this headline">#</a></h2>
<p>Launch an Inf1 instance by following the steps below; make sure to choose an inf1.6xlarge instance.</p>
<ul class="simple">
<li><p>Follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance">launch an Amazon EC2 Instance</a> to launch an Inf1 instance, making sure to select the correct instance type in the EC2 console. For more information about Inf1 instance sizes and pricing, see the <a class="reference external" href="https://aws.amazon.com/ec2/instance-types/inf1/">Inf1 web page</a>.</p></li>
<li><p>When choosing an Amazon Machine Image (AMI), make sure to select a <a class="reference external" href="https://docs.aws.amazon.com/dlami/latest/devguide/conda.html">Deep Learning AMI with Conda Options</a>. Note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and the Amazon Linux2 DLAMI; they are not supported in the Amazon Linux DLAMI.</p></li>
<li><p>After launching the instance, follow the instructions in <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux">Connect to your instance</a> to connect to it.</p></li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You can also launch the instance from AWS CLI, please see <a class="reference internal" href="../../../../general/setup/install-templates/inf1/launch-inf1-dlami-aws-cli.html#launch-inf1-dlami-aws-cli"><span class="std std-ref">AWS CLI commands to launch inf1 instances</span></a>.</p>
</div>
</div>
<div class="section" id="run-the-tutorial">
<span id="pytorch-neuroncore-pipeline-pytorch-run-tutorial"></span><h2><a class="toc-backref" href="#id3">Run The Tutorial</a><a class="headerlink" href="#run-the-tutorial" title="Permalink to this headline">#</a></h2>
<p>After connecting to the instance from the terminal, clone the Neuron GitHub repository to the EC2 instance and then change the working directory to the tutorial directory:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">aws</span><span class="o">/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">.</span><span class="n">git</span>
<span class="n">cd</span> <span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">/</span><span class="n">src</span><span class="o">/</span><span class="n">examples</span><span class="o">/</span><span class="n">pytorch</span>
</pre></div>
</div>
<p>The Jupyter notebook is available as a file named <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb">neuroncore_pipeline_pytorch.ipynb</a>. You can either run it from a browser or run it as a script from the terminal:</p>
<ul class="simple">
<li><p><strong>Running tutorial from browser</strong></p>
<ul>
<li><p>First setup and launch the Jupyter notebook on your local browser by following instructions at <a class="reference internal" href="../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#running-jupyter-notebook-browser"><span class="std std-ref">Jupyter Notebook QuickStart</span></a></p></li>
<li><p>Open the Jupyter notebook from the menu and follow the instructions</p></li>
</ul>
</li>
</ul>
<p>You can also view the Jupyter notebook at:</p>
<div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">Using NeuronCore Pipeline with PyTorch</a></li>
</ul>
</div>
</div>
<div class="section" id="clean-up-your-instance-s">
<span id="pytorch-neuroncore-pipeline-pytorch-cleanup-instances"></span><h2><a class="toc-backref" href="#id4">Clean up your instance/s</a><a class="headerlink" href="#clean-up-your-instance-s" title="Permalink to this headline">#</a></h2>
<p>After you’ve finished with the instance/s you created for this tutorial, clean up by terminating them; follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-clean-up-your-instance">Clean up your instance</a>.</p>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
</div>
</div>
</main>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
</body></html> | 2023-09-29T20:55:26.089Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v4_demo/yolo_v4_demo.rst.txt | ```
.. _tensorflow-yolo4:
Working with YOLO v4 using AWS Neuron SDK
=========================================
The :ref:`/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb` notebook contains an example of how to take an open
source YOLO v4 model and run it on AWS Inferentia.
Optimizing image pre-processing and post-processing for object detection models
-------------------------------------------------------------------------------
End-to-end object detection pipelines usually contain image
pre- and post-processing operators that cannot run efficiently on Inferentia;
DecodeJPEG and NonMaxSuppression are typical examples. In practice, we
may simply place these operators on CPU using the AWS Neuron machine
learning framework integration. However, Inferentia is such a high
performance machine learning accelerator that, once the model
successfully compiles and runs, these simple pre- and post-processing
operators can become the new performance bottleneck! In this tutorial,
we explain some commonly used tensorflow techniques for optimizing the
performance of these pre- and post-processing operators so that we can fully
unleash the potential of Inferentia.
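To see why these operators matter, note that in a pipelined service the end-to-end throughput is capped by the slowest stage. A back-of-the-envelope illustration (all stage timings below are made up for the example):

```python
def pipeline_throughput_per_s(stage_times_ms):
    """In a pipelined service, throughput is set by the slowest stage."""
    return 1000.0 / max(stage_times_ms)

# Hypothetical stage times: [decode+resize, model inference, NMS] in ms.
slow_model = pipeline_throughput_per_s([6.0, 20.0, 5.0])  # model-bound
fast_model = pipeline_throughput_per_s([6.0, 2.0, 5.0])   # CPU-bound
# A 10x faster model only raises throughput from 50 to ~167 images/s,
# because JPEG decoding (6 ms) has become the new bottleneck.
```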
1. Write JPEG decoding and image shifting/scaling as tensorflow
operators.
In ``yolo_v4_coco_saved_model.py``, you may find the following code
snippet.
.. code:: python

   import tensorflow as tf

   ...

   def YOLOv4(...
       ...
       x, image_shape = layers.Lambda(lambda t: preprocessor(t, input_shape))(inputs)

       # cspdarknet53
       x = conv2d_unit(x, i32, 3, strides=1, padding='same')
       ...

   def decode_jpeg_resize(input_tensor, image_size):
       tensor = tf.image.decode_png(input_tensor, channels=3)
       shape = tf.shape(tensor)
       tensor = tf.cast(tensor, tf.float32)
       tensor = tf.image.resize(tensor, image_size)
       tensor /= 255.0
       return tf.cast(tensor, tf.float16), shape

   def preprocessor(input_tensor, image_size):
       with tf.name_scope('Preprocessor'):
           tensor = tf.map_fn(
               partial(decode_jpeg_resize, image_size=image_size), input_tensor,
               dtype=(tf.float16, tf.int32), back_prop=False, parallel_iterations=16)
       return tensor
Compared with the implementation in `the original
repo <https://github.com/miemie2013/Keras-YOLOv4/blob/f0a6b379a362dc3f2d1ef5bd0e58933ed6490ff3/model/yolov4.py>`__,
the difference is the use of ``tf.image.decode_png`` and
``tf.image.resize``, along with a small number of scaling/casting
operators. After this modification, the generated tensorflow SavedModel
takes JPEG image raw bytes as input instead of a float32 array
representing the image. When the image resolution is 608x608, this
technique effectively reduces the input image size from 4.4 MB to the
size of a typical JPEG image, which can be as little as a few hundred KB.
When the tensorflow SavedModel is deployed through
`tensorflow/serving <https://github.com/tensorflow/serving>`__, this
technique can very effectively reduce the gRPC transfer overhead of
input images.
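The 4.4 MB figure follows directly from the tensor shape: a 608x608 RGB image stored as float32 costs 4 bytes per element. A quick sanity check:

```python
# Size of a 608x608 RGB float32 input tensor, before this optimization.
height, width, channels = 608, 608, 3
bytes_per_float32 = 4
raw_input_bytes = height * width * channels * bytes_per_float32
print(raw_input_bytes)  # 4435968 bytes, i.e. ~4.4 MB per image,
                        # versus a typical JPEG of a few hundred KB
```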
2. Replace non-max suppression (NMS) operations with
``tf.image.combined_non_max_suppression``.
Another difference of our implementation is the treatment of non-max
suppression, a commmonly used operation for removing redundant bounding
boxes that overlap with other boxes. In an object detection scenario
represented by the COCO dataset where the number of output classes is
large, the hand-fused :literal:`\`tf.image.combined_non_max_suppression`
<https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/combined_non_max_suppression>`_\_
operator can parallelize multi-class NMS on CPU in a very efficient
manner. With proper use of this operator, the bounding box
post-processing step has a less chance of becoming the performance
bottleneck in the end-to-end object detection pipeline.
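To make the operation concrete, the following is a minimal single-class greedy NMS reference in plain Python. It is an illustration only; ``tf.image.combined_non_max_suppression`` additionally batches over images and classes, pads its outputs, and runs as a single fused CPU kernel.

.. code:: python

   def iou(a, b):
       # Boxes are (y1, x1, y2, x2); returns intersection-over-union
       inter_h = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
       inter_w = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
       inter = inter_h * inter_w
       area_a = (a[2] - a[0]) * (a[3] - a[1])
       area_b = (b[2] - b[0]) * (b[3] - b[1])
       return inter / (area_a + area_b - inter)

   def nms(boxes, scores, iou_threshold=0.45):
       # Greedily keep the highest-scoring box and drop boxes that overlap it
       order = sorted(range(len(boxes)), key=lambda i: -scores[i])
       keep = []
       for i in order:
           if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
               keep.append(i)
       return keep

For example, ``nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], [0.9, 0.8, 0.7])`` returns ``[0, 2]``: the two heavily overlapping boxes collapse to the higher-scoring one.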
The following sample code (from ``yolo_v4_coco_saved_model.py``)
demonstrates our method of writing the bounding box post-processing step
using efficient tensorflow operations.
.. code:: python

   ...

   def filter_boxes(outputs):
       boxes_l, boxes_m, boxes_s, box_scores_l, box_scores_m, box_scores_s, image_shape = outputs
       boxes_l, box_scores_l = filter_boxes_one_size(boxes_l, box_scores_l)
       boxes_m, box_scores_m = filter_boxes_one_size(boxes_m, box_scores_m)
       boxes_s, box_scores_s = filter_boxes_one_size(boxes_s, box_scores_s)
       boxes = tf.concat([boxes_l, boxes_m, boxes_s], axis=0)
       box_scores = tf.concat([box_scores_l, box_scores_m, box_scores_s], axis=0)
       image_shape_wh = image_shape[1::-1]
       image_shape_whwh = tf.concat([image_shape_wh, image_shape_wh], axis=-1)
       image_shape_whwh = tf.cast(image_shape_whwh, tf.float32)
       boxes *= image_shape_whwh
       boxes = tf.expand_dims(boxes, 0)
       box_scores = tf.expand_dims(box_scores, 0)
       boxes = tf.expand_dims(boxes, 2)
       nms_boxes, nms_scores, nms_classes, valid_detections = tf.image.combined_non_max_suppression(
           boxes,
           box_scores,
           max_output_size_per_class=nms_top_k,
           max_total_size=nms_top_k,
           iou_threshold=nms_thresh,
           score_threshold=conf_thresh,
           pad_per_class=False,
           clip_boxes=False,
           name='CombinedNonMaxSuppression',
       )
       return nms_boxes[0], nms_scores[0], nms_classes[0]

   def filter_boxes_one_size(boxes, box_scores):
       box_class_scores = tf.reduce_max(box_scores, axis=-1)
       keep = box_class_scores > conf_thresh
       boxes = boxes[keep]
       box_scores = box_scores[keep]
       return boxes, box_scores

   def batch_yolo_out(outputs):
       with tf.name_scope('yolo_out'):
           b_output_lr, b_output_mr, b_output_sr, b_image_shape = outputs
           with tf.name_scope('process_feats'):
               b_boxes_l, b_box_scores_l = batch_process_feats(b_output_lr, anchors, masks[0])
           with tf.name_scope('process_feats'):
               b_boxes_m, b_box_scores_m = batch_process_feats(b_output_mr, anchors, masks[1])
           with tf.name_scope('process_feats'):
               b_boxes_s, b_box_scores_s = batch_process_feats(b_output_sr, anchors, masks[2])
           with tf.name_scope('filter_boxes'):
               b_nms_boxes, b_nms_scores, b_nms_classes = tf.map_fn(
                   filter_boxes, [b_boxes_l, b_boxes_m, b_boxes_s, b_box_scores_l, b_box_scores_m, b_box_scores_s, b_image_shape],
                   dtype=(tf.float32, tf.float32, tf.float32), back_prop=False, parallel_iterations=16)
       return b_nms_boxes, b_nms_scores, b_nms_classes

   boxes_scores_classes = layers.Lambda(batch_yolo_out)([output_lr, output_mr, output_sr, image_shape])
   ...
For other advanced data input/output pipeline optimization techniques,
please refer to
https://www.tensorflow.org/guide/data#preprocessing_data.
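The central idea behind those pipeline techniques is to overlap input preparation with model execution. The sketch below illustrates that idea in plain Python; it is not from the tutorial, and the squaring and increment stages are stand-ins for JPEG decoding and model inference (a real pipeline would use ``tf.data`` prefetching).

.. code:: python

   import queue
   import threading

   def prefetch(iterable, buffer_size=2):
       # Produce items in a background thread so that the consumer
       # (e.g. model inference) overlaps with input preparation.
       q = queue.Queue(maxsize=buffer_size)
       done = object()

       def producer():
           for item in iterable:
               q.put(item)
           q.put(done)

       threading.Thread(target=producer, daemon=True).start()
       while True:
           item = q.get()
           if item is done:
               return
           yield item

   # Stand-ins for JPEG decoding (producer) and inference (consumer)
   decoded = (x * x for x in range(5))
   results = [x + 1 for x in prefetch(decoded)]
   print(results)  # [1, 2, 5, 10, 17]
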
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/setup/pytorch-install.rst.txt | ```
.. _pytorch-neuronx-install:
Install PyTorch Neuron (``torch-neuronx``)
===========================================
.. contents:: Table of Contents
   :local:
   :depth: 2
Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 13
               :end-line: 18

            .. include :: /src/helperscripts/installationScripts/python_instructions.txt
               :start-line: 8
               :end-line: 9

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 19
               :end-line: 24

            .. include :: /src/helperscripts/installationScripts/python_instructions.txt
               :start-line: 11
               :end-line: 12

         .. tab-item:: Amazon Linux 2 DLAMI Pytorch

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 25
               :end-line: 29

            .. include :: /src/helperscripts/installationScripts/python_instructions.txt
               :start-line: 50
               :end-line: 51

         .. tab-item:: Ubuntu 20 DLAMI Pytorch

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 30
               :end-line: 35

            .. include :: /src/helperscripts/installationScripts/python_instructions.txt
               :start-line: 53
               :end-line: 54

         .. tab-item:: Amazon Linux 2

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 1
               :end-line: 3

         .. tab-item:: Ubuntu 20

            .. include :: /general/setup/install-templates/trn1/dlami-notes.rst
               :start-line: 4
               :end-line: 6
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "variable-character",
"metadata": {},
"source": [
"# Using NeuronCore Pipeline with PyTorch"
]
},
{
"cell_type": "markdown",
"id": "valued-economics",
"metadata": {},
"source": [
"In this tutorial you compile a pretrained BERT base model from HuggingFace 🤗 Transformers, using the NeuronCore Pipeline feature of the AWS Neuron SDK. You benchmark model latency of the pipeline parallel mode and compare with the usual data parallel (multi-worker) deployment.\n",
"\n",
"This tutorial is intended to run on an inf1.6xlarge instance running the latest AWS Deep Learning AMI (DLAMI). The inf1.6xlarge instance size has 4 AWS Inferentia chips, for a total of 16 NeuronCores.\n",
"\n",
"Verify that this Jupyter notebook is running the Python or Conda kernel environment that was set up according to the [PyTorch Installation Guide](../../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page.\n",
"\n",
"> __Note:__ Do not execute this tutorial using \"Run -> Run all cells\" option. "
]
},
{
"cell_type": "markdown",
"id": "private-authentication",
"metadata": {},
"source": [
"## Install Dependencies:\n",
"This tutorial requires the following pip packages:\n",
"\n",
"- `torch-neuron`\n",
"- `neuron-cc[tensorflow]`\n",
"- `transformers`\n",
"\n",
"Most of these packages will be installed when configuring your environment using the Neuron PyTorch setup guide. The additional HuggingFace 🤗 Transformers dependency must be installed here."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "romantic-accident",
"metadata": {},
"outputs": [],
"source": [
"%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings, making errors easier to detect\n",
"!pip install --upgrade \"transformers==4.6.0\""
]
},
{
"cell_type": "markdown",
"id": "prompt-australian",
"metadata": {},
"source": [
"## Compiling a BERT base model for a single NeuronCore"
]
},
{
"cell_type": "markdown",
"id": "aging-biodiversity",
"metadata": {},
"source": [
"To run a HuggingFace [BERTModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) on Inferentia, you only need to add a single extra line of code to the usual 🤗 Transformers PyTorch implementation, after importing the torch_neuron framework. \n",
"\n",
"Add the argument `return_dict=False` to the BERT transformers model so it can be traced with [TorchScript](https://pytorch.org/docs/stable/jit.html). TorchScript is a way to create serializable and optimizable models from PyTorch code. \n",
"\n",
"Enable padding to a maximum sequence length of 128, to test the model's performance with a realistic payload size. You can adapt this sequence length to your application's requirement. \n",
"\n",
"You can adapt the original example on the [BertModel forward pass docstring](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel.forward) according to the following cell\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "stretch-preview",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch_neuron\n",
"from transformers import BertTokenizer, BertModel\n",
"\n",
"from joblib import Parallel, delayed \n",
"import numpy as np\n",
"from tqdm import tqdm\n",
"\n",
"import os\n",
"import time \n",
"\n",
"\n",
"tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n",
"model = BertModel.from_pretrained('bert-base-uncased',return_dict=False)\n",
"\n",
"inputs = tokenizer(\"Hello, my dog is cute\",return_tensors=\"pt\",max_length=128,padding='max_length',truncation=True)\n"
]
},
{
"cell_type": "markdown",
"id": "conceptual-aberdeen",
"metadata": {},
"source": [
"The one extra line required is the call to the torch.neuron.trace() method. This call compiles the model and returns a traced module whose forward method you can use to run inference. \n",
"\n",
"The compiled graph can be saved using the `torch.jit.save` function and restored using `torch.jit.load` function for inference on Inf1 instances. During inference, the previously compiled artifacts will be loaded into the Neuron Runtime for inference execution.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "secondary-exclusive",
"metadata": {},
"outputs": [],
"source": [
"neuron_model = torch.neuron.trace(model, \n",
" example_inputs = (inputs['input_ids'],inputs['attention_mask']),\n",
" verbose=1)\n"
]
},
{
"cell_type": "markdown",
"id": "atmospheric-stewart",
"metadata": {},
"source": [
"## Running the BERT base model on a single NeuronCore\n",
"With the model already available in memory, you can time one execution and check the latency of a single inference call. The first inference call loads the model into Inferentia, so a large \"wall time\" is expected when you first run the next cell; running the cell twice will show the actual inference latency:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "approved-reputation",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The following line tests inference and should be executed on Inf1 instance family. \n",
"outputs = neuron_model(*(inputs['input_ids'],inputs['attention_mask']))"
]
},
{
"cell_type": "markdown",
"id": "great-collective",
"metadata": {},
"source": [
"You can also check for the throughput of the single model running on a single NeuronCore.\n",
"\n",
"The sequential inference test (for loop) does not measure all the performance one can achieve in an instance with multiple NeuronCores. To improve hardware utilization, you can run parallel inference requests over multiple model workers, which you'll test in the Data Parallel Bonus Section below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "framed-reference",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"for _ in tqdm(range(100)):\n",
" outputs = neuron_model(*(inputs['input_ids'],inputs['attention_mask'])) "
]
},
{
"cell_type": "markdown",
"id": "super-innocent",
"metadata": {},
"source": [
"Save the compiled model for later use:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "express-greensboro",
"metadata": {},
"outputs": [],
"source": [
"neuron_model.save('bert-base-uncased-neuron.pt')"
]
},
{
"cell_type": "markdown",
"id": "modified-government",
"metadata": {},
"source": [
"## Compiling a BERT base model for 16 NeuronCores\n",
"\n",
"Our next step is to compile the same model for all 16 NeuronCores available in the inf1.6xlarge and check the performance difference when running pipeline parallel inferences. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "compound-initial",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch_neuron\n",
"from transformers import BertTokenizer, BertModel\n",
"\n",
"from joblib import Parallel, delayed \n",
"import numpy as np\n",
"from tqdm import tqdm\n",
"\n",
"import os\n",
"import time \n",
"\n",
"\n",
"tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n",
"model = BertModel.from_pretrained('bert-base-uncased',return_dict=False)\n",
"\n",
"inputs = tokenizer(\"Hello, my dog is cute\",return_tensors=\"pt\",max_length=128,padding='max_length',truncation=True)\n"
]
},
{
"cell_type": "markdown",
"id": "universal-desperate",
"metadata": {},
"source": [
"To enable pipeline mode during compilation, you only need to add the compiler flag `--neuroncore-pipeline-cores` and set the number of desired cores. The cell below sets a `neuroncore_pipeline_cores` variable, which you can set to the number of NeuronCores available on the instance: _inf1.6xlarge_ has 16 NeuronCores in 4 Inferentia chips. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "passing-masters",
"metadata": {},
"outputs": [],
"source": [
"# Number of Cores in the Pipeline Mode\n",
"neuroncore_pipeline_cores = 16 # This value should be 4 on an inf1.xlarge\n",
"\n",
"# Compiling for neuroncore-pipeline-cores='16'\n",
"neuron_pipeline_model = torch.neuron.trace(model,\n",
" example_inputs = (inputs['input_ids'],inputs['attention_mask']),\n",
" verbose=1,\n",
" compiler_args = ['--neuroncore-pipeline-cores', str(neuroncore_pipeline_cores)]\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "enhanced-swedish",
"metadata": {},
"source": [
"## Running the BERT base model on 16 NeuronCores\n",
"Next, time one execution and check the latency of a single inference call over 16 cores. The first inference call loads the model into Inferentia, so a large \"wall time\" is expected when you first run the next cell; running the cell twice will show the actual inference latency:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "expressed-trinity",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The following line tests inference and should be executed on Inf1 instance family. \n",
"outputs = neuron_pipeline_model(*(inputs['input_ids'],inputs['attention_mask']))"
]
},
{
"cell_type": "markdown",
"id": "located-graphic",
"metadata": {},
"source": [
"Also check the throughput of the single model running across 16 NeuronCores. \n",
"\n",
"The sequential inference test (for loop) does not measure all the performance one can achieve with Pipeline mode. As the inference runs in streaming fashion, at least 15 cores are waiting for a new call until the last one processes the first call. This results in low NeuronCore utilization. To improve hardware utilization you will require parallel inference requests, which you'll test in the next section."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "hydraulic-calcium",
"metadata": {},
"outputs": [],
"source": [
"for _ in tqdm(range(100)):\n",
" outputs = neuron_pipeline_model(*(inputs['input_ids'],inputs['attention_mask']))\n",
" "
]
},
{
"cell_type": "markdown",
"id": "patent-victoria",
"metadata": {},
"source": [
"## Load Testing the Pipeline Parallel Mode\n",
"\n",
"To put the group of 16 NeuronCores to the test, a client has to run concurrent requests to the model. In this notebook setup you achieve that by creating a thread pool with `joblib.Parallel`, with each worker in the pool running one inference call. \n",
"\n",
"You can define a new method called `inference_latency()` so that you measure the amount of time each inference calls take."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "appointed-adventure",
"metadata": {},
"outputs": [],
"source": [
"def inference_latency(model,*inputs):\n",
" \"\"\"\n",
" inference_latency is a simple method to return the latency of a model inference.\n",
" \n",
" Parameters:\n",
" model: torch model object loaded using torch.jit.load\n",
" inputs: model() args\n",
" \n",
" Returns:\n",
" latency in seconds\n",
" \"\"\"\n",
" start = time.time()\n",
" _ = model(*inputs)\n",
" return time.time() - start"
]
},
{
"cell_type": "markdown",
"id": "environmental-guinea",
"metadata": {},
"source": [
"Use `tqdm` to measure the total throughput of your experiment, with the nice side effect of a \"cool progress bar!\". The total throughput is expected to be high, so set your experiment range to a large number, here 30k inferences. \n",
"\n",
"To calculate latency statistics over the returned list of 30k latencies, use the `numpy.quantile()` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "played-catch",
"metadata": {},
"outputs": [],
"source": [
"t = tqdm(range(30000), position=0, leave=True)\n",
"latency = Parallel(n_jobs=12,prefer=\"threads\")(delayed(inference_latency)(neuron_pipeline_model,*(inputs['input_ids'],inputs['attention_mask'])) for i in t)\n",
"\n",
"p50 = np.quantile(latency[-10000:],0.50) * 1000\n",
"p95 = np.quantile(latency[-10000:],0.95) * 1000\n",
"p99 = np.quantile(latency[-10000:],0.99) * 1000\n",
"avg_throughput = t.total/t.format_dict['elapsed']\n",
"print(f'Avg Throughput: :{avg_throughput:.1f}')\n",
"print(f'50th Percentile Latency:{p50:.1f} ms')\n",
"print(f'95th Percentile Latency:{p95:.1f} ms')\n",
"print(f'99th Percentile Latency:{p99:.1f} ms')"
]
},
{
"cell_type": "markdown",
"id": "exposed-northern",
"metadata": {},
"source": [
"Save the compiled model for later use:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "imperial-complex",
"metadata": {},
"outputs": [],
"source": [
"# Save the TorchScript graph\n",
"neuron_pipeline_model.save('bert-base-uncased-neuron-pipeline.pt')"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "abroad-earthquake",
"metadata": {},
"source": [
"## Bonus Section - Load Testing Data Parallel Mode"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "therapeutic-detector",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch_neuron\n",
"from transformers import BertTokenizer \n",
"\n",
"from joblib import Parallel, delayed \n",
"import numpy as np\n",
"from tqdm import tqdm\n",
"\n",
"import os\n",
"import time \n",
"\n",
"def inference_latency(model,*inputs):\n",
" \"\"\"\n",
" inference_latency is a simple method to return the latency of a model inference.\n",
" \n",
" Parameters:\n",
" model: torch model object loaded using torch.jit.load\n",
" inputs: model() args\n",
" \n",
" Returns:\n",
" latency in seconds\n",
" \"\"\"\n",
" start = time.time()\n",
" _ = model(*inputs)\n",
" return time.time() - start\n",
"\n",
"tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n",
"\n",
"inputs = tokenizer(\"Hello, my dog is cute\",return_tensors=\"pt\",max_length=128,padding='max_length',truncation=True)\n"
]
},
{
"cell_type": "markdown",
"id": "legal-terrorist",
"metadata": {},
"source": [
"You use the `'NEURON_RT_NUM_CORES'` environment variable to define how many NeuronCores to use. Set the environment variable to the number of individual workers you want to test in parallel.\n",
"\n",
"`torch_neuron` will load one model per NeuronCore group until it runs out of cores. At that point, if the Python process continues to spawn more model objects using `torch.jit.load`, `torch_neuron` will start stacking more than one model per core, until the Inferentia chip memory is full. \n",
"\n",
"Inferentia is able to run inference over all the loaded models, but only one at a time. The Neuron Runtime takes care of dynamically switching the model context as requests come in, no extra worker process management required. Use 1 model per NeuronCore to achieve maximum performance.\n",
"\n",
"The following cell creates a list with as many models as NeuronCore groups and executes a single dummy inference to load the models into Inferentia. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "current-mechanics",
"metadata": {},
"outputs": [],
"source": [
"import warnings\n",
"# Number of data parallel workers\n",
"number_of_workers=16 # This number should be 4 on an inf1.xlarge\n",
"\n",
"# Setting up a data parallel group\n",
"os.environ['NEURON_RT_NUM_CORES'] = str(number_of_workers)\n",
"\n",
"# Loading 'number_of_workers' amount of models in Python memory\n",
"model_list = [torch.jit.load('bert-base-uncased-neuron.pt') for _ in range(number_of_workers)]\n",
"\n",
"# Dummy inference to load models to Inferentia\n",
"_ = [mod(*(inputs['input_ids'],inputs['attention_mask'])) for mod in model_list]\n"
]
},
{
"cell_type": "markdown",
"id": "threatened-swaziland",
"metadata": {},
"source": [
"Adapt the call to `joblib.Parallel()` iterating over a concatenated version of the `model_list`, to run 'round-robin' calls to each of the model workers. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fleet-month",
"metadata": {},
"outputs": [],
"source": [
"t = tqdm(model_list*1500,position=0, leave=True)\n",
"latency = Parallel(n_jobs=number_of_workers,prefer=\"threads\")(delayed(inference_latency)(mod,*(inputs['input_ids'],inputs['attention_mask'])) for mod in t)\n",
"\n",
"p50 = np.quantile(latency[-10000:],0.50) * 1000\n",
"p95 = np.quantile(latency[-10000:],0.95) * 1000\n",
"p99 = np.quantile(latency[-10000:],0.99) * 1000\n",
"avg_throughput = t.total/t.format_dict['elapsed']\n",
"print(f'Avg Throughput: :{avg_throughput:.1f}')\n",
"print(f'50th Percentile Latency:{p50:.1f} ms')\n",
"print(f'95th Percentile Latency:{p95:.1f} ms')\n",
"print(f'99th Percentile Latency:{p99:.1f} ms')"
]
},
{
"cell_type": "markdown",
"id": "aggressive-stevens",
"metadata": {},
"source": [
"For this model, despite the larger number of workers, the per-worker latency increases when running a single model per core, which in turn reduces the total throughput. \n",
"\n",
"This behavior may not repeat if the model memory footprint or the input payload size changes, e.g., batch size > 1. We encourage you to experiment with the data parallel and pipeline parallel modes to optimize your application performance. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Environment (conda_aws_neuron_pytorch_p36)",
"language": "python",
"name": "conda_aws_neuron_pytorch_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | 2023-09-29T20:55:26.268Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "spectacular-payroll",
"metadata": {},
"source": [
"# Tensorflow ResNet 50 Optimization Tutorial"
]
},
{
"cell_type": "markdown",
"id": "equivalent-stack",
"metadata": {},
"source": [
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"id": "alpine-aside",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial we provide three main sections:\n",
"\n",
"* Take a Resnet 50 model and perform optimizations on it\n",
"\n",
"* Compile the model with different batch sizes and NeuronCore Group sizes (read about NeuronCore Groups here: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-runtime/nrt-theory-of-operation.html#neuron-core-group)\n",
"\n",
"* Run inference on our multiple compiled models to see which has the best throughput\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page."
]
},
{
"cell_type": "markdown",
"id": "opened-forty",
"metadata": {},
"source": [
"## Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "meaningful-algebra",
"metadata": {},
"outputs": [],
"source": [
"!pip install pillow requests # Necessary for loading images\n",
"!pip install 'tensorflow-neuron<2' --extra-index-url=https://pip.repos.neuron.amazonaws.com"
]
},
{
"cell_type": "markdown",
"id": "remarkable-exercise",
"metadata": {},
"source": [
"## Compile"
]
},
{
"cell_type": "markdown",
"id": "consecutive-right",
"metadata": {},
"source": [
"The following example shows how to compile an FP16 ResNet50 network using various batching parameters to find the optimal configuration. On an inf1.6xlarge instance, run through the following steps to get an optimized ResNet50 model.\n",
"First, extract Keras ResNet50 FP32 (resnet50_fp32_keras.pb will be generated):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "vertical-finland",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"import argparse\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"\n",
"from tensorflow.keras.applications.resnet50 import ResNet50\n",
"from tensorflow.keras.preprocessing import image\n",
"from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n",
"\n",
"from google.protobuf import text_format\n",
"import tensorflow.python.saved_model\n",
"\n",
"# set Keras global configurations\n",
"tf.keras.backend.set_learning_phase(0)\n",
"tf.keras.backend.set_image_data_format('channels_last')\n",
"\n",
"float_type = 'float32'\n",
"float_type2 = 'fp32'\n",
"tf.keras.backend.set_floatx(float_type)\n",
"\n",
"# load pre-trained model using Keras\n",
"model_name = 'resnet50_%s_keras'%float_type2\n",
"model = ResNet50(weights='imagenet')\n",
"\n",
"# various save files\n",
"frozen_file = model_name + '.pb'\n",
"opt_file = model_name + '_opt.pb'\n",
"\n",
"# obtain parameters\n",
"model_input = model.input.name.replace(':0', '')\n",
"model_output = model.output.name.replace(':0', '')\n",
"batch, height, width, channels = model.input.shape\n",
"\n",
"print (\"model, frozen file, optimized file, input size, input node, output node,\")\n",
"print (\"%s, %s, %s, %dx%dx%d, %s, %s\" %(model_name, frozen_file, opt_file, width, height, channels, model_input, model_output) ) \n",
"\n",
"# obtain the TF session\n",
"sess = tf.compat.v1.keras.backend.get_session()\n",
"\n",
"# save checkpoint files for freeze_graph\n",
"ckpt_file = '/tmp/' + model_name + '/' + model_name + '.ckpt'\n",
"graph_file = '/tmp/' + model_name + '/' + model_name + '.pb'\n",
"tf.compat.v1.train.Saver().save(sess, ckpt_file)\n",
"tf.io.write_graph(sess.graph.as_graph_def(), logdir='.', name=graph_file, as_text=False)\n",
"\n",
"print(model_output)\n",
"with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n",
" saver = tf.compat.v1.train.import_meta_graph(ckpt_file + '.meta')\n",
" saver.restore(sess, ckpt_file)\n",
" output_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(\n",
" sess, tf.compat.v1.get_default_graph().as_graph_def(), [model_output])\n",
" output_graph_def = tf.compat.v1.graph_util.remove_training_nodes(\n",
" output_graph_def, protected_nodes=[model_output])\n",
" with open(frozen_file, 'wb') as f:\n",
" f.write(output_graph_def.SerializeToString())"
]
},
{
"cell_type": "markdown",
"id": "romance-cyprus",
"metadata": {},
"source": [
"Optimize the extracted Keras ResNet50 FP32 graph for inference before casting (resnet50_fp32_keras_opt.pb will be generated) with the following transformations to the graph:\n",
"\n",
"* Remove Identity and CheckNumerics nodes\n",
"* Fold FusedBatchNorm constants into previous Conv2D weights\n",
"* Fold other constants\n",
"* Strip unused nodes\n",
"* Sort by execution order"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "higher-grant",
"metadata": {},
"outputs": [],
"source": [
"import copy\n",
"import string\n",
"\n",
"from google.protobuf import text_format\n",
"from tensorflow.core.framework import node_def_pb2\n",
"from tensorflow.core.framework import attr_value_pb2\n",
"from tensorflow.python.framework import tensor_util\n",
"from tensorflow.tools.graph_transforms import TransformGraph\n",
"\n",
"def clear_input(node):\n",
" for i in range(len(node.input)):\n",
" node.input.pop()\n",
"\n",
"def replace_name(node, name):\n",
" node.name = name\n",
" \n",
"def replace_input(node, input_name, new_name):\n",
" # node.input.replace(input_name, new_name)\n",
" temp = []\n",
" for i in node.input:\n",
" temp.extend([new_name if i == input_name else i])\n",
" clear_input(node)\n",
" for i in temp:\n",
" node.input.extend([i])\n",
"\n",
"def swap_names(node1, node2):\n",
" temp = node2.name\n",
" node2.name = node1.name\n",
" node1.name = temp\n",
"\n",
"def get_const_node(const_node_name, const_by_name):\n",
" name = re.sub(\"/read$\", \"\", const_node_name)\n",
" return const_by_name[name]\n",
"\n",
"def get_const_ndarray(const_node_name, const_by_name):\n",
" name = re.sub(\"/read$\", \"\", const_node_name)\n",
" node = const_by_name[name]\n",
" return tf.make_ndarray(node.attr.get(\"value\").tensor)\n",
"\n",
"def adjust_bias_values(bias_node, fbn_node, const_by_name):\n",
" bias_val = get_const_ndarray(bias_node.input[1], const_by_name) \n",
" gamma_val = get_const_ndarray(fbn_node.input[1], const_by_name) \n",
" mean_val = get_const_ndarray(fbn_node.input[3], const_by_name) \n",
" variance_val = get_const_ndarray(fbn_node.input[4], const_by_name) \n",
" new_bias = bias_val * gamma_val / np.sqrt(variance_val)\n",
" new_tensor = tensor_util.make_tensor_proto(new_bias, new_bias.dtype, new_bias.shape)\n",
" bias_const_node = get_const_node(bias_node.input[1], const_by_name)\n",
" bias_const_node.attr[\"value\"].CopyFrom(attr_value_pb2.AttrValue(tensor=new_tensor))\n",
"\n",
"def MoveBiasAddAfterFusedBatchNorm(graphdef):\n",
" \"\"\"fold_batch_norm function of TransformGraph is unable to fold Keras ResNet50\n",
" because of BiasAdd between Conv2D and FusedBatchNorm (BiasAdd is not needed\n",
" if FusedBatchNorm is used, but it exists in Keras ResNet50). Here, we \n",
" move BiasAdd to after FusedBatchNorm, and adjust bias value by gamma/sqrt(variance).\n",
" \"\"\"\n",
" sess = tf.compat.v1.Session(graph=tf.import_graph_def(graphdef))\n",
" output_graph_def = tf.compat.v1.GraphDef()\n",
" node_by_name = {}\n",
" const_by_name = {}\n",
" for node in graphdef.node:\n",
" # Hack: use FusedBatchNormV2 so fold_batch_norm can recognize\n",
" if node.op == \"FusedBatchNormV3\":\n",
" node.op = \"FusedBatchNorm\"\n",
" del(node.attr[\"U\"])\n",
" #import pdb; pdb.set_trace()\n",
" copied_node = node_def_pb2.NodeDef()\n",
" copied_node.CopyFrom(node)\n",
" node_by_name[node.name] = copied_node\n",
" skip_add_node = False\n",
" # Switch Mul/BiasAdd in Keras RN50 so fold_batch_norm transform would work\n",
" if node.op == \"Const\":\n",
" const_by_name[node.name] = copied_node \n",
" elif node.op.startswith(\"FusedBatchNorm\"):\n",
" inputs = node.input\n",
" for i in inputs:\n",
" input_node = node_by_name[i]\n",
" if input_node.op == \"BiasAdd\":\n",
" output_graph_def.node.remove(input_node)\n",
" input_node_input0 = input_node.input[0]\n",
" # Adjust bias values (multiply by scale/sqrt(variance))\n",
" adjust_bias_values(input_node, node, const_by_name)\n",
" # Hack: swap names to avoid changing input of activation\n",
" swap_names(copied_node, input_node)\n",
" # Fix inputs for these two ops\n",
" replace_input(copied_node, i, input_node_input0)\n",
" replace_input(input_node, input_node_input0, copied_node.name)\n",
" # Fix order in node list\n",
" output_graph_def.node.extend([copied_node])\n",
" output_graph_def.node.extend([input_node])\n",
" skip_add_node = True\n",
" # Add maybe-modified nodes if not already done\n",
" if not skip_add_node:\n",
" output_graph_def.node.extend([copied_node])\n",
" return output_graph_def\n",
"\n",
"def FoldFusedBatchNorm(graph_def):\n",
" \"\"\"Optimize training graph for inference:\n",
" - Remove Identity and CheckNumerics nodes\n",
" - Fold FusedBatchNorm constants into previous Conv2D weights\n",
" - Fold other constants\n",
" - Strip unused nodes\n",
" - Sort by execution order\n",
" \"\"\"\n",
" transformed_graph_def = TransformGraph (\n",
" graph_def,\n",
" ['input_1'],\n",
" ['probs/Softmax'],\n",
" [\n",
" 'add_default_attributes',\n",
" 'remove_nodes(op=Identity, op=CheckNumerics)',\n",
" 'fold_constants(ignore_errors=true)',\n",
" 'fold_batch_norms',\n",
" 'fold_old_batch_norms',\n",
" 'strip_unused_nodes',\n",
" 'sort_by_execution_order',\n",
" ])\n",
" return transformed_graph_def\n",
"\n",
"def load_graph(model_file):\n",
" graph_def = tf.compat.v1.GraphDef()\n",
"\n",
" with open(model_file, \"rb\") as f:\n",
" graph_def.ParseFromString(f.read())\n",
" return graph_def\n",
"\n",
"\n",
"graph_orig = load_graph('resnet50_fp32_keras.pb')\n",
"graph_mod = MoveBiasAddAfterFusedBatchNorm(graph_orig)\n",
"graph_mod2 = FoldFusedBatchNorm(graph_mod)\n",
"with tf.io.gfile.GFile('resnet50_fp32_keras_opt.pb', \"wb\") as f:\n",
" f.write(graph_mod2.SerializeToString())"
]
},
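  {
   "cell_type": "markdown",
   "id": "sanity-check-folding",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative sketch, assuming the two .pb files produced above), you can compare node counts before and after the transform; folding the FusedBatchNorm constants and removing Identity/CheckNumerics nodes should shrink the graph noticeably:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sanity-check-folding-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count GraphDef nodes in each frozen graph file\n",
    "def count_nodes(path):\n",
    "    gd = tf.compat.v1.GraphDef()\n",
    "    with open(path, 'rb') as f:\n",
    "        gd.ParseFromString(f.read())\n",
    "    return len(gd.node)\n",
    "\n",
    "print('original: ', count_nodes('resnet50_fp32_keras.pb'))\n",
    "print('optimized:', count_nodes('resnet50_fp32_keras_opt.pb'))"
   ]
  },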
{
"cell_type": "markdown",
"id": "corresponding-acquisition",
"metadata": {},
"source": [
"Convert the full graph to FP16 (resnet50_fp16_keras_opt.pb will be generated).\n",
"This will take about a minute."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "detected-training",
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.core.framework import graph_pb2\n",
"from tensorflow.python.platform import gfile\n",
"\n",
"def ConvertFP32ToOther(graphdef):\n",
" \"\"\"Converts an FP32 network by casting all constants (weights) to a lower\n",
" precision floating point type (FP16) and updating the dtypes\n",
" everywhere.\"\"\"\n",
" cast_type = \"float16\"\n",
" sess = tf.Session(graph=tf.import_graph_def(graphdef))\n",
" output_graph_def = graph_pb2.GraphDef()\n",
" dummy_tensor = sess.run(tf.constant([0.1]))\n",
" dummy_tensor_proto = tensor_util.make_tensor_proto(dummy_tensor, \\\n",
" dtype=cast_type, shape=dummy_tensor.shape)\n",
" dummy_tensor32 = sess.run(tf.constant([0.1]))\n",
" dummy_tensor_proto32 = tensor_util.make_tensor_proto(dummy_tensor, \\\n",
" dtype=tf.float32, shape=dummy_tensor.shape)\n",
" dt_float_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto32.dtype)\n",
" dt_half_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto.dtype)\n",
" for node in graphdef.node:\n",
" output_node = node_def_pb2.NodeDef()\n",
" output_node.CopyFrom(node)\n",
" if (node.op == \"Const\"):\n",
" if (node.attr[\"dtype\"] == dt_float_type_attr):\n",
" a = tensor_util.MakeNdarray(node.attr[\"value\"].tensor)\n",
" a = tf.cast(a, cast_type)\n",
" a = sess.run(a)\n",
" output_node.attr[\"dtype\"].CopyFrom(dt_half_type_attr)\n",
" output_node.attr[\"value\"].CopyFrom(\n",
" attr_value_pb2.AttrValue(\n",
" tensor=tensor_util.make_tensor_proto(a,\\\n",
" dtype=cast_type, shape=a.shape)))\n",
" else:\n",
" if (\"T\" in node.attr.keys()):\n",
" if (output_node.attr[\"T\"] == dt_float_type_attr):\n",
" output_node.attr[\"T\"].CopyFrom(dt_half_type_attr)\n",
" if (\"Tparams\" in node.attr.keys()):\n",
" if (output_node.attr[\"Tparams\"] == dt_float_type_attr):\n",
" output_node.attr[\"Tparams\"].CopyFrom(dt_half_type_attr)\n",
" if (\"dtype\" in node.attr.keys()):\n",
" if (node.attr[\"dtype\"] == dt_float_type_attr):\n",
" output_node.attr[\"dtype\"].CopyFrom(dt_half_type_attr)\n",
" if (\"SrcT\" in node.attr.keys()):\n",
" if (node.attr[\"SrcT\"] == dt_float_type_attr):\n",
" output_node.attr[\"SrcT\"].CopyFrom(dt_half_type_attr)\n",
" if (\"DstT\" in node.attr.keys()):\n",
" if (node.attr[\"DstT\"] == dt_float_type_attr):\n",
" output_node.attr[\"DstT\"].CopyFrom(dt_half_type_attr)\n",
" output_graph_def.node.extend([output_node])\n",
" return output_graph_def\n",
"\n",
"def load_graph(model_file):\n",
" graph_def = tf.GraphDef()\n",
"\n",
" with open(model_file, \"rb\") as f:\n",
" graph_def.ParseFromString(f.read())\n",
"\n",
" return graph_def\n",
"\n",
"graph_f32 = load_graph('resnet50_fp32_keras_opt.pb')\n",
"graph_f16 = ConvertFP32ToOther(graph_f32)\n",
"output_xformed_graph_name = 'resnet50_fp16_keras_opt.pb'\n",
"with gfile.GFile(output_xformed_graph_name, \"wb\") as f:\n",
" f.write(graph_f16.SerializeToString())\n"
]
},
{
"cell_type": "markdown",
"id": "correct-travel",
"metadata": {},
"source": [
"Run the sweep below over batch sizes up to 5 and several NeuronCore Group sizes up to 16. For each configuration, the sweep calls the compilation script pb2sm_compile.py. Some error messages are expected due to known issues (see the Known Issues section in the tutorial). Running all the configurations takes about 45 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "shared-ratio",
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"#!/usr/bin/env bash\n",
"\n",
"echo \"\" > full_sweep.log\n",
"echo \"\" > full_sweep_results.txt\n",
"\n",
"results=()\n",
"for b in $(seq 1 5); do \n",
" for i in 1 2 4 8 12 16; do \n",
" python pb2sm_compile.py --batch_size=$b --neuroncore-pipeline-cores=$i | tee -a full_sweep.log;\n",
" results[$b]+=\", \"`tail -1 full_sweep.log`\n",
" done\n",
"done\n",
"\n",
"head=\"batch\"\n",
"for i in 1 2 4 8 12 16; do\n",
" head+=\", nc${i}\"\n",
"done \n",
"echo $head | tee -a full_sweep_results.txt\n",
"for b in $(seq 1 5); do \n",
" echo $b${results[$b]} | tee -a full_sweep_results.txt\n",
"done"
]
},
{
"cell_type": "markdown",
"id": "attached-austin",
"metadata": {},
"source": [
"You should see some output like this:\n",
"```\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"*** Batch size 1, num NeuronCores 2 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc2) ***\n",
"\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"*** Batch size 1, num NeuronCores 4 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc4) ***\n",
"\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"... (outputs removed)\n",
"\n",
"*** Batch size 5, num NeuronCores 16 (input shape: (5, 224, 224, 3), saved model dir: rn50_fp16_compiled_b5_nc16) ***\n",
"\n",
"ERROR: Compilation finished in 120 seconds with less than 50% operations placed on Inferentia (0.0%)\n",
"\n",
"INFO: Retry compilation without static weights\n",
"\n",
"ERROR: Retry compilation finished in 137 seconds with less than 50% operations placed on Inferentia (0.0%)\n",
"\n",
"0\n",
"```\n",
"\n",
"The file full_sweep_results.txt shows a summary of the sweep results with the latest Neuron release (1/27/20): 0 means compilation was unsuccessful and 0 ops mapped to Inferentia, 1 means most ops mapped to Inferentia without static weights, and 2 means most ops mapped to Inferentia using static weights:\n",
"\n",
"```\n",
"batch, nc1, nc2, nc4, nc8, nc12, nc16\n",
"1, 1, 1, 1, 2, 2, 2\n",
"2, 1, 1, 0, 1, 2, 2\n",
"3, 1, 1, 1, 1, 1, 1\n",
"4, 1, 1, 0, 1, 1, 1\n",
"5, 1, 1, 0, 0, 0, 0\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "surprised-abortion",
"metadata": {},
"source": [
"## Inference"
]
},
{
"cell_type": "markdown",
"id": "departmental-surprise",
"metadata": {},
"source": [
"Run inference over different batch sizes and Neuroncore groups to obtain throughput and latency results for ResNet50. To apply dynamic batching, the user batch size is set to 10x the compiled batch size, in order to keep input queue full and to amortize framework-to-Neuron overhead.\n",
"\n",
"Note: The results are based on the Neuron v1.12.2 (Mar 4th 2021) release. These will continue improve as we increase Neuron performance.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "requested-inspiration",
"metadata": {},
"outputs": [],
"source": [
"!cd ~/aws-neuron-sdk/src/examples/tensorflow/keras_resnet50/\n",
"!echo \"\" > batch.log\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=1 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=2 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=4 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=8 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=12 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=16 | tee -a batch.log; done"
]
},
{
"cell_type": "markdown",
"id": "split-genesis",
"metadata": {},
"source": [
"The file batch.log now contains the results for each batch size. We can look at the throughput values to get an idea of which models are performing well. The output should look something like this:\n",
"\n",
"The model best model configuration for throughput (if you run on an Inf1.6xlarge as suggested in the tutorial) is batch size 5 NeuronCore group size 2. Increasing batch size usually helps to increase throughput (up to a certain extent)."
]
},
{
"cell_type": "markdown",
"id": "filled-township",
"metadata": {},
"source": [
"```\n",
"*** Compiled batch size 5, user batch size 10, num NeuronCores 2 (input shape: (10, 224, 224, 3), saved model dir: ./rn50_fp16_compiled_b5_nc2/1) ***\n",
"\n",
"Instance type inf1.6xlarge with 16 NeuronCores\n",
"NEURON_MAX_NUM_INFERS (env): 5\n",
"NEURONCORE_GROUP_SIZES (env): 2,2,2,2,2,2,2,2\n",
"NUM THREADS: 16\n",
"NUM_LOOPS_PER_THREAD: 400\n",
"USER_BATCH_SIZE: 10\n",
"Throughput values collected:\n",
"[10680, 10700, 10660]\n",
"\n",
"(rest of outputs removed)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "189c4f0e-1a4e-4067-921f-95449c45dedd",
"metadata": {},
"source": [
"## Known Issues\n",
"\n",
"### Unable to compile with batch and num NeuronCores combination\n",
"\n",
"For some combination of batch and number of NeuronCores setting, you may\n",
"see an internal compiler error as below. Please see the sweep result\n",
"above for Neuron 1/27/20 release. Furthermore, if using auto-casting to\n",
"bfloat16 from FP32 network and batch size is larger than 1 would result\n",
"in the same error.\n",
"\n",
"\n",
"```bash\n",
"\n",
"INFO:tensorflow:fusing subgraph neuron_op_a73aed4b95ca5d5b with neuron-cc; log file is at /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neuron-cc.log\n",
" WARNING:tensorflow:Failed to fuse subgraph neuron_op_a73aed4b95ca5d5b with '/home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config \"{\\\"inputs\\\": {\\\"input_10/_0:0\\\": [[6, 224, 224, 3], \\\"float16\\\"]}, \\\"outputs\\\": [\\\"probs/Softmax:0\\\"]}\" --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True'\n",
" WARNING:tensorflow:neuron-cc error message:\n",
" WARNING:tensorflow:01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: An Internal Compiler Error has occurred\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Please contact Customer Support and provide the following details.\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error message: Non-zero exit status (134) for command: /home/ubuntu/test_venv/lib/python3.6/site-packages/neuroncc/starfish/bin/list_sch --hhir hh-tr-external-move.json --verbose 0 --sb_size 120 --arith_intensity_target 2300 --sb_watermark_low 0.250000 --sb_watermark_high 0.750000 --sb_size_tol 1 --alloc simple1 --alloc_opt --depth_diff 0.100000 --verbose_start_cycle 0 --tt_dist --mm_meet_cnt 1 --load_speed_factor 0.300000 --schir sch_tmp.json --spill_depth_limit 5 --spill_dis --true_dep --mm_order --batching_en --rematerialization_en\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error class: CompilerInternalError\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error location: job.Scheduler.3\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Command line: /home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config '{\"inputs\": {\"input_10/_0:0\": [[6, 224, 224, 3], \"float16\"]}, \"outputs\": [\"probs/Softmax:0\"]}' --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Internal details:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/Job.py\", line 207, in neuroncc.driver.Job.runSingleInputFn\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/jobs/Scheduler.py\", line 58, in neuroncc.driver.jobs.Scheduler.Scheduler.runSingleInput\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/Job.py\", line 145, in neuroncc.driver.Job.Job.shellCommand\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Version information:\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: Neuron Compiler version 1.0.6632.0+6001610955\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: HWM version 1.0.839.0-6001300654\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: NEFF version 0.6\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: TVM version 1.0.1589.0+6001610955\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: NumPy version 1.16.5\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: MXNet not available\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: TF version 1.15.0\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]:\n",
"\n",
"```"
]
   }
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{
"cells": [
{
"cell_type": "markdown",
"id": "spectacular-payroll",
"metadata": {},
"source": [
"# Tensorflow ResNet 50 Optimization Tutorial"
]
},
{
"cell_type": "markdown",
"id": "equivalent-stack",
"metadata": {},
"source": [
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"id": "alpine-aside",
"metadata": {},
"source": [
"## Introduction: "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial we provide three main sections:\n",
"\n",
"* Take a Resnet 50 model and perform optimizations on it\n",
"\n",
"* Compile the model with different batch sizes and Neuroncore Group sizes (read about Neuroncore Group sizes here: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-runtime/nrt-theory-of-operation.html#neuron-core-group)\n",
"\n",
"* Run inference on our multiple compiled models to see which has the best throughput\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page."
]
},
{
"cell_type": "markdown",
"id": "opened-forty",
"metadata": {},
"source": [
"## Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "meaningful-algebra",
"metadata": {},
"outputs": [],
"source": [
"!pip install pillow requests # Necessary for loading images\n",
"!pip install 'tensorflow-neuron<2' --extra-index-url=https://pip.repos.neuron.amazonaws.com"
]
},
{
"cell_type": "markdown",
"id": "remarkable-exercise",
"metadata": {},
"source": [
"## Compile"
]
},
{
"cell_type": "markdown",
"id": "consecutive-right",
"metadata": {},
"source": [
"The following example shows how to compile a FP16 ResNet50 network using various batching parameters to find the optimal solution. On inf1.6xlarge, run through the following steps to get a optimized Resnet 50 model.\n",
"First, extract Keras ResNet50 FP32 (resnet50_fp32_keras.pb will be generated):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "vertical-finland",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"import argparse\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"\n",
"from tensorflow.keras.applications.resnet50 import ResNet50\n",
"from tensorflow.keras.preprocessing import image\n",
"from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n",
"\n",
"from google.protobuf import text_format\n",
"import tensorflow.python.saved_model\n",
"\n",
"# set Keras global configurations\n",
"tf.keras.backend.set_learning_phase(0)\n",
"tf.keras.backend.set_image_data_format('channels_last')\n",
"\n",
"float_type = 'float32'\n",
"float_type2 = 'fp32'\n",
"tf.keras.backend.set_floatx(float_type)\n",
"\n",
"# load pre-trained model using Keras\n",
"model_name = 'resnet50_%s_keras'%float_type2\n",
"model = ResNet50(weights='imagenet')\n",
"\n",
"# various save files\n",
"frozen_file = model_name + '.pb'\n",
"opt_file = model_name + '_opt.pb'\n",
"\n",
"# obtain parameters\n",
"model_input = model.input.name.replace(':0', '')\n",
"model_output = model.output.name.replace(':0', '')\n",
"batch, height, width, channels = model.input.shape\n",
"\n",
"print (\"model, frozen file, optimized file, input size, input node, output node,\")\n",
"print (\"%s, %s, %s, %dx%dx%d, %s, %s\" %(model_name, frozen_file, opt_file, width, height, channels, model_input, model_output) ) \n",
"\n",
"# obtain the TF session\n",
"sess = tf.compat.v1.keras.backend.get_session()\n",
"\n",
"# save checkpoint files for freeze_graph\n",
"ckpt_file = '/tmp/' + model_name + '/' + model_name + '.ckpt'\n",
"graph_file = '/tmp/' + model_name + '/' + model_name + '.pb'\n",
"tf.compat.v1.train.Saver().save(sess, ckpt_file)\n",
"tf.io.write_graph(sess.graph.as_graph_def(), logdir='.', name=graph_file, as_text=False)\n",
"\n",
"print(model_output)\n",
"with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n",
" saver = tf.compat.v1.train.import_meta_graph(ckpt_file + '.meta')\n",
" saver.restore(sess, ckpt_file)\n",
" output_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(\n",
" sess, tf.compat.v1.get_default_graph().as_graph_def(), [model_output])\n",
" output_graph_def = tf.compat.v1.graph_util.remove_training_nodes(\n",
" output_graph_def, protected_nodes=[model_output])\n",
" with open(frozen_file, 'wb') as f:\n",
" f.write(output_graph_def.SerializeToString())"
]
},
{
"cell_type": "markdown",
"id": "romance-cyprus",
"metadata": {},
"source": [
"Optimize the extracted Keras ResNet50 FP32 graph for inference before casting (resnet50_fp32_keras_opt.pb will be generated) with the following transformations to the graph:\n",
"\n",
"* Remove Identity and CheckNumerics nodes\n",
"* Fold FusedBatchNorm constants into previous Conv2D weights\n",
"* Fold other constants\n",
"* Strip unused nodes\n",
"* Sort by execution order"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "higher-grant",
"metadata": {},
"outputs": [],
"source": [
"import copy\n",
"import string\n",
"\n",
"from google.protobuf import text_format\n",
"from tensorflow.core.framework import node_def_pb2\n",
"from tensorflow.core.framework import attr_value_pb2\n",
"from tensorflow.python.framework import tensor_util\n",
"from tensorflow.tools.graph_transforms import TransformGraph\n",
"\n",
"def clear_input(node):\n",
" for i in range(len(node.input)):\n",
" node.input.pop()\n",
"\n",
"def replace_name(node, name):\n",
" node.name = name\n",
" \n",
"def replace_input(node, input_name, new_name):\n",
" # node.input.replace(input_name, new_name)\n",
" temp = []\n",
" for i in node.input:\n",
" temp.extend([new_name if i == input_name else i])\n",
" clear_input(node)\n",
" for i in temp:\n",
" node.input.extend([i])\n",
"\n",
"def swap_names(node1, node2):\n",
" temp = node2.name\n",
" node2.name = node1.name\n",
" node1.name = temp\n",
"\n",
"def get_const_node(const_node_name, const_by_name):\n",
" name = re.sub(\"/read$\", \"\", const_node_name)\n",
" return const_by_name[name]\n",
"\n",
"def get_const_ndarray(const_node_name, const_by_name):\n",
" name = re.sub(\"/read$\", \"\", const_node_name)\n",
" node = const_by_name[name]\n",
" return tf.make_ndarray(node.attr.get(\"value\").tensor)\n",
"\n",
"def adjust_bias_values(bias_node, fbn_node, const_by_name):\n",
" bias_val = get_const_ndarray(bias_node.input[1], const_by_name) \n",
" gamma_val = get_const_ndarray(fbn_node.input[1], const_by_name) \n",
" mean_val = get_const_ndarray(fbn_node.input[3], const_by_name) \n",
" variance_val = get_const_ndarray(fbn_node.input[4], const_by_name) \n",
" new_bias = bias_val * gamma_val / np.sqrt(variance_val)\n",
" new_tensor = tensor_util.make_tensor_proto(new_bias, new_bias.dtype, new_bias.shape)\n",
" bias_const_node = get_const_node(bias_node.input[1], const_by_name)\n",
" bias_const_node.attr[\"value\"].CopyFrom(attr_value_pb2.AttrValue(tensor=new_tensor))\n",
"\n",
"def MoveBiasAddAfterFusedBatchNorm(graphdef):\n",
" \"\"\"fold_batch_norm function of TransformGraph is unable to fold Keras ResNet50\n",
" because of BiasAdd between Conv2D and FusedBatchNorm (BiasAdd is not needed\n",
" if FusedBatchNorm is used, but it exists in Keras ResNet50). Here, we \n",
" move BiasAdd to after FusedBatchNorm, and adjust bias value by gamma/sqrt(variance).\n",
" \"\"\"\n",
" sess = tf.compat.v1.Session(graph=tf.import_graph_def(graphdef))\n",
" output_graph_def = tf.compat.v1.GraphDef()\n",
" node_by_name = {}\n",
" const_by_name = {}\n",
" for node in graphdef.node:\n",
" # Hack: use FusedBatchNormV2 so fold_batch_norm can recognize\n",
" if node.op == \"FusedBatchNormV3\":\n",
" node.op = \"FusedBatchNorm\"\n",
" del(node.attr[\"U\"])\n",
" #import pdb; pdb.set_trace()\n",
" copied_node = node_def_pb2.NodeDef()\n",
" copied_node.CopyFrom(node)\n",
" node_by_name[node.name] = copied_node\n",
" skip_add_node = False\n",
" # Switch Mul/BiasAdd in Keras RN50 so fold_batch_norm transform would work\n",
" if node.op == \"Const\":\n",
" const_by_name[node.name] = copied_node \n",
" elif node.op.startswith(\"FusedBatchNorm\"):\n",
" inputs = node.input\n",
" for i in inputs:\n",
" input_node = node_by_name[i]\n",
" if input_node.op == \"BiasAdd\":\n",
" output_graph_def.node.remove(input_node)\n",
" input_node_input0 = input_node.input[0]\n",
" # Adjust bias values (multiply by scale/sqrt(variance))\n",
" adjust_bias_values(input_node, node, const_by_name)\n",
" # Hack: swap names to avoid changing input of activation\n",
" swap_names(copied_node, input_node)\n",
" # Fix inputs for these two ops\n",
" replace_input(copied_node, i, input_node_input0)\n",
" replace_input(input_node, input_node_input0, copied_node.name)\n",
" # Fix order in node list\n",
" output_graph_def.node.extend([copied_node])\n",
" output_graph_def.node.extend([input_node])\n",
" skip_add_node = True\n",
" # Add maybe-modified nodes if not already done\n",
" if not skip_add_node:\n",
" output_graph_def.node.extend([copied_node])\n",
" return output_graph_def\n",
"\n",
"def FoldFusedBatchNorm(graph_def):\n",
" \"\"\"Optimize training graph for inference:\n",
" - Remove Identity and CheckNumerics nodes\n",
" - Fold FusedBatchNorm constants into previous Conv2D weights\n",
" - Fold other constants\n",
" - Strip unused nodes\n",
" - Sort by execution order\n",
" \"\"\"\n",
" transformed_graph_def = TransformGraph (\n",
" graph_def,\n",
" ['input_1'],\n",
" ['probs/Softmax'],\n",
" [\n",
" 'add_default_attributes',\n",
" 'remove_nodes(op=Identity, op=CheckNumerics)',\n",
" 'fold_constants(ignore_errors=true)',\n",
" 'fold_batch_norms',\n",
" 'fold_old_batch_norms',\n",
" 'strip_unused_nodes',\n",
" 'sort_by_execution_order',\n",
" ])\n",
" return transformed_graph_def\n",
"\n",
"def load_graph(model_file):\n",
" graph_def = tf.compat.v1.GraphDef()\n",
"\n",
" with open(model_file, \"rb\") as f:\n",
" graph_def.ParseFromString(f.read())\n",
" return graph_def\n",
"\n",
"\n",
"graph_orig = load_graph('resnet50_fp32_keras.pb')\n",
"graph_mod = MoveBiasAddAfterFusedBatchNorm(graph_orig)\n",
"graph_mod2 = FoldFusedBatchNorm(graph_mod)\n",
"with tf.io.gfile.GFile('resnet50_fp32_keras_opt.pb', \"wb\") as f:\n",
" f.write(graph_mod2.SerializeToString())"
]
},
{
"cell_type": "markdown",
"id": "corresponding-acquisition",
"metadata": {},
"source": [
"Convert full graph to FP16 (resnet50_fp16_keras_opt.pb will be generated.\n",
"This will take about a minute."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "detected-training",
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.core.framework import graph_pb2\n",
"from tensorflow.python.platform import gfile\n",
"\n",
"def ConvertFP32ToOther(graphdef):\n",
" \"\"\"Converts an FP32 network by casting all constants (weights) to a lower\n",
" precision floating point type (FP16) and updating the dtypes\n",
" everywhere.\"\"\"\n",
" cast_type = \"float16\"\n",
" sess = tf.Session(graph=tf.import_graph_def(graphdef))\n",
" output_graph_def = graph_pb2.GraphDef()\n",
" dummy_tensor = sess.run(tf.constant([0.1]))\n",
" dummy_tensor_proto = tensor_util.make_tensor_proto(dummy_tensor, \\\n",
" dtype=cast_type, shape=dummy_tensor.shape)\n",
" dummy_tensor32 = sess.run(tf.constant([0.1]))\n",
" dummy_tensor_proto32 = tensor_util.make_tensor_proto(dummy_tensor, \\\n",
" dtype=tf.float32, shape=dummy_tensor.shape)\n",
" dt_float_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto32.dtype)\n",
" dt_half_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto.dtype)\n",
" for node in graphdef.node:\n",
" output_node = node_def_pb2.NodeDef()\n",
" output_node.CopyFrom(node)\n",
" if (node.op == \"Const\"):\n",
" if (node.attr[\"dtype\"] == dt_float_type_attr):\n",
" a = tensor_util.MakeNdarray(node.attr[\"value\"].tensor)\n",
" a = tf.cast(a, cast_type)\n",
" a = sess.run(a)\n",
" output_node.attr[\"dtype\"].CopyFrom(dt_half_type_attr)\n",
" output_node.attr[\"value\"].CopyFrom(\n",
" attr_value_pb2.AttrValue(\n",
" tensor=tensor_util.make_tensor_proto(a,\\\n",
" dtype=cast_type, shape=a.shape)))\n",
" else:\n",
" if (\"T\" in node.attr.keys()):\n",
" if (output_node.attr[\"T\"] == dt_float_type_attr):\n",
" output_node.attr[\"T\"].CopyFrom(dt_half_type_attr)\n",
" if (\"Tparams\" in node.attr.keys()):\n",
" if (output_node.attr[\"Tparams\"] == dt_float_type_attr):\n",
" output_node.attr[\"Tparams\"].CopyFrom(dt_half_type_attr)\n",
" if (\"dtype\" in node.attr.keys()):\n",
" if (node.attr[\"dtype\"] == dt_float_type_attr):\n",
" output_node.attr[\"dtype\"].CopyFrom(dt_half_type_attr)\n",
" if (\"SrcT\" in node.attr.keys()):\n",
" if (node.attr[\"SrcT\"] == dt_float_type_attr):\n",
" output_node.attr[\"SrcT\"].CopyFrom(dt_half_type_attr)\n",
" if (\"DstT\" in node.attr.keys()):\n",
" if (node.attr[\"DstT\"] == dt_float_type_attr):\n",
" output_node.attr[\"DstT\"].CopyFrom(dt_half_type_attr)\n",
" output_graph_def.node.extend([output_node])\n",
" return output_graph_def\n",
"\n",
"def load_graph(model_file):\n",
" graph_def = tf.GraphDef()\n",
"\n",
" with open(model_file, \"rb\") as f:\n",
" graph_def.ParseFromString(f.read())\n",
"\n",
" return graph_def\n",
"\n",
"graph_f32 = load_graph('resnet50_fp32_keras_opt.pb')\n",
"graph_f16 = ConvertFP32ToOther(graph_f32)\n",
"output_xformed_graph_name = 'resnet50_fp16_keras_opt.pb'\n",
"with gfile.GFile(output_xformed_graph_name, \"wb\") as f:\n",
" f.write(graph_f16.SerializeToString())\n"
]
},
{
"cell_type": "markdown",
"id": "correct-travel",
"metadata": {},
"source": [
"Run the compilation script to sweep through various batch sizes up to 5 and several NeuronCore Group sizes up to 16. The script calls the compilation script pb2sm_compile.py which tries to perform compilation. Some error messages are expected due to known issues (see Known Issues section in the tutorial). If you run all the configurations it will take about 45 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "shared-ratio",
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"#!/usr/bin/env bash\n",
"\n",
"echo \"\" > full_sweep.log\n",
"echo \"\" > full_sweep_results.txt\n",
"\n",
"results=()\n",
"for b in $(seq 1 5); do \n",
" for i in 1 2 4 8 12 16; do \n",
" python pb2sm_compile.py --batch_size=$b --neuroncore-pipeline-cores=$i | tee -a full_sweep.log;\n",
" results[$b]+=\", \"`tail -1 full_sweep.log`\n",
" done\n",
"done\n",
"\n",
"head=\"batch\"\n",
"for i in 1 2 4 8 12 16; do\n",
" head+=\", nc${i}\"\n",
"done \n",
"echo $head | tee -a full_sweep_results.txt\n",
"for b in $(seq 1 5); do \n",
" echo $b${results[$b]} | tee -a full_sweep_results.txt\n",
"done"
]
},
{
"cell_type": "markdown",
"id": "attached-austin",
"metadata": {},
"source": [
"You should see some output like this:\n",
"```\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"*** Batch size 1, num NeuronCores 2 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc2) ***\n",
"\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"*** Batch size 1, num NeuronCores 4 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc4) ***\n",
"\n",
"INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia\n",
"\n",
"1\n",
"\n",
"... (outputs removed)\n",
"\n",
"*** Batch size 5, num NeuronCores 16 (input shape: (5, 224, 224, 3), saved model dir: rn50_fp16_compiled_b5_nc16) ***\n",
"\n",
"ERROR: Compilation finished in 120 seconds with less than 50% operations placed on Inferentia (0.0%)\n",
"\n",
"INFO: Retry compilation without static weights\n",
"\n",
"ERROR: Retry compilation finished in 137 seconds with less than 50% operations placed on Inferentia (0.0%)\n",
"\n",
"0\n",
"\n",
"The file full_sweep_results.txt shows a summary of the sweep results with latest Neuron 1/27/20 release (0 means compilation unsuccessful and 0 ops mapped to Inferentia, 1 means most ops mapped to Inferentia and non-static weights, 2 means most ops mapped to Inferentia and using static weights):\n",
"\n",
"batch, nc1, nc2, nc4, nc8, nc12, nc16\n",
"1, 1, 1, 1, 2, 2, 2\n",
"2, 1, 1, 0, 1, 2, 2\n",
"3, 1, 1, 1, 1, 1, 1\n",
"4, 1, 1, 0, 1, 1, 1\n",
"5, 1, 1, 0, 0, 0, 0\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "surprised-abortion",
"metadata": {},
"source": [
"## Inference"
]
},
{
"cell_type": "markdown",
"id": "departmental-surprise",
"metadata": {},
"source": [
"Run inference over different batch sizes and Neuroncore groups to obtain throughput and latency results for ResNet50. To apply dynamic batching, the user batch size is set to 10x the compiled batch size, in order to keep input queue full and to amortize framework-to-Neuron overhead.\n",
"\n",
"Note: The results are based on the Neuron v1.12.2 (Mar 4th 2021) release. These will continue improve as we increase Neuron performance.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "requested-inspiration",
"metadata": {},
"outputs": [],
"source": [
"!cd ~/aws-neuron-sdk/src/examples/tensorflow/keras_resnet50/\n",
"!echo \"\" > batch.log\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=1 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=2 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=4 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=8 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=12 | tee -a batch.log; done\n",
"!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=16 | tee -a batch.log; done"
]
},
{
"cell_type": "markdown",
"id": "split-genesis",
"metadata": {},
"source": [
"The file batch.log now contains the results for each batch size. We can look at the throughput values to get an idea of which models are performing well. The output should look something like this:\n",
"\n",
"The model best model configuration for throughput (if you run on an Inf1.6xlarge as suggested in the tutorial) is batch size 5 NeuronCore group size 2. Increasing batch size usually helps to increase throughput (up to a certain extent)."
]
},
{
"cell_type": "markdown",
"id": "filled-township",
"metadata": {},
"source": [
"```\n",
"*** Compiled batch size 5, user batch size 10, num NeuronCores 2 (input shape: (10, 224, 224, 3), saved model dir: ./rn50_fp16_compiled_b5_nc2/1) ***\n",
"\n",
"Instance type inf1.6xlarge with 16 NeuronCores\n",
"NEURON_MAX_NUM_INFERS (env): 5\n",
"NEURONCORE_GROUP_SIZES (env): 2,2,2,2,2,2,2,2\n",
"NUM THREADS: 16\n",
"NUM_LOOPS_PER_THREAD: 400\n",
"USER_BATCH_SIZE: 10\n",
"Throughput values collected:\n",
"[10680, 10700, 10660]\n",
"\n",
"(rest of outputs removed)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "189c4f0e-1a4e-4067-921f-95449c45dedd",
"metadata": {},
"source": [
"## Known Issues\n",
"\n",
"### Unable to compile with batch and num NeuronCores combination\n",
"\n",
"For some combination of batch and number of NeuronCores setting, you may\n",
"see an internal compiler error as below. Please see the sweep result\n",
"above for Neuron 1/27/20 release. Furthermore, if using auto-casting to\n",
"bfloat16 from FP32 network and batch size is larger than 1 would result\n",
"in the same error.\n",
"\n",
"\n",
"```bash\n",
"\n",
"INFO:tensorflow:fusing subgraph neuron_op_a73aed4b95ca5d5b with neuron-cc; log file is at /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neuron-cc.log\n",
" WARNING:tensorflow:Failed to fuse subgraph neuron_op_a73aed4b95ca5d5b with '/home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config \"{\\\"inputs\\\": {\\\"input_10/_0:0\\\": [[6, 224, 224, 3], \\\"float16\\\"]}, \\\"outputs\\\": [\\\"probs/Softmax:0\\\"]}\" --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True'\n",
" WARNING:tensorflow:neuron-cc error message:\n",
" WARNING:tensorflow:01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: An Internal Compiler Error has occurred\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Please contact Customer Support and provide the following details.\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error message: Non-zero exit status (134) for command: /home/ubuntu/test_venv/lib/python3.6/site-packages/neuroncc/starfish/bin/list_sch --hhir hh-tr-external-move.json --verbose 0 --sb_size 120 --arith_intensity_target 2300 --sb_watermark_low 0.250000 --sb_watermark_high 0.750000 --sb_size_tol 1 --alloc simple1 --alloc_opt --depth_diff 0.100000 --verbose_start_cycle 0 --tt_dist --mm_meet_cnt 1 --load_speed_factor 0.300000 --schir sch_tmp.json --spill_depth_limit 5 --spill_dis --true_dep --mm_order --batching_en --rematerialization_en\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error class: CompilerInternalError\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error location: job.Scheduler.3\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Command line: /home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config '{\"inputs\": {\"input_10/_0:0\": [[6, 224, 224, 3], \"float16\"]}, \"outputs\": [\"probs/Softmax:0\"]}' --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Internal details:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/Job.py\", line 207, in neuroncc.driver.Job.runSingleInputFn\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/jobs/Scheduler.py\", line 58, in neuroncc.driver.jobs.Scheduler.Scheduler.runSingleInput\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: File \"neuroncc/driver/Job.py\", line 145, in neuroncc.driver.Job.Job.shellCommand\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:40 AM ERROR [neuron-cc]: Version information:\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: Neuron Compiler version 1.0.6632.0+6001610955\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]:\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: HWM version 1.0.839.0-6001300654\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: NEFF version 0.6\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: TVM version 1.0.1589.0+6001610955\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: NumPy version 1.16.5\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: MXNet not available\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]: TF version 1.15.0\n",
" 01/23/2020 01:15:41 AM ERROR [neuron-cc]:\n",
"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "gentle-census",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
</pre></body></html> | 2023-09-29T20:55:26.449Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/collective-communication.rst.txt | ```
.. _feature_cccom:

Neuron Collective Communication
===============================

.. contents:: Table of contents
   :local:
   :depth: 1

Introduction
~~~~~~~~~~~~

Collective Communication is an integral component of distributed ML
training. Multiple training nodes exchange information during ML
training via Collective Communication operators such as all-reduce.
Neuron provides hardware support for the execution of Collective
Communication, with the Neuron SDK responsible for the hardware
configuration and for orchestrating execution. Neuron provides the
following Collective Communication operators:

- all-reduce
- all-gather
- reduce-scatter

Neuron also provides the following peer-to-peer operators:

- send
- receive
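The semantics of these operators can be sketched with plain Python lists standing in for per-rank buffers. This is purely illustrative and not a Neuron API — on Neuron the operators are executed by dedicated hardware via the collectives library:

```python
# Illustrative semantics of the collective operators, with a Python list
# standing in for each rank's buffer. Not a Neuron API; it only shows
# what each operator computes.

def all_reduce(rank_buffers):
    # Every rank ends up with the element-wise sum across all ranks.
    summed = [sum(vals) for vals in zip(*rank_buffers)]
    return [list(summed) for _ in rank_buffers]

def all_gather(rank_buffers):
    # Every rank ends up with the concatenation of all ranks' buffers.
    gathered = [x for buf in rank_buffers for x in buf]
    return [list(gathered) for _ in rank_buffers]

def reduce_scatter(rank_buffers):
    # The element-wise sum is split evenly; rank i keeps the i-th shard.
    n = len(rank_buffers)
    summed = [sum(vals) for vals in zip(*rank_buffers)]
    shard = len(summed) // n
    return [summed[i * shard:(i + 1) * shard] for i in range(n)]

buffers = [[1, 2, 3, 4], [10, 20, 30, 40]]  # 2 ranks, 4 elements each
print(all_reduce(buffers))      # [[11, 22, 33, 44], [11, 22, 33, 44]]
print(reduce_scatter(buffers))  # [[11, 22], [33, 44]]
```

Note that reduce-scatter is the per-shard half of all-reduce: an all-reduce is equivalent to a reduce-scatter followed by an all-gather.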
Support for additional Collective Communication operators might be added
in future releases. Neuron devices are connected via NeuronLinks within
a single instance and via EFA links between instances. NeuronLinks
transfer data directly between Neuron devices, and between Neuron
devices and EFA devices, bypassing the host to achieve high bandwidth
and low latency.
Collective Communication support on Neuron requires installation of
three separate packages:

- ``aws-neuronx-runtime-lib`` - supports execution on Neuron; not
  specific to Collective Communication and always required.
- ``aws-neuronx-collectives`` - supports Collective Communication
  execution on a single instance and on multiple instances.
- ``efa_installer`` - low-level libraries and drivers that support
  Collective Communication execution over EFA; required for
  Collective Communication across multiple instances.
ML models need to be compiled by the Neuron compiler before they can be
executed on Neuron devices. The result of the compilation is a binary
object containing computational instructions and data movement
instructions. Any Collective Communication operators encountered during
compilation are converted to placeholder instructions to be filled in
by the runtime/collectives libraries during load and execution. This
approach allows the Neuron compiler to be unaware of the specific
physical topology connecting Neuron devices. Once a compiled model is
placed on Neuron devices, the runtime/collectives libraries generate
the appropriate data movement instructions based on the placement. For
example, a different set of instructions is generated depending on
whether the next rank is connected via NeuronLinks or via EFA.

Neuron executes Collective Communication operators using dedicated
hardware that is not shared with computational resources. That allows
Neuron to execute compute and communication in parallel. For example,
Neuron can all-reduce the gradients of one layer while the gradients of
another layer are computed. Overlapping compute and communication can
result in lower latency and higher performance.
.. _trn132xlarge-topology:

trn1.32xlarge topology
~~~~~~~~~~~~~~~~~~~~~~

.. image:: /images/trn1-topology.png

**Trn1.32xl 2D torus topology**

On a single trn1.32xlarge instance Neuron devices are connected in a 2D
torus topology, supporting Collective Communication operators in sets
of 2, 8 and 32 ranks. Other set sizes might be supported in future
releases. A single-instance topology can be further extended across
multiple instances using EFA links.

For example, an 8x4 topology on a single instance, such as 8-rank
tensor parallelism and 4-rank data parallelism, can be extended across
multiple instances, creating a large tensor/data-parallel training
cluster.
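To make the 8x4 layout concrete, the 32 ranks can be enumerated into 8-rank tensor-parallel groups and 4-rank data-parallel groups. The sketch below is a hypothetical illustration of that grouping, not a Neuron SDK API:

```python
# Illustrative enumeration of replica groups for an 8x4 layout on 32 ranks
# (hypothetical helper, not part of the Neuron SDK).

TP = 8            # tensor-parallel degree
DP = 4            # data-parallel degree
WORLD = TP * DP   # 32 ranks on a trn1.32xlarge

# Consecutive ranks form a tensor-parallel group.
tp_groups = [list(range(g * TP, (g + 1) * TP)) for g in range(DP)]

# Ranks holding the same position within their TP group form a
# data-parallel group.
dp_groups = [list(range(i, WORLD, TP)) for i in range(TP)]

print(tp_groups[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
print(dp_groups[0])  # [0, 8, 16, 24]
```

Gradient all-reduce would then run within each data-parallel group, while tensor-parallel collectives run within each tensor-parallel group.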
.. _trn12xlarge-topology:

trn1.2xlarge topology
~~~~~~~~~~~~~~~~~~~~~

The trn1.2xlarge instance type contains a single Neuron device with two
NeuronCores. This instance type supports Collective Communication
operators with 2 ranks only. EFA is not available on trn1.2xlarge, so
the ranks cannot be extended beyond a single instance.
.. _inf248xlarge-topology:

inf2.48xlarge topology
~~~~~~~~~~~~~~~~~~~~~~

.. image:: /images/inf248xl-topology.png

**inf2.48xlarge topology**

On the inf2.48xlarge instance Neuron devices are connected in a ring
via NeuronLink. Any **even** number of ranks for Collective
Communication operators is supported, provided that the ranks occupy
consecutive Neuron devices. However, when using any number of ranks
other than 24 (the full instance), the full performance of the ring is
not utilized.

Topologies of other inf2 instance sizes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. image:: /images/inf224xl-topology.png

**inf2 other instance sizes topologies**

On other inf2 instance sizes Neuron devices are connected
bi-directionally. Any **even** number of ranks for Collective
Communication operators is supported, provided that the ranks occupy
consecutive Neuron devices. Collective Communication performance is
similar to the performance on inf2.48xlarge when fewer than 24 ranks
are used.
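The rank-placement rule stated above — an even number of ranks occupying consecutive Neuron devices — can be captured in a short check. This is a hypothetical validation helper for illustration, not part of the Neuron SDK:

```python
# Hypothetical validator for the inf2 rank-placement rule described above:
# an even number of ranks occupying consecutive Neuron devices.

def valid_inf2_rank_set(devices):
    """Return True if `devices` is an even-sized run of consecutive device ids."""
    if len(devices) == 0 or len(devices) % 2 != 0:
        return False
    ordered = sorted(devices)
    return all(b - a == 1 for a, b in zip(ordered, ordered[1:]))

print(valid_inf2_rank_set([2, 3, 4, 5]))  # True: 4 consecutive devices
print(valid_inf2_rank_set([0, 1, 2]))     # False: odd number of ranks
print(valid_inf2_rank_set([0, 2, 4, 6]))  # False: devices not consecutive
```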
``` | 2023-09-29T20:55:26.458Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "a3bskVXPvchm"
},
"source": [
"# Running ResNet50 on Inferentia\n",
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"## Introduction:"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Rb5rSpcZvYbX"
},
"source": [
"In this tutorial we will compile and deploy a ResNet50 model for Inferentia.\n",
"This tutorial has two main sections:\n",
"\n",
"1. Compile the ResNet50 model.\n",
"2. Run inference with the compiled model.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.\n",
"\n",
"Instructions of how to setup Neuron Tensorflow environment and run the tutorial as a Jupyter notebook are available in the [Tensorflow Quick Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.html#tensorflow-tutorial-setup)\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "E8FhiMivhcYB"
},
"source": [
"## Compile for Neuron\n",
"\n",
"A trained model must be compiled to Inferentia target before it can be deployed on Inferentia instances. In this step we compile the Keras ResNet50 model and export it as a SavedModel which is an interchange format for TensorFlow models.\n",
"At the end of compilation, the compiled SavedModel is saved in resnet50_neuron local directory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import time\n",
"import shutil\n",
"import tensorflow as tf\n",
"import tensorflow.neuron as tfn\n",
"import tensorflow.compat.v1.keras as keras\n",
"from tensorflow.keras.applications.resnet50 import ResNet50\n",
"from tensorflow.keras.applications.resnet50 import preprocess_input\n",
"\n",
"# Create a workspace\n",
"WORKSPACE = './ws_resnet50'\n",
"os.makedirs(WORKSPACE, exist_ok=True)\n",
"\n",
"# Prepare export directory (old one removed)\n",
"model_dir = os.path.join(WORKSPACE, 'resnet50')\n",
"compiled_model_dir = os.path.join(WORKSPACE, 'resnet50_neuron')\n",
"shutil.rmtree(model_dir, ignore_errors=True)\n",
"shutil.rmtree(compiled_model_dir, ignore_errors=True)\n",
"\n",
"# Instantiate Keras ResNet50 model\n",
"keras.backend.set_learning_phase(0)\n",
"keras.backend.set_image_data_format('channels_last')\n",
"\n",
"model = ResNet50(weights='imagenet')\n",
"\n",
"# Export SavedModel\n",
"tf.saved_model.simple_save(\n",
" session = keras.backend.get_session(),\n",
" export_dir = model_dir,\n",
" inputs = {'input': model.inputs[0]},\n",
" outputs = {'output': model.outputs[0]})\n",
"\n",
"# Compile using Neuron\n",
"tfn.saved_model.compile(model_dir, compiled_model_dir)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "I52jQOyO8vAn"
},
"source": [
"## Deploy on Inferentia\n",
"\n",
"Here we use the same instance to deploy the model.\n",
"If you deploy on a different instance, launch a deployment inf1 instance and copy the compiled model to it.\n",
"\n",
"Download the example image, and install the pillow module for inference on the deployment instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!curl -O https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg\n",
"!pip install pillow # Necessary for loading images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### After downloading the example image, run the inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import time\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow.keras.preprocessing import image\n",
"from tensorflow.keras.applications import resnet50\n",
"\n",
"tf.keras.backend.set_image_data_format('channels_last')\n",
"\n",
"# Create input from image\n",
"img_sgl = image.load_img('kitten_small.jpg', target_size=(224, 224))\n",
"img_arr = image.img_to_array(img_sgl)\n",
"img_arr2 = np.expand_dims(img_arr, axis=0)\n",
"img_arr3 = resnet50.preprocess_input(img_arr2)\n",
"\n",
"# Load model\n",
"COMPILED_MODEL_DIR = './ws_resnet50/resnet50_neuron/'\n",
"predictor_inferentia = tf.contrib.predictor.from_saved_model(COMPILED_MODEL_DIR)\n",
"\n",
"# Run inference\n",
"model_feed_dict={'input': img_arr3}\n",
"infa_rslts = predictor_inferentia(model_feed_dict);\n",
"\n",
"# Display results\n",
"print(resnet50.decode_predictions(infa_rslts[\"output\"], top=5)[0])\n",
"\n",
"# Sample output will look like below:\n",
"#[('n02123045', 'tabby', 0.68817204), ('n02127052', 'lynx', 0.12701613), ('n02123159', 'tiger_cat', 0.08736559), ('n02124075', 'Egyptian_cat', 0.063844085), ('n02128757', 'snow_leopard', 0.009240591)]"
]
}
],
"metadata": {
"colab": {
"default_view": {},
"name": "Untitled",
"provenance": [],
"version": "0.3.2",
"views": {}
},
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 1
}
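The notebook above runs a single inference; a common next step is to measure predictor latency. The sketch below is a hypothetical timing helper — `predictor_fn` stands in for any predictor callable (such as the `predictor_inferentia` object created in the notebook) and is demoed here with a dummy:

```python
import time

# Hypothetical latency helper for a predictor callable such as the
# `predictor_inferentia` object built in the notebook above. `predictor_fn`
# is any callable taking a feed dict; a stand-in is used for the demo.

def measure_latency(predictor_fn, feed, num_iters=100, warmup=10):
    """Return (avg_seconds_per_call, calls_per_second) over num_iters calls."""
    for _ in range(warmup):              # warm-up runs are excluded from timing
        predictor_fn(feed)
    start = time.time()
    for _ in range(num_iters):
        predictor_fn(feed)
    elapsed = time.time() - start
    return elapsed / num_iters, num_iters / elapsed

# Stand-in predictor: returns a constant result instantly.
fake_predictor = lambda feed: {"output": [0.0]}
avg, tput = measure_latency(fake_predictor, {"input": None}, num_iters=10, warmup=2)
print(f"avg latency: {avg * 1e6:.1f} us, throughput: {tput:.0f} inf/s")
```

For a real benchmark, batch size and the number of concurrent callers both matter; see the multi-threaded pattern used in the YOLO v3 tutorial below.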
``` | 2023-09-29T20:55:26.494Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Evaluate YOLO v3 on Inferentia\n",
"## Note: this tutorial runs on tensorflow-neuron 1.x only"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This tutorial walks through compiling and evaluating the YOLO v3 model on Inferentia using the AWS Neuron SDK.\n",
"\n",
"\n",
"In this tutorial we provide three main sections:\n",
"\n",
"1. Download the dataset and generate a pretrained SavedModel.\n",
"\n",
"2. Compile the YOLO v3 model.\n",
"\n",
"3. Deploy the compiled model.\n",
"\n",
"Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.\n",
"\n",
"Instructions of how to setup Neuron Tensorflow environment and run the tutorial as a Jupyter notebook are available in the Tutorial main page [Tensorflow-YOLO_v3 Tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v3_demo/yolo_v3_demo.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This demo requires the following pip packages:\n",
"\n",
"`pillow matplotlib pycocotools`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import sys\n",
"!{sys.executable} -m pip install pillow matplotlib pycocotools==2.0.2 --force --extra-index-url=https://pip.repos.neuron.amazonaws.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Download Dataset and Generate Pretrained SavedModel\n",
"### Download COCO 2017 validation dataset\n",
"\n",
"We start by downloading the COCO 2017 validation dataset, which we will use to validate our model. The COCO 2017 dataset is widely used for object detection, segmentation and image captioning."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!curl -LO http://images.cocodataset.org/zips/val2017.zip\n",
"!curl -LO http://images.cocodataset.org/annotations/annotations_trainval2017.zip\n",
"!unzip -q val2017.zip\n",
"!unzip annotations_trainval2017.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Generate YOLO v3 tensorflow SavedModel (pretrained on COCO 2017 dataset)\n",
"\n",
"The script yolo_v3_coco_saved_model.py generates a tensorflow SavedModel using pretrained weights from https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%run yolo_v3_coco_saved_model.py ./yolo_v3_coco_saved_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tensorflow SavedModel can be loaded as a tensorflow predictor. When a JPEG format image is provided as input, the output result of the tensorflow predictor contains information for drawing bounding boxes and classification results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import json\n",
"import tensorflow as tf\n",
"from PIL import Image\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as patches\n",
"\n",
"# launch predictor and run inference on an arbitrary image in the validation dataset\n",
"yolo_pred_cpu = tf.contrib.predictor.from_saved_model('./yolo_v3_coco_saved_model')\n",
"image_path = './val2017/000000581781.jpg'\n",
"with open(image_path, 'rb') as f:\n",
" feeds = {'image': [f.read()]}\n",
"results = yolo_pred_cpu(feeds)\n",
"\n",
"# load annotations to decode classification result\n",
"with open('./annotations/instances_val2017.json') as f:\n",
" annotate_json = json.load(f)\n",
"label_info = {idx+1: cat['name'] for idx, cat in enumerate(annotate_json['categories'])}\n",
"\n",
"# draw picture and bounding boxes\n",
"fig, ax = plt.subplots(figsize=(10, 10))\n",
"ax.imshow(Image.open(image_path).convert('RGB'))\n",
"wanted = results['scores'][0] > 0.1\n",
"for xyxy, label_no_bg in zip(results['boxes'][0][wanted], results['classes'][0][wanted]):\n",
" xywh = xyxy[0], xyxy[1], xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]\n",
" rect = patches.Rectangle((xywh[0], xywh[1]), xywh[2], xywh[3], linewidth=1, edgecolor='g', facecolor='none')\n",
" ax.add_patch(rect)\n",
" rx, ry = rect.get_xy()\n",
" rx = rx + rect.get_width() / 2.0\n",
" ax.annotate(label_info[label_no_bg + 1], (rx, ry), color='w', backgroundcolor='g', fontsize=10,\n",
" ha='center', va='center', bbox=dict(boxstyle='square,pad=0.01', fc='g', ec='none', alpha=0.5))\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Compile the Pretrained SavedModel for Neuron\n",
"\n",
"We make use of the Python compilation API `tfn.saved_model.compile` that is available in `tensorflow-neuron<2`. For the purpose of reducing Neuron runtime overhead, it is necessary to make use of arguments `no_fuse_ops` and `minimum_segment_size`.\n",
"Compiled model is saved in ./yolo_v3_coco_saved_model_neuron."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import shutil\n",
"import tensorflow as tf\n",
"import tensorflow.neuron as tfn\n",
"\n",
"\n",
"def no_fuse_condition(op):\n",
" return op.name.startswith('Preprocessor') or op.name.startswith('Postprocessor')\n",
"\n",
"with tf.Session(graph=tf.Graph()) as sess:\n",
" tf.saved_model.loader.load(sess, ['serve'], './yolo_v3_coco_saved_model')\n",
" no_fuse_ops = [op.name for op in sess.graph.get_operations() if no_fuse_condition(op)]\n",
"shutil.rmtree('./yolo_v3_coco_saved_model_neuron', ignore_errors=True)\n",
"result = tfn.saved_model.compile(\n",
" './yolo_v3_coco_saved_model', './yolo_v3_coco_saved_model_neuron',\n",
" # to enforce trivial compilable subgraphs to run on CPU\n",
" no_fuse_ops=no_fuse_ops,\n",
" minimum_segment_size=100,\n",
" batch_size=2,\n",
" dynamic_batch_size=True,\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model on Inferentia\n",
"## Part 3: Evaluate Model Quality after Compilation\n",
"\n",
"### Define evaluation functions\n",
"We first define some handy helper functions for running evaluation on the COCO 2017 dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"import time\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from pycocotools.coco import COCO\n",
"from pycocotools.cocoeval import COCOeval\n",
"\n",
"\n",
"def cocoapi_eval(jsonfile,\n",
" style,\n",
" coco_gt=None,\n",
" anno_file=None,\n",
" max_dets=(100, 300, 1000)):\n",
" \"\"\"\n",
" Args:\n",
" jsonfile: Evaluation json file, eg: bbox.json, mask.json.\n",
" style: COCOeval style, can be `bbox` , `segm` and `proposal`.\n",
" coco_gt: Whether to load COCOAPI through anno_file,\n",
" eg: coco_gt = COCO(anno_file)\n",
" anno_file: COCO annotations file.\n",
" max_dets: COCO evaluation maxDets.\n",
" \"\"\"\n",
" assert coco_gt is not None or anno_file is not None\n",
"\n",
" if coco_gt is None:\n",
" coco_gt = COCO(anno_file)\n",
" print(\"Start evaluate...\")\n",
" coco_dt = coco_gt.loadRes(jsonfile)\n",
" if style == 'proposal':\n",
" coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')\n",
" coco_eval.params.useCats = 0\n",
" coco_eval.params.maxDets = list(max_dets)\n",
" else:\n",
" coco_eval = COCOeval(coco_gt, coco_dt, style)\n",
" coco_eval.evaluate()\n",
" coco_eval.accumulate()\n",
" coco_eval.summarize()\n",
" return coco_eval.stats\n",
"\n",
"\n",
"def bbox_eval(anno_file, bbox_list):\n",
" coco_gt = COCO(anno_file)\n",
"\n",
" outfile = 'bbox_detections.json'\n",
" print('Generating json file...')\n",
" with open(outfile, 'w') as f:\n",
" json.dump(bbox_list, f)\n",
"\n",
" map_stats = cocoapi_eval(outfile, 'bbox', coco_gt=coco_gt)\n",
" return map_stats\n",
"\n",
"\n",
"def get_image_as_bytes(images, eval_pre_path):\n",
" batch_im_id_list = []\n",
" batch_im_name_list = []\n",
" batch_img_bytes_list = []\n",
" n = len(images)\n",
" batch_im_id = []\n",
" batch_im_name = []\n",
" batch_img_bytes = []\n",
" for i, im in enumerate(images):\n",
" im_id = im['id']\n",
" file_name = im['file_name']\n",
" if i % eval_batch_size == 0 and i != 0:\n",
" batch_im_id_list.append(batch_im_id)\n",
" batch_im_name_list.append(batch_im_name)\n",
" batch_img_bytes_list.append(batch_img_bytes)\n",
" batch_im_id = []\n",
" batch_im_name = []\n",
" batch_img_bytes = []\n",
" batch_im_id.append(im_id)\n",
" batch_im_name.append(file_name)\n",
"\n",
" with open(os.path.join(eval_pre_path, file_name), 'rb') as f:\n",
" batch_img_bytes.append(f.read())\n",
" return batch_im_id_list, batch_im_name_list, batch_img_bytes_list\n",
"\n",
"\n",
"def analyze_bbox(results, batch_im_id, _clsid2catid):\n",
" bbox_list = []\n",
" k = 0\n",
" for boxes, scores, classes in zip(results['boxes'], results['scores'], results['classes']):\n",
" if boxes is not None:\n",
" im_id = batch_im_id[k]\n",
" n = len(boxes)\n",
" for p in range(n):\n",
" clsid = classes[p]\n",
" score = scores[p]\n",
" xmin, ymin, xmax, ymax = boxes[p]\n",
" catid = (_clsid2catid[int(clsid)])\n",
" w = xmax - xmin + 1\n",
" h = ymax - ymin + 1\n",
"\n",
" bbox = [xmin, ymin, w, h]\n",
" # Round to the nearest 10th to avoid huge file sizes, as COCO suggests\n",
" bbox = [round(float(x) * 10) / 10 for x in bbox]\n",
" bbox_res = {\n",
" 'image_id': im_id,\n",
" 'category_id': catid,\n",
" 'bbox': bbox,\n",
" 'score': float(score),\n",
" }\n",
" bbox_list.append(bbox_res)\n",
" k += 1\n",
" return bbox_list"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is the actual evaluation loop. To fully utilize all four cores on one Inferentia, the optimal setup is to run multi-threaded inference using a `ThreadPoolExecutor`. The following cell is a multi-threaded adaptation of the evaluation routine at https://github.com/miemie2013/Keras-YOLOv4/blob/910c4c6f7265f5828fceed0f784496a0b46516bf/tools/cocotools.py#L97."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from concurrent import futures\n",
"\n",
"def evaluate(yolo_predictor, images, eval_pre_path, anno_file, eval_batch_size, _clsid2catid):\n",
" batch_im_id_list, batch_im_name_list, batch_img_bytes_list = get_image_as_bytes(images, eval_pre_path)\n",
"\n",
" # warm up\n",
" yolo_predictor({'image': np.array(batch_img_bytes_list[0], dtype=object)})\n",
"\n",
" with futures.ThreadPoolExecutor(4) as exe:\n",
" fut_im_list = []\n",
" fut_list = []\n",
" start_time = time.time()\n",
" for batch_im_id, batch_im_name, batch_img_bytes in zip(batch_im_id_list, batch_im_name_list, batch_img_bytes_list):\n",
" if len(batch_img_bytes) != eval_batch_size:\n",
" continue\n",
" fut = exe.submit(yolo_predictor, {'image': np.array(batch_img_bytes, dtype=object)})\n",
" fut_im_list.append((batch_im_id, batch_im_name))\n",
" fut_list.append(fut)\n",
" bbox_list = []\n",
" count = 0\n",
" for (batch_im_id, batch_im_name), fut in zip(fut_im_list, fut_list):\n",
" results = fut.result()\n",
" bbox_list.extend(analyze_bbox(results, batch_im_id, _clsid2catid))\n",
" for _ in batch_im_id:\n",
" count += 1\n",
" if count % 100 == 0:\n",
" print('Test iter {}'.format(count))\n",
" print('==================== Performance Measurement ====================')\n",
" print('Finished inference on {} images in {} seconds'.format(len(images), time.time() - start_time))\n",
" print('=================================================================')\n",
" # start evaluation\n",
" box_ap_stats = bbox_eval(anno_file, bbox_list)\n",
" return box_ap_stats"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluate mean average precision (mAP) score\n",
"Here is the code to calculate mAP scores of the YOLO v3 model. The expected mAP score is around 0.328 if we use the pretrained weights."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"yolo_pred = tf.contrib.predictor.from_saved_model('./yolo_v3_coco_saved_model_neuron')\n",
"\n",
"val_coco_root = './val2017'\n",
"val_annotate = './annotations/instances_val2017.json'\n",
"clsid2catid = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16,\n",
" 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31,\n",
" 27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43,\n",
" 39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56,\n",
" 51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72,\n",
" 63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85,\n",
" 75: 86, 76: 87, 77: 88, 78: 89, 79: 90}\n",
"eval_batch_size = 8\n",
"with open(val_annotate, 'r', encoding='utf-8') as f2:\n",
" for line in f2:\n",
" line = line.strip()\n",
" dataset = json.loads(line)\n",
" images = dataset['images']\n",
"box_ap = evaluate(yolo_pred, images, val_coco_root, val_annotate, eval_batch_size, clsid2catid)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
``` | 2023-09-29T20:55:26.571Z |
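The corner-to-COCO box conversion performed inside the notebook's `analyze_bbox` above can be exercised in isolation. Below is a minimal stand-alone sketch; the helper name `to_coco_bbox` is ours for illustration, not part of the notebook:

```python
def to_coco_bbox(xmin, ymin, xmax, ymax):
    """Convert a corner-format box to COCO [x, y, width, height].

    Mirrors the notebook's analyze_bbox: width/height use the +1 pixel
    convention, and values are rounded to the nearest 0.1 to keep the
    detections JSON small, as COCO suggests.
    """
    w = xmax - xmin + 1
    h = ymax - ymin + 1
    bbox = [xmin, ymin, w, h]
    # round(x * 10) / 10 rounds each coordinate to one decimal place
    return [round(float(x) * 10) / 10 for x in bbox]

print(to_coco_bbox(0, 0, 9, 9))  # a 10x10 box anchored at the origin
```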
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.rst.txt | ```
.. _tensorflow-ssd300:

Running SSD300 with AWS Neuron
==============================

*Update 11/16: The model checkpoint
link*\ https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt\ *is
currently broken and the AWS Neuron team is working on providing an
alternative source.*

This demo shows a Neuron-compatible SSD300 implementation that is
functionally equivalent to the open-source SSD300 model. The demo uses
TensorFlow-Neuron with the PyTorch SSD300 model and checkpoint
(https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/), and also
shows the performance achieved by the Inf1 instance.

Table of Contents
-----------------

1. Launch EC2 instance and update AWS Neuron SDK software
2. Generating Neuron compatible SSD300 TensorFlow SavedModel

   - Convert open source PyTorch SSD300 model and checkpoint into
     Neuron compatible SSD300 TensorFlow SavedModel

3. Evaluate the generated SSD300 TensorFlow SavedModel for both accuracy
   and performance

   - Running threaded inference through the COCO 2017 validation
     dataset

Launch EC2 instances and update tensorflow-neuron and neuron-cc
---------------------------------------------------------------

For this demo, launch one inf1.xlarge EC2 instance. We recommend using
the latest Ubuntu 18 Deep Learning AMI (DLAMI).

Please configure your ubuntu16/ubuntu18/yum repo following the steps in
the :ref:`install-neuron-tensorflow` guide in order to install
``tensorflow-model-server-neuron``.

Generating Neuron compatible SSD300 TensorFlow SavedModel
---------------------------------------------------------

First connect to your inf1.xlarge instance.

Compile open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the ``ssd300_demo`` directory, run the following:

1. Create a venv and install dependencies.

.. code:: bash

   sudo apt update
   sudo apt install g++ python3-dev python3-venv unzip
   sudo apt install tensorflow-model-server-neuron
   python3 -m venv env
   source ./env/bin/activate
   pip install pip setuptools --upgrade
   pip install -r ./requirements.txt --extra-index-url=https://pip.repos.neuron.amazonaws.com

2. Clone NVIDIA's DeepLearningExamples repo that contains PyTorch
   SSD300.

.. code:: bash

   git clone https://github.com/NVIDIA/DeepLearningExamples.git
   cd DeepLearningExamples
   git checkout a644350589f9abc91b203f73e686a50f5d6f3e96
   cd ..

3. Download the PyTorch SSD300 checkpoint file.

.. code:: bash

   curl -LO https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt

4. Download the COCO 2017 validation set and annotations.

.. code:: bash

   curl -LO http://images.cocodataset.org/zips/val2017.zip
   unzip ./val2017.zip
   curl -LO http://images.cocodataset.org/annotations/annotations_trainval2017.zip
   unzip ./annotations_trainval2017.zip

5. Convert the PyTorch SSD300 model and checkpoint into a Neuron-compatible
   TensorFlow SavedModel.

.. code:: bash

   python ssd300_model.py --torch_checkpoint=./nvidia_ssdpyt_fp32_20190225.pt --output_saved_model=./ssd300_tf_neuron/1

This converts the PyTorch SSD300 model and checkpoint to a Neuron-compatible
TensorFlow SavedModel using tensorflow-neuron and neuron-cc. The
compilation output is stored in ``./ssd300_tf_neuron``.

6. Launch the ``tensorflow-model-server-neuron`` gRPC server at the default
   port 8500 in the background.

.. code:: bash

   tensorflow_model_server_neuron --model_base_path=$(pwd)/ssd300_tf_neuron &

7. On the client, evaluate the Neuron-compatible TensorFlow SavedModel for
   both accuracy and performance. Note that this client by default
   assumes a ``tensorflow-model-server-neuron`` listening at
   ``localhost:8500``. On inf1.xlarge, the expected throughput is 100
   images/second once the server is fully warmed up, and the expected
   mean average precision (mAP) is 0.253.

.. code:: bash

   python ssd300_evaluation_client.py --val2017=./val2017 --instances_val2017_json=./annotations/instances_val2017.json

8. After running the demo, please clean up resources allocated in the Neuron
   runtime by gracefully killing the ``tensorflow_model_server_neuron``
   process, e.g.,

.. code:: bash

   killall tensorflow_model_server_neuron
``` | 2023-09-29T20:55:26.578Z |
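The SSD300 client's throughput figure above (100 images/second) only holds once the server is warmed up, so the first requests must be excluded from timing. A minimal stdlib sketch of this warm-up-then-measure pattern, with a stub ``fake_predict`` standing in for the real gRPC call (the names here are illustrative, not from the demo code):

```python
import time

def measure_throughput(predict, batches, warmup=2):
    """Warm up on the first `warmup` batches, then time the rest.

    Returns (images_processed, images_per_second). Warm-up batches are
    excluded from timing because the first requests pay one-time setup
    costs (e.g. the server staging the model into device memory).
    """
    for batch in batches[:warmup]:
        predict(batch)
    start = time.time()
    count = 0
    for batch in batches[warmup:]:
        predict(batch)
        count += len(batch)
    elapsed = time.time() - start
    return count, count / elapsed if elapsed > 0 else float("inf")

# stub predictor: pretend each image takes a fixed amount of work
def fake_predict(batch):
    time.sleep(0.001 * len(batch))

batches = [[0] * 8 for _ in range(10)]  # 10 batches of 8 "images"
count, ips = measure_throughput(fake_predict, batches)
print(count, round(ips))
```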
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/tensorflow_serving_tutorial.rst.txt | ```
.. _tensorflow-serving-neuronrt-visible-cores:

Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
=====================================================

TensorFlow Serving allows customers to scale up inference workloads
across a network. TensorFlow Neuron Serving uses the same API as normal
TensorFlow Serving with two differences: (a) the saved model must be
compiled for Inferentia and (b) the entry point is a different binary
named ``tensorflow_model_server_neuron``. The binary is found at
``/usr/local/bin/tensorflow_model_server_neuron`` and is pre-installed
in the DLAMI or installed with the APT/YUM
``tensorflow-model-server-neuron`` package.

Install TensorFlow Model Server and Serving API
-----------------------------------------------

Follow the steps in the :ref:`install-neuron-tensorflow` guide.

Then install the model server using either apt-get or yum.
If using TF 1.x, install the appropriate version (see above):

.. code:: bash

   sudo apt-get install tensorflow-model-server-neuron

or

.. code:: bash

   sudo yum install tensorflow-model-server-neuron

You also need the TensorFlow Serving API (use ``--no-deps`` to prevent
installation of regular TensorFlow). Pick the package that matches the
version of TensorFlow you wish to use.

For TensorFlow 1.x:

.. code:: bash

   pip install --no-deps tensorflow_serving_api==1.15

For TensorFlow 2.x:

.. code:: bash

   pip install --no-deps tensorflow_serving_api

For the example image preprocessing using Keras preprocessing, the
Python Imaging Library Pillow is required:

.. code:: bash

   pip install pillow

To work around h5py issue https://github.com/aws/aws-neuron-sdk/issues/220:

.. code:: bash

   pip install "h5py<3.0.0"
Export and Compile Saved Model
------------------------------

The following example shows graph construction followed by the addition
of a Neuron compilation step before exporting to a saved model.

For TensorFlow 1.x:

.. code:: python

   import tensorflow as tf
   import tensorflow.neuron

   tf.keras.backend.set_learning_phase(0)
   tf.keras.backend.set_image_data_format('channels_last')

   model = tf.keras.applications.ResNet50(weights='imagenet')
   sess = tf.keras.backend.get_session()
   inputs = {'input': model.inputs[0]}
   outputs = {'output': model.outputs[0]}

   # save the model using tf.saved_model.simple_save
   modeldir = "./resnet50/1"
   tf.saved_model.simple_save(sess, modeldir, inputs, outputs)

   # compile the model for Inferentia
   neuron_modeldir = "./resnet50_inf1/1"
   tf.neuron.saved_model.compile(modeldir, neuron_modeldir, batch_size=1)

For TensorFlow 2.x:

.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn

   tf.keras.backend.set_learning_phase(0)
   tf.keras.backend.set_image_data_format('channels_last')

   image_sizes = [224, 224]
   model = tf.keras.applications.ResNet50(weights='imagenet')
   example_inputs = tf.random.uniform([1, *image_sizes, 3], dtype=tf.float32)

   # run the model once so its forward pass is defined, then trace it for Neuron
   model(example_inputs)
   model_neuron = tfn.trace(model, example_inputs)
   tf.keras.models.save_model(model_neuron, './resnet50_inf1/1')
Serving Saved Model
-------------------
You can now serve the saved model with the
tensorflow_model_server_neuron binary. To utilize multiple NeuronCores,
it is recommended to launch multiple TensorFlow model servers, each
pinned to one NeuronCore and listening on its own gRPC port:
.. code:: bash
export NEURON_RT_VISIBLE_CORES=0 # important: set this environment variable before launching each model server
tensorflow_model_server_neuron --model_name=resnet50_inf1 \
--model_base_path=$(pwd)/resnet50_inf1/ --port=8500
# To run another server on a different NeuronCore, open another
# terminal and run the same commands with NEURON_RT_VISIBLE_CORES=1
# and a different port; repeat up to the number of NeuronCores on
# your machine.
export NEURON_RT_VISIBLE_CORES=1
tensorflow_model_server_neuron --model_name=resnet50_inf1 \
--model_base_path=$(pwd)/resnet50_inf1/ --port=8501
The compiled model is staged in Inferentia DRAM by the server to prepare
for inference.
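With one model server pinned to each NeuronCore, a client can spread requests across them. A minimal round-robin sketch — the addresses are hypothetical and assume each server listens on its own port:

```python
from itertools import cycle

# Hypothetical endpoints: one tensorflow_model_server_neuron per NeuronCore.
servers = ["localhost:8500", "localhost:8501"]
next_server = cycle(servers)

def pick_server():
    """Return the next server address in round-robin order; a real client
    would keep one gRPC channel open per address and reuse it."""
    return next(next_server)

assignments = [pick_server() for _ in range(4)]
# assignments == ["localhost:8500", "localhost:8501", "localhost:8500", "localhost:8501"]
```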
Generate inference requests to the model server
-----------------------------------------------
Now run inferences via gRPC as shown in the following sample client
code:
For Tensorflow 1.x:
.. code:: python
import numpy as np
import grpc
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.applications.resnet50 import decode_predictions
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
if __name__ == '__main__':
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
img_file = tf.keras.utils.get_file(
"./kitten_small.jpg",
"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
img = image.load_img(img_file, target_size=(224, 224))
img_array = preprocess_input(image.img_to_array(img)[None, ...])
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet50_inf1'
request.inputs['input'].CopyFrom(
tf.contrib.util.make_tensor_proto(img_array, shape=img_array.shape))
result = stub.Predict(request)
prediction = tf.make_ndarray(result.outputs['output'])
print(decode_predictions(prediction))
For Tensorflow 2.x:
.. code:: python
import numpy as np
import grpc
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
from tensorflow.keras.applications.resnet50 import decode_predictions
tf.keras.backend.set_image_data_format('channels_last')
if __name__ == '__main__':
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
img_file = tf.keras.utils.get_file(
"./kitten_small.jpg",
"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
img = image.load_img(img_file, target_size=(224, 224))
img_array = preprocess_input(image.img_to_array(img)[None, ...])
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet50_inf1'
request.inputs['input_1'].CopyFrom(
tf.make_tensor_proto(img_array, shape=img_array.shape))
result = stub.Predict(request)
prediction = tf.make_ndarray(result.outputs['output_1'])
print(decode_predictions(prediction))
``` | | 2023-09-29T20:55:26.584Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "e91cf83b",
"metadata": {},
"source": [
"# Running Huggingface DistilBERT with TensorFlow-Neuron"
]
},
{
"cell_type": "markdown",
"id": "71394e1e",
"metadata": {},
"source": [
"In this tutorial you will compile and deploy the DistilBERT version of HuggingFace 🤗 Transformers BERT for Inferentia using TensorFlow-Neuron. The full list of HuggingFace's pretrained BERT models can be found in the BERT section on this page https://huggingface.co/transformers/pretrained_models.html. You can also read about HuggingFace's pipeline feature here: https://huggingface.co/transformers/main_classes/pipelines.html\n",
"\n",
"This Jupyter notebook should be run on an inf1.6xlarge or larger instance, but in a real-life scenario the compilation should be done on a compute instance and the deployment on an Inf1 instance to save costs."
]
},
{
"cell_type": "markdown",
"id": "828ef9bd",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "5becc549",
"metadata": {},
"source": [
"To run this tutorial, please follow the instructions for [TensorFlow-Neuron Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/tensorflow-neuron.html#setup-tensorflow-neuron) and the [Jupyter Notebook Quickstart](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html) and set your kernel to \"Python (tensorflow-neuron)\".\n",
"\n",
"Next, install some additional dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee1a3b84",
"metadata": {},
"outputs": [],
"source": [
"%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings, making errors easier to detect\n",
"!pip install transformers==4.30.2\n",
"!pip install ipywidgets"
]
},
{
"cell_type": "markdown",
"id": "c301cfce",
"metadata": {},
"source": [
"## Download From Huggingface and Compile for AWS-Neuron"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92e8050d",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_neuron as tfn\n",
"from transformers import DistilBertTokenizer, TFDistilBertModel\n",
"\n",
"# Create a wrapper for the DistilBERT model that will accept inputs as a list\n",
"# instead of a dictionary. This will allow the compiled model to be saved\n",
"# to disk with the model.save() function.\n",
"class DistilBertWrapper(tf.keras.Model):\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.model = model\n",
" def __call__(self, example_inputs):\n",
" return self.model({'input_ids' : example_inputs[0], 'attention_mask' : example_inputs[1]})\n",
" \n",
"\n",
"tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')\n",
"model = DistilBertWrapper(TFDistilBertModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english'))\n",
"\n",
"batch_size = 16\n",
"\n",
"# create example inputs with a batch size of 16\n",
"text = [\"Paris is the <mask> of France.\"] * batch_size\n",
"encoded_input = tokenizer(text, return_tensors='tf', padding='max_length', max_length=64)\n",
"\n",
"# turn inputs into a list\n",
"example_input = [encoded_input['input_ids'], encoded_input['attention_mask']]\n",
"\n",
"#compile\n",
"model_neuron = tfn.trace(model, example_input)\n",
"\n",
"print(\"Running on neuron:\", model_neuron(example_input))\n",
"\n",
"# save the model to disk to save recompilation time for next usage\n",
"model_neuron.save('./distilbert-neuron-b16')"
]
},
{
"cell_type": "markdown",
"id": "0f2e159a",
"metadata": {},
"source": [
"## Run Basic Inference Benchmarking"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ccf22e74",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"import concurrent.futures\n",
"import time\n",
"\n",
"reloaded_neuron_model = tf.keras.models.load_model('./distilbert-neuron-b16')\n",
"print(\"Reloaded model running on neuron:\", reloaded_neuron_model(example_input))\n",
"\n",
"num_threads = 4\n",
"num_inferences = 1000\n",
"\n",
"latency_list = []\n",
"def inference_with_latency_calculation(example_input):\n",
" global latency_list\n",
" start = time.time()\n",
" result = reloaded_neuron_model(example_input)\n",
" end = time.time()\n",
" latency_list.append((end-start) * 1000)\n",
" return result\n",
"\n",
"start = time.time()\n",
"with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:\n",
" futures = []\n",
" for i in range(num_inferences):\n",
" futures.append(executor.submit(inference_with_latency_calculation, example_input))\n",
" for future in concurrent.futures.as_completed(futures):\n",
" get_result = future.result()\n",
"end = time.time()\n",
"\n",
"total_time = end - start\n",
"throughput = (num_inferences * batch_size)/total_time\n",
"\n",
"print(f\"Throughput was {throughput} samples per second.\")\n",
"print(f\"Latency p50 was {np.percentile(latency_list, 50)} ms\")\n",
"print(f\"Latency p90 was {np.percentile(latency_list, 90)} ms\")\n",
"print(f\"Latency p95 was {np.percentile(latency_list, 95)} ms\")\n",
"print(f\"Latency p99 was {np.percentile(latency_list, 99)} ms\")\n",
"assert(throughput >= 1930.0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b31b82fc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | | 2023-09-29T20:55:26.779Z |
Evaluate YOLO v4 on Inferentia — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/yolo_v4_demo/evaluate.html | # Evaluate YOLO v4 on Inferentia — AWS Neuron Documentation
```
import json

import tensorflow as tf

# `evaluate` is expected to be defined in an earlier cell of this notebook (TF 1.x API)
yolo_pred = tf.contrib.predictor.from_saved_model('./yolo_v4_coco_saved_model_neuron')
val_coco_root = './val2017'
val_annotate = './annotations/instances_val2017.json'
clsid2catid = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16,
15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31,
27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43,
39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56,
51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72,
63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85,
75: 86, 76: 87, 77: 88, 78: 89, 79: 90}
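# Aside: the model predicts contiguous class indices (0-79) while COCO
# category ids are non-contiguous, hence the clsid2catid table above.
# Illustration only -- the same table can be rebuilt from any COCO
# 'categories' list (the sample entries below are hypothetical):
example_categories = [{'id': 1, 'name': 'person'}, {'id': 2, 'name': 'bicycle'}, {'id': 13, 'name': 'stop sign'}]
example_clsid2catid = {i: c['id'] for i, c in enumerate(sorted(example_categories, key=lambda c: c['id']))}
# example_clsid2catid == {0: 1, 1: 2, 2: 13}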
eval_batch_size = 8
with open(val_annotate, 'r', encoding='utf-8') as f2:
    for line in f2:
        line = line.strip()
        dataset = json.loads(line)
    images = dataset['images']
box_ap = evaluate(yolo_pred, images, val_coco_root, val_annotate, eval_batch_size, clsid2catid)
``` | <!DOCTYPE html><html lang="en"><head>
<style id="MJX-CHTML-styles">
}
mjx-mphantom {
visibility: hidden;
}
_::-webkit-full-page-media, _:future, :root mjx-container {
will-change: opacity;
}
mjx-assistive-mml {
position: absolute !important;
top: 0px;
left: 0px;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0px 0px 0px !important;
border: 0px !important;
display: block !important;
width: auto !important;
overflow: hidden !important;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
mjx-assistive-mml[display="block"] {
width: 100% !important;
}
mjx-c::before {
display: block;
width: 0;
}
.MJX-TEX {
font-family: MJXZERO, MJXTEX;
}
.TEX-B {
font-family: MJXZERO, MJXTEX-B;
}
.TEX-I {
font-family: MJXZERO, MJXTEX-I;
}
.TEX-MI {
font-family: MJXZERO, MJXTEX-MI;
}
.TEX-BI {
font-family: MJXZERO, MJXTEX-BI;
}
.TEX-S1 {
font-family: MJXZERO, MJXTEX-S1;
}
.TEX-S2 {
font-family: MJXZERO, MJXTEX-S2;
}
.TEX-S3 {
font-family: MJXZERO, MJXTEX-S3;
}
.TEX-S4 {
font-family: MJXZERO, MJXTEX-S4;
}
.TEX-A {
font-family: MJXZERO, MJXTEX-A;
}
.TEX-C {
font-family: MJXZERO, MJXTEX-C;
}
.TEX-CB {
font-family: MJXZERO, MJXTEX-CB;
}
.TEX-FR {
font-family: MJXZERO, MJXTEX-FR;
}
.TEX-FRB {
font-family: MJXZERO, MJXTEX-FRB;
}
.TEX-SS {
font-family: MJXZERO, MJXTEX-SS;
}
.TEX-SSB {
font-family: MJXZERO, MJXTEX-SSB;
}
.TEX-SSI {
font-family: MJXZERO, MJXTEX-SSI;
}
.TEX-SC {
font-family: MJXZERO, MJXTEX-SC;
}
.TEX-T {
font-family: MJXZERO, MJXTEX-T;
}
.TEX-V {
font-family: MJXZERO, MJXTEX-V;
}
.TEX-VB {
font-family: MJXZERO, MJXTEX-VB;
}
mjx-stretchy-v mjx-c, mjx-stretchy-h mjx-c {
font-family: MJXZERO, MJXTEX-S1, MJXTEX-S4, MJXTEX, MJXTEX-A ! important;
}
@font-face /* 0 */ {
font-family: MJXZERO;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Zero.woff") format("woff");
}
@font-face /* 1 */ {
font-family: MJXTEX;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Regular.woff") format("woff");
}
@font-face /* 2 */ {
font-family: MJXTEX-B;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Bold.woff") format("woff");
}
@font-face /* 3 */ {
font-family: MJXTEX-I;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-Italic.woff") format("woff");
}
@font-face /* 4 */ {
font-family: MJXTEX-MI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Italic.woff") format("woff");
}
@font-face /* 5 */ {
font-family: MJXTEX-BI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-BoldItalic.woff") format("woff");
}
@font-face /* 6 */ {
font-family: MJXTEX-S1;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size1-Regular.woff") format("woff");
}
@font-face /* 7 */ {
font-family: MJXTEX-S2;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size2-Regular.woff") format("woff");
}
@font-face /* 8 */ {
font-family: MJXTEX-S3;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size3-Regular.woff") format("woff");
}
@font-face /* 9 */ {
font-family: MJXTEX-S4;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size4-Regular.woff") format("woff");
}
@font-face /* 10 */ {
font-family: MJXTEX-A;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_AMS-Regular.woff") format("woff");
}
@font-face /* 11 */ {
font-family: MJXTEX-C;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Regular.woff") format("woff");
}
@font-face /* 12 */ {
font-family: MJXTEX-CB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Bold.woff") format("woff");
}
@font-face /* 13 */ {
font-family: MJXTEX-FR;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Regular.woff") format("woff");
}
@font-face /* 14 */ {
font-family: MJXTEX-FRB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Bold.woff") format("woff");
}
@font-face /* 15 */ {
font-family: MJXTEX-SS;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Regular.woff") format("woff");
}
@font-face /* 16 */ {
font-family: MJXTEX-SSB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Bold.woff") format("woff");
}
@font-face /* 17 */ {
font-family: MJXTEX-SSI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Italic.woff") format("woff");
}
@font-face /* 18 */ {
font-family: MJXTEX-SC;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Script-Regular.woff") format("woff");
}
@font-face /* 19 */ {
font-family: MJXTEX-T;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Typewriter-Regular.woff") format("woff");
}
@font-face /* 20 */ {
font-family: MJXTEX-V;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Regular.woff") format("woff");
}
@font-face /* 21 */ {
font-family: MJXTEX-VB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Bold.woff") format("woff");
}
</style></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/tensorflow/yolo_v4_demo/evaluate.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
Note: this tutorial runs on tensorflow-neuron 1.x only
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction">
Introduction
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Prerequisites">
Prerequisites
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Part-1:-Download-Dataset-and-Generate-Pretrained-SavedModel">
Part 1: Download Dataset and Generate Pretrained SavedModel
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Download-COCO-2017-validation-dataset">
Download COCO 2017 validation dataset
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Check-required-package-versions">
Check required package versions
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Generate-YOLO-v4-tensorflow-SavedModel-(pretrained-on-COCO-2017-dataset)">
Generate YOLO v4 tensorflow SavedModel (pretrained on COCO 2017 dataset)
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Part-2:-Compile-the-Pretrained-SavedModel-for-Inferentia">
Part 2: Compile the Pretrained SavedModel for Inferentia
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Part-3:-Evaluate-Model-Quality-after-Compilation">
Part 3: Evaluate Model Quality after Compilation
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Define-evaluation-functions">
Define evaluation functions
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Evaluate-mean-average-precision-(mAP)-score">
Evaluate mean average precision (mAP) score
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="Evaluate-YOLO-v4-on-Inferentia">
<h1>Evaluate YOLO v4 on Inferentia<a class="headerlink" href="#Evaluate-YOLO-v4-on-Inferentia" title="Permalink to this headline">#</a></h1>
<div class="section" id="Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
<h2>Note: this tutorial runs on tensorflow-neuron 1.x only<a class="headerlink" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only" title="Permalink to this headline">#</a></h2>
</div>
<div class="section" id="Introduction">
<h2>Introduction<a class="headerlink" href="#Introduction" title="Permalink to this headline">#</a></h2>
<p>This tutorial walks through compiling and evaluating a YOLO v4 model on Inferentia using the AWS Neuron SDK (09/2020 release). We recommend running this tutorial on an EC2 <code class="docutils literal notranslate"><span class="pre">inf1.2xlarge</span></code> instance, which provides one Inferentia chip, 8 vCPU cores, and 16 GB of memory. Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow">Tensorflow Installation
Guide</a>. You can select the kernel from the “Kernel -> Change Kernel” option at the top of this Jupyter notebook page.</p>
</div>
<div class="section" id="Prerequisites">
<h2>Prerequisites<a class="headerlink" href="#Prerequisites" title="Permalink to this headline">#</a></h2>
<p>This demo requires the following pip packages:</p>
<p><code class="docutils literal notranslate"><span class="pre">neuron-cc</span> <span class="pre">tensorflow-neuron<2</span> <span class="pre">requests</span> <span class="pre">pillow</span> <span class="pre">matplotlib</span> <span class="pre">pycocotools</span> <span class="pre">torch</span></code></p>
<p>and debian/rpm package <code class="docutils literal notranslate"><span class="pre">aws-neuron-runtime</span></code>.</p>
<p>On DLAMI, <code class="docutils literal notranslate"><span class="pre">aws-neuron-runtime</span></code> is already pre-installed.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span>!pip install neuron-cc 'tensorflow-neuron<2' requests pillow matplotlib pycocotools==2.0.1 numpy==1.18.2 torch~=1.5.0 --force \
--extra-index-url=https://pip.repos.neuron.amazonaws.com
</pre></div>
</div>
</div>
</div>
<div class="section" id="Part-1:-Download-Dataset-and-Generate-Pretrained-SavedModel">
<h2>Part 1: Download Dataset and Generate Pretrained SavedModel<a class="headerlink" href="#Part-1:-Download-Dataset-and-Generate-Pretrained-SavedModel" title="Permalink to this headline">#</a></h2>
<div class="section" id="Download-COCO-2017-validation-dataset">
<h3>Download COCO 2017 validation dataset<a class="headerlink" href="#Download-COCO-2017-validation-dataset" title="Permalink to this headline">#</a></h3>
<p>We start by downloading the COCO 2017 validation dataset, which we will use to evaluate our model. The COCO 2017 dataset is widely used for object detection, segmentation, and image captioning.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/zips/val2017.zip
<span class="o">!</span>curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/annotations/annotations_trainval2017.zip
<span class="o">!</span>unzip<span class="w"> </span>-q<span class="w"> </span>val2017.zip
<span class="o">!</span>unzip<span class="w"> </span>annotations_trainval2017.zip
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>ls
</pre></div>
</div>
</div>
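<p>The annotations archive extracted above contains <code class="docutils literal notranslate"><span class="pre">instances_val2017.json</span></code>, whose <code class="docutils literal notranslate"><span class="pre">categories</span></code> list is used later in this tutorial to decode classification results. The following is a minimal sketch of that lookup using a hypothetical two-category subset (the real file lists 80 COCO categories):</p>

```python
# Toy mirror of the 'categories' structure in ./annotations/instances_val2017.json
# (hypothetical two-entry subset; the real file has 80 categories).
annotate_json = {'categories': [{'id': 1, 'name': 'person'},
                                {'id': 2, 'name': 'bicycle'}]}

# Part 3 of this tutorial builds its label lookup the same way:
# enumeration index + 1 maps to the category name.
label_info = {idx + 1: cat['name'] for idx, cat in enumerate(annotate_json['categories'])}
print(label_info)
```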
</div>
<div class="section" id="Check-required-package-versions">
<h3>Check required package versions<a class="headerlink" href="#Check-required-package-versions" title="Permalink to this headline">#</a></h3>
<p>The following cell checks that the installed AWS Neuron packages meet the minimum required versions.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">pkg_resources</span>
<span class="kn">from</span> <span class="nn">distutils.version</span> <span class="kn">import</span> <span class="n">LooseVersion</span>
<span class="k">assert</span> <span class="n">LooseVersion</span><span class="p">(</span><span class="n">pkg_resources</span><span class="o">.</span><span class="n">get_distribution</span><span class="p">(</span><span class="s1">'neuron-cc'</span><span class="p">)</span><span class="o">.</span><span class="n">version</span><span class="p">)</span> <span class="o">></span> <span class="n">LooseVersion</span><span class="p">(</span><span class="s1">'1.0.20000'</span><span class="p">)</span>
<span class="k">assert</span> <span class="n">LooseVersion</span><span class="p">(</span><span class="n">pkg_resources</span><span class="o">.</span><span class="n">get_distribution</span><span class="p">(</span><span class="s1">'tensorflow-neuron'</span><span class="p">)</span><span class="o">.</span><span class="n">version</span><span class="p">)</span> <span class="o">></span> <span class="n">LooseVersion</span><span class="p">(</span><span class="s1">'1.15.3.1.0.2000'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'passed package version checks'</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Generate-YOLO-v4-tensorflow-SavedModel-(pretrained-on-COCO-2017-dataset)">
<h3>Generate YOLO v4 tensorflow SavedModel (pretrained on COCO 2017 dataset)<a class="headerlink" href="#Generate-YOLO-v4-tensorflow-SavedModel-(pretrained-on-COCO-2017-dataset)" title="Permalink to this headline">#</a></h3>
<p>Script <code class="docutils literal notranslate"><span class="pre">yolo_v4_coco_saved_model.py</span></code> will generate a tensorflow SavedModel using pretrained weights from <a class="reference external" href="https://github.com/Tianxiaomo/pytorch-YOLOv4">https://github.com/Tianxiaomo/pytorch-YOLOv4</a>.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>python3<span class="w"> </span>yolo_v4_coco_saved_model.py
</pre></div>
</div>
</div>
<p>This tensorflow SavedModel can be loaded as a tensorflow predictor. When a JPEG format image is provided as input, the output result of the tensorflow predictor contains information for drawing bounding boxes and classification results.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">PIL</span> <span class="kn">import</span> <span class="n">Image</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="kn">import</span> <span class="nn">matplotlib.patches</span> <span class="k">as</span> <span class="nn">patches</span>
<span class="c1"># launch predictor and run inference on an arbitrary image in the validation dataset</span>
<span class="n">yolo_pred_cpu</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">predictor</span><span class="o">.</span><span class="n">from_saved_model</span><span class="p">(</span><span class="s1">'./yolo_v4_coco_saved_model'</span><span class="p">)</span>
<span class="n">image_path</span> <span class="o">=</span> <span class="s1">'./val2017/000000581781.jpg'</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">image_path</span><span class="p">,</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">feeds</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'image'</span><span class="p">:</span> <span class="p">[</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">()]}</span>
<span class="n">results</span> <span class="o">=</span> <span class="n">yolo_pred_cpu</span><span class="p">(</span><span class="n">feeds</span><span class="p">)</span>
<span class="c1"># load annotations to decode classification result</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'./annotations/instances_val2017.json'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">annotate_json</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">label_info</span> <span class="o">=</span> <span class="p">{</span><span class="n">idx</span><span class="o">+</span><span class="mi">1</span><span class="p">:</span> <span class="n">cat</span><span class="p">[</span><span class="s1">'name'</span><span class="p">]</span> <span class="k">for</span> <span class="n">idx</span><span class="p">,</span> <span class="n">cat</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">annotate_json</span><span class="p">[</span><span class="s1">'categories'</span><span class="p">])}</span>
<span class="c1"># draw picture and bounding boxes</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">))</span>
<span class="n">ax</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">Image</span><span class="o">.</span><span class="n">open</span><span class="p">(</span><span class="n">image_path</span><span class="p">)</span><span class="o">.</span><span class="n">convert</span><span class="p">(</span><span class="s1">'RGB'</span><span class="p">))</span>
<span class="n">wanted</span> <span class="o">=</span> <span class="n">results</span><span class="p">[</span><span class="s1">'scores'</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span> <span class="o">></span> <span class="mf">0.1</span>
<span class="k">for</span> <span class="n">xyxy</span><span class="p">,</span> <span class="n">label_no_bg</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">results</span><span class="p">[</span><span class="s1">'boxes'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="n">wanted</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'classes'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="n">wanted</span><span class="p">]):</span>
<span class="n">xywh</span> <span class="o">=</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">-</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">3</span><span class="p">]</span> <span class="o">-</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">rect</span> <span class="o">=</span> <span class="n">patches</span><span class="o">.</span><span class="n">Rectangle</span><span class="p">((</span><span class="n">xywh</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">1</span><span class="p">]),</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">2</span><span class="p">],</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">edgecolor</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">facecolor</span><span class="o">=</span><span class="s1">'none'</span><span class="p">)</span>
<span class="n">ax</span><span class="o">.</span><span class="n">add_patch</span><span class="p">(</span><span class="n">rect</span><span class="p">)</span>
<span class="n">rx</span><span class="p">,</span> <span class="n">ry</span> <span class="o">=</span> <span class="n">rect</span><span class="o">.</span><span class="n">get_xy</span><span class="p">()</span>
<span class="n">rx</span> <span class="o">=</span> <span class="n">rx</span> <span class="o">+</span> <span class="n">rect</span><span class="o">.</span><span class="n">get_width</span><span class="p">()</span> <span class="o">/</span> <span class="mf">2.0</span>
<span class="n">ax</span><span class="o">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">label_info</span><span class="p">[</span><span class="n">label_no_bg</span> <span class="o">+</span> <span class="mi">1</span><span class="p">],</span> <span class="p">(</span><span class="n">rx</span><span class="p">,</span> <span class="n">ry</span><span class="p">),</span> <span class="n">color</span><span class="o">=</span><span class="s1">'w'</span><span class="p">,</span> <span class="n">backgroundcolor</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span>
<span class="n">ha</span><span class="o">=</span><span class="s1">'center'</span><span class="p">,</span> <span class="n">va</span><span class="o">=</span><span class="s1">'center'</span><span class="p">,</span> <span class="n">bbox</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">boxstyle</span><span class="o">=</span><span class="s1">'square,pad=0.01'</span><span class="p">,</span> <span class="n">fc</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">ec</span><span class="o">=</span><span class="s1">'none'</span><span class="p">,</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.5</span><span class="p">))</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
</div>
</div>
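<p>The drawing loop above converts the predictor's corner-format boxes <code class="docutils literal notranslate"><span class="pre">(x0,</span> <span class="pre">y0,</span> <span class="pre">x1,</span> <span class="pre">y1)</span></code> into matplotlib's <code class="docutils literal notranslate"><span class="pre">(x,</span> <span class="pre">y,</span> <span class="pre">width,</span> <span class="pre">height)</span></code> format inline. The same conversion as a standalone sketch (the function name is ours, not part of the tutorial code):</p>

```python
# Convert a corner-format bounding box (x0, y0, x1, y1) into the
# (x, y, width, height) format expected by matplotlib.patches.Rectangle.
def xyxy_to_xywh(xyxy):
    x0, y0, x1, y1 = xyxy
    return x0, y0, x1 - x0, y1 - y0

print(xyxy_to_xywh((10.0, 20.0, 110.0, 70.0)))
```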
</div>
</div>
<div class="section" id="Part-2:-Compile-the-Pretrained-SavedModel-for-Inferentia">
<h2>Part 2: Compile the Pretrained SavedModel for Inferentia<a class="headerlink" href="#Part-2:-Compile-the-Pretrained-SavedModel-for-Inferentia" title="Permalink to this headline">#</a></h2>
<p>We make use of the Python compilation API <code class="docutils literal notranslate"><span class="pre">tfn.saved_model.compile</span></code> that is available in <code class="docutils literal notranslate"><span class="pre">tensorflow-neuron<2</span></code>. To reduce Neuron runtime overhead, we pass the <code class="docutils literal notranslate"><span class="pre">no_fuse_ops</span></code> and <code class="docutils literal notranslate"><span class="pre">minimum_segment_size</span></code> arguments.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">shutil</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">tensorflow.neuron</span> <span class="k">as</span> <span class="nn">tfn</span>
<span class="k">def</span> <span class="nf">no_fuse_condition</span><span class="p">(</span><span class="n">op</span><span class="p">):</span>
<span class="k">return</span> <span class="nb">any</span><span class="p">(</span><span class="n">op</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="n">pat</span><span class="p">)</span> <span class="k">for</span> <span class="n">pat</span> <span class="ow">in</span> <span class="p">[</span><span class="s1">'reshape'</span><span class="p">,</span> <span class="s1">'lambda_1/Cast'</span><span class="p">,</span> <span class="s1">'lambda_2/Cast'</span><span class="p">,</span> <span class="s1">'lambda_3/Cast'</span><span class="p">])</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">())</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">tf</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">loader</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">sess</span><span class="p">,</span> <span class="p">[</span><span class="s1">'serve'</span><span class="p">],</span> <span class="s1">'./yolo_v4_coco_saved_model'</span><span class="p">)</span>
<span class="n">no_fuse_ops</span> <span class="o">=</span> <span class="p">[</span><span class="n">op</span><span class="o">.</span><span class="n">name</span> <span class="k">for</span> <span class="n">op</span> <span class="ow">in</span> <span class="n">sess</span><span class="o">.</span><span class="n">graph</span><span class="o">.</span><span class="n">get_operations</span><span class="p">()</span> <span class="k">if</span> <span class="n">no_fuse_condition</span><span class="p">(</span><span class="n">op</span><span class="p">)]</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">rmtree</span><span class="p">(</span><span class="s1">'./yolo_v4_coco_saved_model_neuron'</span><span class="p">,</span> <span class="n">ignore_errors</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span>
<span class="s1">'./yolo_v4_coco_saved_model'</span><span class="p">,</span> <span class="s1">'./yolo_v4_coco_saved_model_neuron'</span><span class="p">,</span>
<span class="c1"># we partition the graph before casting from float16 to float32, to help reduce the output tensor size by 1/2</span>
<span class="n">no_fuse_ops</span><span class="o">=</span><span class="n">no_fuse_ops</span><span class="p">,</span>
<span class="c1"># to enforce trivial compilable subgraphs to run on CPU</span>
<span class="n">minimum_segment_size</span><span class="o">=</span><span class="mi">100</span><span class="p">,</span>
<span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">dynamic_batch_size</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>
</pre></div>
</div>
</div>
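<p>The <code class="docutils literal notranslate"><span class="pre">no_fuse_condition</span></code> predicate in the cell above selects ops by name prefix so they stay out of the Neuron-fused subgraph. A standalone sketch of the same matching logic, applied to hypothetical op names rather than a live TensorFlow graph:</p>

```python
# Name-prefix matching used to build the no_fuse_ops list: an op is
# excluded from fusion if its name starts with any listed prefix.
def no_fuse_condition(op_name):
    prefixes = ['reshape', 'lambda_1/Cast', 'lambda_2/Cast', 'lambda_3/Cast']
    return any(op_name.startswith(pat) for pat in prefixes)

print(no_fuse_condition('reshape_1/Reshape'))  # matches the 'reshape' prefix
print(no_fuse_condition('conv2d_1/Conv2D'))    # matches no prefix
```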
</div>
<div class="section" id="Part-3:-Evaluate-Model-Quality-after-Compilation">
<h2>Part 3: Evaluate Model Quality after Compilation<a class="headerlink" href="#Part-3:-Evaluate-Model-Quality-after-Compilation" title="Permalink to this headline">#</a></h2>
<div class="section" id="Define-evaluation-functions">
<h3>Define evaluation functions<a class="headerlink" href="#Define-evaluation-functions" title="Permalink to this headline">#</a></h3>
<p>We first define some handy helper functions for running evaluation on the COCO 2017 dataset.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">pycocotools.coco</span> <span class="kn">import</span> <span class="n">COCO</span>
<span class="kn">from</span> <span class="nn">pycocotools.cocoeval</span> <span class="kn">import</span> <span class="n">COCOeval</span>
<span class="k">def</span> <span class="nf">cocoapi_eval</span><span class="p">(</span><span class="n">jsonfile</span><span class="p">,</span>
<span class="n">style</span><span class="p">,</span>
<span class="n">coco_gt</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">anno_file</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">max_dets</span><span class="o">=</span><span class="p">(</span><span class="mi">100</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">1000</span><span class="p">)):</span>
<span class="w"> </span><span class="sd">"""</span>
<span class="sd"> Args:</span>
<span class="sd"> jsonfile: Evaluation json file, eg: bbox.json, mask.json.</span>
<span class="sd"> style: COCOeval style, can be `bbox` , `segm` and `proposal`.</span>
<span class="sd"> coco_gt: Whether to load COCOAPI through anno_file,</span>
<span class="sd"> eg: coco_gt = COCO(anno_file)</span>
<span class="sd"> anno_file: COCO annotations file.</span>
<span class="sd"> max_dets: COCO evaluation maxDets.</span>
<span class="sd"> """</span>
<span class="k">assert</span> <span class="n">coco_gt</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span> <span class="ow">or</span> <span class="n">anno_file</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span>
<span class="k">if</span> <span class="n">coco_gt</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">coco_gt</span> <span class="o">=</span> <span class="n">COCO</span><span class="p">(</span><span class="n">anno_file</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Starting evaluation..."</span><span class="p">)</span>
<span class="n">coco_dt</span> <span class="o">=</span> <span class="n">coco_gt</span><span class="o">.</span><span class="n">loadRes</span><span class="p">(</span><span class="n">jsonfile</span><span class="p">)</span>
<span class="k">if</span> <span class="n">style</span> <span class="o">==</span> <span class="s1">'proposal'</span><span class="p">:</span>
<span class="n">coco_eval</span> <span class="o">=</span> <span class="n">COCOeval</span><span class="p">(</span><span class="n">coco_gt</span><span class="p">,</span> <span class="n">coco_dt</span><span class="p">,</span> <span class="s1">'bbox'</span><span class="p">)</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">params</span><span class="o">.</span><span class="n">useCats</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">params</span><span class="o">.</span><span class="n">maxDets</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">max_dets</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">coco_eval</span> <span class="o">=</span> <span class="n">COCOeval</span><span class="p">(</span><span class="n">coco_gt</span><span class="p">,</span> <span class="n">coco_dt</span><span class="p">,</span> <span class="n">style</span><span class="p">)</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">evaluate</span><span class="p">()</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">accumulate</span><span class="p">()</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">summarize</span><span class="p">()</span>
<span class="k">return</span> <span class="n">coco_eval</span><span class="o">.</span><span class="n">stats</span>
<span class="k">def</span> <span class="nf">bbox_eval</span><span class="p">(</span><span class="n">anno_file</span><span class="p">,</span> <span class="n">bbox_list</span><span class="p">):</span>
<span class="n">coco_gt</span> <span class="o">=</span> <span class="n">COCO</span><span class="p">(</span><span class="n">anno_file</span><span class="p">)</span>
<span class="n">outfile</span> <span class="o">=</span> <span class="s1">'bbox_detections.json'</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Generating json file...'</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">outfile</span><span class="p">,</span> <span class="s1">'w'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">json</span><span class="o">.</span><span class="n">dump</span><span class="p">(</span><span class="n">bbox_list</span><span class="p">,</span> <span class="n">f</span><span class="p">)</span>
<span class="n">map_stats</span> <span class="o">=</span> <span class="n">cocoapi_eval</span><span class="p">(</span><span class="n">outfile</span><span class="p">,</span> <span class="s1">'bbox'</span><span class="p">,</span> <span class="n">coco_gt</span><span class="o">=</span><span class="n">coco_gt</span><span class="p">)</span>
<span class="k">return</span> <span class="n">map_stats</span>
<span class="k">def</span> <span class="nf">get_image_as_bytes</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">):</span>
<span class="n">batch_im_id_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">n</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">images</span><span class="p">)</span>
<span class="n">batch_im_id</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">im</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">images</span><span class="p">):</span>
<span class="n">im_id</span> <span class="o">=</span> <span class="n">im</span><span class="p">[</span><span class="s1">'id'</span><span class="p">]</span>
<span class="n">file_name</span> <span class="o">=</span> <span class="n">im</span><span class="p">[</span><span class="s1">'file_name'</span><span class="p">]</span>
<span class="k">if</span> <span class="n">i</span> <span class="o">%</span> <span class="n">eval_batch_size</span> <span class="o">==</span> <span class="mi">0</span> <span class="ow">and</span> <span class="n">i</span> <span class="o">!=</span> <span class="mi">0</span><span class="p">:</span>
<span class="n">batch_im_id_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_im_id</span><span class="p">)</span>
<span class="n">batch_im_name_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_im_name</span><span class="p">)</span>
<span class="n">batch_img_bytes_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">)</span>
<span class="n">batch_im_id</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_id</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">im_id</span><span class="p">)</span>
<span class="n">batch_im_name</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">file_name</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">eval_pre_path</span><span class="p">,</span> <span class="n">file_name</span><span class="p">),</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">batch_img_bytes</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">())</span>
<span class="k">return</span> <span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span>
<span class="k">def</span> <span class="nf">analyze_bbox</span><span class="p">(</span><span class="n">results</span><span class="p">,</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">):</span>
<span class="n">bbox_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="n">boxes</span><span class="p">,</span> <span class="n">scores</span><span class="p">,</span> <span class="n">classes</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">results</span><span class="p">[</span><span class="s1">'boxes'</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'scores'</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'classes'</span><span class="p">]):</span>
<span class="k">if</span> <span class="n">boxes</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">im_id</span> <span class="o">=</span> <span class="n">batch_im_id</span><span class="p">[</span><span class="n">k</span><span class="p">]</span>
<span class="n">n</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">boxes</span><span class="p">)</span>
<span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">):</span>
<span class="n">clsid</span> <span class="o">=</span> <span class="n">classes</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">score</span> <span class="o">=</span> <span class="n">scores</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">xmin</span><span class="p">,</span> <span class="n">ymin</span><span class="p">,</span> <span class="n">xmax</span><span class="p">,</span> <span class="n">ymax</span> <span class="o">=</span> <span class="n">boxes</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">catid</span> <span class="o">=</span> <span class="p">(</span><span class="n">_clsid2catid</span><span class="p">[</span><span class="nb">int</span><span class="p">(</span><span class="n">clsid</span><span class="p">)])</span>
<span class="n">w</span> <span class="o">=</span> <span class="n">xmax</span> <span class="o">-</span> <span class="n">xmin</span> <span class="o">+</span> <span class="mi">1</span>
<span class="n">h</span> <span class="o">=</span> <span class="n">ymax</span> <span class="o">-</span> <span class="n">ymin</span> <span class="o">+</span> <span class="mi">1</span>
<span class="n">bbox</span> <span class="o">=</span> <span class="p">[</span><span class="n">xmin</span><span class="p">,</span> <span class="n">ymin</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">h</span><span class="p">]</span>
<span class="c1"># Round to the nearest 10th to avoid huge file sizes, as COCO suggests</span>
<span class="n">bbox</span> <span class="o">=</span> <span class="p">[</span><span class="nb">round</span><span class="p">(</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="o">*</span> <span class="mi">10</span><span class="p">)</span> <span class="o">/</span> <span class="mi">10</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">bbox</span><span class="p">]</span>
<span class="n">bbox_res</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">'image_id'</span><span class="p">:</span> <span class="n">im_id</span><span class="p">,</span>
<span class="s1">'category_id'</span><span class="p">:</span> <span class="n">catid</span><span class="p">,</span>
<span class="s1">'bbox'</span><span class="p">:</span> <span class="n">bbox</span><span class="p">,</span>
<span class="s1">'score'</span><span class="p">:</span> <span class="nb">float</span><span class="p">(</span><span class="n">score</span><span class="p">),</span>
<span class="p">}</span>
<span class="n">bbox_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">bbox_res</span><span class="p">)</span>
<span class="n">k</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">return</span> <span class="n">bbox_list</span>
</pre></div>
</div>
</div>
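<p>As a standalone illustration of the conversion that <code class="docutils literal notranslate"><span class="pre">analyze_bbox</span></code> performs, the following sketch (with made-up corner coordinates, not values from the dataset) builds one COCO-style result entry:</p>

```python
def to_coco_bbox(xmin, ymin, xmax, ymax):
    # analyze_bbox uses an inclusive-pixel convention: width = xmax - xmin + 1
    w = xmax - xmin + 1
    h = ymax - ymin + 1
    # Round to the nearest tenth, as COCO suggests, to keep the JSON file small
    return [round(float(v) * 10) / 10 for v in (xmin, ymin, w, h)]

# Hypothetical detection: corner-format box, class id already mapped to a COCO category id
bbox = to_coco_bbox(12.34, 56.78, 100.0, 200.0)
result = {'image_id': 42, 'category_id': 1, 'bbox': bbox, 'score': 0.91}
```

<p>Each such dictionary is appended to <code class="docutils literal notranslate"><span class="pre">bbox_list</span></code> and later dumped to <code class="docutils literal notranslate"><span class="pre">bbox_detections.json</span></code> for <code class="docutils literal notranslate"><span class="pre">COCOeval</span></code>.</p>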
<p>Here is the actual evaluation loop. To fully utilize all four NeuronCores on one Inferentia chip, run multi-threaded inference with a <code class="docutils literal notranslate"><span class="pre">ThreadPoolExecutor</span></code>. The following cell is a multi-threaded adaptation of the evaluation routine at <a class="reference external" href="https://github.com/miemie2013/Keras-YOLOv4/blob/910c4c6f7265f5828fceed0f784496a0b46516bf/tools/cocotools.py#L97">https://github.com/miemie2013/Keras-YOLOv4/blob/910c4c6f7265f5828fceed0f784496a0b46516bf/tools/cocotools.py#L97</a>.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">concurrent</span> <span class="kn">import</span> <span class="n">futures</span>
<span class="n">NUM_THREADS</span> <span class="o">=</span> <span class="mi">4</span>
<span class="k">def</span> <span class="nf">evaluate</span><span class="p">(</span><span class="n">yolo_predictor</span><span class="p">,</span> <span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">,</span> <span class="n">anno_file</span><span class="p">,</span> <span class="n">eval_batch_size</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">):</span>
<span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span> <span class="o">=</span> <span class="n">get_image_as_bytes</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">)</span>
<span class="c1"># warm up</span>
<span class="n">yolo_predictor</span><span class="p">({</span><span class="s1">'image'</span><span class="p">:</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">batch_img_bytes_list</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="nb">object</span><span class="p">)})</span>
<span class="k">def</span> <span class="nf">yolo_predictor_timer</span><span class="p">(</span><span class="n">yolo_pred</span><span class="p">,</span> <span class="n">image</span><span class="p">):</span>
<span class="n">begin</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">yolo_pred</span><span class="p">(</span><span class="n">image</span><span class="p">)</span>
<span class="n">delta</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">begin</span>
<span class="k">return</span> <span class="n">result</span><span class="p">,</span> <span class="n">delta</span>
<span class="n">latency</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">with</span> <span class="n">futures</span><span class="o">.</span><span class="n">ThreadPoolExecutor</span><span class="p">(</span><span class="n">NUM_THREADS</span><span class="p">)</span> <span class="k">as</span> <span class="n">exe</span><span class="p">:</span>
<span class="n">fut_im_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">fut_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">start_time</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="k">for</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">,</span> <span class="n">batch_img_bytes</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span><span class="p">):</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">)</span> <span class="o">!=</span> <span class="n">eval_batch_size</span><span class="p">:</span>
<span class="k">continue</span>
<span class="n">fut</span> <span class="o">=</span> <span class="n">exe</span><span class="o">.</span><span class="n">submit</span><span class="p">(</span><span class="n">yolo_predictor_timer</span><span class="p">,</span> <span class="n">yolo_predictor</span><span class="p">,</span> <span class="p">{</span><span class="s1">'image'</span><span class="p">:</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="nb">object</span><span class="p">)})</span>
<span class="n">fut_im_list</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">))</span>
<span class="n">fut_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fut</span><span class="p">)</span>
<span class="n">bbox_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">sum_time</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="n">count</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="p">(</span><span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">),</span> <span class="n">fut</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">fut_im_list</span><span class="p">,</span> <span class="n">fut_list</span><span class="p">):</span>
<span class="n">results</span><span class="p">,</span> <span class="n">times</span> <span class="o">=</span> <span class="n">fut</span><span class="o">.</span><span class="n">result</span><span class="p">()</span>
<span class="c1"># Adjust latency since we are in batch</span>
<span class="n">latency</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">times</span> <span class="o">/</span> <span class="n">eval_batch_size</span><span class="p">)</span>
<span class="n">sum_time</span> <span class="o">+=</span> <span class="n">times</span>
<span class="n">bbox_list</span><span class="o">.</span><span class="n">extend</span><span class="p">(</span><span class="n">analyze_bbox</span><span class="p">(</span><span class="n">results</span><span class="p">,</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">))</span>
<span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">batch_im_id</span><span class="p">:</span>
<span class="n">count</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">if</span> <span class="n">count</span> <span class="o">%</span> <span class="mi">1000</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Test iter </span><span class="si">{}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">count</span><span class="p">))</span>
<span class="n">throughput</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">images</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">sum_time</span> <span class="o">/</span> <span class="n">NUM_THREADS</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Average Images Per Second:'</span><span class="p">,</span> <span class="n">throughput</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Latency P50: </span><span class="si">{:.1f}</span><span class="s2"> ms"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span><span class="o">*</span><span class="mf">1000.0</span><span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Latency P90: </span><span class="si">{:.1f}</span><span class="s2"> ms"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency</span><span class="p">,</span> <span class="mi">90</span><span class="p">)</span><span class="o">*</span><span class="mf">1000.0</span><span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Latency P95: </span><span class="si">{:.1f}</span><span class="s2"> ms"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency</span><span class="p">,</span> <span class="mi">95</span><span class="p">)</span><span class="o">*</span><span class="mf">1000.0</span><span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Latency P99: </span><span class="si">{:.1f}</span><span class="s2"> ms"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency</span><span class="p">,</span> <span class="mi">99</span><span class="p">)</span><span class="o">*</span><span class="mf">1000.0</span><span class="p">))</span>
<span class="c1"># start evaluation</span>
<span class="n">box_ap_stats</span> <span class="o">=</span> <span class="n">bbox_eval</span><span class="p">(</span><span class="n">anno_file</span><span class="p">,</span> <span class="n">bbox_list</span><span class="p">)</span>
<span class="k">return</span> <span class="n">box_ap_stats</span>
</pre></div>
</div>
</div>
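<p>The throughput and latency arithmetic in the loop above can be checked in isolation. This sketch uses made-up batch timings and <code class="docutils literal notranslate"><span class="pre">statistics.median</span></code> in place of <code class="docutils literal notranslate"><span class="pre">np.percentile(latency,</span> <span class="pre">50)</span></code>; the constants mirror the ones used above:</p>

```python
from statistics import median

NUM_THREADS = 4
eval_batch_size = 8
batch_times = [0.4, 0.5, 0.6, 0.5]  # seconds per batch, hypothetical values

# Per-image latency: each batch's wall time divided by the batch size
latency = [t / eval_batch_size for t in batch_times]

# Throughput: total busy time is shared across NUM_THREADS concurrent workers,
# so the effective wall-clock time is roughly sum_time / NUM_THREADS
num_images = len(batch_times) * eval_batch_size
throughput = num_images / (sum(batch_times) / NUM_THREADS)

p50_ms = median(latency) * 1000.0  # median per-image latency in milliseconds
```

<p>Note that dividing a batch's wall time by <code class="docutils literal notranslate"><span class="pre">eval_batch_size</span></code> gives an amortized per-image latency, not the latency a single request would see.</p>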
</div>
<div class="section" id="Evaluate-mean-average-precision-(mAP)-score">
<h3>Evaluate mean average precision (mAP) score<a class="headerlink" href="#Evaluate-mean-average-precision-(mAP)-score" title="Permalink to this headline">#</a></h3>
<p>Here is the code to calculate the mAP score of the YOLO v4 model. With the pretrained weights, the expected mAP score is around 0.487.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">yolo_pred</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">predictor</span><span class="o">.</span><span class="n">from_saved_model</span><span class="p">(</span><span class="s1">'./yolo_v4_coco_saved_model_neuron'</span><span class="p">)</span>
<span class="n">val_coco_root</span> <span class="o">=</span> <span class="s1">'./val2017'</span>
<span class="n">val_annotate</span> <span class="o">=</span> <span class="s1">'./annotations/instances_val2017.json'</span>
<span class="n">clsid2catid</span> <span class="o">=</span> <span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">:</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">5</span><span class="p">:</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">6</span><span class="p">:</span> <span class="mi">7</span><span class="p">,</span> <span class="mi">7</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">:</span> <span class="mi">9</span><span class="p">,</span> <span class="mi">9</span><span class="p">:</span> <span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">:</span> <span class="mi">11</span><span class="p">,</span> <span class="mi">11</span><span class="p">:</span> <span class="mi">13</span><span class="p">,</span> <span class="mi">12</span><span class="p">:</span> <span class="mi">14</span><span class="p">,</span> <span class="mi">13</span><span class="p">:</span> <span class="mi">15</span><span class="p">,</span> <span class="mi">14</span><span class="p">:</span> <span class="mi">16</span><span class="p">,</span>
<span class="mi">15</span><span class="p">:</span> <span class="mi">17</span><span class="p">,</span> <span class="mi">16</span><span class="p">:</span> <span class="mi">18</span><span class="p">,</span> <span class="mi">17</span><span class="p">:</span> <span class="mi">19</span><span class="p">,</span> <span class="mi">18</span><span class="p">:</span> <span class="mi">20</span><span class="p">,</span> <span class="mi">19</span><span class="p">:</span> <span class="mi">21</span><span class="p">,</span> <span class="mi">20</span><span class="p">:</span> <span class="mi">22</span><span class="p">,</span> <span class="mi">21</span><span class="p">:</span> <span class="mi">23</span><span class="p">,</span> <span class="mi">22</span><span class="p">:</span> <span class="mi">24</span><span class="p">,</span> <span class="mi">23</span><span class="p">:</span> <span class="mi">25</span><span class="p">,</span> <span class="mi">24</span><span class="p">:</span> <span class="mi">27</span><span class="p">,</span> <span class="mi">25</span><span class="p">:</span> <span class="mi">28</span><span class="p">,</span> <span class="mi">26</span><span class="p">:</span> <span class="mi">31</span><span class="p">,</span>
<span class="mi">27</span><span class="p">:</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">28</span><span class="p">:</span> <span class="mi">33</span><span class="p">,</span> <span class="mi">29</span><span class="p">:</span> <span class="mi">34</span><span class="p">,</span> <span class="mi">30</span><span class="p">:</span> <span class="mi">35</span><span class="p">,</span> <span class="mi">31</span><span class="p">:</span> <span class="mi">36</span><span class="p">,</span> <span class="mi">32</span><span class="p">:</span> <span class="mi">37</span><span class="p">,</span> <span class="mi">33</span><span class="p">:</span> <span class="mi">38</span><span class="p">,</span> <span class="mi">34</span><span class="p">:</span> <span class="mi">39</span><span class="p">,</span> <span class="mi">35</span><span class="p">:</span> <span class="mi">40</span><span class="p">,</span> <span class="mi">36</span><span class="p">:</span> <span class="mi">41</span><span class="p">,</span> <span class="mi">37</span><span class="p">:</span> <span class="mi">42</span><span class="p">,</span> <span class="mi">38</span><span class="p">:</span> <span class="mi">43</span><span class="p">,</span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
</body></html> | 2023-09-29T20:55:27.060Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.rst.txt | ```
.. _tensorflow-bert-demo:
Running TensorFlow BERT-Large with AWS Neuron
=============================================
This example shows a Neuron compatible BERT-Large implementation that is
functionally equivalent to the open source BERT-Large model. This demo uses
TensorFlow-Neuron and BERT-Large weights fine-tuned for MRPC, and also shows
the performance achieved on an Inf1 instance. If you want to use
public BERT SavedModels, please also follow the steps described in :ref:`using-public-bert-savedmodels`.
Launch EC2 instances
--------------------
For this demo, launch two EC2 instances:
- a c5.4xlarge instance for compiling the BERT-Large Model and
- an inf1.xlarge instance for running inference
For both of these instances choose the latest Ubuntu 18 Deep Learning
AMI (DLAMI).
.. _compiling-neuron-compatible-bert-large:
Compiling Neuron compatible BERT-Large
--------------------------------------
First, connect to the c5.4xlarge instance and update tensorflow-neuron and
neuron-cc.
Update compilation EC2 instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Update to the latest neuron software by executing the instructions at :ref:`install-neuron-tensorflow`.
Note: if your tensorflow-neuron version on the inference instance is
lower than 1.15.0.1.0.1333.0, you will need to run this demo on
inf1.2xlarge instead of inf1.xlarge.
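One way to compare such dotted version strings is element-wise, as integer tuples; the installed version below is a made-up example, not a real release number:

```python
def version_tuple(v):
    """Turn a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

minimum = version_tuple("1.15.0.1.0.1333.0")
installed = version_tuple("1.15.0.1.0.1891.0")  # hypothetical installed version
print(installed >= minimum)  # True -> inf1.xlarge is sufficient
```

Tuple comparison handles multi-digit components correctly, which a plain string comparison would not.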
Compile open source BERT-Large saved model using Neuron compatible BERT-Large implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Neuron software works with TensorFlow saved models. Users should bring
their own BERT-Large saved model for this section. This demo will run
inference for the MRPC task, so the saved model should be fine-tuned for
MRPC. Users who need additional help to fine-tune the model for MRPC or
to create a saved model can refer to :ref:`bert-tensorflow-demo-appendix1`.
In the same environment, fetch the bert_demo scripts and run the
following:
.. code:: bash
git clone https://github.com/aws/aws-neuron-sdk
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
export BERT_LARGE_SAVED_MODEL="/path/to/user/bert-large/savedmodel"
python bert_model.py --input_saved_model $BERT_LARGE_SAVED_MODEL --output_saved_model ./bert-saved-model-neuron --batch_size=6 --aggressive_optimizations
This compiles the BERT-Large model pointed to by $BERT_LARGE_SAVED_MODEL for an
input size of 128 and a batch size of 6. The compilation output is stored
in bert-saved-model-neuron. Copy this directory to your Inf1 instance for
inference.
The bert_model.py script encapsulates all the steps necessary for this
process. For details on what is done by bert_model.py please refer to
:ref:`bert-tensorflow-demo-appendix2`.
Running the inference demo
--------------------------
Connect to your inf1.xlarge instance and update tensorflow-neuron,
aws-neuron-runtime and aws-neuron-tools.
Update inference EC2 instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Update to the latest neuron software by executing the instructions at :ref:`install-neuron-tensorflow`.
Launching the BERT-Large demo server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Copy the compiled model (bert-saved-model-neuron) from your c5.4xlarge
instance to your inf1.xlarge instance. Place the model in the same directory as
the bert_demo scripts. Then, from the same conda environment, launch the
BERT-Large demo server:
.. code:: bash
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python bert_server.py --dir bert-saved-model-neuron --batch 6 --parallel 4
This loads 4 BERT-Large models, one into each of the 4 NeuronCores found
in an inf1.xlarge instance. For each of the 4 models, the BERT-Large
demo server opportunistically stitches asynchronous requests together
into batches of 6. When there are insufficient pending requests, the
server creates dummy requests to fill the batch.
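The stitching logic can be sketched as follows; this is an illustrative example, not the actual bert_server.py code, and the names (`next_batch`, `BATCH_SIZE`, `DUMMY`) are ours:

```python
import queue

BATCH_SIZE = 6    # must match the batch size the model was compiled with
DUMMY = object()  # sentinel standing in for a padding request

def next_batch(pending):
    """Drain up to BATCH_SIZE pending requests and pad the rest with dummies."""
    batch = []
    while len(batch) < BATCH_SIZE:
        try:
            batch.append(pending.get_nowait())
        except queue.Empty:
            break
    n_real = len(batch)
    batch.extend([DUMMY] * (BATCH_SIZE - n_real))  # fixed tensor shape for Neuron
    return batch, n_real

pending = queue.Queue()
for r in ["req-a", "req-b", "req-c", "req-d"]:
    pending.put(r)
batch, n_real = next_batch(pending)
print(len(batch), n_real)  # 6 4
```

Padding to a fixed batch size matters because the model was compiled for exactly that input shape.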
Wait for bert_server to finish loading the BERT-Large models into
Inferentia memory. When it is ready to accept requests, it will print the
inferences per second once every second. This reflects the number of
real inferences only; dummy requests created for batching are not
credited to Inferentia performance. Once the inferences are done, you can send
a keyboard interrupt to print the average throughput of your run.
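A minimal sketch of how such a counter might credit only real inferences (our own illustration, not the actual server code):

```python
import time

class ThroughputMeter:
    """Tracks inferences per second, counting real requests only."""
    def __init__(self):
        self.real_count = 0
        self.start = time.monotonic()

    def record_batch(self, n_real, batch_size=6):
        # Dummy padding (batch_size - n_real slots) is deliberately not counted.
        self.real_count += n_real

    def rate(self):
        return self.real_count / max(time.monotonic() - self.start, 1e-9)

meter = ThroughputMeter()
meter.record_batch(n_real=4)  # a batch padded with 2 dummies
meter.record_batch(n_real=6)  # a full batch
print(meter.real_count)  # 10
```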
Sending requests to server from multiple clients
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Wait until the BERT demo server is ready to accept requests. Then, on the
same inf1.xlarge instance, launch a separate Linux terminal. From the
bert_demo directory, execute the following commands:
.. code:: bash
source activate aws_neuron_tensorflow_p36
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
for i in {1..96}; do python bert_client.py --cycle 128 & done
This spins up 96 clients, each of which sends 128 inference requests.
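As a quick sanity check of the request volume these commands generate (assuming, ideally, that every batch could be filled with real requests):

```python
clients = 96
requests_per_client = 128
batch_size = 6  # matches the compiled model's batch size

total_requests = clients * requests_per_client
full_batches = total_requests // batch_size
print(total_requests, full_batches)  # 12288 requests -> 2048 full batches
```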
Printing latency metrics
~~~~~~~~~~~~~~~~~~~~~~~~
After all your requests have been sent to the server, you can
run the following command:
.. code:: bash
python latency_printer.py
.. _using-public-bert-savedmodels:
Using public BERT SavedModels
-----------------------------
We now provide a compilation script that has better compatibility
with various flavors of BERT SavedModels generated from
https://github.com/google-research/bert. The current limitations
are:
1. You did not change
`modeling.py <https://github.com/google-research/bert/blob/master/modeling.py>`__
2. BERT SavedModel is generated using ``estimator.export_saved_model``
3. BERT SavedModel uses fixed sequence length 128 (you may check by
``saved_model_cli show --dir /path/to/user/bert/savedmodel --all``)
4. ``neuron-cc`` version is at least 1.0.12000.0
5. ``aws-neuron-runtime`` version is at least 1.0.7000.0
6. The ``--batch_size`` argument specified in this script is at most 4
Example usage is shown below:
.. code:: bash
export BERT_LARGE_SAVED_MODEL="/path/to/user/bert-large/savedmodel"
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python bert_no_model.py --input_saved_model $BERT_LARGE_SAVED_MODEL --output_saved_model ./bert-saved-model-neuron --batch_size=1
.. _bert-tensorflow-demo-appendix1:
Appendix 1
----------
Users who need help fine-tuning BERT-Large for MRPC and creating a saved
model may follow the instructions here.
Connect to the c5.4xlarge compilation EC2 instance you started above and
download these three items:

1. Clone `this <https://github.com/google-research/bert>`__ GitHub repo.
2. Download GLUE data as described
   `here <https://github.com/google-research/bert#user-content-sentence-and-sentence-pair-classification-tasks>`__.
   Do not run the finetuning command.
3. Download a desired pre-trained BERT-Large checkpoint from
   `here <https://github.com/google-research/bert#user-content-pre-trained-models>`__.
   This is the model we will fine-tune.
Next edit run_classifier.py in the cloned bert repo to apply the patch
described in the following git diff.
::
diff --git a/run_classifier.py b/run_classifier.py
index 817b147..c9426bc 100644
--- a/run_classifier.py
+++ b/run_classifier.py
@@ -955,6 +955,18 @@ def main(_):
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
+ features = {
+ "input_ids": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='input_ids'),
+ "input_mask": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='input_mask'),
+ "segment_ids": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='segment_ids'),
+ "label_ids": tf.placeholder(shape=[None], dtype=tf.int32, name='label_ids'),
+ "is_real_example": tf.placeholder(shape=[None], dtype=tf.int32, name='is_real_example'),
+ }
+ serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features)
+ estimator._export_to_tpu = False ## !!important to add this
+ estimator.export_saved_model(
+ export_dir_base='./bert_classifier_saved_model',
+ serving_input_receiver_fn=serving_input_fn)
output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
NOTE: Users who are interested may refer to this
`link <https://github.com/google-research/bert/issues/146#issuecomment-569138476>`__
for additional background information on the patch, but it is not
necessary for running this demo.
Then, from the bert_demo directory, run the following:
.. code:: bash
source activate aws_neuron_tensorflow_p36
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
export BERT_REPO_DIR="/path/to/cloned/bert/repo/directory"
export GLUE_DIR="/path/to/glue/data/directory"
export BERT_BASE_DIR="/path/to/pre-trained/bert-large/checkpoint/directory"
./tune_save.sh
A saved model will be created in
$BERT_REPO_DIR/bert-saved-model/*random_number*/, where *random_number*
is a random number generated for each run. Use this saved model to
continue with the rest of the demo.
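Since the export directory name is random, a small helper like the one below can locate the most recent export; `latest_export_dir` is our own illustrative name, not part of the demo scripts:

```python
import glob
import os

def latest_export_dir(bert_repo_dir):
    """Return the most recently modified SavedModel export directory, or None."""
    candidates = glob.glob(os.path.join(bert_repo_dir, "bert-saved-model", "*"))
    return max(candidates, key=os.path.getmtime) if candidates else None

# Example usage (assuming BERT_REPO_DIR is set as in the commands above):
# print(latest_export_dir(os.environ["BERT_REPO_DIR"]))
```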
.. _bert-tensorflow-demo-appendix2:
Appendix 2
----------
For all BERT variants, we currently need to augment the standard Neuron
compilation process for performance tuning. In the future, we intend to
automate this tuning process, which would allow users to use the standard
Neuron compilation process, requiring only a one-line change in
user source code. The standard compilation process is described in :ref:`/src/examples/mxnet/resnet50/resnet50.ipynb`.
The augmented Neuron compilation process is encapsulated by the
bert_model.py script, which performs the following steps:
1. Define a Neuron compatible implementation of BERT-Large. For
   inference, this is functionally equivalent to the open source
   BERT-Large. The changes needed to create a Neuron compatible
   BERT-Large implementation are described in :ref:`bert-tensorflow-demo-appendix3`.
2. Extract the BERT-Large weights from the open source saved model pointed
   to by --input_saved_model and associate them with the Neuron
   compatible model.
3. Invoke TensorFlow-Neuron to compile the Neuron compatible model for
   Inferentia using the newly associated weights.
4. Finally, save the compiled model into the location given by
   --output_saved_model.
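For orientation, the *standard* (non-augmented) process mentioned above amounts to a single compile call. The sketch below assumes a tensorflow-neuron 1.x environment and placeholder paths; it will not run elsewhere and is shown only to contrast with the augmented flow:

::

    import tensorflow.neuron as tfn

    # One-line standard compilation: read a SavedModel, write a
    # Neuron-compiled SavedModel (paths are placeholders).
    tfn.saved_model.compile(
        "bert-saved-model",         # input SavedModel directory
        "bert-saved-model-neuron",  # output directory for the compiled model
    )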
.. _bert-tensorflow-demo-appendix3:
Appendix 3
----------
The Neuron compatible implementation of BERT-Large is functionally
equivalent to the open source version when used for inference. However,
the detailed implementation does differ; here is the list of
changes:
1. Data Type Casting: If the original BERT-Large is an FP32 model,
   bert_model.py contains manually defined cast operators to enable
   mixed precision. FP16 is used for multi-head attention and
   fully-connected layers, and FP32 everywhere else. This will be
   automated in a future release.
2. Remove Unused Operators: A model typically contains training
   operators that are not used for inference, including a subset of the
   reshape operators. These operators do not affect inference
   functionality and have been removed.
3. Reimplementation of Selected Operators: A number of operators
   (mainly mask operators) have been reimplemented to bypass a known
   compiler issue. This will be fixed in a planned future release.
4. Manually Partition Embedding Ops to CPU: The embedding portion of
   BERT-Large has been manually partitioned into a subgraph that is
   executed on the host CPU, without noticeable performance impact. In
   the near future, we plan to implement this through compiler
   auto-partitioning, without the need for user intervention.
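The casting scheme in item 1 can be illustrated with a small NumPy sketch. This is illustrative only; the real model inserts TensorFlow cast operators, and the function names here are ours:

```python
import numpy as np

def dense_fp16(x, w):
    """Fully-connected layer computed in FP16 (as for attention/FC layers)."""
    y = x.astype(np.float16) @ w.astype(np.float16)
    return y.astype(np.float32)  # results feed back into FP32 ops

def softmax_fp32(x):
    """Numerically sensitive ops (e.g. softmax) stay in FP32."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4)).astype(np.float32)
w = rng.standard_normal((4, 4)).astype(np.float32)
probs = softmax_fp32(dense_fp16(x, w))
print(probs.sum(axis=-1))  # each row sums to ~1.0
```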
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _tensorflow-bert-demo:
</pre></body></html> | 2023-09-29T20:55:27.094Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/mxnet/resnet50/resnet50.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "wrapped-soccer",
"metadata": {},
"source": [
"# Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia "
]
},
{
"cell_type": "markdown",
"id": "appreciated-daily",
"metadata": {},
"source": [
"## Introduction:\n",
    "In this tutorial we will compile and deploy a ResNet50 model for Inferentia.\n",
    "The tutorial has two main sections:\n",
    "\n",
    "1. Compile the ResNet50 model.\n",
    "\n",
    "2. Run inference with the compiled model.\n",
"\n",
"Before running the following verify this Jupyter notebook is running “conda_aws_neuron_mxnet_p36” kernel. You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.\n",
"Neuron supports Python module, Symbol APIs and the C predict API. The following quick start example uses the Symbol API.\n",
"\n",
"### Warning\n",
"This tutorial was tested on MXNet-1.5\n",
"\n",
    "MXNet-1.5 entered maintenance mode and requires Neuron runtime 1.0. Please see: [MXNet-1.5 enters maintenance mode](../../../../release-notes/maintenance.html)\n",
"\n",
    "To set up a development environment for MXNet-1.5, see the installation instructions for Neuron 1.15.1: [Neuron-1.15.1 MXNet install](../../../../frameworks/mxnet-neuron/setup/mxnet-install.html)"
]
},
{
"cell_type": "markdown",
"id": "advance-rebound",
"metadata": {},
"source": [
"## Compile model on Neuron\n",
    "The following step will compile the ResNet50 model. Compilation will take a few minutes on inf1.6xlarge. At the end of compilation, the files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in the local directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "alpha-publication",
"metadata": {},
"outputs": [],
"source": [
"import mxnet as mx\n",
"import numpy as np\n",
"\n",
"path='http://data.mxnet.io/models/imagenet/'\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')\n",
"sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)\n",
"\n",
"# Compile for Inferentia using Neuron\n",
"inputs = { \"data\" : mx.nd.ones([1,3,224,224], name='data', dtype='float32') }\n",
"sym, args, aux = mx.contrib.neuron.compile(sym, args, aux, inputs)\n",
"\n",
"#save compiled model\n",
"mx.model.save_checkpoint(\"resnet-50_compiled\", 0, sym, args, aux)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "technical-reason",
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"id": "meaningful-substance",
"metadata": {},
"source": [
"## Deploy on Inferentia\n",
    "We use the same instance to deploy the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cooked-jonathan",
"metadata": {},
"outputs": [],
"source": [
"import mxnet as mx\n",
"import numpy as np\n",
"\n",
"path='http://data.mxnet.io/models/imagenet/'\n",
"mx.test_utils.download(path+'synset.txt')\n",
"\n",
"fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')\n",
"img = mx.image.imread(fname)# convert into format (batch, RGB, width, height)\n",
"img = mx.image.imresize(img, 224, 224) # resize\n",
"img = img.transpose((2, 0, 1)) # Channel first\n",
"img = img.expand_dims(axis=0) # batchify\n",
"img = img.astype(dtype='float32')\n",
"\n",
"sym, args, aux = mx.model.load_checkpoint('resnet-50_compiled', 0)\n",
"softmax = mx.nd.random_normal(shape=(1,))\n",
"args['softmax_label'] = softmax\n",
"args['data'] = img\n",
"\n",
"# Inferentia context\n",
"ctx = mx.neuron()\n",
"\n",
"exe = sym.bind(ctx=ctx, args=args, aux_states=aux, grad_req='null')\n",
"\n",
"with open('synset.txt', 'r') as f:\n",
" labels = [l.rstrip() for l in f]\n",
"\n",
"exe.forward(data=img)\n",
"prob = exe.outputs[0].asnumpy()# print the top-5\n",
"prob = np.squeeze(prob)\n",
"a = np.argsort(prob)[::-1]\n",
"for i in a[0:5]:\n",
" print('probability=%f, class=%s' %(prob[i], labels[i]))\n",
" \n",
"# Sample output will look like below:\n",
"#probability=0.634792, class=n02123045 tabby, tabby cat\n",
"#probability=0.193601, class=n02123159 tiger cat\n",
"#probability=0.103627, class=n02124075 Egyptian cat\n",
"#probability=0.031604, class=n02127052 lynx, catamount\n",
"#probability=0.015892, class=n02129604 tiger, Panthera tigris"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Environment (conda_aws_neuron_mxnet_p36)",
"language": "python",
"name": "conda_aws_neuron_mxnet_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | | 2023-09-29T20:55:27.168Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/tutorial-model-serving.rst.txt | ```
.. _mxnet-neuron-model-serving:
Tutorial: Neuron Apache MXNet (Incubating) Model Serving
=========================================================
This MXNet Neuron model serving example is adapted from the Multi Model
Server (MMS) MXNet vision service example, which uses a pretrained
SqueezeNet to perform image classification:
https://github.com/awslabs/multi-model-server/tree/master/examples/mxnet_vision.
Before starting this example, please ensure that the Neuron-optimized
MXNet package (mxnet-neuron) is installed along with the Neuron compiler.
Warning
*******
If you are using MXNet-1.5, please note that MXNet-1.5 has entered maintenance mode and requires Neuron Runtime 1.x; see :ref:`maintenance_mxnet_1_5`.
To set up a development environment for MXNet-1.5, see the installation instructions at :ref:`mxnet-setup`.
If you are using a DLAMI, you can activate the aws_neuron_mxnet_p36 environment
and skip the installation part of the first step below.
1. First, install Java runtime and multi-model-server:
.. code:: bash
cd ~/
# sudo yum -y install -q jre # for AML2
sudo apt-get install -y -q default-jre # for Ubuntu
pip install multi-model-server
Download the example code:
.. code:: bash
git clone https://github.com/awslabs/multi-model-server
cd ~/multi-model-server/examples/mxnet_vision
2. Compile the ResNet50 model for the Inferentia target by saving the
following Python script to compile_resnet50.py and running
``python compile_resnet50.py``:
.. code:: python
from packaging import version
import numpy as np
import mxnet as mx
mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
import mx_neuron as neuron
else:
from mxnet.contrib import neuron
path='http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')
mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')
mx.test_utils.download(path+'synset.txt')
nn_name = "resnet-50"
#Load a model
sym, args, auxs = mx.model.load_checkpoint(nn_name, 0)
#Define compilation parameters
# - input shape and dtype
inputs = {'data' : mx.nd.zeros([1,3,224,224], dtype='float32') }
# compile graph to inferentia target
csym, cargs, cauxs = neuron.compile(sym, args, auxs, inputs)
# save compiled model
mx.model.save_checkpoint(nn_name + "_compiled", 0, csym, cargs, cauxs)
3. Prepare signature file ``signature.json`` to configure the input name
and shape:
.. code:: json
{
"inputs": [
{
"data_name": "data",
"data_shape": [
1,
3,
224,
224
]
}
]
}
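MMS reads this signature to determine the expected input tensor name and shape. As an illustration only (not one of the tutorial files), a stdlib-only Python sketch of reading the declared shape from such a signature might look like:

```python
import json

# Signature from step 3, inlined here for illustration.
SIGNATURE_JSON = """
{
    "inputs": [
        {"data_name": "data", "data_shape": [1, 3, 224, 224]}
    ]
}
"""

def expected_shape(signature, name):
    """Return the declared shape for the named input, or None if absent."""
    for spec in signature["inputs"]:
        if spec["data_name"] == name:
            return tuple(spec["data_shape"])
    return None

signature = json.loads(SIGNATURE_JSON)
print(expected_shape(signature, "data"))  # (1, 3, 224, 224)
```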
4. Prepare ``synset.txt`` which is a list of names for ImageNet
prediction classes:
.. code:: bash
curl -O https://s3.amazonaws.com/model-server/model_archive_1.0/examples/squeezenet_v1.1/synset.txt
5. Create a custom service class following the template in the
model_service_template folder:
.. code:: bash
cp -r ../model_service_template/* .
Edit ``mxnet_model_service.py`` to use the appropriate context.
Make the following change:
.. code:: python
from packaging import version
mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
import mx_neuron as neuron
self.mxnet_ctx = mx.neuron()
Comment out the existing context set:
.. code:: python
#self.mxnet_ctx = mx.cpu() if gpu_id is None else mx.gpu(gpu_id)
Also, comment out unnecessary data copy for model_input in
``mxnet_model_service.py``:
.. code:: python
#model_input = [item.as_in_context(self.mxnet_ctx) for item in model_input]
6. Package the model with model-archiver:
.. code:: bash
cd ~/multi-model-server/examples
model-archiver --force --model-name resnet-50_compiled --model-path mxnet_vision --handler mxnet_vision_service:handle
7. Start the Multi Model Server (MMS) and load the model using the RESTful
API. Please ensure that Neuron RTD is running with default settings (see
:ref:`rtd-getting-started`):
.. code:: bash
cd ~/multi-model-server/
multi-model-server --start --model-store examples
# Pipe to log file if you want to keep a log of MMS
curl -v -X POST "http://localhost:8081/models?initial_workers=1&max_workers=1&synchronous=true&url=resnet-50_compiled.mar"
sleep 10 # allow sufficient time to load model
Each worker requires a NeuronCore group that can accommodate the compiled
model. Additional workers can be added by increasing the max_workers
configuration as long as there are enough NeuronCores available. Use
``neuron-top`` to see which models are loaded on specific NeuronCores.
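As a rough illustration of this sizing rule (a hypothetical helper, not part of MMS or the Neuron tooling), the worker count is bounded by how many NeuronCore groups of the model's size fit on the instance:

```python
def max_worker_count(total_neuron_cores, cores_per_model):
    """Number of workers that fit, giving each its own NeuronCore group."""
    if cores_per_model <= 0 or total_neuron_cores < 0:
        raise ValueError("core counts must be positive")
    return total_neuron_cores // cores_per_model

# An inf1.2xlarge has 4 NeuronCores; a model compiled for a single core
# can therefore be served by up to 4 workers.
print(max_worker_count(4, 1))  # 4
```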
8. Test inference using an example image:
.. code:: bash
curl -O https://raw.githubusercontent.com/awslabs/multi-model-server/master/docs/images/kitten_small.jpg
curl -X POST http://127.0.0.1:8080/predictions/resnet-50_compiled -T kitten_small.jpg
You will see output similar to the following:
.. code:: bash
[
{
"probability": 0.6375716328620911,
"class": "n02123045 tabby, tabby cat"
},
{
"probability": 0.1692783385515213,
"class": "n02123159 tiger cat"
},
{
"probability": 0.12187337130308151,
"class": "n02124075 Egyptian cat"
},
{
"probability": 0.028840631246566772,
"class": "n02127052 lynx, catamount"
},
{
"probability": 0.019691042602062225,
"class": "n02129604 tiger, Panthera tigris"
}
]
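A client typically parses this JSON response and keeps the highest-probability entry. A minimal stdlib-only sketch (with the response above inlined and truncated for brevity) could be:

```python
import json

# First two entries of the response from step 8, inlined for illustration.
RESPONSE_JSON = """
[
    {"probability": 0.6375716328620911, "class": "n02123045 tabby, tabby cat"},
    {"probability": 0.1692783385515213, "class": "n02123159 tiger cat"}
]
"""

predictions = json.loads(RESPONSE_JSON)

# The example output is already sorted by probability, but selecting the
# maximum explicitly does not rely on that ordering.
best = max(predictions, key=lambda entry: entry["probability"])
print(best["class"])  # n02123045 tabby, tabby cat
```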
9. To clean up after the test, issue a delete command via the RESTful API
and stop the model server:
.. code:: bash
curl -X DELETE http://127.0.0.1:8081/models/resnet-50_compiled
multi-model-server --stop
``` | | 2023-09-29T20:55:27.233Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/mxnet/mxnet-gluon-tutorial.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"id": "4dcf9bb1",
"metadata": {},
"source": [
"## MXNet 1.8: Getting Started with Gluon Tutorial\n",
"\n",
"In this tutorial you will compile and deploy resnet-50 using the newly supported MXNet 1.8 and Gluon API on an Inf1 instance. This tutorial is only supported with MXNet 1.8.\n",
"\n",
"This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.\n",
"\n",
"To run this tutorial, please make sure you deactivate any existing MXNet conda environments you are already using. Install MXNet 1.8 by following the instructions at [MXNet Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#install-neuron-mxnet). You will also need to change your kernel to use the correct Python environment set up earlier by clicking Kernel->Change Kernel->Python (Neuron MXNet)"
]
},
{
"cell_type": "markdown",
"id": "83eb578b",
"metadata": {},
"source": [
"## Compile\n",
"\n",
"A trained model must be compiled for the Inferentia target before it can run on Inferentia. In this step, we compile a pre-trained ResNet50 and export it as a compiled MXNet checkpoint.\n",
"\n",
"Compilation will take a few minutes. At the end of compilation, the files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in the local directory.\n",
"\n",
"To check the supported operations for the uncompiled model or information on Neuron subgraphs for the compiled model, please see [Neuron Check Model](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-tools/tutorial-neuron-check-model.html#neuron-check-model)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88c41e01",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import os\n",
"import mxnet as mx\n",
"import mx_neuron as neuron\n",
"import numpy as np\n",
"\n",
"path='http://data.mxnet.io/models/imagenet/'\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')\n",
"block = mx.gluon.nn.SymbolBlock.imports('resnet-50-symbol.json',\\\n",
" ['data', 'softmax_label'], 'resnet-50-0000.params', ctx=mx.cpu())\n",
"\n",
"block.hybridize()\n",
"\n",
"# Compile for Inferentia using Neuron\n",
"inputs = { \"data\" : mx.nd.ones([1,3,224,224], name='data', dtype='float32'), 'softmax_label' : mx.nd.ones([1], name='data', dtype='float32') }\n",
"block = neuron.compile(block, inputs=inputs)\n",
"\n",
"#save compiled model\n",
"block.export(\"resnet-50_compiled\", 0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6337e0ec",
"metadata": {},
"outputs": [],
"source": [
"!ls"
]
},
{
"cell_type": "markdown",
"id": "5a9af0c7",
"metadata": {},
"source": [
"## Deploy\n",
"\n",
"Deploy on Inferentia to see inference results like the following:\n",
"```\n",
"probability=0.643591, class=n02123045 tabby, tabby cat\n",
"probability=0.184392, class=n02123159 tiger cat\n",
"probability=0.105063, class=n02124075 Egyptian cat\n",
"probability=0.030101, class=n02127052 lynx, catamount\n",
"probability=0.016112, class=n02129604 tiger, Panthera tigris\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "960c6aa9",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import mxnet as mx\n",
"import mx_neuron as neuron\n",
"\n",
"path='http://data.mxnet.io/models/imagenet/'\n",
"mx.test_utils.download(path+'synset.txt')\n",
"\n",
"fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')\n",
"img = mx.image.imread(fname)# convert into format (batch, RGB, width, height)\n",
"img = mx.image.imresize(img, 224, 224) # resize\n",
"img = img.transpose((2, 0, 1)) # Channel first\n",
"img = img.expand_dims(axis=0) # batchify\n",
"img = img.astype(dtype='float32')\n",
"\n",
"block = mx.gluon.nn.SymbolBlock.imports('resnet-50_compiled-symbol.json',\\\n",
" ['data', 'softmax_label'], 'resnet-50_compiled-0000.params', ctx=mx.cpu())\n",
"softmax = mx.nd.random_normal(shape=(1,))\n",
"\n",
"with open('synset.txt', 'r') as f:\n",
" labels = [l.rstrip() for l in f]\n",
"\n",
"out = block(img, softmax).asnumpy()\n",
"\n",
"prob = np.squeeze(out)\n",
"a = np.argsort(prob)[::-1]\n",
"for i in a[0:5]:\n",
" print('probability=%f, class=%s' %(prob[i], labels[i]))"
]
},
{
"cell_type": "raw",
"id": "4f15e776",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
``` | | 2023-09-29T20:55:27.468Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Using Data Parallel Mode with Gluon MXNet\n",
"\n",
"In this tutorial, you will compile a Gluon BERT model and run it in data-parallel mode to fully utilize the NeuronCores. You will benchmark a multi-worker setup and compare it with a single worker.\n",
"\n",
"This tutorial is intended only for MXNet-1.8.\n",
"\n",
"In this tutorial, we will be using an inf1.2xlarge with the latest AWS Deep Learning AMI (DLAMI). The inf1.2xlarge instance has 1 AWS Inferentia Chip with 4 NeuronCores.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting up your environment\n",
"\n",
"To run this tutorial, please make sure you deactivate any existing MXNet conda environments you are already using. Install MXNet 1.8 by following the instructions at [MXNet Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#develop-on-aws-ml-accelerator-instance). You will also need to change your kernel to use the correct Python environment set up earlier by clicking Kernel->Change Kernel->Python (Neuron MXNet)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Install dependencies\n",
"\n",
"We have to install gluon-nlp to get the BERT model. Run the following command to install:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install gluonnlp"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compiling BERT Model\n",
"\n",
"Next, we compile the Gluon BERT model and save it. Once the model is compiled, we use the same model across the entire tutorial.\n",
"In this tutorial, we will be using a BERT model with sequence length 32."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import mxnet as mx\n",
"import mx_neuron\n",
"import gluonnlp as nlp"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"BERT_MODEL = 'bert_12_768_12'\n",
"BERT_DATA = 'book_corpus_wiki_en_uncased'\n",
"batch_size = 1\n",
"seq_len = 32\n",
"num_cores = 1\n",
"dtype = 'float32'\n",
"\n",
"compiled_model_path = '{}.compiled.{}.{}'.format(BERT_MODEL, batch_size, seq_len)\n",
"\n",
"model, vocab = nlp.model.get_model(BERT_MODEL,\n",
" dataset_name=BERT_DATA,\n",
" use_classifier=False,\n",
" use_decoder=False, ctx=mx.cpu())\n",
" \n",
"# Create sample inputs for compilation\n",
"words = mx.nd.ones([batch_size, seq_len], name='words', dtype=dtype)\n",
"valid_len = mx.nd.ones([batch_size,], name='valid_len', dtype=dtype)\n",
"segments = mx.nd.ones([batch_size, seq_len], name='segments', dtype=dtype)\n",
"inputs = {'data0': words, 'data1': segments, 'data2': valid_len}\n",
"\n",
"# Compiler Args ~~ \n",
"options = {}\n",
"embeddingNames = ['bertmodel0_word_embed_embedding0_fwd', 'bertmodel0_token_type_embed_embedding0_fwd', 'bertencoder0_embedding0']\n",
"options.update({'force_incl_node_names': embeddingNames})\n",
"options.update({'flags': ['--fp32-cast matmult']}) \n",
"\n",
"# Compile and save ~~ \n",
"model = mx_neuron.compile(model, inputs=inputs, **options)\n",
"model.export(compiled_model_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Parallel Mode\n",
"\n",
"Data-parallel mode is a setup in which you launch multiple copies of the same model, such that each model runs independently of the others. In other words, each model has its own resources to run inference.\n",
"\n",
"On an inf1.2xlarge instance, we have 4 NeuronCores. Hence, we can launch 4 models such that each model is loaded on a single NeuronCore. This enables us to process 4 requests concurrently without a linear increase in latency. As a result, the throughput of the system increases compared to single-model inference. This also allows us to utilize all 4 NeuronCores on the instance.\n",
"\n",
"Run through the next set of cells to see the difference in throughput as we scale from one model to 4 models running in parallel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def get_sample_inputs(batch_size, seq_len):\n",
" words = np.ones([batch_size, seq_len], dtype=np.float32)\n",
" valid_len = np.ones([batch_size,], dtype=np.float32)\n",
" segments = np.ones([batch_size, seq_len], dtype=np.float32)\n",
" inputs = {'data0': words, 'data1': segments, 'data2': valid_len}\n",
" return inputs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, for comparison purposes, we run the setup with 1 worker. To do this, we set num_cores=1, which launches only 1 model running on a single NeuronCore. After running the cell below, note down the latency and throughput of the system."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from parallel import NeuronSimpleDataParallel\n",
"from benchmark_utils import Results\n",
"import time\n",
"import functools\n",
"import os\n",
"import numpy as np\n",
"import warnings\n",
"\n",
"num_cores = 1\n",
"batch_size=1\n",
"\n",
"# Each worker process should use one core, hence we set\n",
"# os.environ['NEURON_RT_NUM_CORES'] = \"1\"\n",
"os.environ[\"NEURON_RT_NUM_CORES\"] = \"1\"\n",
"\n",
"# Result aggregation class (code in benchmark_utils.py)\n",
"results = Results(batch_size, num_cores)\n",
"def result_handler(output, start, end):\n",
" elapsed = end - start\n",
" results.add_result([elapsed], [end], [start])\n",
"\n",
"inputs = get_sample_inputs(batch_size, seq_len)\n",
"parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)\n",
"\n",
"#Starting the inference threads\n",
"parallel_neuron_model.start_continuous_inference()\n",
"\n",
"# Warm up the cores\n",
"for _ in range(num_cores*4):\n",
" parallel_neuron_model.warmup(inputs)\n",
" \n",
"# Need to run for high number of iterations to benchmark the models\n",
"for _ in range(1000):\n",
" parallel_neuron_model.infer(inputs)\n",
" # Passing the result_handler as a callback function\n",
" parallel_neuron_model.add_result(result_handler)\n",
"\n",
"# Stop inference \n",
"parallel_neuron_model.stop()\n",
"# Since we are using a multi-process execution with a shared queue, some inferences\n",
"# may still be in execution phase. Hence we need to wait till all the inputs are processed\n",
"# add_all_results() will collect all the results of requests which are in this state\n",
"parallel_neuron_model.add_all_results(result_handler)\n",
"\n",
"\n",
"with open(\"benchmark.txt\", \"w\") as f:\n",
" results.report(f, window_size=1)\n",
"\n",
"with open(\"benchmark.txt\", \"r\") as f:\n",
" for line in f:\n",
" print(line)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we run the setup with 4 workers. To do this, we set num_cores=4. This launches 4 models, each running on an individual NeuronCore. All 4 models run in separate processes; in other words, the models run in parallel. \n",
"\n",
"To feed the models efficiently, we use a producer-consumer setup in which each process running a model acts as a consumer. All consumers are fed from a shared input queue.\n",
"\n",
"Now we run the setup below. You should notice that the throughput increases by more than 2x when compared to the single-worker setup."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from parallel import NeuronSimpleDataParallel\n",
"from benchmark_utils import Results\n",
"import time\n",
"import functools\n",
"import os\n",
"import numpy as np\n",
"\n",
"num_cores = 4\n",
"batch_size=1\n",
"\n",
"os.environ[\"NEURON_RT_NUM_CORES\"] = \"1\"\n",
"\n",
"# Result aggregation class (code in benchmark_utils.py)\n",
"results = Results(batch_size, num_cores)\n",
"def result_handler(output, start, end):\n",
" elapsed = end - start\n",
" results.add_result([elapsed], [end], [start])\n",
"\n",
"inputs = get_sample_inputs(batch_size, seq_len)\n",
"parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)\n",
"\n",
"#Starting the inference threads\n",
"parallel_neuron_model.start_continuous_inference()\n",
"\n",
"# Warm up the cores\n",
"for _ in range(num_cores*4):\n",
" parallel_neuron_model.warmup(inputs)\n",
" \n",
"# Need to run for high number of iterations to benchmark the models\n",
"for _ in range(5000):\n",
" parallel_neuron_model.infer(inputs)\n",
" # Passing the result_handler as a callback function\n",
" parallel_neuron_model.add_result(result_handler)\n",
"\n",
"# Stop inference \n",
"parallel_neuron_model.stop()\n",
"# Since we are using a multi-process execution with a shared queue, some inferences\n",
"# may still be in execution phase. Hence we need to wait till all the inputs are processed\n",
"# add_all_results() will collect all the results of requests which are in this state\n",
"parallel_neuron_model.add_all_results(result_handler)\n",
"\n",
"\n",
"with open(\"benchmark.txt\", \"w\") as f:\n",
" results.report(f, window_size=1)\n",
"\n",
"with open(\"benchmark.txt\", \"r\") as f:\n",
" for line in f:\n",
" print(line)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
``` | 2023-09-29T20:55:27.479Z |
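As described in the tutorial above, the multi-worker benchmark feeds all model workers from one shared input queue. Below is a minimal, framework-free sketch of that producer-consumer pattern, using threads and a caller-supplied `infer` stand-in instead of the real Neuron worker processes (the actual implementation lives in `parallel.py` as `NeuronSimpleDataParallel` and is not shown here):

```python
import queue
import threading

def run_data_parallel(requests, infer, num_workers=4):
    """Producer-consumer: feed every worker from one shared input queue."""
    task_q = queue.Queue()
    result_q = queue.Queue()

    def worker():
        # Each worker repeatedly pulls a request off the shared queue and
        # runs inference on it; in the real setup this is a separate process
        # owning one NeuronCore (NEURON_RT_NUM_CORES=1).
        while True:
            item = task_q.get()
            if item is None:            # sentinel: this worker is done
                break
            idx, data = item
            result_q.put((idx, infer(data)))

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for i, req in enumerate(requests):  # producer side: enqueue all requests
        task_q.put((i, req))
    for _ in workers:                   # one stop sentinel per worker
        task_q.put(None)
    results = [result_q.get() for _ in range(len(requests))]
    for w in workers:
        w.join()
    # Re-order by request index, regardless of which worker finished first
    return [out for _, out in sorted(results)]
```

For example, `run_data_parallel([[1, 2], [3, 4], [5, 6]], sum, num_workers=2)` returns `[3, 7, 11]`.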
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/mxnet/resnet50_neuroncore_groups.ipynb.txt | ```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Neuron Apache MXNet (Incubating) - Configurations for NeuronCore Groups Using Resnet50\n",
"\n",
"\n",
"\n",
"## Introduction:\n",
"\n",
"In this tutorial we will compile and deploy a ResNet-50 model in parallel using the concept of NeuronCore Groups on an Inf1 instance. This Jupyter notebook should be run on an inf1.6xlarge or larger instance. For simplicity we will run this tutorial on an inf1.6xlarge, but in a real-life scenario the compilation should be done on a compute instance and the deployment on an Inf1 instance to save costs. \n",
"\n",
"Set the environment variable NEURON_RT_NUM_CORES to the total number of NeuronCores that will be utilized. The Neuron Runtime will create consecutive NeuronCore groups and place the models onto the cores according to their compiled sizes.\n",
"\n",
"Note that in order to map a model to a group, the model must be compiled to fit within the group size. To limit the number of NeuronCores during compilation, use the compiler_args dictionary with the field `--neuroncore-pipeline-cores` set to the group size. For example, if NEURON_RT_NUM_CORES=4 and two models compiled with `--neuroncore-pipeline-cores=3` and `--neuroncore-pipeline-cores=1` were loaded, the first model would occupy NC0-2 and the second model would occupy NC3. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```\n",
"compile_args = {'--neuroncore-pipeline-cores' : 2}\n",
"sym, args, auxs = neuron.compile(sym, args, auxs, inputs, **compile_args)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"In this tutorial we provide two main sections:\n",
"\n",
"1. Compile the Resnet50 model for Neuron\n",
"\n",
"2. Run inference using NeuronCore Groups\n",
"\n",
"Please use environment `conda_aws_neuron_mxnet_p36`.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compile model for Neuron\n",
"\n",
"The model must be compiled for the Inferentia target before it can be used on Inferentia. In the following, we will compile the model with the flag --neuroncore-pipeline-cores set to 2 and run it. The files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in the local directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from packaging import version\n",
"import mxnet as mx\n",
"import numpy as np\n",
"\n",
"import mx_neuron as neuron\n",
"\n",
"path='http://data.mxnet.io/models/imagenet/'\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')\n",
"mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')\n",
"sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)\n",
"\n",
"# Compile for Inferentia using Neuron, fit to NeuronCore group size of 2\n",
"inputs = { \"data\" : mx.nd.ones([1,3,224,224], name='data', dtype='float32') }\n",
"compile_args = {'--neuroncore-pipeline-cores' : 2}\n",
"sym, args, aux = neuron.compile(sym, args, aux, inputs, **compile_args)\n",
"\n",
"#save compiled model\n",
"mx.model.save_checkpoint(\"resnet-50_compiled\", 0, sym, args, aux)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run inference using NeuronCore Groups\n",
"\n",
"Within the framework, the model can be mapped to specific cores using ```ctx=mx.neuron(N)``` context where N specifies the index of the Neuron core to deploy. For more information, see https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/flex-eg.html .\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import warnings\n",
"\n",
"mx.test_utils.download(path+'synset.txt')\n",
"\n",
"fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')\n",
"img = mx.image.imread(fname) # convert into format (batch, RGB, width, height)\n",
"img = mx.image.imresize(img, 224, 224) # resize\n",
"img = img.transpose((2, 0, 1)) # Channel first\n",
"img = img.expand_dims(axis=0) # batchify\n",
"img = img.astype(dtype='float32')\n",
"\n",
"sym, args, aux = mx.model.load_checkpoint('resnet-50_compiled', 0)\n",
"softmax = mx.nd.random_normal(shape=(1,))\n",
"args['softmax_label'] = softmax\n",
"args['data'] = img\n",
"\n",
"os.environ[\"NEURON_RT_NUM_CORES\"] = '4'\n",
"\n",
"\n",
"# Inferentia context - group index 1 (size 2) would skip NC0 and place the \n",
"# compiled model onto NC1,2\n",
"ctx = mx.neuron(1)\n",
"\n",
"exe = sym.bind(ctx=ctx, args=args, aux_states=aux, grad_req='null')\n",
"\n",
"with open('synset.txt', 'r') as f:\n",
" labels = [l.rstrip() for l in f]\n",
"\n",
"exe.forward(data=img)\n",
"prob = exe.outputs[0].asnumpy()# print the top-5\n",
"prob = np.squeeze(prob)\n",
"a = np.argsort(prob)[::-1]\n",
"for i in a[0:5]:\n",
" print('probability=%f, class=%s' %(prob[i], labels[i]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can experiment with different Neuron core group combinations and different models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Troubleshooting"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If not enough NeuronCores are provided, an error message will be displayed:\n",
"\n",
"```\n",
"mxnet.base.MXNetError: [04:01:39] src/operator/subgraph/neuron/./neuron_util.h:541: Check failed: rsp.status().code() == 0: Failed load model with Neuron-RTD Error. Neuron-RTD Status Code: 9, details: \"\"\n",
"\n",
"```"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Environment (conda_aws_neuron_mxnet_p36)",
"language": "python",
"name": "conda_aws_neuron_mxnet_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
``` | 2023-09-29T20:55:27.549Z |
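The placement rule quoted in the tutorial above (consecutive NeuronCore groups, sized by `--neuroncore-pipeline-cores`, assigned in model load order) can be illustrated with a small bookkeeping helper. This is a hypothetical sketch for understanding the placement only; it is not a Neuron API:

```python
def assign_core_groups(num_cores, model_sizes):
    """Assign consecutive NeuronCore ranges to models, in load order.

    num_cores   : value of NEURON_RT_NUM_CORES
    model_sizes : the --neuroncore-pipeline-cores value each model was
                  compiled with, in the order the models are loaded.
    Returns a list of (first_core, last_core) tuples, one per model.
    """
    groups, next_core = [], 0
    for size in model_sizes:
        if next_core + size > num_cores:
            # Mirrors the runtime failure when not enough cores remain
            raise ValueError("not enough NeuronCores for this model")
        groups.append((next_core, next_core + size - 1))
        next_core += size
    return groups
```

With NEURON_RT_NUM_CORES=4 and two models compiled for 3 and 1 cores, `assign_core_groups(4, [3, 1])` returns `[(0, 2), (3, 3)]`: NC0-2 for the first model and NC3 for the second, matching the example in the tutorial.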
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/index.rst.txt | ```
.. _tensorflow-tutorials:

TensorFlow Tutorials
====================

Before running a tutorial
-------------------------

You will run the tutorials on an inf1.6xlarge instance running Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment we encourage you to try different instance sizes to optimize for your specific deployment needs.

Follow instructions at :ref:`tensorflow-tutorial-setup` before running a TensorFlow tutorial on Inferentia. We recommend new users start with the ResNet-50 tutorial.

.. toctree::
   :hidden:

   /frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup

.. _tensorflow-computervision:

Computer Vision
---------------

* Tensorflow 1.x - OpenPose tutorial :ref:`[html] </src/examples/tensorflow/openpose_demo/openpose.ipynb>` :tensorflow-neuron-src:`[notebook] <openpose_demo/openpose.ipynb>`
* Tensorflow 1.x - ResNet-50 tutorial :ref:`[html] </src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb>` :tensorflow-neuron-src:`[notebook] <tensorflow_resnet50/resnet50.ipynb>`
* Tensorflow 1.x - YOLOv4 tutorial :ref:`[html] <tensorflow-yolo4>` :tensorflow-neuron-src:`[notebook] <yolo_v4_demo/evaluate.ipynb>`
* Tensorflow 1.x - YOLOv3 tutorial :ref:`[html] </src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb>` :tensorflow-neuron-src:`[notebook] <yolo_v3_demo/yolo_v3.ipynb>`
* Tensorflow 1.x - SSD300 tutorial :ref:`[html] <tensorflow-ssd300>`
* Tensorflow 1.x - Keras ResNet-50 optimization tutorial :ref:`[html] </src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb>` :tensorflow-neuron-src:`[notebook] <keras_resnet50/keras_resnet50.ipynb>`

.. toctree::
   :hidden:

   /src/examples/tensorflow/openpose_demo/openpose.ipynb
   /src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb
   /frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v4_demo/yolo_v4_demo
   /src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb
   /frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo
   /src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb

.. _tensorflow-nlp:

Natural Language Processing
---------------------------

* Tensorflow 1.x - Running TensorFlow BERT-Large with AWS Neuron :ref:`[html] <tensorflow-bert-demo>`
* Tensorflow 2.x - HuggingFace DistilBERT with Tensorflow2 Neuron :ref:`[html] </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>` :github:`[notebook] </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`

.. toctree::
   :hidden:

   /frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo
   /src/examples/tensorflow/huggingface_bert/huggingface_bert

.. _tensorflow-utilize-neuron:

Utilizing Neuron Capabilities
-----------------------------

* Tensorflow 1.x & 2.x - Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving :ref:`[html] </src/examples/tensorflow/tensorflow_serving_tutorial.rst>`

.. toctree::
   :hidden:

   /src/examples/tensorflow/tensorflow_serving_tutorial.rst
``` | 2023-09-29T20:55:27.557Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.rst.txt | ```
.. _neuron-cc-ops-xla:
TensorFlow Neuron (``tensorflow-neuron (TF1.x)``) Supported operators [XLA]
============================================================================
To see a list of supported operators for XLA, run the following command:
``neuron-cc list-operators --framework XLA``
+-------------------------+-------------------------------------------+
| Supported XLA Operators | Notes |
+=========================+===========================================+
| Abs | |
+-------------------------+-------------------------------------------+
| Add | |
+-------------------------+-------------------------------------------+
| Allgather | |
+-------------------------+-------------------------------------------+
| Allreduce | |
+-------------------------+-------------------------------------------+
| Atan2 | |
+-------------------------+-------------------------------------------+
| Batchnorm | |
+-------------------------+-------------------------------------------+
| Batchnormgrad | |
+-------------------------+-------------------------------------------+
| Batchnorminference | |
+-------------------------+-------------------------------------------+
| Broadcast | |
+-------------------------+-------------------------------------------+
| BroadcastInDim | |
+-------------------------+-------------------------------------------+
| Ceil | |
+-------------------------+-------------------------------------------+
| Clamp | |
+-------------------------+-------------------------------------------+
| Compare | |
+-------------------------+-------------------------------------------+
| Concatenate | |
+-------------------------+-------------------------------------------+
| Constant | |
+-------------------------+-------------------------------------------+
| ConstantLiteral | |
+-------------------------+-------------------------------------------+
| ConvertElementType | |
+-------------------------+-------------------------------------------+
| Cos | |
+-------------------------+-------------------------------------------+
| Customcall | |
+-------------------------+-------------------------------------------+
| Div | |
+-------------------------+-------------------------------------------+
| Dot | |
+-------------------------+-------------------------------------------+
| DotGeneral | |
+-------------------------+-------------------------------------------+
| DynamicUpdateSlice      | Supported only for constant indices       |
+-------------------------+-------------------------------------------+
| Eq | |
+-------------------------+-------------------------------------------+
| Exp | |
+-------------------------+-------------------------------------------+
| Floor | |
+-------------------------+-------------------------------------------+
| Gather | Supports only disjoint start_index_map |
| | and remapped_offset_dims |
+-------------------------+-------------------------------------------+
| Ge | |
+-------------------------+-------------------------------------------+
| GetTupleElement | |
+-------------------------+-------------------------------------------+
| Gt | |
+-------------------------+-------------------------------------------+
| Iota | |
+-------------------------+-------------------------------------------+
| Le | |
+-------------------------+-------------------------------------------+
| Log | |
+-------------------------+-------------------------------------------+
| LogicalAnd | |
+-------------------------+-------------------------------------------+
| LogicalNot | |
+-------------------------+-------------------------------------------+
| Lt | |
+-------------------------+-------------------------------------------+
| Max | |
+-------------------------+-------------------------------------------+
| Min | |
+-------------------------+-------------------------------------------+
| Mul | |
+-------------------------+-------------------------------------------+
| Ne | |
+-------------------------+-------------------------------------------+
| Neg | |
+-------------------------+-------------------------------------------+
| Pad | |
+-------------------------+-------------------------------------------+
| Pow | Exponent argument must be a compile-time |
| | integer constant |
+-------------------------+-------------------------------------------+
| Reduce | Min, Max, Add and Mul are the only |
| | supported computations. Init_values must |
| | be constant |
+-------------------------+-------------------------------------------+
| Reshape | |
+-------------------------+-------------------------------------------+
| RngBitGenerator | Ignores user seed |
+-------------------------+-------------------------------------------+
| RngUniform | |
+-------------------------+-------------------------------------------+
| Rsqrt | |
+-------------------------+-------------------------------------------+
| Scatter | |
+-------------------------+-------------------------------------------+
| Select | |
+-------------------------+-------------------------------------------+
| ShiftRightLogical | |
+-------------------------+-------------------------------------------+
| Sign | |
+-------------------------+-------------------------------------------+
| Sin | |
+-------------------------+-------------------------------------------+
| Slice | |
+-------------------------+-------------------------------------------+
| Sqrt | |
+-------------------------+-------------------------------------------+
| Sub | |
+-------------------------+-------------------------------------------+
| Tanh | |
+-------------------------+-------------------------------------------+
| Transpose | |
+-------------------------+-------------------------------------------+
| Tuple | |
+-------------------------+-------------------------------------------+
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tutorials/torch-neuronx-profiling-with-tb.rst.txt | ```
.. _torch-neuronx-profiling-with-tb:
Profiling PyTorch Neuron (``torch-neuronx``) with TensorBoard
==============================================================
.. contents:: Table of Contents
   :local:
   :depth: 2
Introduction
------------
Neuron provides a plugin for TensorBoard that allows users to measure and visualize
performance at the torch runtime level or at the operator
level. With this information, performance bottlenecks can be
identified and addressed more quickly.
For more information on the Neuron plugin for TensorBoard, see :ref:`neuronx-plugin-tensorboard`.
Setup
-----
Prerequisites
~~~~~~~~~~~~~
1. Initial `Trn1 setup for PyTorch
   (torch-neuronx) <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/setup/pytorch-install.html>`__
   has been done
Environment
~~~~~~~~~~~
::
   # activate python virtual environment and install tensorboard_plugin_neuronx
   source ~/aws_neuron_venv_pytorch_p38/bin/activate
   pip install tensorboard_plugin_neuronx

   # create work directory for the Neuron Profiling tutorials
   mkdir -p ~/neuron_profiling_tensorboard_examples
   cd ~/neuron_profiling_tensorboard_examples
Part 1: Operator Level Trace for ``xm.mark_step()`` workflow
------------------------------------------------------------
Goal
~~~~
After completing this tutorial, the user should be able to understand
the features of the Operator Level Trace. The user should also be able
to form a narrative/surface level analysis from what is being presented
in the Operator Level Trace.
Set Up
~~~~~~
Let’s set up a directory containing the material for this demo
::
   cd ~/neuron_profiling_tensorboard_examples
   mkdir tutorial_1
   cd tutorial_1

   # this is where our code will be written
   touch run.py
Here is the code for ``run.py``:
::
   import os
   import torch
   import torch_neuronx
   from torch_neuronx.experimental import profiler
   import torch_xla.core.xla_model as xm

   os.environ["NEURON_CC_FLAGS"] = "--cache_dir=./compiler_cache"

   device = xm.xla_device()

   class NN(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.layer1 = torch.nn.Linear(4,4)
           self.nl1 = torch.nn.ReLU()
           self.layer2 = torch.nn.Linear(4,2)
           self.nl2 = torch.nn.Tanh()

       def forward(self, x):
           x = self.nl1(self.layer1(x))
           return self.nl2(self.layer2(x))

   with torch.no_grad():
       model = NN()
       inp = torch.rand(4,4)
       output = model(inp)

       with torch_neuronx.experimental.profiler.profile(
               port=9012,
               profile_type='operator',
               ms_duration=10000):
           # IMPORTANT: the model has to be transferred to XLA within
           # the context manager, otherwise profiling won't work
           neuron_model = model.to(device)
           neuron_inp = inp.to(device)

           output_neuron = neuron_model(neuron_inp)
           xm.mark_step()

   print("==CPU OUTPUT==")
   print(output)
   print()
   print("==TRN1 OUTPUT==")
   print(output_neuron)
Understanding the Code
~~~~~~~~~~~~~~~~~~~~~~
For this first tutorial, we’ll be using a simple feed-forward NN model.
However, once the TensorBoard dashboard is up, we’ll see some
interesting and unexpected things. A simple model is helpful since it is
easy to refer back to.
Another important part is the “operator” profiling type we specified in the context manager.

**Low Level:** The “operator” dashboard is the dashboard that contains
the Operator Level Trace. This view also zooms in only on the
NeuronDevice, while the “trace” dashboard shows processes from all
devices. The Operator Level Trace View is organized by levels of
abstraction, with the top level showing the model class. The next lower
tier shows model components, and the lowest tier shows specific
operators that occur for a specific model component. This view is useful
for identifying model bottlenecks at the operator level.
We also print out the outputs from the CPU model and the TRN1 model to note
the small differences in output.
Running The Profiler
~~~~~~~~~~~~~~~~~~~~
::
   python run.py
**Output:**
Initial Output & Compilation Success
::
   0% 10 20 30 40 50 60 70 80 90 100%
   |----|----|----|----|----|----|----|----|----|----|
   ***************************************************
   Analyzing dependencies of Block1
   0% 10 20 30 40 50 60 70 80 90 100%
   |----|----|----|----|----|----|----|----|----|----|
   ***************************************************
   Analyzing dependencies of Block1
   0% 10 20 30 40 50 60 70 80 90 100%
   |----|----|----|----|----|----|----|----|----|----|
   ***************************************************
   Dependency reduction of sg0000
   0% 10 20 30 40 50 60 70 80 90 100%
   |----|----|----|----|----|----|----|----|----|----|
   ***************************************************
Processing the Neuron Profiler Traces
::
   torch_neuron: Waiting for XLA profile completion ...
   torch_neuron: translate_xplane: Processing plane: '/host:CPU'
   torch_neuron: XLA decode - Read filename 2023_04_28_00_54_04
   torch_neuron: XLA decode - Read date parts ['2023', '04', '28', '00', '54', '04']
   torch_neuron: XLA decode - Read start date 2023-04-28 00:54:04 from directory stamp
   torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json'
   torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline_split.json'
   torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json'
   torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline.json'
   torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json'
   torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_hlo_op.json'
   torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json'
   torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_framework_op.json'
Printing output from CPU model and Trn1 Model:
::
   ==CPU OUTPUT==
   tensor([[-0.1396, -0.3266],
           [-0.0327, -0.3105],
           [-0.0073, -0.3268],
           [-0.1683, -0.3230]])

   ==TRN1 OUTPUT==
   tensor([[-0.1396, -0.3266],
           [-0.0328, -0.3106],
           [-0.0067, -0.3270],
           [-0.1684, -0.3229]], device='xla:1')
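The two outputs differ only in the last decimal places, which is expected from device numerics. As a sketch (with values hardcoded from the printout above), the elementwise absolute-tolerance check that ``torch.allclose`` performs on tensors looks roughly like this in plain Python:

```python
import math

# Rows hardcoded from the CPU and TRN1 outputs shown above (for illustration)
cpu_row = [-0.1396, -0.3266]
trn1_row = [-0.1396, -0.3266]
cpu_row3 = [-0.0073, -0.3268]    # third row, where the outputs differ
trn1_row3 = [-0.0067, -0.3270]

def all_close(a, b, atol=1e-2):
    """Elementwise absolute-tolerance comparison (torch.allclose also
    applies a relative tolerance on top of this)."""
    return all(math.isclose(x, y, abs_tol=atol) for x, y in zip(a, b))

print(all_close(cpu_row, trn1_row))    # True
print(all_close(cpu_row3, trn1_row3))  # True: differences are ~6e-4
```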
Loading the Operators Level Trace in TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run ``tensorboard --load_fast=false --logdir logs/``
Take note of the port (usually 6006) and enter ``localhost:<port>`` into
your local browser (assuming port forwarding is set up properly).
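If TensorBoard is running on a remote instance, one way to set up the port forwarding is an SSH tunnel; the key path and address below are placeholders to replace with your own:

```shell
# Forward local port 6006 to port 6006 on the remote instance,
# so that localhost:6006 in the local browser reaches TensorBoard
ssh -i ~/.ssh/my-key.pem -L 6006:localhost:6006 ubuntu@<instance-address>
```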
.. note::
   Check :ref:`Tensorboard Interface Overview` to understand TensorBoard interface
The Operator Level Trace views use the same format plus an id at the
end: ``year_month_day_hour_minute_second_millisecond_id``. The Tool
dropdown will have 3 options: operator-framework, operator-hlo, and
operator-timeline.
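For illustration (the run name below is invented), the timestamp portion of such a run name can be parsed with Python's ``datetime``:

```python
from datetime import datetime

# Hypothetical run name following year_month_day_hour_minute_second_millisecond_id
run_name = "2023_04_28_00_54_04_123_1"

parts = run_name.split("_")
# The first six fields form the timestamp; the rest are millisecond and id
stamp = datetime.strptime("_".join(parts[:6]), "%Y_%m_%d_%H_%M_%S")
millisecond, run_id = int(parts[6]), parts[7]

print(stamp.isoformat(), millisecond, run_id)  # 2023-04-28T00:54:04 123 1
```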
Operator Framework View
~~~~~~~~~~~~~~~~~~~~~~~
|tensorboard-operator-framework-view|
This view contains a pie chart displaying the
proportional execution time of each model operator at the framework level for a
Neuron device. The list of operators is shown at the bottom, along with
details such as the number of occurrences, execution time, and Neuron
device and core.
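The proportions in the chart are simply each operator's share of total execution time. A quick sketch with hypothetical per-operator times (the operator names and numbers below are made up):

```python
# Hypothetical per-operator execution times in nanoseconds
op_times = {"aten__addmm_dot": 400, "aten__relu_maximum": 150, "aten__tanh": 250}

# Each pie slice is the operator's percentage of the total time
total = sum(op_times.values())
shares = {op: 100.0 * t / total for op, t in op_times.items()}

print(shares)  # {'aten__addmm_dot': 50.0, 'aten__relu_maximum': 18.75, 'aten__tanh': 31.25}
```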
Operator HLO View
~~~~~~~~~~~~~~~~~
|tensorboard-operator-hlo-view|
This view contains a pie chart displaying the
proportional execution time of each model operator at the HLO level for a
Neuron device. The list of operators is shown at the bottom, along with
details such as the number of occurrences, execution time, and Neuron
device and core.
.. note::
   For this simple model, the pie chart will be the same as the framework view. This won't be
   the case for larger and more complex models.
Operator Trace View
~~~~~~~~~~~~~~~~~~~
|tensorboard-operator-trace-view|
.. _trace_view_sections:
Trace View Sections
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Notice there are four sections: Process Overview, Control, Execution, and Data
Transfer. Each section has further subdivisions, with each layer
representing a certain level of abstraction. It is also important to note that
the timescale axis is aligned across the sections. This matters
because there are sometimes gaps in the process execution;
most of the time, data transfer operations are happening in
between the gaps.
Fusion Operators
^^^^^^^^^^^^^^^^
**Simple Case:** Zooming in on the operations, we can recognize some
operations for a neural network, such as a dot product and transpose,
but sometimes there will be fused operators (fusion operators). To
understand one of these operators, click on it, and some information
will appear at the bottom of the dashboard.
|tensorboard-operator-trace-fusion-simple|
Notice in the above example the fusion operator is fusing the operator before and
after itself on the timeline. More specifically, ``fused_3`` is a fusion
of ``NN[model]/input`` and
``NN[model]/ReLU[nl1]/Tensor_1/aten__relu_maximum``. These kinds of
fusions occur when the ``neuronx-cc`` compiler has found an optimization
relating to the two operators. Most often this would be the execution of
the operators on separate compute engines or another form of parallelism.
**Complex Case:** Most often, the order of fusion operators can get a
little complicated or contain "hidden" information. For the first example,
let’s zoom into the data transfer section such that we see the timescale range
from 6000 ns. to 6600 ns. It should look similar to below:
|tensorboard-operator-trace-fusion-complex|
Looking at ``fused_16`` (11452 ns), we see it's surrounded by other fused operators.
Furthermore, the ``fused_16`` operator fuses more than two operators: ``NN[model]/Linear[layer1]/aten__addmm_add``,
``NN[model]/input``, and ``NN[model]/Linear[layer1]/aten__addmm_dot``. These operators can be found in the timeline, but sometimes
the fused operators may not appear in the timeline because they occur within another operation. We go over an example of this case
in Part 2.
Understanding the Low Level Timeline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Looking at the trace we can look behind the scenes at how the model is
executed on neuron hardware. Before proceeding with the analysis, it is worth recalling the
way we defined the model for this tutorial:
.. code:: python
   class NN(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.layer1 = torch.nn.Linear(4,4)
           self.nl1 = torch.nn.ReLU()
           self.layer2 = torch.nn.Linear(4,2)
           self.nl2 = torch.nn.Tanh()

       def forward(self, x):
           x = self.nl1(self.layer1(x))
           return self.nl2(self.layer2(x))
Analysis
^^^^^^^^
**Input Operators:** We see input operators here because, in a ``mark_step()`` flow, we need to transfer the inputs to the XLA device. This is represented by the ``SyncTensorsGraph.53`` call.
**ReLU at the beginning:** The first couple of blocks in the Process Data Transfer section initially appear to be confusing. There is an ``Input`` (0 ns.)
block followed by a ``ReLU`` (100 ns.) operator. Under the hood here, ``ReLU`` is rewritten as an ``elementwise_max(arr,0)``,
(0 here means an array with zeros) but to create this operation, the zeros have to be set in memory, which is a data operation.
A general rule is that if an operator appears this early in the data transfer section, it most likely means there is an operation
lowering involving setting some values into memory for use later on.
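As a plain-Python sketch of this lowering (not the compiler's actual code), ReLU becomes an elementwise max against a zero array that first has to be materialized:

```python
def relu_as_elementwise_max(values):
    # The zero array must be set in memory first -- this is the data
    # transfer operation seen at the start of the profile.
    zeros = [0.0] * len(values)
    return [max(v, z) for v, z in zip(values, zeros)]

print(relu_as_elementwise_max([-1.5, 0.0, 2.0]))  # [0.0, 0.0, 2.0]
```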
**Memory allocation for Linear[layer1]:** We resume with the data transfer operations. Here, memory is getting allocated for specific operators, and sometimes the allocated
inputs get loaded onto operators while the rest of the input gets allocated. This can be seen at ``fused_18`` (11811 ns.) and ``fused_23`` (12181 ns.).
Eventually the input gets fully allocated, and other allocations occur for dot products, transpose, and broadcast operators for
``Linear[layer1]`` and ``Linear[layer2]``.
Conclusion
^^^^^^^^^^
There are a few conclusions that can be drawn from analyzing the timeline. We can see that we’ve saved a bit of time through
parallelism with fusion operations, and saved some compute time with preloading operations (e.g. ``ReLU``). A clear trend is that a majority of the time is spent on data transfer operations.
It is also evident that even a simple feed-forward NN becomes complicated when put under a microscope in the profiler. Details such as the implementation of ``ReLU`` in the runtime/architecture aren’t explicitly stated in the profiler, but they do make
themselves known through the unusual ordering of the trace blocks and the unusual fusion operators.

In terms of action items that can be taken based on our narrative, there
really aren’t any. This is a very simple model that finishes after 8
microseconds, and we chose it because it is simple to understand. In
more realistic examples we will aim to do more compute than data
transfer on the hardware and, where possible, to overlap data transfer
and compute between sequential operations.
The profiler revealed a lot of optimizations that were done, via fusion
operators and parallelism. However, the end goal of this tool is to be
able to improve performance by revealing the bottlenecks of the model.
.. note::
   While we did explain some of the quirks visible in the profiler at a microscopic level, it isn’t necessary
   to do so for normal use. This tutorial introduced the microscopic explanation for these occurrences to show the
   user that this is *indeed* what happens in the hardware when executing a simple FFNN.
Part 2: Operator Level Trace with ``torch_neuronx.trace()`` workflow
--------------------------------------------------------------------
Set Up
~~~~~~
The setup will be similar to Part 1.
::
cd ~/neuron_profiling_tensorboard_examples
mkdir tutorial_2
cd tutorial_2
# this is where our code will be written
touch run.py
Here is the code for ``run.py``:
::
   import os
   import time
   import torch
   import torch_neuronx
   from torch_neuronx.experimental import profiler

   class NN(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.layer1 = torch.nn.Linear(4,4)
           self.nl1 = torch.nn.ReLU()
           self.layer2 = torch.nn.Linear(4,2)
           self.nl2 = torch.nn.Tanh()

       def forward(self, x):
           x = self.nl1(self.layer1(x))
           return self.nl2(self.layer2(x))

   model = NN()
   model.eval()

   inp = torch.rand(4,4)
   output = model(inp)

   with torch_neuronx.experimental.profiler.profile(
           port=9012,
           profile_type='operator',
           ms_duration=10000,
           traced_only=True):
       neuron_model = torch_neuronx.trace(model, inp, compiler_workdir="./compiler_cache")
       output_neuron = neuron_model(inp)

   print("==CPU OUTPUT==")
   print(output)
   print()
   print("==INF2 OUTPUT==")
   print(output_neuron)
Important code differences from Part 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. ``import torch_xla.core.xla_model as xm`` is no longer necessary.
2. Set ``traced_only=True`` in ``torch_neuronx.experimental.profiler.profile()``. This option is necessary for traced models; otherwise the generated profile will be inaccurate or will not work at all.
3. Trace the model with ``torch_neuronx.trace()`` and remove the ``xm.mark_step()`` call.
Otherwise, the code is the same as Part 1.
Running Part 2
~~~~~~~~~~~~~~
To Run:
::
   python run.py
The output will look almost identical to that of Part 1.
Loading the Operators Level Trace in TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run ``tensorboard --load_fast=false --logdir logs/``, just like Part 1.
.. note::
   Check :ref:`Tensorboard Interface Overview` to understand TensorBoard interface
Timeline View:
|tensorboard-operator-trace-view-traced|
Notable Differences in Timeline View from Part 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**No Input Operators:** For a traced model, we do not transfer the input to an XLA device, so these operations are not seen on the timeline. This also affects scheduling, which is why the profiled time is less than in the ``mark_step()`` flow.
**Combined Loading of Linear[layer1] and Tanh:** ``fused_19`` (5824 ns) contains a fusion between ``Linear[layer1]`` and ``Tanh[nl2]``. This might seem odd, but such data loading parallelism makes sense once you know how tanh is implemented. Typically, functions like tanh are implemented with lookup tables that must be pre-loaded into memory, which is a data transfer operation.
The bulk of data transfer operations are done at the beginning to optimize computation.
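To see why a lookup table implies an up-front data transfer, here is a toy table-based tanh in plain Python (illustrative only; the actual hardware implementation differs): the table is computed and loaded once, and each evaluation is just an index plus interpolation.

```python
import math

# Precompute the lookup table once -- this is the up-front data transfer step.
SAMPLES, LO, HI = 1024, -4.0, 4.0
TABLE = [math.tanh(LO + i * (HI - LO) / (SAMPLES - 1)) for i in range(SAMPLES)]

def tanh_lut(x):
    """Approximate tanh by linear interpolation into the precomputed table."""
    if x <= LO:
        return -1.0
    if x >= HI:
        return 1.0
    pos = (x - LO) * (SAMPLES - 1) / (HI - LO)
    i = int(pos)
    frac = pos - i
    nxt = min(i + 1, SAMPLES - 1)
    return TABLE[i] + frac * (TABLE[nxt] - TABLE[i])

print(abs(tanh_lut(0.5) - math.tanh(0.5)) < 1e-4)  # True
```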
.. note::
   Despite these differences, the big-picture conclusion drawn from Part 1 still holds, as the two timelines are more similar than different. One new insight is that the traced model performs better than the ``mark_step()`` flow here, since this was profiling a single forward pass.
.. |tensorboard-url-image| image:: /images/Neuron_Profiler_Tensorboard_Url.jpg
.. |tensorboard-NEURON-header| image:: /images/Neuron_Profiler_Tensorboard_Header.jpg
.. |tensorboard-NEURON-dropdown| image:: /images/Neuron_Profiler_Tensorboard_Dropdown.jpg
.. |tensorboard-run-tool-dropdowns| image:: /images/Neuron_Profiler_Tensorboard_Run_Tool_Dropdowns.jpg
.. |tensorboard-run-trace-original| image:: /images/Neuron_Profiler_Runtime_Trace_Original.jpg
.. |tensorboard-run-trace-selected-section| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection.jpg
.. |tensorboard-run-trace-selected-section-zoomed| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed.jpg
.. |tensorboard-run-trace-selected-section-zoomed-named-traces| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed_Named_Traces.jpg
.. |tensorboard-operator-framework-view| image:: /images/Neuron_Profiler_T1_Op_Framework_View.png
.. |tensorboard-operator-hlo-view| image:: /images/Neuron_Profiler_T1_Op_HLO_View.png
.. |tensorboard-operator-trace-view| image:: /images/Neuron_Profiler_T1_Op_Trace_View.png
.. |tensorboard-operator-trace-view-traced| image:: /images/Neuron_Profiler_T1_Op_Trace_View_Traced.png
.. |tensorboard-operator-trace-fusion-simple| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Simple.png
.. |tensorboard-operator-trace-fusion-complex| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Complex.png
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _torch-neuronx-profiling-with-tb:
Profiling PyTorch Neuron (``torch-neuronx``) with TensorBoard
==============================================================
.. contents:: Table of Contents
:local:
:depth: 2
Introduction
------------
Neuron provides a plugin for TensorBoard that allows users to measure and visualize
performance on a torch runtime level or an operator
level. With this information, it becomes quicker to identify any
performance bottleneck allowing for quicker addressing of that issue.
For more information on the Neuron plugin for TensorBoard, see :ref:`neuronx-plugin-tensorboard`.
Setup
-----
Prerequisites
~~~~~~~~~~~~~
1. Initial `Trn1 setup for PyTorch
(torch-neuronx) <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/setup/pytorch-install.html>`__
has been done
Environment
~~~~~~~~~~~
::
#activate python virtual environment and install tensorboard_plugin_neuron
source ~/aws_neuron_venv_pytorch_p38/bin/activate
pip install tensorboard_plugin_neuronx
#create work directory for the Neuron Profiling tutorials
mkdir -p ~/neuron_profiling_tensorboard_examples
cd ~/neuron_profiling_tensorboard_examples
Part 1: Operator Level Trace for ``xm.markstep()`` workflow
--------------------------------------
Goal
~~~~
After completing this tutorial, the user should be able to understand
the features of the Operator Level Trace. The user should also be able
to form a narrative/surface level analysis from what is being presented
in the Operator Level Trace.
Set Up
~~~~~~
Let’s set up a directory containing the material for this demo
::
cd ~/neuron_profiling_tensorboard_examples
mkdir tutorial_1
cd tutorial_1
# this is where our code will be written
touch run.py
Here is the code for ``run.py``:
::
import os
import torch
import torch_neuronx
from torch_neuronx.experimental import profiler
import torch_xla.core.xla_model as xm

os.environ["NEURON_CC_FLAGS"] = "--cache_dir=./compiler_cache"

device = xm.xla_device()

class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4,4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4,2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))

with torch.no_grad():
    model = NN()
    inp = torch.rand(4,4)
    output = model(inp)

    with torch_neuronx.experimental.profiler.profile(
            port=9012,
            profile_type='operator',
            ms_duration=10000):
        # IMPORTANT: the model has to be transferred to XLA within
        # the context manager, otherwise profiling won't work
        neuron_model = model.to(device)
        neuron_inp = inp.to(device)

        output_neuron = neuron_model(neuron_inp)
        xm.mark_step()

print("==CPU OUTPUT==")
print(output)
print()
print("==TRN1 OUTPUT==")
print(output_neuron)
Understanding the Code
~~~~~~~~~~~~~~~~~~~~~~
For this first tutorial, we’ll be using a simple Feed forward NN model.
However, once the TensorBoard dashboard is up, we’ll see some
interesting and unexpected things. A simple model is helpful since it is
easy to reference back to.
Another important part is the “operator” profiling type we specified in the context manager.
**Low Level:** The "operator" dashboard contains the Operator Level Trace.
This view zooms in on the NeuronDevice only, while the "trace" dashboard shows
processes from all devices. The Operator Level Trace View is organized by
levels of abstraction: the top level shows the model class, the next tier
shows model components, and the lowest tier shows the specific operators that
occur for a given model component. This view is useful for identifying model
bottlenecks at the operator level.
We also print out the outputs from the CPU model and the TRN1 model to note
the small differences in output.
Running The Profiler
~~~~~~~~~~~~~~~~~~~~
::
python run.py
**Output:**
Initial Output & Compilation Success
::
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Dependency reduction of sg0000
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Processing the Neuron Profiler Traces
::
torch_neuron: Waiting for XLA profile completion ...
torch_neuron: translate_xplane: Processing plane: '/host:CPU'
torch_neuron: XLA decode - Read filename 2023_04_28_00_54_04
torch_neuron: XLA decode - Read date parts ['2023', '04', '28', '00', '54', '04']
torch_neuron: XLA decode - Read start date 2023-04-28 00:54:04 from directory stamp
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline_split.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_hlo_op.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_framework_op.json'
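The ``neuron_op_timeline*.json`` planes written above are Chrome-trace-style event lists. As a rough sketch of how such a file could be post-processed (the fields used below, ``name``, ``ph``, and ``dur``, are the generic Chrome trace fields and are an assumption about the exact schema of the Neuron files), per-operator durations can be totaled with the standard library:

```python
import json
from collections import defaultdict

def summarize_trace(trace_text):
    """Sum the duration (``dur``) of each named event in a Chrome-trace JSON string.

    Assumes complete events of the form {"name": ..., "ph": "X", "dur": ...};
    the real Neuron timeline schema may differ.
    """
    totals = defaultdict(int)
    for event in json.loads(trace_text).get("traceEvents", []):
        if event.get("ph") == "X":  # "X" marks a complete (begin + end) event
            totals[event["name"]] += event.get("dur", 0)
    return dict(totals)

# Synthetic example in the assumed format:
sample = json.dumps({"traceEvents": [
    {"name": "aten__relu_maximum", "ph": "X", "dur": 100},
    {"name": "aten__addmm_dot", "ph": "X", "dur": 250},
    {"name": "aten__relu_maximum", "ph": "X", "dur": 50},
]})
```

Run over a real timeline file of the same shape, this would give a text summary comparable to the per-operator views shown below.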
Printing output from CPU model and Trn1 Model:
::
==CPU OUTPUT==
tensor([[-0.1396, -0.3266],
[-0.0327, -0.3105],
[-0.0073, -0.3268],
[-0.1683, -0.3230]])
==TRN1 OUTPUT==
tensor([[-0.1396, -0.3266],
[-0.0328, -0.3106],
[-0.0067, -0.3270],
[-0.1684, -0.3229]], device='xla:1')
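The small numerical differences between the two outputs can be quantified directly from the values printed above:

```python
# Values copied from the CPU and Trn1 outputs printed above.
cpu = [[-0.1396, -0.3266], [-0.0327, -0.3105], [-0.0073, -0.3268], [-0.1683, -0.3230]]
trn = [[-0.1396, -0.3266], [-0.0328, -0.3106], [-0.0067, -0.3270], [-0.1684, -0.3229]]

# Largest elementwise deviation between the two devices.
max_abs_diff = max(abs(c - t) for c_row, t_row in zip(cpu, trn)
                   for c, t in zip(c_row, t_row))
print(max_abs_diff)  # very close to 0.0006 -- typical rounding/accumulation noise
```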
Loading the Operators Level Trace in TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run ``tensorboard --load_fast=false --logdir logs/``
Take note of the port (usually 6006) and enter ``localhost:<port>`` into your
local browser (assuming port forwarding is set up properly).
.. note::
Check :ref:`Tensorboard Interface Overview` to understand the TensorBoard interface
The run names for the Operator Level Trace views follow the same format, plus
an id at the end: ``year_month_day_hour_minute_second_millisecond_id``. The
Tool dropdown will have 3 options: operator-framework, operator-hlo, and
operator-timeline.
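As an illustration, a run name in that format can be decomposed with the standard library (the name below is made up):

```python
from datetime import datetime

run_name = "2023_04_28_00_54_04_726_1"  # hypothetical example of the format above
fields = run_name.split("_")
timestamp = datetime(*map(int, fields[:6]))      # year .. second
millisecond, run_id = int(fields[6]), fields[7]  # trailing _millisecond_id
print(timestamp.isoformat(), millisecond, run_id)  # 2023-04-28T00:54:04 726 1
```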
Operator Framework View
~~~~~~~~~~~~~~~~~~~~~~~
|tensorboard-operator-framework-view|
This view contains a pie chart displaying the proportional execution time of
each model operator, at the framework level, for a Neuron device. The list of
operators is shown at the bottom, along with details such as the number of
occurrences, execution time, and Neuron device and core.
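The pie chart is simply each operator's share of the total execution time. A minimal sketch of the same computation (operator names and durations below are invented for illustration):

```python
def time_shares(op_durations):
    """Return each operator's percentage of total execution time."""
    total = sum(op_durations.values())
    return {name: 100.0 * dur / total for name, dur in op_durations.items()}

# Hypothetical per-operator times in nanoseconds:
shares = time_shares({"aten__addmm_dot": 300, "aten__relu_maximum": 100})
print(shares)  # {'aten__addmm_dot': 75.0, 'aten__relu_maximum': 25.0}
```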
Operator HLO View
~~~~~~~~~~~~~~~~~
|tensorboard-operator-hlo-view|
This view contains a pie chart displaying the proportional execution time of
each model operator, at the HLO level, for a Neuron device. The list of
operators is shown at the bottom, along with details such as the number of
occurrences, execution time, and Neuron device and core.
.. note::
For this simple model, the pie chart will be the same as the framework view. This won't be
the case for larger and more complex models.
Operator Trace View
~~~~~~~~~~~~~~~~~~~
|tensorboard-operator-trace-view|
.. _trace_view_sections:
Trace View Sections
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Notice there are four sections: Process Overview, Control, Execution, and Data
Transfer. Each section contains further subdivisions, with each layer
representing a certain level of abstraction. It is also important to note that
the timescale axis is aligned across the sections. This matters because there
are sometimes gaps in the process execution; most of the time, data transfer
operations are happening in those gaps.
Fusion Operators
^^^^^^^^^^^^^^^^
**Simple Case:** Zooming in on the operations, we can recognize some
operations for a neural network, such as a dot product and a transpose, but
sometimes there will be fused operators (fusion operators). To understand one
of these operators, click on it, and some information will appear at the
bottom of the dashboard.
|tensorboard-operator-trace-fusion-simple|
Notice in the above example the fusion operator is fusing the operator before and
after itself on the timeline. More specifically, ``fused_3`` is a fusion
of ``NN[model]/input`` and
``NN[model]/ReLU[nl1]/Tensor_1/aten__relu_maximum``. These kinds of
fusions occur when the ``neuronx-cc`` compiler has found an optimization
relating to the two operators. Most often this would be the execution of
the operators on separate compute engines or another form of parallelism.
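A back-of-the-envelope way to see why such a fusion helps: if two fused operators run on separate compute engines, the pair costs roughly the duration of the slower one rather than the sum. The timings below are purely illustrative, not measurements:

```python
def serial_time(durations):
    # Operators executed one after another on a single engine.
    return sum(durations)

def parallel_time(durations):
    # Idealized fusion: each operator on its own engine, so the slowest dominates.
    return max(durations)

ops = [120, 80]  # hypothetical ns for two fused operators
saved = serial_time(ops) - parallel_time(ops)
print(serial_time(ops), parallel_time(ops), saved)  # 200 120 80
```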
**Complex Case:** Most often, the order of fusion operators can get a
little complicated or contain "hidden" information. For the first example,
let’s zoom into the data transfer section such that we see the timescale range
from 6000 ns. to 6600 ns. It should look similar to below:
|tensorboard-operator-trace-fusion-complex|
Looking at ``fused_16`` (11452 ns) we see it's surrounded by other fused operators.
Furthermore, the ``fused_16`` operator fuses more than two operators: ``NN[model]/Linear[layer1]/aten__addmm_add``,
``NN[model]/input``, and ``NN[model]/Linear[layer1]/aten__addmm_dot``. These
operators can be found in the timeline, but sometimes a fused operator may not
appear in the timeline because it occurs within another operation. We go over
an example of this case in Part 2.
Understanding the Low Level Timeline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Looking at the trace, we can see behind the scenes how the model is executed
on Neuron hardware. Before proceeding with the analysis, it is worth recalling
how we defined the model for this tutorial:
.. code:: python
class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4,4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4,2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))
Analysis
^^^^^^^^
**Input Operators:** The timeline starts with input operators. This is because in a ``mark_step`` flow, we need to transfer the inputs to the XLA device; this is represented by the ``SyncTensorsGraph.53`` call.
**ReLU at the beginning:** The first couple of blocks in the Process Data Transfer section initially appear confusing: there is an ``Input`` (0 ns)
block followed by a ``ReLU`` (100 ns) operator. Under the hood, ``ReLU`` is rewritten as ``elementwise_max(arr, 0)``,
where ``0`` stands for an array of zeros; to create this operation, the zeros have to be set in memory first, which is a data operation.
A general rule is that if an operator appears this early in the data transfer section, it most likely means an operator
lowering involves setting some values into memory for use later on.
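A sketch of that rewrite in plain Python: the zeros are materialized first (the memory-set step seen early in the data transfer section), and the activation itself is just an elementwise maximum:

```python
def relu(values):
    # The zeros array is "set in memory" up front, like the early trace block.
    zeros = [0.0] * len(values)
    # ReLU is then just an elementwise max against those zeros.
    return [max(v, z) for v, z in zip(values, zeros)]

print(relu([-1.5, 0.0, 2.0]))  # [0.0, 0.0, 2.0]
```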
**Memory allocation for Linear[layer1]:** We resume with the data transfer operations. Here, memory is getting allocated for specific operators, and sometimes the allocated
inputs get loaded onto operators while the rest of the input gets allocated. This can be seen at ``fused_18`` (11811 ns.) and ``fused_23`` (12181 ns.).
Eventually the input gets fully allocated, and other allocations occur for dot products, transpose, and broadcast operators for
``Linear[layer1]`` and ``Linear[layer2]``.
Conclusion
^^^^^^^^^^
There are a few conclusions that can be drawn from analyzing the timeline. We saved a bit of time through
parallelism from fusion operations, and some compute time through preloading operations (e.g. for ``ReLU``). A clear trend is that the majority of the time is spent on data transfer operations.
It is also evident that even a simple feed-forward NN becomes complicated when put under a microscope in the profiler. Facts such as the implementation of ``ReLU`` in the runtime/architecture aren’t explicitly stated in the profiler, but they do make
themselves known through the unusual placement of the trace blocks and the unusual fusion operators.
In terms of action items that can be taken based on our narrative, there
really aren’t any. This is a very simple model that finishes after 8
microseconds, and we chose it because it is simple to understand. In
more realistic examples we will aim to do more compute than data
transfer on the hardware, and, where possible, to overlap data transfer
and compute between sequential operations.
The profiler revealed a lot of optimizations that were done, via fusion
operators and parallelism. However, the end goal of this tool is to be
able to improve performance by revealing the bottlenecks of the model.
.. note::
While we did explain some of the quirks visible in the profiler at a microscopic level, it isn’t necessary
to do so for normal use. This tutorial introduced the microscopic explanation for these occurrences to show the
user that this is *indeed* what happens in the hardware when executing a simple FFNN.
Part 2: Operator Level Trace with the ``torch_neuronx.trace()`` workflow
------------------------------------------------------------------------
Set Up
~~~~~~
The setup will be similar to Part 1.
::
cd ~/neuron_profiling_tensorboard_examples
mkdir tutorial_2
cd tutorial_2
# this is where our code will be written
touch run.py
Here is the code for ``run.py``:
::
import os
import time
import torch
import torch_neuronx
from torch_neuronx.experimental import profiler

class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4,4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4,2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))

model = NN()
model.eval()

inp = torch.rand(4,4)
output = model(inp)

with torch_neuronx.experimental.profiler.profile(
        port=9012,
        profile_type='operator',
        ms_duration=10000,
        traced_only=True):
    neuron_model = torch_neuronx.trace(model, inp, compiler_workdir="./compiler_cache")
    output_neuron = neuron_model(inp)

print("==CPU OUTPUT==")
print(output)
print()
print("==INF2 OUTPUT==")
print(output_neuron)
Important code differences from Part 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. ``import torch_xla.core.xla_model as xm`` is no longer necessary
2. Set ``traced_only=True`` in ``torch_neuronx.experimental.profiler.profile()``. This option is necessary for traced models; otherwise the generated profile will be inaccurate or will not work at all.
3. Trace the model with ``torch_neuronx.trace()`` and remove the ``xm.mark_step()`` call.
Otherwise, the code is the same as Part 1.
Running Part 2
~~~~~~~~~~~~~~~~
To Run:
::
python run.py
The output will look almost identical to that of Part 1.
Loading the Operators Level Trace in TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run ``tensorboard --load_fast=false --logdir logs/``, just like Part 1.
.. note::
Check :ref:`Tensorboard Interface Overview` to understand the TensorBoard interface
Timeline View:
|tensorboard-operator-trace-view-traced|
Notable Differences in Timeline View from Part 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**No Input Operators:** For a traced model, we do not transfer the input to an XLA device, so those operations are not seen on the timeline. This also affects scheduling, which is why the profiled execution time is shorter than in the ``mark_step`` workflow.
**Combined Loading of Linear[layer1] and Tanh:** ``fused_19`` (5824 ns) contains a fusion between ``Linear[layer1]`` and ``Tanh[nl2]``. This may look odd, but the data-loading parallelism makes sense once you know how tanh is implemented: functions like tanh are typically implemented with lookup tables that must be preloaded into memory, and that preload is a data transfer operation.
The bulk of the data transfer operations are done at the beginning to optimize the computation.
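To make the lookup-table idea concrete, here is a toy table-plus-interpolation tanh. This only illustrates the general technique; the table size and layout actually used by the hardware are not documented here:

```python
import math

# Toy lookup table for tanh on [-4, 4]; tanh saturates toward +/-1 outside that range.
N = 512
XMIN, XMAX = -4.0, 4.0
STEP = (XMAX - XMIN) / (N - 1)
TABLE = [math.tanh(XMIN + i * STEP) for i in range(N)]  # the "preload" step

def tanh_lut(x):
    """Approximate tanh via table lookup plus linear interpolation."""
    if x <= XMIN:
        return -1.0
    if x >= XMAX:
        return 1.0
    pos = (x - XMIN) / STEP
    i = int(pos)
    frac = pos - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])
```

Building ``TABLE`` up front is the analogue of the data transfer block seen in the trace; the per-element work afterwards is only an index plus an interpolation.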
.. note::
Despite these differences, the big-picture conclusion drawn from Part 1 still holds, as the two timelines are more similar than different. One new insight is that the traced model performs better than the ``mark_step`` flow, since this was a profile of a single forward pass.
.. |tensorboard-url-image| image:: /images/Neuron_Profiler_Tensorboard_Url.jpg
.. |tensorboard-NEURON-header| image:: /images/Neuron_Profiler_Tensorboard_Header.jpg
.. |tensorboard-NEURON-dropdown| image:: /images/Neuron_Profiler_Tensorboard_Dropdown.jpg
.. |tensorboard-run-tool-dropdowns| image:: /images/Neuron_Profiler_Tensorboard_Run_Tool_Dropdowns.jpg
.. |tensorboard-run-trace-original| image:: /images/Neuron_Profiler_Runtime_Trace_Original.jpg
.. |tensorboard-run-trace-selected-section| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection.jpg
.. |tensorboard-run-trace-selected-section-zoomed| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed.jpg
.. |tensorboard-run-trace-selected-section-zoomed-named-traces| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed_Named_Traces.jpg
.. |tensorboard-operator-framework-view| image:: /images/Neuron_Profiler_T1_Op_Framework_View.png
.. |tensorboard-operator-hlo-view| image:: /images/Neuron_Profiler_T1_Op_HLO_View.png
.. |tensorboard-operator-trace-view| image:: /images/Neuron_Profiler_T1_Op_Trace_View.png
.. |tensorboard-operator-trace-view-traced| image:: /images/Neuron_Profiler_T1_Op_Trace_View_Traced.png
.. |tensorboard-operator-trace-fusion-simple| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Simple.png
.. |tensorboard-operator-trace-fusion-complex| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Complex.png
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/ubuntu/torch-neuron-ubuntu20-pytorch-dlami.rst.txt

```
.. _setup-torch-neuron-u20-pytorch-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 20 with Pytorch DLAMI
=====================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`setup-torch-neuron`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type in the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Neuron Pytorch 1.13 AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 20.04) <latest_date>" from the "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see an exactly matching AMI in Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1
.. dropdown:: Get Started With Pytorch DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 101
:end-line: 102
.. card:: PyTorch Neuron (``torch-neuron``) for Inference
:link: inference-torch-neuron
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit PyTorch Neuron section for more
:class-body: sphinx-design-class-body-small
:link: neuron-pytorch
:link-type: ref
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u20-dlami.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-u20.rst
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/quick-start/torch-neuron.rst.txt

```
.. _torch_quick_start:
Get Started with PyTorch Neuron
===============================
This page provides links that will help you quickly get started with :ref:`pytorch-neuronx-main` for both Inference and Training.
.. note::
The instructions below are for Ubuntu 20. If you are looking for complete setup instructions for different platforms, please :ref:`Check Here. <setup-guide-index>`
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /general/setup/install-templates/launch-instance.txt
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. tab-set::
.. tab-item:: torch-neuronx (``Trn1, Inf2``)
.. include:: tab-inference-torch-neuronx.txt
.. tab-item:: torch-neuron (``Inf1``)
.. include:: tab-inference-torch-neuron.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/index.rst.txt

```
.. _mxnet-tutorials:
Neuron Apache MXNet (Incubating) Tutorials
==========================================
Before running a tutorial
-------------------------
You will run the tutorials on an inf1.6xlarge instance running Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow instructions at :ref:`mxnet-tutorial-setup` before running an MXNet tutorial on Inferentia.
.. toctree::
:hidden:
/frameworks/mxnet-neuron/tutorials/mxnet-tutorial-setup
.. _mxnet-computervision:
Computer Vision
---------------
* ResNet-50 tutorial :ref:`[html] </src/examples/mxnet/resnet50/resnet50.ipynb>` :mxnet-neuron-src:`[notebook] <resnet50/resnet50.ipynb>`
* Model Serving tutorial :ref:`[html] <mxnet-neuron-model-serving>`
* Getting started with Gluon tutorial :ref:`[html] </src/examples/mxnet/mxnet-gluon-tutorial.ipynb>` :mxnet-neuron-src:`[notebook] <mxnet-gluon-tutorial.ipynb>`
.. toctree::
:hidden:
/src/examples/mxnet/resnet50/resnet50.ipynb
/frameworks/mxnet-neuron/tutorials/tutorial-model-serving
/src/examples/mxnet/mxnet-gluon-tutorial.ipynb
.. _mxnet-nlp:
Natural Language Processing
---------------------------
* MXNet 1.8: Using data parallel mode tutorial :ref:`[html] </src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb>` :mxnet-neuron-src:`[notebook] <data_parallel/data_parallel_tutorial.ipynb>`
.. toctree::
:hidden:
/src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb
.. _mxnet-utilize-neuron:
Utilizing Neuron Capabilities
-----------------------------
* NeuronCore Groups tutorial :ref:`[html] </src/examples/mxnet/resnet50_neuroncore_groups.ipynb>` :mxnet-neuron-src:`[notebook] <resnet50_neuroncore_groups.ipynb>`
.. toctree::
:hidden:
/src/examples/mxnet/resnet50_neuroncore_groups.ipynb
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tutorials/tutorial-neuron-monitor-mnist.rst.txt

```
.. _track-system-monitor:
Track System Resource Utilization during Training with neuron-monitor using PyTorch Neuron
==========================================================================================
.. contents:: Table of Contents
:local:
:depth: 2
This tutorial explains how to monitor resource utilization using **neuron-monitor**, **Prometheus** and **Grafana** while running a multi-layer
perceptron MNIST model on Trainium using PyTorch Neuron.
Multi-layer Perceptron MNIST Model
----------------------------------
This tutorial is based on the MNIST example for PyTorch Neuron on Trainium.
For the full tutorial, please see :ref:`Multi-Layer Perceptron Training Tutorial <neuronx-mlp-training-tutorial>`.
The Training Job
----------------
For this tutorial, we will make the original script do more work, giving us more system utilization data to observe. The training
loop is simply repeated 1000 times:
.. code:: python
for run in range(0, 1000):
    print(f'Run {run}')

    model.train()
    ...
Save the following code as :download:`train_monitor.py <examples/pytorch/mnist_mlp/train_monitor.py>` and you can run it as
``python3 train_monitor.py`` on a Trn1 instance.
.. literalinclude:: /src/examples/pytorch/mnist_mlp/train_monitor.py
:language: python
Setting up **Prometheus** and **Grafana**
-----------------------------------------
.. note::
The setup presented in the following paragraphs can be extended to monitor any number of instances running training jobs or
inference workloads. For this tutorial, we will set everything up on a single Trn1 instance running Amazon Linux 2.
Setting up **Prometheus**
~~~~~~~~~~~~~~~~~~~~~~~~~
For a more detailed guide on how to install **Prometheus** visit their official guide at https://prometheus.io/docs/prometheus/latest/getting_started/.
Download and unzip a prebuilt **Prometheus** binary on your Trn1 instance:
.. code:: bash
wget https://github.com/prometheus/prometheus/releases/download/v2.38.0/prometheus-2.38.0.linux-amd64.tar.gz
tar -xzvf prometheus-2.38.0.linux-amd64.tar.gz
cd prometheus-2.38.0.linux-amd64/
Create a config and add a scrape target:
.. code:: bash
vim prometheus.yml
.. code:: yaml
scrape_configs:
- job_name: 'neuron'
# Scrape target every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['localhost:8000']
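The ``targets`` entry points Prometheus at the metrics endpoint that the ``neuron-monitor-prometheus.py`` companion script serves on port 8000. As a rough illustration of the text-based exposition format that Prometheus scrapes from such an endpoint, here is a stdlib-only Python sketch; the metric name and labels below are illustrative placeholders, not the exact series the companion script emits:

```python
def render_prometheus(metrics):
    """Render {name: (value, labels)} as Prometheus text exposition format."""
    lines = []
    for name, (value, labels) in sorted(metrics.items()):
        # Labels are rendered as key="value" pairs inside curly braces.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical sample series, loosely modeled on NeuronCore utilization.
sample = {
    "neuroncore_utilization_ratio": (0.42, {"instance_name": "trn1", "neuroncore": "0"}),
}
print(render_prometheus(sample))
```

Each scrape, Prometheus fetches this plain-text payload and stores one sample per series at the configured 5-second interval.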
Finally, start **Prometheus**:
.. code:: bash

   ./prometheus --config.file=prometheus.yml
Setting up **Grafana**
~~~~~~~~~~~~~~~~~~~~~~
For a more detailed installation guide, visit the official **Grafana** documentation at https://grafana.com/grafana/download.
Add the Grafana repo to yum:
.. code:: bash

   sudo vim /etc/yum.repos.d/grafana.repo

   [grafana]
   name=grafana
   baseurl=https://packages.grafana.com/oss/rpm
   repo_gpgcheck=1
   enabled=1
   gpgcheck=1
   gpgkey=https://packages.grafana.com/gpg.key
   sslverify=1
   sslcacert=/etc/pki/tls/certs/ca-bundle.crt
Install and start **Grafana**:
.. code:: bash

   sudo yum install -y grafana
   sudo /bin/systemctl start grafana-server.service
By default, **Grafana** will run an HTTP server on port 3000. If you need to change that, update its config and restart the service:
.. code:: bash

   sudo vim /etc/grafana/grafana.ini
   ...
   sudo /bin/systemctl restart grafana-server.service
Using your favorite web browser, access the Grafana webpage and add a new dashboard.
The default user and password are both 'admin':
.. image:: tutorial_grafana_login.png
   :alt: Image: image.png
Next, you'll add a Prometheus data source by going to ``Configuration`` -> ``Data Sources``:
.. image:: tutorial_grafana_data_sources.png
   :alt: Image: image.png
... and adding the local **Prometheus** server as a data source:
.. image:: tutorial_grafana_add_prometheus.png
   :alt: Image: image.png
Finally, upload the sample dashboard :download:`neuron-monitor-grafana.json <src/examples/neuron-monitor/neuron-monitor-grafana.json>`
to **Grafana**:
.. image:: tutorial_grafana_upload_dash.png
   :alt: Image: image.png
Monitoring the Training Workload
--------------------------------
Start the training job, which, due to the artificially added complexity, will take more than 15 minutes:

.. code:: bash

   python3 train_monitor.py
On the same instance, start ``neuron-monitor`` and its companion script, ``neuron-monitor-prometheus.py``:
.. code:: bash

   neuron-monitor | neuron-monitor-prometheus.py
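In this pipeline, ``neuron-monitor`` writes periodic JSON reports to stdout and the companion script converts them into Prometheus metrics. As a simplified sketch of consuming such a stream, here is a stdlib-only example; the field names below are illustrative placeholders, not the exact neuron-monitor report schema:

```python
import json

def average_core_utilization(report_line):
    """Average a per-core utilization map from one JSON report line."""
    data = json.loads(report_line)
    cores = data["neuroncores_in_use"]
    values = [core["utilization"] for core in cores.values()]
    return sum(values) / len(values)

# Illustrative report line with two NeuronCores; real reports carry
# many more counters per monitoring period.
report = json.dumps({
    "neuroncores_in_use": {
        "0": {"utilization": 80.0},
        "1": {"utilization": 60.0},
    }
})
print(average_core_utilization(report))  # 70.0
```

The companion script performs this kind of aggregation continuously and exposes the results on the port 8000 endpoint that Prometheus scrapes.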
Once they are running, you can use your web browser to access the **Grafana** server running on your Trn1 instance and
view a timeline of the system utilization.
The upper part of the dashboard contains:
- a list of the currently monitored instances (for this tutorial there is a single Trn1 instance)
- aggregated metrics for stats such as NeuronCore utilization, NeuronCores in use, iteration success rates, error rates, etc.
- a timeline of execution status rates and execution latencies
.. image:: tutorial_grafana_dash_1.png
   :alt: Image: image.png
The lower part of the dashboard contains:
- one line of charts containing a timeline of Neuron resource utilization (NeuronCore, vCPU and memory utilization)
- one line of charts containing a timeline of host resource utilization (vCPU and memory utilization)
.. image:: tutorial_grafana_dash_2.png
   :alt: Image: image.png
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _track-system-monitor:
</pre></body></html> | 2023-09-29T20:55:27.998Z | |
TensorFlow Tutorial Setup — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.html#tensorflow-tutorial-setup | # TensorFlow Tutorial Setup — AWS Neuron Documentation
_This document is relevant for_: `Inf1`

## TensorFlow Tutorial Setup
1. Launch an Inf1.6xlarge instance:
- Follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- When choosing an Amazon Machine Image (AMI), make sure to select a [Deep Learning AMI with Conda Options](https://docs.aws.amazon.com/dlami/latest/devguide/conda.html). Note that Neuron Conda environments are supported only in the Ubuntu 18 and Amazon Linux 2 DLAMIs; they are not supported in the Amazon Linux DLAMI.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance.
2. Set up a development environment:
- Enable or install TensorFlow-Neuron: [Install TensorFlow Neuron](../setup/tensorflow-install.html#install-neuron-tensorflow).
3. Run tutorial in Jupyter notebook:
- Follow the instructions at [Setup Jupyter notebook](../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting) to:
1. Start the Jupyter Notebook on the instance
2. Run the Jupyter Notebook from your local browser
- Connect to the instance from the terminal, clone the Neuron GitHub repository to the Inf1 instance, and then change the working directory to the tutorial directory:
```
git clone https://github.com/aws/aws-neuron-sdk.git
cd aws-neuron-sdk/src/examples/tensorflow
```
- Locate the tutorial notebook file (`.ipynb`) under `aws-neuron-sdk/src/examples/tensorflow`.
- From your local browser, open the tutorial notebook from the menu and follow the instructions.
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TensorFlow Tutorial Setup — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../../_static/pygments.css">
<link rel="stylesheet" href="../../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../../" id="documentation_options" src="../../../../_static/documentation_options.js"></script>
<script src="../../../../_static/jquery.js"></script>
<script src="../../../../_static/underscore.js"></script>
<script src="../../../../_static/doctools.js"></script>
<script src="../../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../../_static/contentui.js"></script>
<script src="../../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../../genindex.html">
<link rel="search" title="Search" href="../../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>TensorFlow Tutorial Setup</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="tensorflow-tutorial-setup">
<span id="id1"></span><h1>TensorFlow Tutorial Setup<a class="headerlink" href="#tensorflow-tutorial-setup" title="Permalink to this headline">#</a></h1>
<ol class="arabic">
<li><dl>
<dt>Launch an Inf1.6xlarge Instance:</dt><dd><ul class="simple">
<li><p>Please follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance">launch an Amazon EC2 Instance</a> to launch an Inf1 instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the <a class="reference external" href="https://aws.amazon.com/ec2/instance-types/inf1/">Inf1 web page</a>.</p></li>
<li><p>When choosing an Amazon Machine Image (AMI), make sure to select <a class="reference external" href="https://docs.aws.amazon.com/dlami/latest/devguide/conda.html">Deep Learning AMI with Conda Options</a>. Please note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and the Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.</p></li>
<li><p>After launching the instance, follow the instructions in <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux">Connect to your instance</a> to connect to the instance.</p></li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You can also launch the instance from AWS CLI, please see <a class="reference internal" href="../../../../general/setup/install-templates/inf1/launch-inf1-dlami-aws-cli.html#launch-inf1-dlami-aws-cli"><span class="std std-ref">AWS CLI commands to launch inf1 instances</span></a>.</p>
</div>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Set up a development environment:</dt><dd><ul class="simple">
<li><p>Enable or install TensorFlow-Neuron: <a class="reference internal" href="../setup/tensorflow-install.html#install-neuron-tensorflow"><span class="std std-ref">Install TensorFlow Neuron</span></a>.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl>
<dt>Run tutorial in Jupyter notebook:</dt><dd><ul>
<li><p>Follow the instructions at <a class="reference internal" href="../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting"><span class="std std-ref">Setup Jupyter notebook</span></a> to:</p>
<ol class="arabic simple">
<li><p>Start the Jupyter Notebook on the instance</p></li>
<li><p>Run the Jupyter Notebook from your local browser</p></li>
</ol>
</li>
<li><p>Connect to the instance from the terminal, clone the Neuron GitHub repository to the Inf1 instance, and then change the working directory to the tutorial directory:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">aws</span><span class="o">/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">.</span><span class="n">git</span>
<span class="n">cd</span> <span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">/</span><span class="n">src</span><span class="o">/</span><span class="n">examples</span><span class="o">/</span><span class="n">tensorflow</span>
</pre></div>
</div>
</li>
<li><p>Locate the tutorial notebook (.ipynb) file under <code class="docutils literal notranslate"><span class="pre">aws-neuron-sdk/src/examples/tensorflow</span></code></p></li>
<li><p>From your local browser, open the tutorial notebook from the menu and follow the instructions.</p></li>
</ul>
</dd>
</dl>
</li>
</ol>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html> | 2023-09-29T20:55:28.430Z |
MXNet Tutorial Setup — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/mxnet-neuron/tutorials/mxnet-tutorial-setup.html#mxnet-tutorial-setup | # MXNet Tutorial Setup — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## MXNet Tutorial Setup[#](#mxnet-tutorial-setup "Permalink to this headline")
1. Launch an Inf1.6xlarge Instance:
- Please follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an Inf1 instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- When choosing an Amazon Machine Image (AMI), make sure to select [Deep Learning AMI with Conda Options](https://docs.aws.amazon.com/dlami/latest/devguide/conda.html). Please note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and the Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance.
2. Set up a development environment:
- Enable or install MXNet-Neuron: [Install MXNet Neuron](../setup/mxnet-install.html#install-neuron-mxnet).
3. Run tutorial in Jupyter notebook:
- Follow the instructions at [Setup Jupyter notebook](../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting) to:
1. Start the Jupyter Notebook on the instance
2. Run the Jupyter Notebook from your local browser
- Connect to the instance from the terminal, clone the Neuron GitHub repository to the Inf1 instance, and then change the working directory to the tutorial directory:
```
git clone https://github.com/aws/aws-neuron-sdk.git
cd aws-neuron-sdk/src/examples/mxnet
```
- Locate the tutorial notebook (.ipynb) file under `aws-neuron-sdk/src/examples/mxnet`
- From your local browser, open the tutorial notebook from the menu and follow the instructions.
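Before opening a tutorial notebook, it can help to confirm that the development environment from step 2 is actually usable. The sketch below only probes for importable modules; the Neuron plugin module name `mx_neuron` is an assumption about how the package is exposed on your DLAMI, so adjust the names to match your install.

```python
# Sketch: check that the tutorial prerequisites are importable in this environment.
# Assumption: the MXNet-Neuron plugin is importable as "mx_neuron" (adjust as needed).
import importlib.util


def missing_modules(modules=("mxnet", "mx_neuron")):
    """Return the subset of `modules` that cannot be found by the import system."""
    return [m for m in modules if importlib.util.find_spec(m) is None]


if __name__ == "__main__":
    missing = missing_modules()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("MXNet-Neuron environment looks ready.")
```

Running this inside the Conda environment you activated in step 2 should report no missing packages; if it does, revisit the install link above.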
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>MXNet Tutorial Setup — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/mxnet-neuron/tutorials/mxnet-tutorial-setup", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What's New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
TensorFlow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
TensorFlow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MXNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in PyTorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in PyTorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
SageMaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with SageMaker Neo and Deploy on SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron SageMaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
</div>
</nav></div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>MXNet Tutorial Setup</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="mxnet-tutorial-setup">
<span id="id1"></span><h1>MXNet Tutorial Setup<a class="headerlink" href="#mxnet-tutorial-setup" title="Permalink to this headline">#</a></h1>
<ol class="arabic">
<li><dl>
<dt>Launch an Inf1.6xlarge Instance:</dt><dd><ul class="simple">
<li><p>Please follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance">launch an Amazon EC2 Instance</a> to launch an Inf1 instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the <a class="reference external" href="https://aws.amazon.com/ec2/instance-types/inf1/">Inf1 web page</a>.</p></li>
<li><p>When choosing an Amazon Machine Image (AMI), make sure to select the <a class="reference external" href="https://docs.aws.amazon.com/dlami/latest/devguide/conda.html">Deep Learning AMI with Conda Options</a>. Please note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and the Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.</p></li>
<li><p>After launching the instance, follow the instructions in <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux">Connect to your instance</a> to connect to the instance.</p></li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You can also launch the instance from the AWS CLI; see <a class="reference internal" href="../../../general/setup/install-templates/inf1/launch-inf1-dlami-aws-cli.html#launch-inf1-dlami-aws-cli"><span class="std std-ref">AWS CLI commands to launch inf1 instances</span></a>.</p>
</div>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Set up a development environment:</dt><dd><ul class="simple">
<li><p>Enable or install MXNet-Neuron: <a class="reference internal" href="../setup/mxnet-install.html#install-neuron-mxnet"><span class="std std-ref">Install MXNet Neuron</span></a>.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl>
<dt>Run tutorial in Jupyter notebook:</dt><dd><ul>
<li><p>Follow instruction at <a class="reference internal" href="../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting"><span class="std std-ref">Setup Jupyter notebook</span></a> to:</p>
<ol class="arabic simple">
<li><p>Start the Jupyter Notebook on the instance</p></li>
<li><p>Run the Jupyter Notebook from your local browser</p></li>
</ol>
</li>
<li><p>Connect to the instance from the terminal, clone the Neuron GitHub repository to the Inf1 instance, and then change the working directory to the tutorial directory:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">aws</span><span class="o">/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">.</span><span class="n">git</span>
<span class="n">cd</span> <span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">/</span><span class="n">src</span><span class="o">/</span><span class="n">examples</span><span class="o">/</span><span class="n">mxnet</span>
</pre></div>
</div>
</li>
<li><p>Locate the tutorial notebook file (.ipynb file) under <code class="docutils literal notranslate"><span class="pre">aws-neuron-sdk/src/examples/mxnet</span></code></p></li>
<li><p>From your local browser, open the tutorial notebook from the menu and follow the instructions.</p></li>
</ul>
</dd>
</dl>
</li>
</ol>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
<div class="section">
</div>
</div>
</main>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
</body></html> | 2023-09-29T20:55:28.696Z |
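The MXNet tutorial row above boils down to cloning the Neuron SDK repository and opening a notebook under `aws-neuron-sdk/src/examples/mxnet`. A small stdlib helper to list the available tutorial notebooks could look like this (a sketch; only the path layout is taken from the instructions above, everything else is illustrative):

```python
from pathlib import Path

def list_notebooks(repo_root: str) -> list:
    """Return tutorial notebook filenames under src/examples/mxnet of the clone."""
    examples = Path(repo_root) / "src" / "examples" / "mxnet"
    # Path.glob on a missing directory yields nothing, so this is safe pre-clone.
    return sorted(p.name for p in examples.glob("*.ipynb"))

# After `git clone https://github.com/aws/aws-neuron-sdk.git`:
print(list_notebooks("aws-neuron-sdk"))
```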
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/ubuntu/torch-neuron-ubuntu20.rst.txt | ```
.. _setup-torch-neuron-u20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 20
====================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 20 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u20.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-u20.rst
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _setup-torch-neuron-u20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 20
====================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 20 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u20.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-u20.rst</pre></body></html> | 2023-09-29T20:55:29.589Z | |
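The ``program-output`` directives in the row above shell out to ``n2-helper.py``, which assembles install commands from a JSON manifest selected by OS, instance type, and framework. A minimal sketch of that lookup pattern (the manifest shape and commands here are hypothetical, not the real ``n2-manifest.json`` schema):

```python
import json

# Hypothetical manifest keyed by "os/instance"; the real n2-manifest.json differs.
MANIFEST = json.loads("""
{
  "ubuntu20/inf1": ["sudo apt-get update -y", "sudo apt-get install -y aws-neuronx-dkms"],
  "amazonlinux2/inf1": ["sudo yum update -y", "sudo yum install -y aws-neuronx-dkms"]
}
""")

def install_commands(os_name, instance):
    """Return the driver install commands for an OS/instance pair, or []."""
    return MANIFEST.get("{}/{}".format(os_name, instance), [])

print("\n".join(install_commands("ubuntu20", "inf1")))
```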
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/ubuntu/torch-neuron-ubuntu20-base-dlami.rst.txt | ```
.. _setup-torch-neuron-u20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 20 with DLAMI Base
==================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u20.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-u20.rst
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _setup-torch-neuron-u20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 20 with DLAMI Base
==================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u20.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-u20.rst</pre></body></html> | 2023-09-29T20:55:29.774Z | |
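Selecting the Base DLAMI in the row above amounts to filtering AMI names by the documented prefix and taking the newest date suffix. A hedged sketch with made-up AMI names (a real lookup would query ``aws ec2 describe-images`` instead):

```python
# Made-up AMI names following the documented
# "Deep Learning Base Neuron AMI (<OS>) <date>" pattern.
AMIS = [
    "Deep Learning Base Neuron AMI (Ubuntu 20.04) 20230512",
    "Deep Learning Base Neuron AMI (Ubuntu 20.04) 20230901",
    "Deep Learning Base Neuron AMI (Amazon Linux 2) 20230901",
]

def latest_base_dlami(names, prefix):
    """Newest AMI name with the prefix; YYYYMMDD suffixes sort correctly as strings."""
    matching = [n for n in names if n.startswith(prefix)]
    return max(matching) if matching else None

print(latest_base_dlami(AMIS, "Deep Learning Base Neuron AMI (Ubuntu 20.04)"))
```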
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/ubuntu/torch-neuron-ubuntu22.rst.txt | ```
.. _setup-torch-neuron-u22:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 22
=====================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 22 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u22.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u22.rst
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _setup-torch-neuron-u22:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Ubuntu 22
=====================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 22 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-u22.txt
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-u22.rst</pre></body></html> | 2023-09-29T20:55:29.988Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/amazon-linux/torch-neuron-al2-base-dlami.rst.txt | ```
.. _setup-torch-neuron-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Amazon Linux 2 with DLAMI Base
=======================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-al2.txt
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-update-al2.rst
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-al2.rst
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _setup-torch-neuron-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Amazon Linux 2 with DLAMI Base
=======================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-al2.txt
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-update-al2.rst
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-al2.rst</pre></body></html> | 2023-09-29T20:55:30.047Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/amazon-linux/torch-neuron-al2.rst.txt | ```
.. _setup-torch-neuron-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Amazon Linux 2
=========================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Amazon Linux 2 AMI(HVM) - Kernel 5.10
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-al2.txt
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-update-al2.rst
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-al2.rst
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _setup-torch-neuron-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Amazon Linux 2
=========================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Amazon Linux 2 AMI(HVM) - Kernel 5.10
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-torch-neuron-al2.txt
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-update-al2.rst
.. include :: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-al2.rst</pre></body></html> | 2023-09-29T20:55:30.126Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/amazon-linux/torch-neuronx-al2-pytorch-dlami.rst.txt | ```
.. _setup-torch-neuronx-al2-dlami-pytorch:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Amazon Linux 2 with PyTorch DLAMI
===========================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Neuron PyTorch 1.13 AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see an exact matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512 GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=trn1
.. dropdown:: Get Started With PyTorch DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 50
:end-line: 51
.. card:: Visit PyTorch Neuron (``torch-neuronx``) for Inference section
:link: inference-torch-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit PyTorch Neuron (``torch-neuronx``) for Training section
:link: training-torch-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-al2-dlami.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.135Z |
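The manual AMI-name search described in the launch steps above can also be scripted. The following is a minimal sketch using the AWS CLI (assumed to be installed and configured with credentials and a default region); the name filter is an assumption based on the AMI name quoted in the release notes and may need adjusting over time:

```shell
# Sketch: find the newest "Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2)"
# image ID with the AWS CLI instead of the console AMI search.
# Assumption: the --filters name pattern mirrors the AMI name given above.
find_neuron_dlami() {
    aws ec2 describe-images \
        --owners amazon \
        --filters 'Name=name,Values=Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2)*' \
        --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
        --output text
}

# Uncomment to run on a machine with AWS credentials configured:
# echo "Latest Neuron PyTorch DLAMI: $(find_neuron_dlami)"
```

The `sort_by(..., &CreationDate)[-1]` JMESPath query picks the most recently published match, which corresponds to the "<latest_date>" suffix in the console search.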
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/amazon-linux/torch-neuronx-al2-base-dlami.rst.txt | ```
.. _setup-torch-neuronx-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Amazon Linux 2 with DLAMI Base
========================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512 GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 2
:end-line: 3
.. include:: /general/quick-start/tab-inference-torch-neuronx-al2.txt
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-al2.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.229Z |
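The 512 GB minimum EBS volume recommended for Trn1 in the launch steps above can be set at launch time with the AWS CLI rather than in the console. A sketch, assuming a configured AWS CLI; the AMI ID and key-pair name are placeholders, and `/dev/xvda` is the usual root device name for Amazon Linux 2 AMIs but should be verified per AMI:

```shell
# Sketch: launch a trn1.2xlarge with a 512 GB gp3 root volume via the AWS CLI.
# Assumptions: ami_id and key_name are supplied by the caller (placeholders);
# the root device name /dev/xvda must match the AMI's block device mapping.
launch_trn1() {
    local ami_id="$1"      # e.g. the DLAMI Base image found in the AMI search
    local key_name="$2"    # an existing EC2 key pair name

    aws ec2 run-instances \
        --image-id "${ami_id}" \
        --instance-type trn1.2xlarge \
        --key-name "${key_name}" \
        --block-device-mappings \
            '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":512,"VolumeType":"gp3"}}]'
}

# Uncomment and fill in to run on a machine with AWS credentials configured:
# launch_trn1 ami-0123456789abcdef0 my-key-pair
```

Sizing the root volume at launch avoids having to grow the volume and filesystem after the first large model compilation fills the disk.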
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuron/amazon-linux/torch-neuron-al2-pytorch-dlami.rst.txt | ```
.. _setup-torch-neuron-al2-pytorch-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuron") Setup on Amazon Linux 2 with PyTorch DLAMI
==========================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-torch-neuron`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Neuron PyTorch 1.13 AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see an exact matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1
.. dropdown:: Get Started With PyTorch DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 98
:end-line: 99
.. card:: Visit PyTorch Neuron (``torch-neuron``) for Inference section
:link: inference-torch-neuron
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit PyTorch Neuron section for more
:class-body: sphinx-design-class-body-small
:link: neuron-pytorch
:link-type: ref
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-update-al2-dlami.rst
.. include:: /frameworks/torch/torch-neuron/setup/pytorch-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.273Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/ubuntu/torch-neuronx-ubuntu20-base-dlami.rst.txt | ```
.. _setup-torch-neuronx-ubuntu20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Ubuntu 20 with DLAMI Base
====================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512 GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-torch-neuronx-u20.txt
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-u20.rst
``` | | 2023-09-29T20:55:30.413Z |
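After the "Install Drivers and Tools" step above, it can be useful to confirm that the Neuron driver is actually loaded before installing framework packages. A minimal sketch; it assumes the driver registers a kernel module named ``neuron`` and exposes ``/dev/neuron*`` device nodes, and it will naturally fail on a non-Neuron host:

```shell
# Sketch: sanity-check the Neuron driver installation on a Trn1/Inf2 instance.
# Assumptions: kernel module name "neuron" and /dev/neuron* device nodes.
check_neuron_driver() {
    # The driver shows up as a kernel module named "neuron".
    if ! lsmod | grep -q '^neuron'; then
        echo "Neuron kernel module not loaded - re-run the driver install step." >&2
        return 1
    fi
    # One device node per NeuronDevice: /dev/neuron0, /dev/neuron1, ...
    ls /dev/neuron* 2>/dev/null || {
        echo "No /dev/neuron* device nodes found." >&2
        return 1
    }
}

# Uncomment to run on a Trn1/Inf2 instance:
# check_neuron_driver
```

Running this before the framework install separates driver problems (instance type, kernel, dkms build) from Python package problems, which otherwise surface later as opaque runtime errors.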
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/ubuntu/torch-neuronx-ubuntu20-pytorch-dlami.rst.txt | ```
.. _setup-torch-neuronx-ubuntu20-dlami-pytorch:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Ubuntu 20 with PyTorch DLAMI
======================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Neuron PyTorch 1.13 AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 20.04) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see an exact matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512 GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=trn1
.. dropdown:: Get Started With PyTorch DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 53
:end-line: 54
.. card:: Visit PyTorch Neuron (``torch-neuronx``) for Inference section
:link: inference-torch-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit PyTorch Neuron (``torch-neuronx``) for Training section
:link: training-torch-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-u20-dlami.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-u20.rst
``` | | 2023-09-29T20:55:30.697Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/amazon-linux/torch-neuronx-al2.rst.txt | ```
.. _setup-torch-neuronx-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Amazon Linux 2
=========================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select the Amazon Linux 2 AMI (HVM) - Kernel 5.10
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 2
:end-line: 3
.. include:: /general/quick-start/tab-inference-torch-neuronx-al2.txt
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-al2.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.725Z |
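The launch steps these pages repeat (pick a Trn1/Inf2 instance type, choose an AMI, raise the root EBS volume to at least 512 GB) can be sketched as the parameter set `boto3`'s ``run_instances`` call expects. This is an illustrative sketch, not part of the Neuron documentation: the AMI ID, key-pair name, and the helper function itself are hypothetical placeholders.

```python
# Hypothetical helper: build run_instances() parameters for a Trn1 launch
# matching the steps above. AMI ID and key-pair name are placeholders.

def trn1_launch_params(ami_id: str, key_name: str, volume_gb: int = 512) -> dict:
    """Parameters for launching one Trn1 instance with a >= 512 GB root volume."""
    return {
        "ImageId": ami_id,                 # look up the AMI for your region
        "InstanceType": "trn1.2xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,               # an existing EC2 key pair
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/xvda",     # root device name on Amazon Linux 2
            "Ebs": {"VolumeSize": volume_gb, "VolumeType": "gp3"},
        }],
    }

params = trn1_launch_params("ami-0123456789abcdef0", "my-key-pair")
# With credentials configured: boto3.client("ec2").run_instances(**params)
print(params["BlockDeviceMappings"][0]["Ebs"]["VolumeSize"])
```

The console walkthrough in the pages above remains the authoritative path; this only shows how the same choices map onto API parameters.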
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/ubuntu/torch-neuronx-ubuntu20.rst.txt | ```
.. _setup-torch-neuronx-ubuntu20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:width: 100%
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Ubuntu 20
===================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. include:: /general/setup/install-templates/trn1-ga-warning.txt
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select Ubuntu Server 20 AMI
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-torch-neuronx-u20.txt
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-u20.rst
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-install-prev-u20.rst
``` | | 2023-09-29T20:55:30.734Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/amazon-linux/tensorflow-neuronx-al2.rst.txt | ```
.. _setup-tensorflow-neuronx-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Amazon Linux 2
================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select the Amazon Linux 2 AMI (HVM) - Kernel 5.10
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 2
:end-line: 3
.. include:: /general/quick-start/tab-inference-tensorflow-neuronx-al2.txt
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-al2.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.741Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/pytorch/neuronx/ubuntu/torch-neuronx-ubuntu22.rst.txt | ```
.. _setup-torch-neuronx-ubuntu22:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
PyTorch Neuron ("torch-neuronx") Setup on Ubuntu 22
=====================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of PyTorch Neuron (``torch-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`pytorch-neuronx-main` for both Inference and Training.
.. include:: /general/setup/install-templates/trn1-ga-warning.txt
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select Ubuntu Server 22 AMI
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-torch-neuronx-u22.txt
.. include:: /frameworks/torch/torch-neuronx/setup/pytorch-update-u22.rst
``` | | 2023-09-29T20:55:30.791Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/amazon-linux/tensorflow-neuronx-al2-base-dlami.rst.txt | ```
.. _setup-tensorflow-neuronx-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Amazon Linux 2 with DLAMI Base
================================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from the "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name in Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 2
:end-line: 3
.. include:: /general/quick-start/tab-inference-tensorflow-neuronx-al2.txt
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-al2.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-al2.rst
``` | | 2023-09-29T20:55:30.823Z |
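The DLAMI pages above tell you to find the newest "Deep Learning Base Neuron AMI" by name and date. Programmatically, that step amounts to sorting AMI records by creation date (ISO-8601 timestamps sort lexicographically). A minimal sketch with fabricated example records — the AMI IDs and dates below are illustrative only, not real AMIs:

```python
# Sketch: pick the newest AMI from a describe_images-style list of records.
# The sample records are made-up; in practice they would come from
# an EC2 describe_images call filtered by AMI name.

def newest_ami(images):
    """Return the image record with the most recent CreationDate."""
    return max(images, key=lambda img: img["CreationDate"])

sample = [
    {"ImageId": "ami-aaa",
     "Name": "Deep Learning Base Neuron AMI (Amazon Linux 2) 2023-08-01",
     "CreationDate": "2023-08-01T00:00:00.000Z"},
    {"ImageId": "ami-bbb",
     "Name": "Deep Learning Base Neuron AMI (Amazon Linux 2) 2023-09-15",
     "CreationDate": "2023-09-15T00:00:00.000Z"},
]
print(newest_ami(sample)["ImageId"])  # ami-bbb
```

Checking the release-notes page, as the docs describe, gives the same answer; this just shows the selection rule behind "latest".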
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/ubuntu/tensorflow-neuronx-ubuntu20-base-dlami.rst.txt | ```
.. _setup-tensorflow-neuronx-u20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Ubuntu 20 with DLAMI Base
===========================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from the "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name in Community AMIs. Select the AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-tensorflow-neuronx-u20.txt
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-u20.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-u20.rst
``` | | 2023-09-29T20:55:30.949Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/ubuntu/tensorflow-neuronx-ubuntu20.rst.txt | ```
.. _setup-tensorflow-neuronx-u20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Ubuntu 20
===========================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links that will help you quickly get started with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. include:: /general/setup/install-templates/trn1-ga-warning.txt
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select Ubuntu Server 20 AMI
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance.
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-tensorflow-neuronx-u20.txt
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-u20.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-u20.rst
``` | | 2023-09-29T20:55:31.136Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/ubuntu/tensorflow-neuronx-ubuntu22.rst.txt | ```
.. _setup-tensorflow-neuronx-u22:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Ubuntu 22
=============================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. include:: /general/setup/install-templates/trn1-ga-warning.txt
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Select Ubuntu Server 22 AMI
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
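The console steps above can also be scripted with the AWS CLI; the following is a rough sketch only, and the AMI ID, key pair, and security group are placeholders you must replace with your own values:

.. code-block:: shell

    # Launch a trn1.2xlarge with a 512 GB gp3 root volume (all IDs below are placeholders)
    aws ec2 run-instances \
        --image-id ami-xxxxxxxxxxxxxxxxx \
        --instance-type trn1.2xlarge \
        --key-name my-key-pair \
        --security-group-ids sg-xxxxxxxxxxxxxxxxx \
        --block-device-mappings '[{"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 512, "VolumeType": "gp3"}}]'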
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 5
:end-line: 6
.. include:: /general/quick-start/tab-inference-tensorflow-neuronx-u22.txt
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-u22.rst
``` | 2023-09-29T20:55:31.160Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/mxnet/neuron/amazon-linux/mxnet-neuron-al2.rst.txt | ```
.. _setup-mxnet-neuron-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
MXNet Neuron ("mxnet-neuron") Setup on Amazon Linux 2
======================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of MXNet Neuron (``mxnet-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`install-neuron-mxnet`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Amazon Linux 2 AMI(HVM) - Kernel 5.10
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-mxnet-neuron-al2.txt
.. include:: /frameworks/mxnet-neuron/setup/mxnet-update-u20.rst
.. include:: /frameworks/mxnet-neuron/setup/mxnet-install-prev-al2.rst
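After the driver and tools are installed, a quick sanity check from the shell can confirm the setup; ``neuron-ls`` ships with the Neuron tools package, and the kernel module is named ``neuron``:

.. code-block:: shell

    # List the Inferentia devices visible on this inf1 instance
    neuron-ls
    # Confirm the Neuron kernel driver is loaded
    lsmod | grep neuron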
``` | 2023-09-29T20:55:31.319Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/ubuntu/tensorflow-neuronx-ubuntu20-tensorflow-dlami.rst.txt | ```
.. _setup-tensorflow-neuronx-u20-dlami-tensorflow:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Ubuntu 20 with DLAMI TensorFlow
=================================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`tensorflow-neuronx-main` for both Inference and Training.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `Deep Learning AMI Neuron TensorFlow 2.10 <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-tensorflow-2-10-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see an exactly matching AMI in Community AMIs. Select that AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
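Instead of searching the console, the latest matching DLAMI can also be looked up with the AWS CLI; this is a sketch that uses the AMI name quoted above as a filter:

.. code-block:: shell

    # Print the ImageId and Name of the newest matching Neuron TensorFlow DLAMI
    aws ec2 describe-images --owners amazon \
        --filters "Name=name,Values=Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04)*" \
        --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
        --output text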
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=trn1
.. dropdown:: Get Started With TensorFlow DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 95
:end-line: 96
.. card:: Visit TensorFlow Neuron(``tensorflow-neuronx``) for Inference section
:link: inference-tensorflow-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit TensorFlow Neuron section for more
:class-body: sphinx-design-class-body-small
:link: tensorflow-neuron-main
:link-type: ref
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-u20-dlami.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-u20.rst
``` | 2023-09-29T20:55:31.402Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/mxnet/neuron/amazon-linux/mxnet-neuron-al2-base-dlami.rst.txt | ```
.. _setup-mxnet-neuron-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
MXNet Neuron ("mxnet-neuron") Setup on Amazon Linux 2
=========================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of MXNet Neuron (``mxnet-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`install-neuron-mxnet`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI in Community AMIs. Select that AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-mxnet-neuron-al2.txt
.. include:: /frameworks/mxnet-neuron/setup/mxnet-update-u20.rst
.. include:: /frameworks/mxnet-neuron/setup/mxnet-install-prev-al2.rst
``` | 2023-09-29T20:55:31.433Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/mxnet/neuron/ubuntu/mxnet-neuron-ubuntu20-base-dlami.rst.txt | ```
.. _setup-mxnet-neuron-u20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
MXNet Neuron ("mxnet-neuron") Setup on Ubuntu 20
================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of MXNet Neuron (``mxnet-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`install-neuron-mxnet`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI in Community AMIs. Select that AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-mxnet-neuron-u20.txt
.. include:: /frameworks/mxnet-neuron/setup/mxnet-update-u20.rst
.. include:: /frameworks/mxnet-neuron/setup/mxnet-install-prev-u20.rst
``` | 2023-09-29T20:55:31.471Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuronx/amazon-linux/tensorflow-neuronx-al2-tensorflow-dlami.rst.txt | ```
.. _setup-tensorflow-neuronx-al2-dlami-tensorflow:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuronx") Setup on Amazon Linux 2 with DLAMI TensorFlow
=======================================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuronx``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`tensorflow-neuronx-main`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing, see: `Trn1 web page <https://aws.amazon.com/ec2/instance-types/trn1/>`_, `Inf2 web page <https://aws.amazon.com/ec2/instance-types/inf2/>`_
* Check for the latest version of the `Deep Learning AMI Neuron TensorFlow 2.10 <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-tensorflow-2-10-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning AMI Neuron TensorFlow 2.10 (Amazon Linux 2) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see an exactly matching AMI in Community AMIs. Select that AMI and use it to launch the instance.
* When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Update Neuron Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=driver_runtime_tools --framework=pytorch --framework-version=1.13.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=trn1
.. dropdown:: Get Started With TensorFlow DLAMI
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
.. include:: /src/helperscripts/installationScripts/python_instructions.txt
:start-line: 92
:end-line: 93
.. card:: Visit TensorFlow Neuron(``tensorflow-neuronx``) for Inference section
:link: inference-tensorflow-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: Visit TensorFlow Neuron section for more
:class-body: sphinx-design-class-body-small
:link: tensorflow-neuron-main
:link-type: ref
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-update-al2-dlami.rst
.. include:: /frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-install-prev-al2.rst
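As a quick smoke test after setup, you can confirm that the Neuron TensorFlow packages import cleanly; note that the virtual environment path below is an assumption and varies between DLAMI releases:

.. code-block:: shell

    # Activate the preinstalled TensorFlow Neuron environment (exact path differs per DLAMI release)
    source /opt/aws_neuron_venv_tensorflow/bin/activate
    # Verify that TensorFlow and the Neuron plugin import without errors
    python -c "import tensorflow as tf, tensorflow_neuronx; print(tf.__version__)"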
``` | 2023-09-29T20:55:31.577Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuron/ubuntu/tensorflow-neuron-ubuntu20.rst.txt | ```
.. _setup-tensorflow-neuron-u20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuron") Setup on Ubuntu 20
============================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly get started with a fresh installation of :ref:`setup-tensorflow-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type in the EC2 console, make sure to select the correct instance type.
* For more information about instance sizes and pricing, see the `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select an Ubuntu Server 20 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
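
The exact commands are rendered from the helper script above; on Ubuntu 20 they typically resemble the following sketch (repository URL and package names are taken from the public AWS Neuron apt repository; the versions shown on the rendered page take precedence):

.. code-block:: bash

   # Configure the AWS Neuron apt repository (Ubuntu 20.04, codename "focal")
   . /etc/os-release
   sudo tee /etc/apt/sources.list.d/neuron.list > /dev/null <<EOF
   deb https://apt.repos.neuron.amazonaws.com ${VERSION_CODENAME} main
   EOF
   wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -

   # Install the Neuron driver (DKMS) and tools
   sudo apt-get update -y
   sudo apt-get install -y aws-neuronx-dkms aws-neuronx-tools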
.. include:: /general/quick-start/tab-inference-tensorflow-neuron-u20.txt
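
The included tab amounts to installing ``tensorflow-neuron`` from the AWS Neuron pip repository, typically inside a Python virtual environment; a sketch, assuming the standard public Neuron pip index (the ``[cc]`` extra pulls in the Neuron compiler; pin versions per the rendered instructions):

.. code-block:: bash

   python3 -m venv aws_neuron_venv_tensorflow
   . aws_neuron_venv_tensorflow/bin/activate
   python3 -m pip install -U pip
   python3 -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
   python3 -m pip install "tensorflow-neuron[cc]"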
.. include:: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update-u20.rst
.. include:: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install-prev-u20.rst
```
TensorFlow Setup Guide for Inf2 & Trn1 — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuronx/setup/index.html#tensorflow-neuronx-main | # TensorFlow Setup Guide for Inf2 & Trn1 — AWS Neuron Documentation
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
## TensorFlow Setup Guide for Inf2 & Trn1
- [Fresh install](tensorflow-neuronx-install.html)
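
The fresh-install page linked above amounts to installing `tensorflow-neuronx` from the AWS Neuron pip repository; a minimal sketch, assuming the standard public Neuron pip index (consult the linked page for version pins and prerequisites):

```shell
# Install tensorflow-neuronx (Inf2/Trn1) from the AWS Neuron pip repository,
# typically inside a fresh Python virtual environment
python3 -m venv aws_neuron_venv_tensorflow
. aws_neuron_venv_tensorflow/bin/activate
python3 -m pip install -U pip
python3 -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
python3 -m pip install tensorflow-neuronx
```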
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TensorFlow Setup Guide for Inf2 & Trn1 — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../../_static/pygments.css">
<link rel="stylesheet" href="../../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../../" id="documentation_options" src="../../../../_static/documentation_options.js"></script>
<script src="../../../../_static/jquery.js"></script>
<script src="../../../../_static/underscore.js"></script>
<script src="../../../../_static/doctools.js"></script>
<script src="../../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../../_static/contentui.js"></script>
<script src="../../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../../genindex.html">
<link rel="search" title="Search" href="../../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/tensorflow/tensorflow-neuronx/setup/index", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
TensorFlow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
TensorFlow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MXNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in PyTorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in PyTorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple MLP training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
SageMaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to SageMaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with SageMaker Neo and Deploy on SageMaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron SageMaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
<div class="section" id="tensorflow-setup-guide-for-inf2-trn1">
<span id="tensorflow-neuronx-main"></span><span id="tensorflow-neuron-setup"></span><h1>TensorFlow Setup Guide for Inf2 & Trn1<a class="headerlink" href="#tensorflow-setup-guide-for-inf2-trn1" title="Permalink to this headline">#</a></h1>
<div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="tensorflow-neuronx-install.html">Fresh install</a></li>
</ul>
</div>
</div>
</main>
<footer class="footer"><p>By AWS<br>© Copyright 2023, Amazon.com.</p></footer>
</body></html> | 2023-09-29T20:55:31.781Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuron/ubuntu/tensorflow-neuron-ubuntu20-base-dlami.rst.txt | ```
.. _setup-tensorflow-neuron-u20-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuron") Setup on Ubuntu 20 with DLAMI Base
==========================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly start with a fresh installation of :ref:`setup-tensorflow-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Ubuntu 20.04) <latest_date>" from the "AMI Name:" section.
* Search for the copied AMI name in the AMI search; you should see a matching AMI in Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
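Alternatively, the AMI lookup can be scripted with the AWS CLI instead of the console search (this requires configured AWS credentials). The name filter below is an assumption based on the AMI naming above, so verify it against the release notes before relying on it:

.. code:: bash

   # Sketch: print the newest matching DLAMI Base AMI ID (name pattern assumed)
   aws ec2 describe-images \
       --owners amazon \
       --filters 'Name=name,Values=Deep Learning Base Neuron AMI (Ubuntu 20.04)*' \
       --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
       --output text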
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-tensorflow-neuron-u20.txt
.. include:: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update-u20.rst
.. include:: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install-prev-u20.rst
``` |  | 2023-09-29T20:55:31.799Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/tutorials/build-run-neuron-container.rst.txt | ```
.. _how-to-build-neuron-container:
Tutorial: How to Build and Run a Neuron Container
=================================================
Introduction
------------
This document explains how to build a Neuron Container using an existing Dockerfile.
Pre-requisites
--------------
#. Docker version 18 or newer is configured according to :ref:`tutorial-docker-env-setup`
#. Inf1/Trn1 instance with available :ref:`Neuron Devices<container-devices>`
#. If running a serving application such as tensorflow-model-server, torchserve or multi-model-server, make sure the appropriate ports that the server listens on are exposed, using EXPOSE in the Dockerfile or the ``-p 80:8080`` argument on the ``docker run`` command.
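As an illustration of the port prerequisite above, a serving image can declare the port in its Dockerfile and map it at run time. A minimal sketch (the image and tag names are hypothetical; 8080 is the default inference port of several model servers):

.. code:: dockerfile

   # Hypothetical serving image layered on the Neuron application image
   FROM neuron-container:pytorch
   # Document the port the model server listens on inside the container
   EXPOSE 8080

Starting it with ``docker run -p 80:8080 <image>`` then maps container port 8080 to host port 80.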
.. _running-application-container:
Build and Run the Application Container
---------------------------------------
Follow the steps below for creating neuron application containers.
- Build a docker image using the provided dockerfile: :ref:`libmode-dockerfile` for Inf1 or :ref:`trainium-dlc-dockerfile` for Trn1 (for Trn1 the dockerfile also needs the MLP training script found at :ref:`mlp-train`).
.. code:: bash
docker build . -f Dockerfile.pt -t neuron-container:pytorch
- Run the container locally:
.. code:: bash
docker run -it --name pt17 --device=/dev/neuron0 neuron-container:pytorch neuron-ls
Expected result for Inf1:
::
+--------------+---------+--------+-----------+-----------+------+------+
| PCI BDF | LOGICAL | NEURON | MEMORY | MEMORY | EAST | WEST |
| | ID | CORES | CHANNEL 0 | CHANNEL 1 | | |
+--------------+---------+--------+-----------+-----------+------+------+
| 0000:00:1f.0 | 0 | 4 | 4096 MB | 4096 MB | 0 | 0 |
+--------------+---------+--------+-----------+-----------+------+------+
Expected result for Trn1:
::
+--------+--------+--------+-----------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI |
| DEVICE | CORES | MEMORY | DEVICES | BDF |
+--------+--------+--------+-----------+---------+
| 0 | 4 | 8 GB | 1 | 00:1f.0 |
+--------+--------+--------+-----------+---------+
.. note::
If the environment variable AWS_NEURON_VISIBLE_DEVICES is to be used instead of the --device option above,
the OCI hook needs to be installed by following the instructions in :ref:`tutorial-oci-hook`
Important to know
-----------------
.. _container-devices:
Devices
^^^^^^^
- The docker native way is to use --device /dev/neuron# for each of the Neuron Devices intended to be passed. When using --device option ALL/all is not supported.
.. code:: bash
docker run --device=/dev/neuron0 --device=/dev/neuron1
- If you install the aws-neuronx-oci-hook package, you will have an OCI hook that also supports use of a container environment variable AWS_NEURON_VISIBLE_DEVICES=<ALL | csv of devices>, which is intended to make things easier for multi-device scenarios. Following are some examples. For setting up the OCI hook, please refer to :ref:`oci neuron hook <tutorial-oci-hook>`
.. code:: bash
docker run -e "AWS_NEURON_VISIBLE_DEVICES=0,1"
docker run -e "AWS_NEURON_VISIBLE_DEVICES=ALL"
- In a Kubernetes environment, the neuron device plugin is used to expose Neuron devices to the containers in the pod. The number of devices can be adjusted using the *aws.amazon.com/neurondevice* resource in the pod specification. Refer to :ref:`K8s setup <tutorial-k8s-env-setup-for-neuron>` for more details
.. code:: bash
resources:
limits:
aws.amazon.com/neurondevice: 1
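Expanded into a complete manifest, the same device request looks like the following minimal pod spec sketch (pod, container, and image names are illustrative):

.. code:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: neuron-app                      # illustrative name
   spec:
     containers:
       - name: app
         image: neuron-container:pytorch   # illustrative image
         resources:
           limits:
             aws.amazon.com/neurondevice: 1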
.. note::
Only the number of devices can be specified.
The neuron device plugin alone does not guarantee that the allocated devices are
contiguous. Make sure to run the neuron scheduler extension :ref:`neuron-k8-scheduler-ext`,
which ensures that contiguous devices are allocated to the containers
- Multiple container applications running in the same host can share the devices but the cores cannot be shared. This is similar to running multiple applications in the host.
- In the Kubernetes environment the devices cannot be shared by multiple containers in the pod.
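When several devices must be passed, the per-device ``--device`` arguments can be generated with a small helper. A minimal sketch, assuming the devices appear as ``/dev/neuron*`` (the ``neuron_device_args`` function is hypothetical):

.. code:: bash

   # Print one --device argument per Neuron device found (hypothetical helper;
   # scans /dev by default, or a directory given as the first argument)
   neuron_device_args() {
       local dev_dir="${1:-/dev}"
       local d
       for d in "$dev_dir"/neuron*; do
           [ -e "$d" ] && printf -- '--device=%s\n' "$d"
       done
   }

   # Example: pass every device on the host to the container
   # docker run $(neuron_device_args) neuron-container:pytorch neuron-ls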
.. _container-cores:
Cores
^^^^^
Each neuron device has multiple cores. The cores allocated to a process/container can be controlled by
the environment variables NEURON_RT_VISIBLE_CORES and NEURON_RT_NUM_CORES. Please refer to :ref:`nrt-configuration` for more details.
- The docker native way is to use --device /dev/neuron# for each of the Neuron Devices intended to be passed. Add --env NEURON_RT_VISIBLE_CORES=1,2 to allocate cores 1 and 2 to this container. For example, on inf1.24xlarge with 64 cores, if we want to use cores 51 and 52, the appropriate devices and NEURON_RT_VISIBLE_CORES need to be passed. With 4 cores in each device, core 51 is in device 12 and core 52 is in device 13:
.. code:: bash
docker run --device=/dev/neuron12 --device=/dev/neuron13 --env NEURON_RT_VISIBLE_CORES=51,52
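The core-to-device arithmetic above can be sketched as a small helper (the ``cores_to_devices`` function is hypothetical; each Inf1 device exposes 4 cores):

.. code:: bash

   # Print the Neuron device ID for each requested core, skipping duplicates.
   # First argument is the number of cores per device; the rest are core IDs.
   cores_to_devices() {
       local per_dev="$1"; shift
       local seen="" core dev
       for core in "$@"; do
           dev=$(( core / per_dev ))
           case " $seen " in
               *" $dev "*) ;;                        # device already listed
               *) seen="$seen $dev"; echo "$dev" ;;
           esac
       done
   }

   cores_to_devices 4 51 52    # prints 12 and 13

The printed IDs are exactly the ``/dev/neuron12`` and ``/dev/neuron13`` devices passed with --device in the command above.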
- In a Kubernetes environment, the neuron device plugin is used to expose Neuron cores to the containers in the pod. The number of cores can be adjusted using the *aws.amazon.com/neuroncore* resource in the pod specification. Refer to :ref:`K8s setup <tutorial-k8s-env-setup-for-neuron>` for more details.
.. code:: bash
resources:
limits:
aws.amazon.com/neuroncore: 1
.. note::
Only the number of cores can be specified.
The neuron device plugin alone does not guarantee that the allocated cores are
contiguous. Make sure to run the neuron scheduler extension :ref:`neuron-k8-scheduler-ext`,
which ensures that contiguous cores are allocated to the containers
- Multiple container applications running in the same host cannot share the cores. This is similar to running multiple applications in the host.
- In the Kubernetes environment the cores cannot be shared by multiple containers in the pod.
``` |  | 2023-09-29T20:55:31.807Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/tutorials/tutorial-docker-env-setup.rst.txt | ```
.. _tutorial-docker-env-setup:
Tutorial: Docker environment setup
==================================
Introduction
------------
A Neuron application can be deployed using docker containers. This
tutorial describes how to configure docker to expose Inferentia/Trainium devices
to containers.
.. tab-set::
.. tab-item:: Training
.. dropdown:: Install Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
# Configure Linux for Neuron repository updates
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
# Update OS packages
sudo yum update -y
# Install OS headers
sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
# Remove preinstalled packages and Install Neuron Driver and Runtime
sudo yum remove aws-neuron-dkms -y
sudo yum remove aws-neuronx-dkms -y
sudo yum install aws-neuronx-dkms-2.* -y
# Install EFA driver (only required for multi-instance training)
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
wget https://efa-installer.amazonaws.com/aws-efa-installer.key && gpg --import aws-efa-installer.key
cat aws-efa-installer.key | gpg --fingerprint
wget https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz.sig && gpg --verify ./aws-efa-installer-latest.tar.gz.sig
tar -xvf aws-efa-installer-latest.tar.gz
cd aws-efa-installer && sudo bash efa_installer.sh --yes
cd
sudo rm -rf aws-efa-installer-latest.tar.gz aws-efa-installer
.. dropdown:: Install Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
sudo yum install -y docker
sudo usermod -aG docker $USER
Log out and log back in to refresh group membership.
.. dropdown:: Verify Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
docker run hello-world
Expected result:
::
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
.. dropdown:: Verify Neuron Component
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
Once the environment is set up, a container can be started with
--device=/dev/neuron# to specify the desired set of Inferentia/Trainium devices to be
exposed to the container. To find out the available Neuron devices on
your instance, use the command ``ls /dev/neuron*``.
When running neuron-ls inside a container, you will only see the set of
exposed Trainium devices. For example:
.. code:: bash
docker run --device=/dev/neuron0 neuron-test neuron-ls
Would produce the following output in trn1.32xlarge:
::
+--------+--------+--------+---------+
| NEURON | NEURON | NEURON | PCI |
| DEVICE | CORES | MEMORY | BDF |
+--------+--------+--------+---------+
| 0 | 2 | 32 GB | 10:1c.0 |
+--------+--------+--------+---------+
.. tab-item:: Inference
.. dropdown:: Install Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
# Configure Linux for Neuron repository updates
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
# Update OS packages
sudo yum update -y
################################################################################################################
# To install or update to Neuron versions 1.19.1 and newer from previous releases:
# - DO NOT skip 'aws-neuron-dkms' install or upgrade step, you MUST install or upgrade to latest Neuron driver
################################################################################################################
# Install OS headers
sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
# Install Neuron Driver
sudo yum install aws-neuron-dkms -y
####################################################################################
# Warning: If Linux kernel is updated as a result of OS package update
# Neuron driver (aws-neuron-dkms) should be re-installed after reboot
####################################################################################
.. dropdown:: Install Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
sudo yum install -y docker
sudo usermod -aG docker $USER
Log out and log back in to refresh group membership.
.. dropdown:: Verify Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
docker run hello-world
Expected result:
::
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
.. dropdown:: Verify Neuron Component
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
Once the environment is set up, a container can be started with
--device=/dev/neuron# to specify the desired set of Inferentia/Trainium devices to be
exposed to the container. To find out the available Neuron devices on
your instance, use the command ``ls /dev/neuron*``.
When running neuron-ls inside a container, you will only see the set of
exposed Inferentia devices. For example:
.. code:: bash
docker run --device=/dev/neuron0 neuron-test neuron-ls
Would produce the following output in inf1.xlarge:
::
+--------------+---------+--------+-----------+-----------+------+------+
| PCI BDF | LOGICAL | NEURON | MEMORY | MEMORY | EAST | WEST |
| | ID | CORES | CHANNEL 0 | CHANNEL 1 | | |
+--------------+---------+--------+-----------+-----------+------+------+
| 0000:00:1f.0 | 0 | 4 | 4096 MB | 4096 MB | 0 | 0 |
+--------------+---------+--------+-----------+-----------+------+------+
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _tutorial-docker-env-setup:
Tutorial Docker environment setup
=================================
Introduction
------------
A Neuron application can be deployed using docker containers. This
tutorial describes how to configure docker to expose Inferentia/Trainium devices
to containers.
.. tab-set::
.. tab-item:: Training
.. dropdown:: Install Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
# Configure Linux for Neuron repository updates
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
# Update OS packages
sudo yum update -y
# Install OS headers
sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
# Remove preinstalled packages and Install Neuron Driver and Runtime
sudo yum remove aws-neuron-dkms -y
sudo yum remove aws-neuronx-dkms -y
sudo yum install aws-neuronx-dkms-2.* -y
# Install EFA Driver(only required for multiinstance training)
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
wget https://efa-installer.amazonaws.com/aws-efa-installer.key && gpg --import aws-efa-installer.key
cat aws-efa-installer.key | gpg --fingerprint
wget https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz.sig && gpg --verify ./aws-efa-installer-latest.tar.gz.sig
tar -xvf aws-efa-installer-latest.tar.gz
cd aws-efa-installer && sudo bash efa_installer.sh --yes
cd
sudo rm -rf aws-efa-installer-latest.tar.gz aws-efa-installer
.. dropdown:: Install Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
sudo yum install -y docker.io
sudo usermod -aG docker $USER
Logout and log back in to refresh membership.
.. dropdown:: Verify Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
docker run hello-world
Expected result:
::
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
.. dropdown:: Verify Neuron Component
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
Once the environment is setup, a container can be started with
--device=/dev/neuron# to specify desired set of Inferentia/Trainium devices to be
exposed to the container. To find out the available neuron devices on
your instance, use the command ``ls /dev/neuron*``.
When running neuron-ls inside a container, you will only see the set of
exposed Trainiums. For example:
.. code:: bash
docker run --device=/dev/neuron0 neuron-test neuron-ls
Would produce the following output in trn1.32xlarge:
::
+--------+--------+--------+---------+
| NEURON | NEURON | NEURON | PCI |
| DEVICE | CORES | MEMORY | BDF |
+--------+--------+--------+---------+
| 0 | 2 | 32 GB | 10:1c.0 |
+--------+--------+--------+---------+
.. tab-item:: Inference
.. dropdown:: Install Drivers
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
# Configure Linux for Neuron repository updates
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
# Update OS packages
sudo yum update -y
################################################################################################################
# To install or update to Neuron versions 1.19.1 and newer from previous releases:
# - DO NOT skip 'aws-neuron-dkms' install or upgrade step, you MUST install or upgrade to latest Neuron driver
################################################################################################################
# Install OS headers
sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
# Install Neuron Driver
sudo yum install aws-neuron-dkms -y
####################################################################################
# Warning: If Linux kernel is updated as a result of OS package update
# Neuron driver (aws-neuron-dkms) should be re-installed after reboot
####################################################################################
.. dropdown:: Install Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
sudo yum install -y docker
sudo usermod -aG docker $USER
Log out and log back in to refresh group membership.
.. dropdown:: Verify Docker
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. code:: bash
docker run hello-world
Expected result:
::
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
.. dropdown:: Verify Neuron Component
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
Once the environment is set up, a container can be started with
--device=/dev/neuron# to specify the desired set of Inferentia/Trainium devices to be
exposed to the container. To find out the available neuron devices on
your instance, use the command ``ls /dev/neuron*``.
When running neuron-ls inside a container, you will only see the set of
exposed Inferentia devices. For example:
.. code:: bash
docker run --device=/dev/neuron0 neuron-test neuron-ls
This would produce the following output on inf1.xlarge:
::
+--------------+---------+--------+-----------+-----------+------+------+
| PCI BDF | LOGICAL | NEURON | MEMORY | MEMORY | EAST | WEST |
| | ID | CORES | CHANNEL 0 | CHANNEL 1 | | |
+--------------+---------+--------+-----------+-----------+------+------+
| 0000:00:1f.0 | 0 | 4 | 4096 MB | 4096 MB | 0 | 0 |
+--------------+---------+--------+-----------+-----------+------+------+
</pre></body></html> | 2023-09-29T20:55:31.842Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/mxnet/neuron/ubuntu/mxnet-neuron-ubuntu20.rst.txt | ```
.. _setup-mxnet-neuron-u20:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
MXNet Neuron ("mxnet-neuron") Setup on Ubuntu 20
=================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of MXNet Neuron (``mxnet-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly get started with a fresh installation of :ref:`install-neuron-mxnet`.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 20 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
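The console steps above can also be scripted with the AWS CLI. A hedged sketch — the AMI ID, key pair, and security group below are placeholders to fill in, not values taken from this guide:

```shell
# Compose the launch command; it is echoed rather than executed so the
# placeholder values can be replaced first.
launch_cmd="aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type inf1.xlarge \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0"
echo "$launch_cmd"
```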
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-mxnet-neuron-u20.txt
.. include:: /frameworks/mxnet-neuron/setup/mxnet-update-u20.rst
.. include:: /frameworks/mxnet-neuron/setup/mxnet-install-prev-u20.rst
``` | 2023-09-29T20:55:31.850Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/docker-example/inference/Dockerfile-inference-dlc.rst.txt | ```
.. _inference-dlc-dockerfile:
DLC sample Dockerfile for Application Container
==================================================
.. literalinclude:: Dockerfile-inference-dlc
:linenos:
``` | 2023-09-29T20:55:31.955Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/tutorials/ks8-neuron-scheduler-flow.rst.txt | ```
.. _k8s-neuron-scheduler-flow:
Neuron Scheduler Extension Flow Diagram
---------------------------------------
::
+----------------------------+
| POD Manifest |
| with Request |
| aws.amazon.com/neuroncore:2|
| |
| |
2 +-------------+--------------+
+--------------------------------+ |
| | |
| | | 3
+------------------------------+-----+ | |
| Kubelet in INF1/TRN1 Node| | |
| +<-----------+ | |
+-----+---------------------+--------+ | +-----v-----------v--------------+
| ^ | | Kube-Scheduler |
| | | | |
| | | +--^------+---------------+------+
9 | 1 | | | | |
| | 8| 5| |4 |
| | | | | |
| | | | | |6
v | | | | |
+-----+---------------------+--------+ | +--+------v---------------v------+
| neuron-device-plugin | +-------+ neuron|scheduler|ext |
| in INF1/TRN1 node | +---------------------+----------+
+----+----------------------+--------+ |
| | |7
| |10 |
| | v
11| | +---------+-------+
| | |POD Manifest: |
| | |Annotation: |
| | |NEURON_CORES:2,3 |
v +---------------------------------------->+ |
--device=/dev/neuron1 --env NEURON_RT_VISIBLE_CORES=2,3 | |
| |
+-----------------+
1. neuron-device-plugin returns the list of Neuron cores/devices to kubelet
2. Kubelet advertises the Core/Device list to K8s API server (in turn to kube-scheduler)
3. POD Request for neuron cores/devices [Kube-Scheduler picks up the POD creation request]
4. kube-scheduler calls the neuron-scheduler-extn filter function with list of nodes and POD Specification
5. neuron-scheduler-extn scans through the nodes, filters out nodes with non-contiguous
cores/devices, and returns the nodes that are capable of supporting the given POD specification
6. kube-scheduler calls the neuron-scheduler-extn bind function with pod and node
7. neuron-scheduler-extn updates the POD annotation with allocated neuron core/device Ids (contiguous)
8. neuron-scheduler-extn sends the bind request to kubelet of the selected node
9. Kubelet calls the Alloc function of the neuron-device-plugin
10. neuron-device-plugin queries the POD Annotation for allocated core/device Ids
11. neuron-device-plugin exports the devices & visible cores to the container runtime
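The request in step 3 corresponds to a pod manifest like the hedged sketch below — the pod and image names are placeholders; only the ``aws.amazon.com/neuroncore`` resource name is taken from the flow above:

```shell
# Write a minimal pod manifest requesting two NeuronCores, then submit it
# with: kubectl apply -f neuron-pod.yaml
cat > neuron-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: neuron-app             # placeholder pod name
spec:
  containers:
  - name: app
    image: my-neuron-image     # placeholder image
    resources:
      limits:
        aws.amazon.com/neuroncore: 2
EOF
```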
``` | 2023-09-29T20:55:31.966Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/docker-example/inference/torchserve-neuron.rst.txt | ```
.. _torchserve-neuron:
Torchserve Example
==================
.. literalinclude:: torchserve-neuron.sh
:linenos:
``` | 2023-09-29T20:55:31.972Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuron/ubuntu/tensorflow-neuron-ubuntu22.rst.txt | ```
.. _setup-tensorflow-neuron-u22:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuron") Setup on Ubuntu 22
============================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly get started with a fresh installation of :ref:`setup-tensorflow-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Ubuntu Server 22 AMI
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-tensorflow-neuron-u22.txt
.. include:: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update-u22.rst
``` | 2023-09-29T20:55:32.002Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/docker-example/inference/config-properties.rst.txt | ```
.. _torchserve-config-properties:
Torchserve config.properties example
====================================
.. literalinclude:: config.properties
:linenos:
``` | 2023-09-29T20:55:32.045Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuron/amazon-linux/tensorflow-neuron-al2.rst.txt | ```
.. _setup-tensorflow-neuron-al2:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuron") Setup on Amazon Linux 2
===============================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly get started with a fresh installation of :ref:`setup-tensorflow-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Select Amazon Linux 2 AMI(HVM) - Kernel 5.10
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-tensorflow-neuron-al2.txt
.. include :: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update-al2.rst
.. include :: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install-prev-u20.rst
``` | 2023-09-29T20:55:32.050Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/neuron-setup/tensorflow/neuron/amazon-linux/tensorflow-neuron-al2-base-dlami.rst.txt | ```
.. _setup-tensorflow-neuron-al2-base-dlami:
.. card:: Select a Different Framework or Platform for Setup
:link: setup-guide-index
:link-type: ref
:class-body: sphinx-design-class-title-small
TensorFlow Neuron ("tensorflow-neuron") Setup on Amazon Linux 2 with DLAMI Base
===============================================================================
.. contents:: Table of contents
:local:
:depth: 2
Get Started with Latest Release of TensorFlow Neuron (``tensorflow-neuron``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section provides links to help you quickly get started with a fresh installation of :ref:`setup-tensorflow-neuron` for Inference.
.. dropdown:: Launch the Instance
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* Please follow the instructions at `launch an Amazon EC2 Instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance>`_ to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
* To get more information about instance sizes and pricing see: `Inf1 web page <https://aws.amazon.com/ec2/instance-types/inf1/>`_
* Check for the latest version of the `DLAMI Base AMI <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`_ and copy the AMI name that starts with "Deep Learning Base Neuron AMI (Amazon Linux 2) <latest_date>" from "AMI Name:" section
* Search for the copied AMI name in the AMI search; you should see a matching AMI with that name under Community AMIs. Select the AMI and use it to launch the instance.
* After launching the instance, follow the instructions in `Connect to your instance <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html>`_ to connect to the instance
.. dropdown:: Install Drivers and Tools
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami --category=driver_runtime_tools
.. include:: /general/quick-start/tab-inference-tensorflow-neuron-al2.txt
.. include :: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update-al2.rst
.. include :: /frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install-prev-u20.rst
``` | 2023-09-29T20:55:32.056Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/docker-example/training/mlp.rst.txt | ```
.. _mlp-train:
Simple MLP train script
========================
Save the following contents as mlp_train.py
.. literalinclude:: mlp_train.py
:linenos:
Save the following contents as model.py
.. literalinclude:: model.py
:linenos:
``` | 2023-09-29T20:55:32.076Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/tutorials/tutorial-oci-hook.rst.txt | ```
.. _tutorial-oci-hook:
Tutorial Docker Neuron OCI Hook Setup
=====================================
Introduction
------------
A Neuron application can be deployed using docker containers. Neuron devices
are exposed to the containers using the --device option in the docker run command.
The Docker runtime (runc) does not yet support an ALL option to expose all Neuron
devices to the container. To do that, the environment variable
"AWS_NEURON_VISIBLE_DEVICES=ALL" can be used.
For the above environment variable to be used, the oci neuron hook has to be
installed/configured.
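Once the hook is installed and a runtime configured (the sections below), usage looks like this hedged sketch — the image name is a placeholder; only the variable name and its ``ALL`` value come from this tutorial:

```shell
# Compose the run command; echoed so the placeholder image can be swapped
# in before it is actually executed.
run_cmd="docker run -e AWS_NEURON_VISIBLE_DEVICES=ALL my-neuron-image neuron-ls"
echo "$run_cmd"
```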
Install oci-add-hooks dependency on the Linux host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. important::
This step should run on the Linux host and not inside the container.
`oci-add-hooks <https://github.com/awslabs/oci-add-hooks>`__ is an OCI
runtime with the sole purpose of injecting OCI prestart, poststart, and
poststop hooks into a container config.json before passing along to an
OCI compatible runtime. oci-add-hooks is used to inject a hook that
exposes Inferentia devices to the container.
.. code:: bash
sudo apt install -y golang && \
export GOPATH=$HOME/go && \
go get github.com/joeshaw/json-lossless && \
cd /tmp/ && \
git clone https://github.com/awslabs/oci-add-hooks && \
cd /tmp/oci-add-hooks && \
make build && \
sudo cp /tmp/oci-add-hooks/oci-add-hooks /usr/local/bin/
Install the package that has oci hook software
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. important::
This step should run on the Linux host and not inside the container.
For Inf1 install the following package
.. code:: bash
sudo apt-get install aws-neuron-runtime-base -y
For Trn1 install the following package
.. code:: bash
sudo apt-get install aws-neuronx-oci-hook -y
For the docker runtime, set up Docker to use the oci-neuron OCI runtime.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
oci-neuron is a script representing an OCI compatible runtime. It wraps
oci-add-hooks, which wraps runc. In this step, we configure Docker to
point at the oci-neuron OCI runtime:
.. code:: bash
sudo cp /opt/aws/neuron/share/docker-daemon.json /etc/docker/daemon.json
sudo service docker restart
If the docker restart command fails, make sure to check if the docker
systemd service is not masked. More information on this can be found
here: https://stackoverflow.com/a/37640824
For the containerd runtime, set up containerd to use the oci-neuron OCI runtime.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Update the following fields in the /etc/containerd/config.toml to configure
containerd to use the neuron oci hook
.. code:: bash
default_runtime_name = "neuron"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.neuron]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.neuron.options]
BinaryName = "/opt/aws/neuron/bin/oci_neuron_hook_wrapper.sh"
After that, restart the containerd daemon
.. code:: bash
sudo systemctl restart containerd
For the cri-o runtime, set up cri-o to use the oci-neuron OCI runtime.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Update the following fields in the /etc/crio/crio.conf to configure
cri-o to use the neuron oci hook
.. code:: bash
default_runtime_name = "neuron"
[crio.runtime.runtimes.neuron]
runtime_path = "/opt/aws/neuron/bin/oci_neuron_hook_wrapper.sh"
After that, restart the cri-o daemon
.. code:: bash
sudo systemctl restart crio
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _tutorial-oci-hook:
Dockerfile for Application Container — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/containers/docker-example/inference/Dockerfile-libmode.html#libmode-dockerfile | # Dockerfile for Application Container — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Dockerfile for Application Container
```
# Example pytorch neuron container
# To build:
#    docker build . -f Dockerfile.pt -t neuron-container:pytorch
# To run on EC2 Inf1 instances with AWS DLAMI:
#    docker run -it --device=/dev/neuron0 neuron-container:pytorch

FROM ubuntu:18.04

LABEL maintainer=" "

RUN apt-get update -y \
    && apt-get install -y --no-install-recommends \
    gnupg2 \
    wget \
    python3-pip \
    python3-setuptools \
    && cd /usr/local/bin \
    && pip3 --no-cache-dir install --upgrade pip \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

RUN echo "deb https://apt.repos.neuron.amazonaws.com bionic main" > /etc/apt/sources.list.d/neuron.list
RUN wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | apt-key add -

# Install Neuron Tools
RUN apt-get update -y && apt-get install -y \
    aws-neuronx-tools

# Set up PATH for Neuron tools
ENV PATH="/opt/bin/:/opt/aws/neuron/bin:${PATH}"

# Include framework tensorflow-neuron or torch-neuronx and compiler (compiler not needed for inference)
RUN pip3 install \
    torch-neuronx \
    --extra-index-url=https://pip.repos.neuron.amazonaws.com

# Include your APP dependencies here.
# RUN ...

# Define an entrypoint script that runs application code (if needed) and then
# executes the command passed to `docker run`.
# For example, you can use something like the following:
# COPY dockerd-libmode-entrypoint.sh /opt/bin/dockerd-entrypoint.sh
# RUN chmod +x /opt/bin/dockerd-entrypoint.sh
# ENTRYPOINT ["/opt/bin/dockerd-entrypoint.sh"]

CMD ["neuron-top"]
```
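The commented-out `ENTRYPOINT` above references a `dockerd-libmode-entrypoint.sh` script whose contents are not shown in this document. A minimal sketch of what such an entrypoint could look like (the initialization step is a placeholder, not part of the original):

```shell
# Hypothetical minimal entrypoint (the real dockerd-libmode-entrypoint.sh
# is not shown here). Written to /tmp so it can be exercised outside a
# container.
cat > /tmp/dockerd-entrypoint.sh <<'EOF'
#!/bin/bash
set -e
# ... one-time application initialization would go here ...
# Hand control (and PID 1, inside a container) to the command-line
# arguments -- the Dockerfile CMD (e.g. neuron-top) or whatever was
# passed to `docker run`.
exec "$@"
EOF
chmod +x /tmp/dockerd-entrypoint.sh

# Outside a container we can still exercise the hand-off directly:
/tmp/dockerd-entrypoint.sh echo "entrypoint handed off"
```

In the Dockerfile, the script would be copied to `/opt/bin/dockerd-entrypoint.sh` and set as the `ENTRYPOINT`, as the commented lines show.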
_This document is relevant for_: `Inf1`
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Dockerfile for Application Container — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "containers/docker-example/inference/Dockerfile-libmode", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
</div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="dockerfile-for-application-container">
<span id="libmode-dockerfile"></span><h1>Dockerfile for Application Container<a class="headerlink" href="#dockerfile-for-application-container" title="Permalink to this headline">#</a></h1>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="linenos"> 1</span><span class="c1"># Example pytorch neuron container</span>
<span class="linenos"> 2</span><span class="c1"># To build:</span>
<span class="linenos"> 3</span><span class="c1"># docker build . -f Dockerfile.pt -t neuron-container:pytorch</span>
<span class="linenos"> 4</span><span class="c1"># To run on EC2 Inf1 instances with AWS DLAMI:</span>
<span class="linenos"> 5</span><span class="c1"># docker run -it --device=/dev/neuron0 neuron-container:pytorch</span>
<span class="linenos"> 6</span>
<span class="linenos"> 7</span><span class="n">FROM</span> <span class="n">ubuntu</span><span class="p">:</span><span class="mf">18.04</span>
<span class="linenos"> 8</span>
<span class="linenos"> 9</span><span class="n">LABEL</span> <span class="n">maintainer</span><span class="o">=</span><span class="s2">" "</span>
<span class="linenos">10</span>
<span class="linenos">11</span><span class="n">RUN</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">update</span> <span class="o">-</span><span class="n">y</span> \
<span class="linenos">12</span> <span class="o">&&</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="o">--</span><span class="n">no</span><span class="o">-</span><span class="n">install</span><span class="o">-</span><span class="n">recommends</span> \
<span class="linenos">13</span> <span class="n">gnupg2</span> \
<span class="linenos">14</span> <span class="n">wget</span> \
<span class="linenos">15</span> <span class="n">python3</span><span class="o">-</span><span class="n">pip</span> \
<span class="linenos">16</span> <span class="n">python3</span><span class="o">-</span><span class="n">setuptools</span> \
<span class="linenos">17</span> <span class="o">&&</span> <span class="n">cd</span> <span class="o">/</span><span class="n">usr</span><span class="o">/</span><span class="n">local</span><span class="o">/</span><span class="nb">bin</span> \
<span class="linenos">18</span> <span class="o">&&</span> <span class="n">pip3</span> <span class="o">--</span><span class="n">no</span><span class="o">-</span><span class="n">cache</span><span class="o">-</span><span class="nb">dir</span> <span class="n">install</span> <span class="o">--</span><span class="n">upgrade</span> <span class="n">pip</span> \
<span class="linenos">19</span> <span class="o">&&</span> <span class="n">rm</span> <span class="o">-</span><span class="n">rf</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">apt</span><span class="o">/</span><span class="n">lists</span><span class="o">/*</span> \
<span class="linenos">20</span> <span class="o">&&</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">clean</span>
<span class="linenos">21</span>
<span class="linenos">22</span><span class="n">RUN</span> <span class="n">echo</span> <span class="s2">"deb https://apt.repos.neuron.amazonaws.com bionic main"</span> <span class="o">></span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">apt</span><span class="o">/</span><span class="n">sources</span><span class="o">.</span><span class="n">list</span><span class="o">.</span><span class="n">d</span><span class="o">/</span><span class="n">neuron</span><span class="o">.</span><span class="n">list</span>
<span class="linenos">23</span><span class="n">RUN</span> <span class="n">wget</span> <span class="o">-</span><span class="n">qO</span> <span class="o">-</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">GPG</span><span class="o">-</span><span class="n">PUB</span><span class="o">-</span><span class="n">KEY</span><span class="o">-</span><span class="n">AMAZON</span><span class="o">-</span><span class="n">AWS</span><span class="o">-</span><span class="n">NEURON</span><span class="o">.</span><span class="n">PUB</span> <span class="o">|</span> <span class="n">apt</span><span class="o">-</span><span class="n">key</span> <span class="n">add</span> <span class="o">-</span>
<span class="linenos">24</span>
<span class="linenos">25</span><span class="c1"># Installing Neuron Tools</span>
<span class="linenos">26</span><span class="n">RUN</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">update</span> <span class="o">-</span><span class="n">y</span> <span class="o">&&</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> \
<span class="linenos">27</span> <span class="n">aws</span><span class="o">-</span><span class="n">neuronx</span><span class="o">-</span><span class="n">tools</span>
<span class="linenos">28</span>
<span class="linenos">29</span><span class="c1"># Sets up Path for Neuron tools</span>
<span class="linenos">30</span><span class="n">ENV</span> <span class="n">PATH</span><span class="o">=</span><span class="s2">"/opt/bin/:/opt/aws/neuron/bin:$</span><span class="si">{PATH}</span><span class="s2">"</span>
<span class="linenos">31</span>
<span class="linenos">32</span><span class="c1"># Include framework tensorflow-neuron or torch-neuronx and compiler (compiler not needed for inference)</span>
<span class="linenos">33</span><span class="n">RUN</span> <span class="n">pip3</span> <span class="n">install</span> \
<span class="linenos">34</span> <span class="n">torch</span><span class="o">-</span><span class="n">neuronx</span> \
<span class="linenos">35</span> <span class="o">--</span><span class="n">extra</span><span class="o">-</span><span class="n">index</span><span class="o">-</span><span class="n">url</span><span class="o">=</span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">pip</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span>
<span class="linenos">36</span>
<span class="linenos">37</span><span class="c1"># Include your APP dependencies here.</span>
<span class="linenos">38</span><span class="c1"># RUN ...</span>
<span class="linenos">39</span>
<span class="linenos">40</span><span class="c1"># Define the entrypoint script that has some application code (if needed) and executes the docker run command</span>
<span class="linenos">41</span><span class="c1"># For example you can use something like below</span>
<span class="linenos">42</span><span class="c1"># COPY dockerd-libmode-entrypoint.sh /opt/bin/dockerd-entrypoint.sh</span>
<span class="linenos">43</span><span class="c1"># RUN chmod +x /opt/bin/dockerd-entrypoint.sh</span>
<span class="linenos">44</span><span class="c1"># ENTRYPOINT ["/opt/bin/dockerd-entrypoint.sh"]</span>
<span class="linenos">45</span>
<span class="linenos">46</span><span class="n">CMD</span> <span class="p">[</span><span class="s2">"neuron-top"</span><span class="p">]</span>
</pre></div>
</div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html> | 2023-09-29T20:55:32.315Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/tutorials/k8s-setup.rst.txt | ```
.. _tutorial-k8s-env-setup-for-neuron:

Kubernetes environment setup for Neuron
=======================================

Introduction
------------

Customers who use Kubernetes can conveniently integrate Inf1/Trn1 instances into their workflows. This tutorial walks through deploying the Neuron device plugin DaemonSet and allocating Neuron cores or devices to application pods.

.. dropdown:: Prerequisite
    :class-title: sphinx-design-class-title-small
    :class-body: sphinx-design-class-body-small
    :animate: fade-in

    .. include:: /containers/tutorials/k8s-prerequisite.rst

.. dropdown:: Deploy Neuron Device Plugin
    :class-title: sphinx-design-class-title-small
    :class-body: sphinx-design-class-body-small
    :animate: fade-in

    .. include:: /containers/tutorials/k8s-neuron-device-plugin.rst

.. dropdown:: Deploy Neuron Scheduler Extension
    :class-title: sphinx-design-class-title-small
    :class-body: sphinx-design-class-body-small
    :animate: fade-in

    .. include:: /containers/tutorials/k8s-neuron-scheduler.rst
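Once the device plugin DaemonSet is running, application pods consume Neuron hardware through ordinary Kubernetes resource limits. A minimal pod sketch, where the pod and image names are hypothetical and the resource name assumes the device plugin advertises ``aws.amazon.com/neuroncore``:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: neuron-app                       # hypothetical pod name
spec:
  containers:
    - name: app
      image: my-neuron-container:latest  # hypothetical application image
      resources:
        limits:
          aws.amazon.com/neuroncore: 1   # request one Neuron core from the plugin
```

A device-level resource (whole Neuron devices rather than individual cores) can be requested the same way.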
``` | 2023-09-29T20:55:32.348Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/install-templates/inf1/launch-inf1-dlami-aws-cli.rst.txt | ```
.. _launch-inf1-dlami-aws-cli:

AWS CLI commands to launch inf1 instances
"""""""""""""""""""""""""""""""""""""""""

.. code:: bash

    # Launch instance
    # The following Deep Learning AMIs will get you started and are recommended
    # for the tutorials.
    #     "Deep Learning AMI (Amazon Linux)*"
    #     "Deep Learning AMI (Amazon Linux 2)*"
    #     "Deep Learning AMI (Ubuntu 18.04)*"
    #
    # You can get the latest AMI ID for any of the above using the following command
    AWS_REGION="<aws region name like us-east-1>"
    AMIID=$(aws ec2 describe-images --filters "Name=name,Values=Deep Learning Base AMI (Ubuntu 18.04)*" --query 'sort_by(Images, &CreationDate)[].[Name,ImageId]' --region $AWS_REGION --output text | tail -n 1 | awk '{print $(NF)}')

    INSTANCE_ID=$(aws ec2 run-instances --image-id $AMIID --count 1 --instance-type <inf1.xlarge type> --key-name MyKeyPair --region $AWS_REGION [--subnet-id <subnet id>] | python -c 'import sys, json; print(json.load(sys.stdin)["Instances"][0]["InstanceId"])')
    echo "Instance ID of launched instance" $INSTANCE_ID

    # Wait a few seconds to a minute for the instance to be created and to have a public DNS/IP.
    # The following command will get the public DNS name of the launched instance, to which
    # you can then log in using your key pair.
    INSTANCE_PUBLIC_DNS=$(aws ec2 describe-instances --instance-id $INSTANCE_ID --region $AWS_REGION | python -c 'import sys, json; print(json.load(sys.stdin)["Reservations"][0]["Instances"][0]["PublicDnsName"])')
    echo "DNS name of the launched instance" $INSTANCE_PUBLIC_DNS

    # Wait a couple of minutes for the instance to be ready, then log in:
    ssh -i <key.pem> <ubuntu/ec2-user>@$INSTANCE_PUBLIC_DNS
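The inline ``python -c`` calls above only pull a single field out of the CLI's JSON output. The same extraction can be sketched as a standalone snippet (the response below is abbreviated; the real ``aws ec2 run-instances`` output carries many more fields):

```python
import json

# Abbreviated shape of the JSON that `aws ec2 run-instances` prints
response = '{"Instances": [{"InstanceId": "i-0123456789abcdef0"}]}'

# Same field extraction the inline `python -c` one-liner performs on stdin
instance_id = json.loads(response)["Instances"][0]["InstanceId"]
print(instance_id)  # i-0123456789abcdef0
```

The `describe-instances` one-liner works identically, just with the deeper ``["Reservations"][0]["Instances"][0]["PublicDnsName"]`` path.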
``` | 2023-09-29T20:55:32.355Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/devflows/inference/dev-flows.rst.txt | ```
.. _neuron1-devflows:
.. _compilation-flow-target:
.. _deploym-flow-target:

Developer Flows Introduction
============================

|image|

.. |image| image:: /images/neuron-devflow.jpg
    :width: 500
    :alt: Neuron developer flow

A typical Neuron developer flow includes a compilation phase followed by deployment (inference) on Inf1 instances. You can develop on Neuron using one of the following combinations of developer flows:

.. toctree::
    :maxdepth: 1

    ec2-then-ec2-devflow
    ec2-then-ec2-devflow-inf2
    neo-then-hosting-devflow
    byoc-hosting-devflow
    dlc-then-ec2-devflow
    dlc-then-ecs-devflow
    dlc-then-eks-devflow
``` | 2023-09-29T20:55:32.427Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/containers/docker-example/training/Dockerfile-trainium-dlc.rst.txt | ```
.. _trainium-dlc-dockerfile:

Dockerfile for Application Container
====================================

.. literalinclude:: Dockerfile-training-dlc
    :linenos:
``` | 2023-09-29T20:55:32.490Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.rst.txt | ```
.. _install-neuron-tensorflow:

Install TensorFlow Neuron
=========================

.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst

.. contents:: Table of contents
    :local:
    :depth: 2

Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/develop_mode.rst

.. include:: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst

.. tab-set::

    .. tab-item:: TensorFlow 2.10.1

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.9.3

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.8.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.7.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 1.15.5

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/compile_mode.rst

.. tab-set::

    .. tab-item:: TensorFlow 2.10.1

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.9.3

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.8.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.7.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 1.15.5

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/deploy_mode.rst

.. include:: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst

.. tab-set::

    .. tab-item:: TensorFlow 2.10.1

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.9.3

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.8.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 2.7.4

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

    .. tab-item:: TensorFlow 1.15.5

        .. tab-set::

            .. tab-item:: Ubuntu 20 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

            .. tab-item:: Amazon Linux 2 DLAMI Base

                .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

                .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
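The ``program-output`` directives above all invoke the same helper script, which renders an install command for a given framework, version, OS, and mode from a version manifest. The pattern can be sketched as follows; note the manifest schema and function names here are purely illustrative, not the real ``n2-helper.py``/``n2-manifest.json``:

```python
# Illustrative sketch only: the real n2-helper.py and n2-manifest.json used by
# these docs have their own schema; the names below are hypothetical.
manifest = {
    ("tensorflow", "2.10.1"): {
        "package": "tensorflow-neuron==2.10.1.*",
        "index": "https://pip.repos.neuron.amazonaws.com",
    },
}

def install_command(framework: str, version: str) -> str:
    """Render a pip install command from a manifest entry."""
    entry = manifest[(framework, version)]
    return f"pip install {entry['package']} --extra-index-url={entry['index']}"

print(install_command("tensorflow", "2.10.1"))
```

Keeping the version pins in one manifest is what lets the docs show fifteen OS/version/mode combinations without hand-maintaining fifteen command blocks.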
```
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.9.3
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.8.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.7.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 1.15.5
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/deploy_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: TensorFlow 2.10.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.9.3
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.8.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.7.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 1.15.5
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
</pre></body></html> | 2023-09-29T20:55:32.785Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/setup/pytorch-install.rst.txt | ```
.. _install-neuron-pytorch:

Install PyTorch Neuron (``torch-neuron``)
=========================================

.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst

.. contents:: Table of contents
   :local:
   :depth: 2

Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/develop_mode.rst

.. include:: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.12.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.11.0

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.10.2

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.9.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/compile_mode.rst

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.12.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.11.0

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.10.2

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.9.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /general/setup/install-templates/inf1/deploy_mode.rst

.. include:: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.12.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.12.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.11.0

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.11.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.10.2

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami

   .. tab-item:: PyTorch 1.9.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. include:: /general/setup/install-templates/inf1/note-setup-general.rst

            .. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
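Every tab above is generated by the same helper script, ``n2-helper.py``, which maps a ``--framework``/``--framework-version``/``--os`` flag combination to the matching install commands from a manifest file. As a rough illustration of that flag-to-command mapping, here is a toy stand-in (hypothetical; ``build_install_command`` and its version pinning are simplifications, not the real helper's logic):

```python
# Hypothetical stand-in for n2-helper.py (illustration only, not the real script):
# map (framework, version, os) flags to a pip install command line, the way the
# generated tabs above pair each framework version with a target OS.
def build_install_command(framework: str, version: str, os_name: str) -> str:
    # Neuron framework packages live in the AWS Neuron pip repository.
    packages = {"pytorch": "torch-neuron", "tensorflow": "tensorflow-neuron"}
    index_url = "https://pip.repos.neuron.amazonaws.com"
    package = packages[framework]
    # The real helper resolves exact pinned package versions from
    # n2-manifest.json; pinning to the framework version is a simplification.
    return (f"# {os_name}\n"
            f"pip install {package}=={version} --extra-index-url={index_url}")

print(build_install_command("pytorch", "1.13.1", "ubuntu20"))
```

The OS flag only selects which tab the command is rendered into; the pip command itself is the same on Ubuntu 20 and Amazon Linux 2 in this sketch.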
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _install-neuron-pytorch:
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.10.2 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: PyTorch 1.9.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.9.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
</pre></body></html> | 2023-09-29T20:55:32.842Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/setup/pytorch-update.rst.txt | ```
.. _update-neuron-pytorch:
Update to latest PyTorch Neuron (``torch-neuron``)
==================================================
.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst
.. contents:: Table of contents
:local:
:depth: 2
Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/develop_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: PyTorch 1.13.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
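The tabs above invoke ``n2-helper.py`` to render the exact pip commands for each OS and framework version. As a rough illustration of what such a helper computes (hypothetical logic and package pins — follow the rendered commands, not this sketch), a toy version might look like:

```python
# Toy stand-in for an n2-helper.py-style renderer: map a (framework, version)
# pair to a pip upgrade command against the Neuron package repository.
# Package names and pins below are illustrative; the real source of truth
# is n2-manifest.json.
NEURON_PIP_INDEX = "https://pip.repos.neuron.amazonaws.com"

PACKAGES = {
    ("pytorch", "1.13.1"): "torch-neuron",
    ("tensorflow", "2.10.1"): "tensorflow-neuron",
    ("mxnet", "1.8.0"): "mx_neuron",
}

def render_update_command(framework: str, version: str) -> str:
    """Return a pip command that upgrades the Neuron package for a framework."""
    package = PACKAGES[(framework, version)]
    return (
        f"python -m pip install --upgrade "
        f"--extra-index-url={NEURON_PIP_INDEX} "
        f'"{package}=={version}.*"'
    )

print(render_update_command("pytorch", "1.13.1"))
```

The real helper also varies the output by ``--os``, ``--instance``, and ``--ami``; this sketch only shows the framework/version dimension.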
Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/compile_mode.rst
.. tab-set::
.. tab-item:: PyTorch 1.13.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/deploy_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: PyTorch 1.13.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=pytorch --framework-version=1.13.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
``` | 2023-09-29T20:55:32.945Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/neuron2-intro-faq.rst.txt | ```
.. _neuron2-intro-faq:
Neuron 2.x Introduction at Trn1 GA - FAQ
----------------------------------------
.. contents:: Table of contents
:local:
:depth: 1
.. include:: /release-notes/templates/n2.x-trn1-ga-faq.txt
``` | 2023-09-29T20:55:33.020Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/tutorials/index.rst.txt | ```
.. _pytorch-tutorials:
PyTorch Neuron Tutorials
========================
Before running a tutorial
-------------------------
You will run the tutorials on an inf1.6xlarge instance running Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow instructions at :ref:`pytorch-tutorial-setup` before running a PyTorch tutorial on Inferentia. We recommend new users start with the ResNet-50 tutorial.
.. toctree::
:hidden:
/neuron-guide/neuron-frameworks/pytorch-neuron/tutorials/pytorch-tutorial-setup
.. _pytorch-computervision:
Computer Vision
---------------
* ResNet-50 tutorial :ref:`[html] </src/examples/pytorch/resnet50.ipynb>` :pytorch-neuron-src:`[notebook] <resnet50.ipynb>`
* PyTorch YOLOv4 tutorial :ref:`[html] </src/examples/pytorch/yolo_v4.ipynb>` :pytorch-neuron-src:`[notebook] <yolo_v4.ipynb>`
.. toctree::
:hidden:
/src/examples/pytorch/resnet50.ipynb
/src/examples/pytorch/yolo_v4.ipynb
.. _pytorch-nlp:
Natural Language Processing
---------------------------
* HuggingFace pretrained BERT tutorial :ref:`[html] </src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb>` :pytorch-neuron-src:`[notebook] <bert_tutorial/tutorial_pretrained_bert.ipynb>`
* Bring your own HuggingFace pretrained BERT container to SageMaker tutorial :ref:`[html] </src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>` :pytorch-neuron-src:`[notebook] <byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>`
* LibTorch C++ tutorial :ref:`[html] <pytorch-tutorials-libtorch>`
* TorchServe tutorial :ref:`[html] <pytorch-tutorials-torchserve>`
* HuggingFace MarianMT tutorial :ref:`[html] </src/examples/pytorch/transformers-marianmt.ipynb>` :pytorch-neuron-src:`[notebook] <transformers-marianmt.ipynb>`
.. toctree::
:hidden:
/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb
/src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb
/neuron-guide/neuron-frameworks/pytorch-neuron/tutorials/tutorial-libtorch
/frameworks/torch/torch-neuron/tutorials/tutorial-torchserve
/src/examples/pytorch/transformers-marianmt.ipynb
.. _pytorch-utilize-neuron:
Utilizing Neuron Capabilities
-----------------------------
* BERT TorchServe tutorial :ref:`[html] <pytorch-tutorials-torchserve>`
* NeuronCore Pipeline tutorial :ref:`[html] </src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb>` :pytorch-neuron-src:`[notebook] <pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb>`
.. toctree::
:hidden:
/neuron-guide/neuron-frameworks/pytorch-neuron/tutorials/tutorial-torchserve
/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb
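For NeuronCore Pipeline, the compiler takes a ``--neuroncore-pipeline-cores`` argument. The Neuron documentation suggests starting from roughly ``4 * round(weights / 2e7)`` pipeline cores; a small helper encoding that rule of thumb (our sketch of the published heuristic — tune the value empirically):

```python
def recommended_pipeline_cores(num_weights: int) -> int:
    """Starting point for --neuroncore-pipeline-cores, per the Neuron docs'
    rule of thumb: about 4 * round(weights / 20M), and at least 1."""
    return max(1, 4 * round(num_weights / 2e7))

# BERT-base has roughly 110M parameters:
print(recommended_pipeline_cores(110_000_000))
```

The result is only an initial guess; the tutorial measures throughput at several core counts to pick the best configuration.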
``` | 2023-09-29T20:55:33.054Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/setup/mxnet-update.rst.txt | ```
.. _update-neuron-mxnet:
Update to latest MXNet Neuron
===============================
.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst
.. contents:: Table of contents
:local:
:depth: 2
Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/develop_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/compile_mode.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/deploy_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
``` | 2023-09-29T20:55:33.061Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update.rst.txt | ```
.. _update-neuron-tensorflow:
Update to latest TensorFlow Neuron
==================================
.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst
.. contents:: Table of contents
:local:
:depth: 2
Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/develop_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: TensorFlow 2.10.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.9.3
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.8.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.7.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 1.15.5
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/compile_mode.rst
.. tab-set::
.. tab-item:: TensorFlow 2.10.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.9.3
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.8.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.7.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 1.15.5
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=compile --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/deploy_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: TensorFlow 2.10.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.10.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.9.3
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.9.3 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.8.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.8.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 2.7.4
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=2.7.4 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: TensorFlow 1.15.5
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=update --mode=deploy --category=compiler_framework --framework=tensorflow --framework-version=1.15.5 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
``` | | 2023-09-29T20:55:33.108Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/releasecontent.rst.txt | ```
.. _neuron-release-content:
.. _latest-neuron-release-content:
Release Content
===============
.. contents:: Table of contents
:local:
:depth: 2
Neuron 2.9.1 (04/19/2023)
-------------------------
Trn1 packages
^^^^^^^^^^^^^
.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1
Inf2 packages
^^^^^^^^^^^^^
.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1
Inf1 packages
^^^^^^^^^^^^^
.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1
Previous Neuron Releases Content
--------------------------------
* :ref:`pre-release-content`
* :ref:`pre-n1-release-content`
``` | | 2023-09-29T20:55:33.249Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tf2_faq.rst.txt | ```
.. _tf2_faq:
TensorFlow 2.x FAQ
===================
.. contents:: Table of contents
:local:
:depth: 1
How do I get started with TensorFlow?
-------------------------------------
The easiest entry point is the tutorials offered by the AWS Neuron team. For beginners, the :ref:`HuggingFace DistilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>` is a good place to start.
What TensorFlow versions are supported by Neuron?
-------------------------------------------------
The AWS Neuron team provides well-tested tensorflow-neuron packages that work with a range of official tensorflow releases, as long as the version of tensorflow-neuron matches that of tensorflow. For example, you may install ``tensorflow-neuron==2.3.3.1.0.9999.0`` on top of ``tensorflow==2.3.3`` and expect them to work together.
Currently, tensorflow-neuron works with tensorflow versions 2.1.4, 2.2.3, 2.3.3, 2.4.2, and 2.5.0.
In a fresh Python environment, ``pip install tensorflow-neuron`` installs the highest available version (2.5.0 as of 07/13/2021), which in turn pulls ``tensorflow==2.5.0`` into the current environment.
If you already have a particular version of tensorflow 2.x installed, pin the matching version of tensorflow-neuron explicitly. For example, in an existing Python environment with ``tensorflow==2.3.3`` installed, you can install the matching plugin by running ``pip install tensorflow-neuron==2.3.3``, which will reuse the existing tensorflow installation.
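The version-matching rule above can be made concrete with a small helper — an illustrative sketch only, not part of the tensorflow-neuron API: it treats the first three components of a tensorflow-neuron version as the tensorflow version it pairs with.

.. code::

   def neuron_matches_tf(neuron_version, tf_version):
       # tensorflow-neuron embeds the paired tensorflow version as its first
       # three components, e.g. "2.3.3.1.0.9999.0" pairs with "2.3.3".
       return neuron_version.split('.')[:3] == tf_version.split('.')[:3]

   assert neuron_matches_tf('2.3.3.1.0.9999.0', '2.3.3')
   assert not neuron_matches_tf('2.3.3.1.0.9999.0', '2.5.0')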
What operators are supported?
-----------------------------
Due to fundamental backend design changes in the TensorFlow 2.x framework, the concept of "supported graph operators" is no longer well-defined. Please refer to :ref:`Accelerated Python APIs and graph operators <tensorflow-ref-neuron-accelerated-ops>` for a guide to the set of TensorFlow 2.x Python APIs and graph operators that can be accelerated by Neuron.
How do I compile my model?
--------------------------
Model compilation is performed through a public API, ``tfn.trace``, which resembles the compilation API of the AWS Neuron PyTorch integration. Programmatically, you can execute the following code:
.. code::
import tensorflow as tf
import tensorflow.neuron as tfn
...
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model_neuron = tfn.trace(model, example_inputs)
model_neuron.save('./model_neuron_dir')
...
model_loaded = tf.saved_model.load('./model_dir')
predict_func = model_loaded['serving_default']
model_loaded_neuron = tfn.trace(predict_func, example_inputs2)
model_loaded_neuron.save('./model_loaded_neuron_dir')
...
How do I deploy my model?
-------------------------
Python tensorflow
^^^^^^^^^^^^^^^^^
Pre-compiled models can be saved and reloaded back into a Python environment using regular tensorflow model loading APIs, as long as tensorflow-neuron is installed.
.. code::
import tensorflow as tf
model = tf.keras.models.load_model('./model_loaded_neuron_dir')
example_inputs = ...
output = model(example_inputs)
tensorflow-serving
^^^^^^^^^^^^^^^^^^
Pre-compiled models can be saved into the SavedModel format via the tensorflow SavedModel APIs:
.. code::
import tensorflow as tf
import tensorflow.neuron as tfn
...
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model_neuron = tfn.trace(model, example_inputs)
tf.saved_model.save(model_neuron, './model_neuron_dir/1')
The generated SavedModel ``./model_neuron_dir`` can be loaded into tensorflow-model-server-neuron, which can be installed through ``apt`` or ``yum``, depending on the operating system. For example, on Ubuntu 18.04 LTS the following commands install tensorflow-model-server-neuron and launch it on a pre-compiled SavedModel.
.. code::
sudo apt install tensorflow-model-server-neuron
# --model_base_path needs to be an absolute path
tensorflow_model_server_neuron --model_base_path=$(pwd)/model_neuron_dir
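Once the server is up, inference requests can be sent over the TensorFlow Serving REST API. The sketch below is hedged: it assumes the server was additionally started with ``--rest_api_port=8501``, that the model name is the server default (``default``), and that the model accepts the toy payload shown — adjust all three to your setup.

.. code::

   import json
   import urllib.request

   # TensorFlow Serving REST endpoint: /v1/models/<model_name>:predict
   payload = json.dumps({'instances': [[1.0, 2.0, 3.0]]}).encode()
   request = urllib.request.Request(
       'http://localhost:8501/v1/models/default:predict',
       data=payload,
       headers={'Content-Type': 'application/json'},
   )
   with urllib.request.urlopen(request) as response:
       print(json.loads(response.read()))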
Where can I find tutorials and examples?
-----------------------------------------
:ref:`HuggingFace DistilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>` is a good place to start.
How do I debug or profile my model?
-----------------------------------
:ref:`AWS Neuron TensorBoard integration <neuron-plugin-tensorboard>` provides visibility into what is happening inside the Neuron runtime, and enables more fine-grained (but also more hardware-aware) reasoning about where to improve the performance of machine learning applications.
``` | | 2023-09-29T20:55:33.258Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/onnx-faq.rst.txt | ```
.. _onnx-faq:
ONNX FAQ
---------
.. contents:: Table of contents
:local:
:depth: 1
Can I use ONNX models with Neuron? If not, what should I do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWS Neuron does not directly support compilation of models in the ONNX file format. The recommended way to compile a model that is in the ONNX file format is to first convert the model to PyTorch using a publicly available tool
like `onnx2pytorch <https://github.com/ToriML/onnx2pytorch>`_. Once the ONNX model is converted to PyTorch, it can then be compiled with the :func:`torch_neuron.trace` function to produce a model that can run on Neuron.
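A minimal end-to-end sketch of this path, assuming a hypothetical input file ``model.onnx`` and an image-shaped example input (the shape is model-specific). The ``onnx``, ``onnx2pytorch``, ``torch``, and ``torch_neuron`` packages are only expected on an Inf1 instance with Neuron installed, so the sketch degrades to a skip message elsewhere:

```python
# Hedged sketch: the file name "model.onnx" and the example input shape are
# assumptions; outside an Inf1 environment the sketch skips instead of failing.
status = "skipped"
try:
    import onnx
    import torch
    import torch_neuron
    from onnx2pytorch import ConvertModel

    onnx_model = onnx.load("model.onnx")        # hypothetical ONNX file
    pytorch_model = ConvertModel(onnx_model)    # ONNX graph -> torch.nn.Module
    # Example input shape is model-specific; 1x3x224x224 suits an image model.
    example = torch.rand(1, 3, 224, 224)
    neuron_model = torch_neuron.trace(pytorch_model, example)
    neuron_model.save("model_neuron.pt")
    status = "saved"
except (ImportError, FileNotFoundError) as err:
    print(f"skipping compilation: {err}")
```

The saved ``model_neuron.pt`` can then be reloaded with ``torch.jit.load`` for inference on Inf1.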
``` | 2023-09-29T20:55:33.354Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/setup/mxnet-install.rst.txt | ```
.. _install-neuron-mxnet:
Install MXNet Neuron
=====================
.. include:: /general/setup/install-templates/inf1/note-setup-cntr.rst
.. contents:: Table of contents
:local:
:depth: 2
Develop on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/develop_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Compile on compute instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/compile_mode.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=compile --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
Deploy on AWS ML accelerator instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /general/setup/install-templates/inf1/deploy_mode.rst
.. include :: /general/setup/install-templates/inf1/note-setup-libnrt-warning.rst
.. tab-set::
.. tab-item:: MXNet 1.8.0
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.8.0 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
.. tab-item:: MXNet 1.5.1
.. tab-set::
.. tab-item:: Ubuntu 20 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=ubuntu20 --instance=inf1 --ami=non-dlami
.. tab-item:: Amazon Linux 2 DLAMI Base
.. include :: /general/setup/install-templates/inf1/note-setup-general.rst
.. program-output:: python3 src/helperscripts/n2-helper.py --install-type=install --mode=deploy --category=compiler_framework --framework=mxnet --framework-version=1.5.1 --file=src/helperscripts/n2-manifest.json --os=amazonlinux2 --instance=inf1 --ami=non-dlami
``` | 2023-09-29T20:55:33.375Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/inference/trouble-shooting-faq.rst.txt | ```
.. _trouble-shooting-inf1-faq:
Troubleshooting for Inf1 - FAQ
==============================
.. contents:: Table of contents
:local:
:depth: 1
Performance is not what I expect it to be, what's the next step?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please check our :ref:`performance-optimization` section on performance
tuning and other notes on how to use pipelining and batching to improve
performance.
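Before and after applying pipelining or batching, it helps to measure throughput and latency with a small timing harness. This is a generic sketch: ``dummy_infer`` is a stand-in for the call to your compiled Neuron model.

```python
import time

def measure(infer, batch, iterations=100):
    """Return (throughput in items/sec, average latency in ms) for ``infer``."""
    start = time.perf_counter()
    for _ in range(iterations):
        infer(batch)
    elapsed = time.perf_counter() - start
    return len(batch) * iterations / elapsed, elapsed / iterations * 1e3

# Stand-in for a compiled model call such as model_neuron(*inputs).
def dummy_infer(batch):
    return [x * 2 for x in batch]

throughput, latency_ms = measure(dummy_infer, batch=list(range(8)))
print(f"{throughput:.0f} items/s at {latency_ms:.3f} ms/batch")
```

Comparing these two numbers across batch sizes shows where larger batches stop paying for themselves in added latency.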
Do I need to worry about the size of my model and the size of Inferentia memory? What problems can I expect to have?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a model is too large for the available Inferentia device memory, errors will be logged and
can be found as shown in :ref:`neuron_gatherinfo`.
How can I debug / profile my inference request?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See :ref:`neuron-plugin-tensorboard`.
How to report Bug/Feature Requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We welcome you to use the Neuron GitHub issue tracker to report bugs or suggest
features.
When filing an issue, please check existing open, or recently closed,
issues to make sure somebody else hasn't already reported the issue.
Please try to include as much information as you can. Details like these
are incredibly useful:
- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment
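To make gathering those details easier, a small helper along these lines can collect the basics. The ``pip``-based package listing is an assumption: it only covers pip-installed Neuron packages and may need adjusting for your environment.

```python
import platform
import subprocess
import sys

def environment_report():
    """Gather the basic environment details worth pasting into a bug report."""
    report = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    try:
        # List installed packages and keep those that look Neuron-related.
        freeze = subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True, check=True,
        ).stdout
        report["neuron_packages"] = [
            line for line in freeze.splitlines() if "neuron" in line.lower()
        ]
    except Exception:
        report["neuron_packages"] = []  # pip unavailable; fill in by hand
    return report

print(environment_report())
```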
``` | 2023-09-29T20:55:33.460Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tf1_faq.rst.txt | ```
.. _tf1_faq:
TensorFlow 1.x FAQ
===================
.. contents:: Table of contents
:local:
:depth: 1
How do I get started with TensorFlow?
-------------------------------------
The easiest entry point is the tutorials offered by the AWS Neuron team. For beginners, the :ref:`ResNet50 tutorial </src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb>` is a good place to start.
What TensorFlow versions are supported by Neuron?
-------------------------------------------------
TensorFlow version 1.15.5
What operators are supported?
-----------------------------
``neuron-cc list-operators --framework TENSORFLOW`` provides a list of supported TensorFlow 1.x operators; these are the operators that run on the machine learning accelerator. Note that operators not in this list are still expected to work alongside the supported operators in native TensorFlow, although they are not accelerated by the hardware.
How do I compile my model?
--------------------------
tensorflow-neuron includes a public-facing compilation API called ``tfn.saved_model.compile``. More details can be found at :ref:`tensorflow-ref-neuron-compile-api`.
How do I deploy my model?
-------------------------
The same way as deploying any TensorFlow `SavedModel <https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#user-content-save-and-restore-models>`_. In Python TensorFlow, the easiest way is through the `tf.contrib.predictor module <https://docs.w3cub.com/tensorflow~python/tf/contrib/predictor/from_saved_model>`_. If a Python-free deployment is preferred for performance or other reasons, `tensorflow-serving <https://www.tensorflow.org/tfx/guide/serving>`_ is a great choice, and the AWS Neuron team provides pre-built model server apt/yum packages named ``tensorflow-model-server-neuron``.
Where can I find tutorials and examples?
----------------------------------------------------------
:ref:`tensorflow-tutorials` is a great place to start with.
How to debug or profile my model?
-----------------------------------
At the TensorFlow level, the `v1 profiler <https://www.tensorflow.org/api_docs/python/tf/compat/v1/profiler/Profiler>`_ is a great tool that provides an operator-level breakdown of the inference execution time. Additionally, the :ref:`AWS Neuron TensorBoard integration <neuron-plugin-tensorboard>` provides visibility into what is happening inside of the Neuron runtime, and allows a more fine-grained (but also more hardware-aware) reasoning on where to improve the performance of machine learning applications.
``` | 2023-09-29T20:55:33.562Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/training/neuron-training.rst.txt | ```
.. _neuron-training-faq:
Training with Neuron - FAQ
==========================
.. contents:: Table of contents
:local:
:depth: 2
Compute
-------
How do I get started with training my model on Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once you select your machine learning framework, you can get started here: :ref:`docs-quick-links`
How do I setup EFA for multi-node training?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For setting up EFA that is needed for multi-node training, please see :ref:`setup-trn1-multi-node-execution`
How do I know if I can train my models with Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We aim to support a broad set of models and distribution libraries. We continuously add more capabilities and enable new features via Neuron SDK releases, and suggest you follow our public roadmap and join our Slack channel and email lists.
How should I size Trainium NeuronCores vs GPUs?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For simplicity, you should consider each NeuronCore within your instances as an independent deep learning compute engine, the equivalent of a GPU. As a point of comparison, a trn1.32xlarge has 32 NeuronCores, and their max performance is 40% higher than that of P4d for BF16/FP16/FP8, 2.5X faster for TF32, and 5X faster for FP32. Each NeuronCore is independent and connected to the rest of the NeuronCores within the instance via NeuronLink, and across instances with EFA. Each NeuronCore also has full access to the accelerator memory in the instance, which helps scale large models across NeuronCores using various collective compute techniques.
What are the time to train advantages of Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While the answer is largely model dependent, training performance on Trn1 is fast thanks to multiple system-wide optimizations working in concert. Depending on the data type, you should expect between 1.4-5X higher throughput on Trn1 as compared to the latest GPU instances (P4d). For distributed workloads, 800Gbps EFA gives customers lower latency and 2x the throughput as compared to P4d (a Trn1n 1.6Tb option is coming soon). Each Trainium also has a dedicated collective compute (CC) engine, which enables running the CC ops in parallel to the NeuronCores compute. This enables another 10-15% acceleration of the overall workload. Finally, stochastic rounding enables running at half precision speeds (BF16) while maintaining accuracy at near full precision; this not only simplifies model development (no need for mixed precision), it also helps the loss function converge faster and reduces the memory footprint.
What are some of the training performance results for Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
They are great! Please refer to the :ref:`benchmark` page for open-source model performance results. We encourage you to try it for your own models/applications.
Can I use CUDA libraries with AWS Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWS Trainium and Neuron plug into popular frameworks and automatically optimize model deployment on Neuron devices like Inferentia and Trainium. The Neuron SDK optimizes for Trainium without using closed-source dependencies like NVIDIA CUDA and requires no application-level code changes to accelerate models. We believe this intentional approach allows developers freedom of choice with their code and models. If your application depends on CUDA (or other third-party closed-source artifacts), you will need to strip those dependencies out; from that point, the Neuron compiler will take the model as is and optimize it at the hardware level.
Networking
----------
What’s important to know about the networking in Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trn1 instances have the fastest EFA in AWS. Clocked at 800Gbps, they enable more collective communication as compared to other training instances, which is important if your training job spans multiple servers. You should also expect lower latency, as we streamline the communication path between the dedicated collective communication engine on Trainium and the AWS Nitro EFA NICs.
How does Trainium accelerate collective communication operations?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trainium introduces a dedicated collective compute engine, that runs in parallel to the compute cores (aka NeuronCores). This improves convergence time of intermediate steps as the communication happens in parallel to the compute. This capability, in addition to the faster and optimized EFA, results in better scalability and faster time to train, as compared to other training instances in AWS.
What does Strong/Weak Scaling mean?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To enable strong scaling, we optimized Trainium to be efficient at small batch sizes. Compared to GPUs, Trn1 maintains high efficiency even for small batch sizes. This allows you to scale-out to thousands of devices without increasing the global mini-batch size at the same rate, which in turn leads to faster end-to-end training convergence.
In a weak scaling setup, we show the optimal throughput with a sufficiently large batch size per Trainium. The large batch size is set to leverage the high core utilization so that the overall end-to-end training is fast. This setup also enables a large global batch size, as it scales with the total number of nodes in the cluster.
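The difference between the two regimes comes down to simple batch arithmetic. The sketch below uses the 32 NeuronCores of a trn1.32xlarge noted earlier; the batch numbers themselves are illustrative assumptions.

```python
def global_batch_size(per_core_batch, cores_per_instance, instances):
    """Global mini-batch size when each NeuronCore processes its own shard."""
    return per_core_batch * cores_per_instance * instances

# Weak scaling: the per-core batch stays fixed, so the global batch grows
# linearly with the cluster size (32 NeuronCores per trn1.32xlarge).
assert global_batch_size(8, 32, 1) == 256
assert global_batch_size(8, 32, 4) == 1024

# Strong scaling: the global batch stays fixed, so each core's share shrinks
# as instances are added -- which is why small-batch efficiency matters.
global_batch = 1024
for instances in (1, 2, 4):
    per_core = global_batch // (32 * instances)
    print(instances, "instance(s):", per_core, "items per NeuronCore")
```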
Usability
---------
What have AWS done to improve usability of Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Stochastic rounding enables running at half precision speeds (BF16) while maintaining accuracy at near full precision. This of course helps the loss function converge faster and reduces the memory footprint, but equally important, it simplifies model development: you can write your model in FP32, and Neuron/Trainium will auto-cast the model to BF16 and execute it with SR enabled. There is no need to lose accuracy with pure BF16 runs and, more importantly, no need to experiment with mixed precision strategies to find the optimal settings.
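The idea behind stochastic rounding can be illustrated with plain Python. This toy version rounds to multiples of ``step`` rather than to BF16 representable values (an assumption for clarity); rounding up with probability equal to the fractional remainder keeps the expected value unbiased, so small updates are not systematically lost.

```python
import random

_rng = random.Random(0)  # seeded for reproducibility

def stochastic_round(x, step=1.0):
    """Round ``x`` to a multiple of ``step``, rounding up with probability
    equal to the fractional remainder, so that E[stochastic_round(x)] == x."""
    lower = (x // step) * step
    frac = (x - lower) / step
    return lower + step if _rng.random() < frac else lower

# Round-to-nearest would map 0.3 to 0.0 every time, losing the update entirely;
# stochastic rounding preserves it on average.
samples = [stochastic_round(0.3) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.3
```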
Eager debug mode provides a convenient utility to step through the code and evaluate operator correctness as part of your model creation/debug. For more details, please refer to the Neuron documentation
What other AWS services work with Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trn1 via its Neuron SDK supports Amazon ECS, EKS, ParallelCluster, Batch, and Amazon SageMaker. Customers can also choose to run in a Neuron container within their self-managed container orchestration service (e.g., Kubernetes and Ray).
What tools are available to develop models with Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running training, evaluation or inference workloads, you can use Neuron 2.x CLI tools such as neuron-ls and neuron-top to get insights into NeuronCore and NeuronDevice performance and memory utilization, topology, and host vCPU performance and memory utilization. In addition, the Neuron Plugin for TensorBoard provides a standard GUI that enables profiling and debugging of models. TensorBoard views include:
- Model overview: provide a summary of the model and the utilization on the Host and NeuronDevice
- Operators’ view: provide a breakdown of ML framework and HLO operators on both Host and NeuronDevice
- Code trace view: show a timeline of the model execution at the framework and HLO operators level
- Hardware trace view: show a timeline of the model execution at the level of hardware (Host, NeuronDevice, Data Transfer)
- Topology view: show the NeuronDevices topology within an instance
How will compile time impact my workflow?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We understand compilation is a new step with Trainium, but as long as the overall time to train and cost to train are optimized, the impact of compilation on these two metrics is minimized. To further reduce the impact of compile time on usability, Neuron supports a persistent cache, where artifacts that have not changed since the last run are reused, skipping compilation altogether. For developing and experimenting with new models, you can use the eager debug mode, which compiles (and caches) op-by-op, enabling quick evaluation without compiling large models. We are also working on a Neuron model analyzer (see Neuron roadmap) that will recommend optimized hyperparameters, skipping full compilation per experiment.
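The persistent-cache behaviour can be sketched conceptually: artifacts are keyed by a hash of the model, so a run with an unchanged model skips compilation. This is an illustration of the idea only, not the Neuron SDK's actual cache implementation.

```python
import hashlib

class PersistentCache:
    """Toy model of a compilation cache keyed by the model's content hash."""

    def __init__(self):
        self._store = {}
        self.compilations = 0  # counts real (non-cached) compilations

    def compile(self, model_source):
        key = hashlib.sha256(model_source.encode()).hexdigest()
        if key not in self._store:        # artifact changed or never built
            self.compilations += 1        # ...so pay the compilation cost once
            self._store[key] = f"artifact-{key[:8]}"
        return self._store[key]           # unchanged model: instant cache hit

cache = PersistentCache()
cache.compile("graph-v1")
cache.compile("graph-v1")  # same model: cache hit, no recompilation
cache.compile("graph-v2")  # model changed: recompiled
print(cache.compilations)  # -> 2
```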
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron-training-faq:
Training with Neuron - FAQ
==========================
.. contents:: Table of contents
:local:
:depth: 2
Compute
-------
How do I get started with training my model on Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once you select your machine learning framework, you can get started here: :ref:`docs-quick-links`
How do I setup EFA for multi-node training?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For setting up EFA that is needed for multi-node training, please see :ref:`setup-trn1-multi-node-execution`
How do I know if I can train my models with Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We aim to support a broad set of models and distribution libraries. We continuously add more capabilities and enable new features via Neuron SDK releases and suggest you will follow our public roadmap and join our slack and email lists.
How should I size Trainium NeuronCores vs GPUs?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For simplicity, you should consider each NeuronCore within your instances as an independent deep learning compute engine, the equivalent of a GPU. As point of comparison, a trn1.32xlarge has 32 NeuronCores, and their max performance is 40% higher than of P4d for BF16/FP16/FP8, 2.5X faster for TF32, and 5X faster for FP32. Each NeuronCore is independent and connected to the rest of the NeuronCores within the instance via NeuronLink, and across instances with EFA. Each NeuronCore has also full access to the accelerator memory in the instance, which helps scale large models across NeuronCores using various collective compute ops techniques.
What are the time to train advantages of Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While the answer is largely model dependent, training performance on Trn1 is fast thanks to multiple system-wide optimizations working in concert. Depending on the data type, you should expect between 1.4-5X higher throughput on Trn1 as compared to the latest GPU instances (P4d). For distributed workloads, 800Gbps EFA gives customers lower latency and 2x the throughput as compared to P4d (a Trn1n 1.6Tbps option is coming soon). Each Trainium also has a dedicated collective compute (CC) engine, which enables running the CC ops in parallel to the NeuronCore compute. This enables another 10-15% acceleration of the overall workload. Finally, stochastic rounding enables running at half-precision speeds (BF16) while maintaining accuracy at near full precision; this not only simplifies model development (no need for mixed precision), it also helps the loss function converge faster and reduces memory footprint.
What are some of the training performance results for Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
They are great! Please refer to the :ref:`benchmark` page for open-source model performance results. We encourage you to try it for your own models/applications.
Can I use CUDA libraries with AWS Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWS Trainium and Neuron plug into popular frameworks, and automatically optimize model deployment on Neuron devices like Inferentia and Trainium. The Neuron SDK optimizes for Trainium without using closed-source dependencies like NVIDIA CUDA, and does not require any application-level code changes to accelerate models. We believe this intentional approach gives developers freedom of choice with their code and models. If your application has dependencies on CUDA (or other 3rd-party closed-source artifacts) you will need to strip them out; from that point, the Neuron compiler will take the model as is and optimize it at the hardware level.
Networking
----------
What’s important to know about the networking in Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trn1 instances have the fastest EFA in AWS. Clocked at 800Gbps, they enable more collective communication than other training instances, which is important if your training job spans multiple servers. You should also expect lower latency, as we streamline the communication path between the dedicated collective communication engine on Trainium and the AWS Nitro EFA NICs.
How does Trainium accelerate collective communication operations?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trainium introduces a dedicated collective compute engine that runs in parallel to the compute cores (aka NeuronCores). This improves convergence time of intermediate steps, as the communication happens in parallel to the compute. This capability, in addition to the faster and optimized EFA, results in better scalability and faster time to train as compared to other training instances in AWS.
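As a toy illustration of why this overlap helps (the numbers below are hypothetical, not Trainium measurements), a training step where communication runs fully in parallel with compute is bounded by the slower of the two, rather than their sum:

```python
# Toy model of overlapping gradient communication with compute.
# All numbers are hypothetical illustration values, not Trainium measurements.

def step_time_serial(compute_ms, comm_ms):
    """Communication waits for compute to finish."""
    return compute_ms + comm_ms

def step_time_overlapped(compute_ms, comm_ms):
    """A dedicated engine runs communication in parallel with compute."""
    return max(compute_ms, comm_ms)

compute, comm = 100.0, 15.0
serial = step_time_serial(compute, comm)       # 115.0 ms per step
overlap = step_time_overlapped(compute, comm)  # 100.0 ms per step
print(round(serial / overlap, 2))              # 1.15, i.e. ~15% faster overall
```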
What does Strong/Weak Scaling mean?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To enable strong scaling, we optimized Trainium to be efficient at small batch sizes. Compared to GPUs, Trn1 maintains high efficiency even for small batch sizes. This allows you to scale out to thousands of devices without increasing the global mini-batch size at the same rate, which in turn leads to faster end-to-end training convergence.
In a weak scaling setup, we show the optimal throughput with a sufficiently large batch size per Trainium. The large batch size is set to leverage high core utilization so that the overall end-to-end training is fast. This setup also enables a large global batch size, as it scales with the total number of nodes in the cluster.
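The difference between the two setups can be sketched with simple batch-size arithmetic (illustrative numbers only):

```python
# Illustrative strong vs. weak scaling arithmetic (hypothetical numbers).

def global_batch(per_device_batch, num_devices):
    return per_device_batch * num_devices

# Weak scaling: the per-device batch stays fixed, so the global batch
# grows linearly with the number of devices.
assert global_batch(16, 64) == 1024

# Strong scaling: the global batch stays fixed, so the per-device batch
# shrinks as devices are added -- efficiency at small batches is what
# keeps the scale-out worthwhile.
fixed_global = 1024
per_device = fixed_global // 64         # 16 samples per device on 64 devices
per_device_large = fixed_global // 512  # only 2 samples per device on 512
print(per_device, per_device_large)
```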
Usability
---------
What has AWS done to improve usability of Trainium?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Stochastic rounding enables running at half-precision speeds (BF16) while maintaining accuracy at near full precision. This helps the loss function converge faster and reduces memory footprint, but equally important, it simplifies model development: you can write your model in FP32, and Neuron/Trainium will auto-cast the model to BF16 and execute it with stochastic rounding enabled. There is no need to lose accuracy with pure BF16 runs and, more importantly, no need to experiment with mixed-precision strategies to find the optimal settings.
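The intuition behind stochastic rounding can be shown in pure Python. This is a toy base-10 sketch, not the hardware BF16 implementation: when many updates are each smaller than the accumulator's precision, round-to-nearest drops them all, while rounding up with probability proportional to the remainder preserves their sum in expectation.

```python
import random

STEP = 0.1  # toy accumulator precision: one decimal digit

def round_nearest(x):
    """Round x to the nearest representable value."""
    return round(x / STEP) * STEP

def round_stochastic(x, rng):
    """Round x down, but round up with probability equal to the remainder."""
    lo = (x // STEP) * STEP
    frac = (x - lo) / STEP
    return lo + (STEP if rng.random() < frac else 0.0)

rng = random.Random(0)
acc_nearest = 1.0
acc_stochastic = 1.0
for _ in range(1000):
    update = 0.01  # each update is below the accumulator's precision
    acc_nearest = round_nearest(acc_nearest + update)
    acc_stochastic = round_stochastic(acc_stochastic + update, rng)

# True answer is 1.0 + 1000 * 0.01 = 11.0. Round-to-nearest drops every
# update and stays at 1.0; stochastic rounding lands near 11.0.
print(acc_nearest, round(acc_stochastic, 1))
```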
Eager debug mode provides a convenient utility to step through the code and evaluate operator correctness as part of your model creation/debug. For more details, please refer to the Neuron documentation
What other AWS services work with Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trn1, via its Neuron SDK, supports Amazon ECS, EKS, ParallelCluster, Batch, and Amazon SageMaker. Customers can also choose to run a Neuron container within their self-managed container orchestration service (e.g., Kubernetes and Ray).
What tools are available to develop models with Trn1?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running training, evaluation or inference workloads, you can use Neuron 2.x CLI tools such as neuron-ls and neuron-top to get insights into NeuronCore and NeuronDevice performance and memory utilization, topology, and host vCPU performance and memory utilization. In addition, the Neuron plugin for TensorBoard provides a standard GUI that enables profiling and debugging of models. TensorBoard views include:
- Model overview: provides a summary of the model and its utilization on the host and NeuronDevice
- Operators view: provides a breakdown of ML framework and HLO operators on both host and NeuronDevice
- Code trace view: shows a timeline of the model execution at the framework and HLO operator level
- Hardware trace view: shows a timeline of the model execution at the hardware level (host, NeuronDevice, data transfer)
- Topology view: shows the NeuronDevice topology within an instance
How will compile time impact my work flow?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We understand compilation is a new step with Trainium, but as long as the overall time to train and cost to train are optimized, the impact of compilation on these two metrics is minimized. To further reduce the impact of compilation time on usability, Neuron supports a persistent cache, where artifacts that have not changed since the last run can be reused, skipping compilation altogether. For developing and experimenting with new models, you can use the eager debug mode, which compiles (and caches) op-by-op, enabling quick evaluation without compiling large models. We are also working on a Neuron model analyzer (see the Neuron roadmap) that will recommend optimized hyperparameters, skipping full compilation per experiment.
</pre></body></html> | 2023-09-29T20:55:33.573Z | |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/inference/neuron-faq.rst.txt | ```
.. _neuron-f1-faq:
Inference with Neuron - FAQ
---------------------------
.. contents:: Table of contents
:local:
:depth: 1
What ML model types and operators are supported by AWS Neuron?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWS Neuron includes a compiler that converts your trained machine
learning models to a binary object for execution. The Neuron
compiler supports many commonly used machine learning operators in computer vision, natural language processing, recommender engines and more. A list of supported ML operators and supported inputs is in :ref:`neuron-supported-operators`.
It's important to mention that getting good performance doesn't require all of the model operators to run on the chip. In many cases, some of the operators will continue to run on the instance CPUs, as in the case of embeddings or image pre-processing, and still provide compelling end-to-end performance. We call this approach auto-partitioning: the Neuron compiler optimizes the model execution based on which operators are most suitable to run on the CPU or the chip.
For the latest model architecture support, please refer to the model architecture fit and performance pages.
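The idea behind auto-partitioning can be sketched with a toy partitioner. The op names and the supported-op set below are hypothetical, and the real Neuron compiler's placement logic is far more sophisticated; this only shows the shape of the decision:

```python
# Toy sketch of auto-partitioning: split a linear graph of operators into
# contiguous segments by execution target. Op names and the supported-op
# set are hypothetical illustrations.

NEURON_SUPPORTED = {"conv2d", "relu", "matmul", "softmax"}

def partition(ops):
    """Group consecutive ops that share the same execution target."""
    segments = []
    for op in ops:
        target = "neuron" if op in NEURON_SUPPORTED else "cpu"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)
        else:
            segments.append((target, [op]))
    return segments

model_ops = ["embedding_lookup", "conv2d", "relu", "matmul", "softmax"]
print(partition(model_ops))
# [('cpu', ['embedding_lookup']), ('neuron', ['conv2d', 'relu', 'matmul', 'softmax'])]
```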
Why is a compiler needed, and how do I use it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Neuron compiler converts a model from a framework level Neural Network
graph, with operators like convolution and pooling, into a
Neuron Device-specific instruction set, builds the schedule for
execution of these instructions, and converts the model parameters into
a format that the Neuron device can consume. The supported input formats include
TensorFlow, PyTorch, and MXNet. The output from the
compiler is a Neuron Executable File Format (NEFF) artifact. The NEFF
contains a combination of binary code, the model parameters, and
additional meta-data needed by the Neuron runtime and profiler.
I am using a ML framework today – what will change for me to use this?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To use Inferentia within Inf1 instances, the developer needs to perform a one-time compilation
of the pre-trained model to generate a NEFF, and use this as the inference
model in a fleet of Inf1 instances.
- :ref:`tensorflow-neuron`
- :ref:`neuron-pytorch`
- :ref:`neuron-mxnet`
What is a NeuronCore Pipeline? How do I take advantage of it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A NeuronCore Pipeline is a unique technique to shard a specific neural
network across multiple NeuronCores, to take advantage of the large
on-chip cache instead of moving data in and out of external memory. The result is increased throughput and reduced latency,
which is typically important for real-time inference applications. All Inf1 instances support it, and Inf1
instances with multiple Inferentia accelerators, such as inf1.6xlarge or
inf1.24xlarge, support it across chips thanks to the fast chip-to-chip interconnect.
Developers can choose to use NeuronCore Pipeline mode at the compile
stage, with an opt-in flag. :ref:`neuron-cc` provides further details.
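The throughput benefit of pipelining can be sketched with toy numbers (illustrative only, not Inferentia measurements): once the pipeline is full, each stage works on a different request, so steady-state throughput is limited by the slowest stage rather than by the whole model's latency.

```python
# Toy pipeline arithmetic (illustrative numbers, not Inferentia measurements).
# One hypothetical stage of the sharded model per NeuronCore.
stage_latency_ms = [2.0, 3.0, 2.5, 2.5]

# Unpipelined: one request traverses all stages before the next starts.
single_core_throughput = 1000.0 / sum(stage_latency_ms)  # 100 requests/s

# Pipelined: stages process different requests concurrently, so the
# slowest stage sets the pace.
pipelined_throughput = 1000.0 / max(stage_latency_ms)    # ~333 requests/s

print(round(single_core_throughput), round(pipelined_throughput))
```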
NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each Inferentia chip has four compute engines called NeuronCores. A
NeuronCore Group is a way to aggregate NeuronCores to increase hardware
utilization and assign models with the right compute sizing for a
specific application. If you want to run multiple models in parallel,
you can assign different models to separate NeuronCore Groups. A model
compiled to use multiple NeuronCores in a NeuronCore Pipeline can be
assigned to a NeuronCore Group with enough NeuronCores to load into.
Finally, it is also possible for sets of Inferentia devices to be mapped
to separate Neuron Runtimes. :ref:`neuron-features-index` section has more
information and examples.
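The grouping idea amounts to simple bookkeeping, sketched below as toy code (on real Inf1 instances, placement is handled by the Neuron runtime, not by user arithmetic like this):

```python
# Toy bookkeeping for NeuronCore Groups (illustrative only; the Neuron
# runtime performs the actual placement on Inf1).

def make_groups(total_cores, group_sizes):
    """Carve `total_cores` into consecutive groups of the requested sizes."""
    assert sum(group_sizes) <= total_cores, "not enough NeuronCores"
    groups, next_core = [], 0
    for size in group_sizes:
        groups.append(list(range(next_core, next_core + size)))
        next_core += size
    return groups

# Four NeuronCores per Inferentia: one 2-core group (e.g. a model compiled
# for a 2-core pipeline) plus two 1-core groups for smaller models
# running in parallel.
print(make_groups(4, [2, 1, 1]))  # [[0, 1], [2], [3]]
```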
Can I use TensorFlow networks from tfhub.dev as-is ? if not, what should I do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yes. Models can be imported into TensorFlow, either as a standard
model server, in which case it appears as a simple command-line utility,
or via the Python-based TensorFlow environment. The primary additional
step needed is to compile the model into the Inferentia NEFF format.
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron-f1-faq:
Inference with Neuron - FAQ
---------------------------
.. contents:: Table of contents
:local:
:depth: 1
What ML model types and operators are supported by AWS Neuron?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWS Neuron includes a compiler that converts your trained machine
learning models to a binary object for execution. The Neuron
compiler supports many commonly used machine learning operators in computer vision, natural language processing, recommender engines and more. A list of supported ML operators and supported inputs is in :ref:`neuron-supported-operators`.
It's important to mention that getting good performance doesn't require all of the model operators to run on the chip. In many cases, some of the operators will continue to run on the instance CPUs, as in the case of embeddings or image pre-processing, and still provide compelling end-to-end performance. We call this approach auto-partitioning: the Neuron compiler optimizes the model execution based on which operators are most suitable to run on the CPU or the chip.
For the latest model architecture support, please refer to the model architecture fit and performance pages.
Why is a compiler needed, and how do I use it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Neuron compiler converts a model from a framework level Neural Network
graph, with operators like convolution and pooling, into a
Neuron Device-specific instruction set, builds the schedule for
execution of these instructions, and converts the model parameters into
a format that the Neuron device can consume. The supported input formats include
TensorFlow, PyTorch, and MXNet. The output from the
compiler is a Neuron Executable File Format (NEFF) artifact. The NEFF
contains a combination of binary code, the model parameters, and
additional meta-data needed by the Neuron runtime and profiler.
I am using a ML framework today – what will change for me to use this?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To use Inferentia within Inf1 instances, the developer needs to perform a one-time compilation
of the pre-trained model to generate a NEFF, and use this as the inference
model in a fleet of Inf1 instances.
- :ref:`tensorflow-neuron`
- :ref:`neuron-pytorch`
- :ref:`neuron-mxnet`
What is a NeuronCore Pipeline? How do I take advantage of it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A NeuronCore Pipeline is a unique technique to shard a specific neural
network across multiple NeuronCores, to take advantage of the large
on-chip cache instead of moving data in and out of external memory. The result is increased throughput and reduced latency,
which is typically important for real-time inference applications. All Inf1 instances support it, and Inf1
instances with multiple Inferentia accelerators, such as inf1.6xlarge or
inf1.24xlarge, support it across chips thanks to the fast chip-to-chip interconnect.
Developers can choose to use NeuronCore Pipeline mode at the compile
stage, with an opt-in flag. :ref:`neuron-cc` provides further details.
NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each Inferentia chip has four compute engines called NeuronCores. A
NeuronCore Group is a way to aggregate NeuronCores to increase hardware
utilization and assign models with the right compute sizing for a
specific application. If you want to run multiple models in parallel,
you can assign different models to separate NeuronCore Groups. A model
compiled to use multiple NeuronCores in a NeuronCore Pipeline can be
assigned to a NeuronCore Group with enough NeuronCores to load into.
Finally, it is also possible for sets of Inferentia devices to be mapped
to separate Neuron Runtimes. :ref:`neuron-features-index` section has more
information and examples.
Can I use TensorFlow networks from tfhub.dev as-is ? if not, what should I do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yes. Models can be imported into TensorFlow, either as a standard
model server, in which case it appears as a simple command-line utility,
or via the Python-based TensorFlow environment. The primary additional
step needed is to compile the model into the Inferentia NEFF format.
</pre></body></html> | 2023-09-29T20:55:33.658Z | |
PyTorch Tutorial Setup — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/tutorials/pytorch-tutorial-setup.html#pytorch-tutorial-setup | # PyTorch Tutorial Setup — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## PyTorch Tutorial Setup[#](#pytorch-tutorial-setup "Permalink to this headline")
1. Launch an Inf1.6xlarge Instance:
- Please follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- When choosing an Amazon Machine Image (AMI), make sure to select a [Deep Learning AMI with Conda Options](https://docs.aws.amazon.com/dlami/latest/devguide/conda.html). Please note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance.
2. Set up a development environment:
- Enable or install PyTorch-Neuron: [Install PyTorch Neuron (torch-neuron)](../setup/pytorch-install.html#install-neuron-pytorch).
3. Run tutorial in Jupyter notebook:
- Follow instruction at [Setup Jupyter notebook](../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting) to:
1. Start the Jupyter Notebook on the instance
2. Run the Jupyter Notebook from your local browser
- Connect to the instance from the terminal, clone the Neuron Github repository to the Inf1 instance and then change the working directory to the tutorial directory:
```
git clone https://github.com/aws/aws-neuron-sdk.git
cd aws-neuron-sdk/src/examples/pytorch
```
- Locate the tutorial notebook file (.ipynb file) under `aws-neuron-sdk/src/examples/pytorch`
- From your local browser, open the tutorial notebook from the menu and follow the instructions.
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>PyTorch Tutorial Setup — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../../_static/pygments.css">
<link rel="stylesheet" href="../../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../../" id="documentation_options" src="../../../../_static/documentation_options.js"></script>
<script src="../../../../_static/jquery.js"></script>
<script src="../../../../_static/underscore.js"></script>
<script src="../../../../_static/doctools.js"></script>
<script src="../../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../../_static/contentui.js"></script>
<script src="../../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../../genindex.html">
<link rel="search" title="Search" href="../../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/torch/torch-neuron/tutorials/pytorch-tutorial-setup", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
</div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="pytorch-tutorial-setup">
<span id="id1"></span><h1>PyTorch Tutorial Setup<a class="headerlink" href="#pytorch-tutorial-setup" title="Permalink to this headline">#</a></h1>
<ol class="arabic">
<li><dl>
<dt>Launch an Inf1.6xlarge Instance:</dt><dd><ul class="simple">
<li><p>Please follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance">launch an Amazon EC2 Instance</a> to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the <a class="reference external" href="https://aws.amazon.com/ec2/instance-types/inf1/">Inf1 web page</a>.</p></li>
<li><p>When choosing an Amazon Machine Image (AMI), make sure to select <a class="reference external" href="https://docs.aws.amazon.com/dlami/latest/devguide/conda.html">Deep Learning AMI with Conda Options</a>. Please note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.</p></li>
<li><p>After launching the instance, follow the instructions in <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux">Connect to your instance</a> to connect to the instance.</p></li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You can also launch the instance from the AWS CLI; see <a class="reference internal" href="../../../../general/setup/install-templates/inf1/launch-inf1-dlami-aws-cli.html#launch-inf1-dlami-aws-cli"><span class="std std-ref">AWS CLI commands to launch inf1 instances</span></a>.</p>
</div>
</dd>
</dl>
</li>
<li><dl class="simple">
<dt>Set up a development environment:</dt><dd><ul class="simple">
<li><p>Enable or install PyTorch-Neuron: <a class="reference internal" href="../setup/pytorch-install.html#install-neuron-pytorch"><span class="std std-ref">Install PyTorch Neuron (torch-neuron)</span></a>.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dl>
<dt>Run tutorial in Jupyter notebook:</dt><dd><ul>
<li><p>Follow the instructions at <a class="reference internal" href="../../../../general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html#setup-jupyter-notebook-steps-troubleshooting"><span class="std std-ref">Setup Jupyter notebook</span></a> to:</p>
<ol class="arabic simple">
<li><p>Start the Jupyter Notebook on the instance</p></li>
<li><p>Run the Jupyter Notebook from your local browser</p></li>
</ol>
</li>
<li><p>Connect to the instance from the terminal, clone the Neuron GitHub repository to the Inf1 instance, and then change the working directory to the tutorial directory:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">aws</span><span class="o">/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">.</span><span class="n">git</span>
<span class="n">cd</span> <span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">sdk</span><span class="o">/</span><span class="n">src</span><span class="o">/</span><span class="n">examples</span><span class="o">/</span><span class="n">pytorch</span>
</pre></div>
</div>
</li>
<li><p>Locate the tutorial notebook file (.ipynb file) under <code class="docutils literal notranslate"><span class="pre">aws-neuron-sdk/src/examples/pytorch</span></code></p></li>
<li><p>From your local browser, open the tutorial notebook from the menu and follow the instructions.</p></li>
</ul>
</dd>
</dl>
</li>
</ol>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
<div class="section">
</div>
</div>
</main>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
</body></html> | 2023-09-29T20:55:33.897Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq/contributing-faq.rst.txt | ```
.. _contribute-faq:
Contributing Guidelines FAQs
============================
.. contents:: Table of contents
   :local:
   :depth: 1
Whether it's
a bug report, new feature, correction, or additional documentation, we
greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull
requests to ensure we have all the necessary information to effectively
respond to your bug report or contribution.
How to report Bugs/Feature Requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We welcome you to use the GitHub issue tracker to report bugs or suggest
features.
When filing an issue, please check existing open, or recently closed,
issues to make sure somebody else hasn't already reported the issue.
Please try to include as much information as you can. Details like these
are incredibly useful:
- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment
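The environment details listed above can also be collected programmatically before opening an issue. The sketch below is illustrative only, not an official reporting tool — the queried package names are examples, and `importlib.metadata` assumes Python 3.8+:

```python
import platform
import sys
from importlib import metadata

def collect_report_info(packages=("torch-neuron", "neuron-cc")):
    """Gather environment details that make a bug report reproducible."""
    info = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    # Record the installed version of each Neuron package, if present.
    for name in packages:
        try:
            info[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            info[name] = "not installed"
    return info

if __name__ == "__main__":
    for key, value in collect_report_info().items():
        print(f"{key}: {value}")
```

Paste the printed key/value pairs into the issue description alongside your reproduction steps.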
Contributing via Pull Requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contributions via pull requests are much appreciated. Before sending us
a pull request, please ensure that:
1. You are working against the latest source on the *master* branch.
2. You check existing open, and recently merged, pull requests to make
sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for
your time to be wasted.
To send us a pull request, please:
1. Fork the repository.
2. Modify the source; please focus on the specific change you are
contributing. If you also reformat all the code, it will be hard for
us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull
request interface.
6. Pay attention to any automated CI failures reported in the pull
request, and stay involved in the conversation.
GitHub provides additional documentation on `forking a
repository <https://help.github.com/articles/fork-a-repo/>`__ and
`creating a pull
request <https://help.github.com/articles/creating-a-pull-request/>`__.
How to find contributions to work on
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Looking at the existing issues is a great way to find something to
contribute on. As our projects, by default, use the default GitHub issue
labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix),
looking at any 'help wanted' issues is a great place to start.
What is the code of conduct
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This project has adopted the `Amazon Open Source Code of
Conduct <https://aws.github.io/code-of-conduct>`__. For more information
see the `Code of Conduct
FAQ <https://aws.github.io/code-of-conduct-faq>`__ or contact
opensource-codeofconduct@amazon.com with any additional questions or
comments.
How to notify for a security issue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you discover a potential security issue in this project we ask that
you notify AWS/Amazon Security via our `vulnerability reporting
page <http://aws.amazon.com/security/vulnerability-reporting/>`__.
Please do **not** create a public GitHub issue.
What is the licensing
~~~~~~~~~~~~~~~~~~~~~~~~
See the `LICENSE-DOCUMENTATION <https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-DOCUMENTATION>`_
and `LICENSE-SUMMARY-DOCS-SAMPLES <https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-SUMMARY-DOCS-SAMPLES>`_ files
for our project's licensing. We will ask you to confirm the licensing of
your contribution.
We may ask you to sign a `Contributor License Agreement
(CLA) <http://en.wikipedia.org/wiki/Contributor_License_Agreement>`__
for larger changes.
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _contribute-faq:
</pre></body></html> | 2023-09-29T20:55:34.005Z | |
Previous Releases’ Content (Neuron 1.x) — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/release-notes/neuron1/prev/content.html#pre-n1-release-content | # Previous Releases’ Content (Neuron 1.x) — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Previous Releases’ Content (Neuron 1.x)[#](#previous-releases-content-neuron-1-x "Permalink to this headline")
Table of contents
- [Neuron 2.5.0 (11/23/2022)](#neuron-2-5-0-11-23-2022)
- [Neuron 1.19.2 (08/02/2022)](#neuron-1-19-2-08-02-2022)
- [Neuron 1.19.1 (05/27/2022)](#neuron-1-19-1-05-27-2022)
- [Neuron 1.19.0 (04/29/2022)](#neuron-1-19-0-04-29-2022)
- [Neuron 1.18.0 (03/25/2022)](#neuron-1-18-0-03-25-2022)
- [Neuron 1.17.2 (02/18/2022)](#neuron-1-17-2-02-18-2022)
- [Neuron 1.17.1 (02/16/2022)](#neuron-1-17-1-02-16-2022)
- [Neuron 1.17.0 (01/20/2022)](#neuron-1-17-0-01-20-2022)
- [Neuron 1.16.3 (01/05/2022)](#neuron-1-16-3-01-05-2022)
- [Neuron 1.16.2 (12/15/2021)](#neuron-1-16-2-12-15-2021)
- [Neuron 1.16.1 (11/05/2021)](#neuron-1-16-1-11-05-2021)
- [Neuron 1.16.0 (10/27/2021)](#neuron-1-16-0-10-27-2021)
- [Neuron v1.15.2 (September 22 2021)](#neuron-v1-15-2-september-22-2021)
- [Neuron v1.15.1 (August 30 2021)](#neuron-v1-15-1-august-30-2021)
- [Neuron v1.15.0 (August 12 2021)](#neuron-v1-15-0-august-12-2021)
- [Neuron v1.14.2 (July 26 2021)](#neuron-v1-14-2-july-26-2021)
- [Neuron v1.14.1 (July 2nd 2021)](#neuron-v1-14-1-july-2nd-2021)
- [Neuron v1.14.0 (May 28th 2021)](#neuron-v1-14-0-may-28th-2021)
- [Neuron v1.13.0 (May 1st 2021)](#neuron-v1-13-0-may-1st-2021)
- [Neuron v1.12.2 (Mar 4th 2021)](#neuron-v1-12-2-mar-4th-2021)
- [Neuron v1.12.1 (Feb 24th 2021)](#neuron-v1-12-1-feb-24th-2021)
- [Neuron v1.12.0 (Jan 30 2021)](#neuron-v1-12-0-jan-30-2021)
## [Neuron 2.5.0 (11/23/2022)](#id62)[#](#neuron-2-5-0-11-23-2022 "Permalink to this headline")
### Release included packages[#](#release-included-packages "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 2.5.0:
driver : aws-neuronx-dkms-2.6.33.0
libnrt : libnrt.so (version 2.10.27.0)
k8-plugin : aws-neuronx-k8-plugin-2.1.12.0
k8-scheduler : aws-neuronx-k8-scheduler-2.1.12.0
tools : aws-neuronx-tools-2.5.19.0
compiler : neuron-cc-1.13.5.0
neuronperf : neuronperf-1.6.1.0
pytorch : torch-neuron-1.7.1.2.5.8.0
pytorch : torch-neuron-1.8.1.2.5.8.0
pytorch : torch-neuron-1.9.1.2.5.8.0
pytorch : torch-neuron-1.10.2.2.5.8.0
pytorch : torch-neuron-1.11.0.2.5.8.0
pytorch : torch-neuron-1.12.1.2.5.8.0
tensorflow : tensorflow-neuron-1.15.5.2.5.6.0
tensorflow : tensorflow-neuron-2.5.3.2.5.6.0
tensorflow : tensorflow-neuron-2.6.5.2.5.6.0
tensorflow : tensorflow-neuron-2.7.3.2.5.6.0
tensorflow : tensorflow-neuron-2.8.2.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-1.15.0.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.5.4.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.6.3.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.7.0.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.8.0.2.5.6.0
tensorboard : tensorboard-plugin-neuron-2.4.6.0
mxnet : mxnet_neuron-1.5.1.1.10.11.0
mxnet : mx_neuron-1.8.0.2.2.43.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
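The package lists in these release notes follow a regular `component : package-version` layout, which makes them easy to check against an installed environment. A minimal parsing sketch follows; the manifest excerpt is copied from the list above, while the helper name and regex are illustrative, not an official Neuron tool:

```python
import re

# Excerpt from the Neuron 2.5.0 package list above.
MANIFEST = """\
driver : aws-neuronx-dkms-2.6.33.0
tools : aws-neuronx-tools-2.5.19.0
compiler : neuron-cc-1.13.5.0
pytorch : torch-neuron-1.12.1.2.5.8.0
"""

def parse_manifest(text):
    """Split 'component : package-version' lines into a {package: version} map."""
    packages = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        entry = line.split(":", 1)[1].strip()
        # Package name is everything before the trailing dotted version number.
        match = re.match(r"(.+?)-(\d[\d.]*)$", entry)
        if match:
            packages[match.group(1)] = match.group(2)
    return packages

for package, version in parse_manifest(MANIFEST).items():
    print(package, version)
```

The resulting map can then be compared against `pip list` output to confirm an installation matches the release.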
### Release supported frameworks[#](#release-supported-frameworks "Permalink to this headline")
```
List of frameworks included in Neuron release version 2.5.0:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
## [Neuron 1.19.2 (08/02/2022)](#id63)[#](#neuron-1-19-2-08-02-2022 "Permalink to this headline")
### Release included packages[#](#id1 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.19.2:
driver : aws-neuron-dkms-2.3.26.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.3.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.7.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id2 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.19.2:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
## [Neuron 1.19.1 (05/27/2022)](#id64)[#](#neuron-1-19-1-05-27-2022 "Permalink to this headline")
### Release included packages[#](#id3 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.19.1:
driver : aws-neuron-dkms-2.3.11.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.2.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.2.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.4.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id4 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.19.1:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#dependency-software-supported-versions "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.7 |
## [Neuron 1.19.0 (04/29/2022)](#id65)[#](#neuron-1-19-0-04-29-2022 "Permalink to this headline")
### Release included packages[#](#id5 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.19.0:
driver : aws-neuron-dkms-2.3.3.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.0.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.4.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id6 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.19.0:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id7 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.7 |
## [Neuron 1.18.0 (03/25/2022)](#id66)[#](#neuron-1-18-0-03-25-2022 "Permalink to this headline")
### Release included packages[#](#id8 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.18.0:
driver : aws-neuron-dkms-2.2.14.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.8.2.0
k8-scheduler : aws-neuron-k8-scheduler-1.8.2.0
tools : aws-neuron-tools-2.0.790.0
compiler : neuron-cc-1.10.3.0
neuronperf : neuronperf-1.2.0.0
pytorch : torch-neuron-1.5.1.2.2.0.0
pytorch : torch-neuron-1.7.1.2.2.0.0
pytorch : torch-neuron-1.8.1.2.2.0.0
pytorch : torch-neuron-1.9.1.2.2.0.0
pytorch : torch-neuron-1.10.1.2.2.0.0
tensorflow : tensorflow-neuron-1.15.5.2.2.0.0
tensorflow : tensorflow-neuron-2.5.3.2.2.0.0
tensorflow : tensorflow-neuron-2.6.3.2.2.0.0
tensorflow : tensorflow-neuron-2.7.1.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.2.0.0
tensorboard : tensorboard-plugin-neuron-2.3.0.0
mxnet : mxnet_neuron-1.5.1.1.9.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id9 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.18.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id10 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.7 |
## [Neuron 1.17.2 (02/18/2022)](#id67)[#](#neuron-1-17-2-02-18-2022 "Permalink to this headline")
### Release included packages[#](#id11 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.17.2:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.14.0
tensorflow : tensorflow-neuron-2.1.4.2.1.14.0
tensorflow : tensorflow-neuron-2.2.3.2.1.14.0
tensorflow : tensorflow-neuron-2.3.4.2.1.14.0
tensorflow : tensorflow-neuron-2.4.3.2.1.14.0
tensorflow : tensorflow-neuron-2.5.2.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.14.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id12 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.17.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id13 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.17.1 (02/16/2022)](#id68)[#](#neuron-1-17-1-02-16-2022 "Permalink to this headline")
### Release included packages[#](#id14 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.17.1:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.13.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.2.2.1.13.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.13.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.13.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id15 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.17.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id16 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.17.0 (01/20/2022)](#id69)[#](#neuron-1-17-0-01-20-2022 "Permalink to this headline")
### Release included packages[#](#id17 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.17.0:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.6.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.2.2.1.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.6.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.6.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id18 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.17.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id19 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.16.3 (01/05/2022)](#id70)[#](#neuron-1-16-3-01-05-2022 "Permalink to this headline")
### Release included packages[#](#id20 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.16.3:
driver : aws-neuron-dkms-2.2.8.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.4.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.4.0
tools : aws-neuron-tools-2.0.494.0
compiler : neuron-cc-1.8.5.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.536.0
pytorch : torch-neuron-1.7.1.2.0.536.0
pytorch : torch-neuron-1.8.1.2.0.536.0
pytorch : torch-neuron-1.9.1.2.0.536.0
tensorflow : tensorflow-neuron-1.15.5.2.0.5.0
tensorflow : tensorflow-neuron-2.1.4.2.0.5.0
tensorflow : tensorflow-neuron-2.2.3.2.0.5.0
tensorflow : tensorflow-neuron-2.3.4.2.0.5.0
tensorflow : tensorflow-neuron-2.4.3.2.0.5.0
tensorflow : tensorflow-neuron-2.5.1.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.5.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.3.0
mxnet : mx_neuron-1.8.0.2.0.290.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id21 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.16.3:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id22 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.16.2 (12/15/2021)](#id71)[#](#neuron-1-16-2-12-15-2021 "Permalink to this headline")
### Release included packages[#](#id23 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.16.2:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.327.0
compiler : neuron-cc-1.8.2.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.468.0
pytorch : torch-neuron-1.7.1.2.0.468.0
pytorch : torch-neuron-1.8.1.2.0.468.0
pytorch : torch-neuron-1.9.1.2.0.468.0
tensorflow : tensorflow-neuron-1.15.5.2.0.4.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.1.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.4.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.276.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id24 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.16.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id25 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.16.1 (11/05/2021)](#id72)[#](#neuron-1-16-1-11-05-2021 "Permalink to this headline")
### Release included packages[#](#id26 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.16.1:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.327.0
compiler : neuron-cc-1.7.3.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.392.0
pytorch : torch-neuron-1.7.1.2.0.392.0
pytorch : torch-neuron-1.8.1.2.0.392.0
pytorch : torch-neuron-1.9.1.2.0.392.0
tensorflow : tensorflow-neuron-1.15.5.2.0.4.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.1.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.4.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.276.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id27 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.16.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id28 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron 1.16.0 (10/27/2021)](#id73)[#](#neuron-1-16-0-10-27-2021 "Permalink to this headline")
### Release included packages[#](#id29 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.16.0:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.15.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.277.0
compiler : neuron-cc-1.7.3.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.318.0
pytorch : torch-neuron-1.7.1.2.0.318.0
pytorch : torch-neuron-1.8.1.2.0.318.0
pytorch : torch-neuron-1.9.1.2.0.318.0
tensorflow : tensorflow-neuron-1.15.5.2.0.3.0
tensorflow : tensorflow-neuron-2.1.4.2.0.3.0
tensorflow : tensorflow-neuron-2.2.3.2.0.3.0
tensorflow : tensorflow-neuron-2.3.4.2.0.3.0
tensorflow : tensorflow-neuron-2.4.3.2.0.3.0
tensorflow : tensorflow-neuron-2.5.1.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.3.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.271.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id30 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.16.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id31 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7 |
## [Neuron v1.15.2 (September 22 2021)](#id74)[#](#neuron-v1-15-2-september-22-2021 "Permalink to this headline")
### Release included packages[#](#id32 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.15.2:
driver : aws-neuron-dkms-2.1.5.0
runtime-server : aws-neuron-runtime-1.6.24.0
k8-plugin : aws-neuron-k8-plugin-1.6.22.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.22.0
runtime-base : aws-neuron-runtime-base-1.6.21.0
tools : aws-neuron-tools-1.7.25.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.10.0
tensorflow : tensorflow-neuron-2.1.4.1.6.10.0
tensorflow : tensorflow-neuron-2.2.3.1.6.10.0
tensorflow : tensorflow-neuron-2.3.3.1.6.10.0
tensorflow : tensorflow-neuron-2.4.2.1.6.10.0
tensorflow : tensorflow-neuron-2.5.0.1.6.10.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.10.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id33 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.15.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id34 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.15.1 (August 30 2021)](#id75)[#](#neuron-v1-15-1-august-30-2021 "Permalink to this headline")
### Release included packages[#](#id35 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.15.1:
driver : aws-neuron-dkms-2.1.5.0
runtime-server : aws-neuron-runtime-1.6.24.0
k8-plugin : aws-neuron-k8-plugin-1.6.22.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.22.0
runtime-base : aws-neuron-runtime-base-1.6.21.0
tools : aws-neuron-tools-1.7.25.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.8.0
tensorflow : tensorflow-neuron-2.1.4.1.6.8.0
tensorflow : tensorflow-neuron-2.2.3.1.6.8.0
tensorflow : tensorflow-neuron-2.3.3.1.6.8.0
tensorflow : tensorflow-neuron-2.4.2.1.6.8.0
tensorflow : tensorflow-neuron-2.5.0.1.6.8.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.8.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id36 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.15.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id37 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.15.0 (August 12 2021)](#id76)[#](#neuron-v1-15-0-august-12-2021 "Permalink to this headline")
### Release included packages[#](#id38 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.15.0:
driver : aws-neuron-dkms-2.0.450.0
runtime-server : aws-neuron-runtime-1.6.19.0
k8-plugin : aws-neuron-k8-plugin-1.6.17.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.17.0
runtime-base : aws-neuron-runtime-base-1.6.16.0
tools : aws-neuron-tools-1.7.20.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.8.0
tensorflow : tensorflow-neuron-2.1.4.1.6.8.0
tensorflow : tensorflow-neuron-2.2.3.1.6.8.0
tensorflow : tensorflow-neuron-2.3.3.1.6.8.0
tensorflow : tensorflow-neuron-2.4.2.1.6.8.0
tensorflow : tensorflow-neuron-2.5.0.1.6.8.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.8.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id39 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.15.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id40 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.14.2 (July 26 2021)](#id77)[#](#neuron-v1-14-2-july-26-2021 "Permalink to this headline")
### Release included packages[#](#id41 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.14.2:
driver : aws-neuron-dkms-2.0.386.0
runtime-server : aws-neuron-runtime-1.6.9.0
k8-plugin : aws-neuron-k8-plugin-1.6.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.7.0
runtime-base : aws-neuron-runtime-base-1.6.6.0
tools : aws-neuron-tools-1.7.10.0
compiler : neuron-cc-1.5.5.0
pytorch : torch-neuron-1.5.1.1.5.12.0
pytorch : torch-neuron-1.7.1.1.5.12.0
pytorch : torch-neuron-1.8.1.1.5.12.0
tensorflow : tensorflow-neuron-1.15.5.1.5.1.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.5.1.0
mxnet : mxnet_neuron-1.5.1.1.6.1.0
mxnet : mx_neuron-1.8.0.1.3.0.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id42 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.14.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id43 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.14.1 (July 2nd 2021)](#id78)[#](#neuron-v1-14-1-july-2nd-2021 "Permalink to this headline")
### Release included packages[#](#id44 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.14.1:
driver : aws-neuron-dkms-1.5.0.0
runtime-server : aws-neuron-runtime-1.6.5.0
k8-plugin : aws-neuron-k8-plugin-1.6.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.0.0
runtime-base : aws-neuron-runtime-base-1.6.1.0
tools : aws-neuron-tools-1.7.4.0
compiler : neuron-cc-1.5.5.0
pytorch : torch-neuron-1.5.1.1.5.12.0
pytorch : torch-neuron-1.7.1.1.5.12.0
pytorch : torch-neuron-1.8.1.1.5.12.0
tensorflow : tensorflow-neuron-1.15.5.1.5.1.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.5.1.0
mxnet : mxnet_neuron-1.5.1.1.6.1.0
mxnet : mx_neuron-1.8.0.1.3.0.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id45 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.14.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id46 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.14.0 (May 28th 2021)](#id79)[#](#neuron-v1-14-0-may-28th-2021 "Permalink to this headline")
### Release included packages[#](#id47 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.14.0:
driver : aws-neuron-dkms-1.5.0.0
runtime-server : aws-neuron-runtime-1.5.0.0
k8-plugin : aws-neuron-k8-plugin-1.6.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.0.0
runtime-base : aws-neuron-runtime-base-1.5.1.0
tools : aws-neuron-tools-1.6.1.0
compiler : neuron-cc-1.4.1.0
pytorch : torch-neuron-1.5.1.1.4.1.0
pytorch : torch-neuron-1.7.1.1.4.1.0
pytorch : torch-neuron-1.8.1.1.4.1.0
tensorflow : tensorflow-neuron-1.15.5.1.4.0.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.4.0.0
mxnet : mxnet_neuron-1.5.1.1.5.1.0
mxnet : mx_neuron-1.8.0.1.2.1.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id48 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.14.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id49 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
## [Neuron v1.13.0 (May 1st 2021)](#id80)[#](#neuron-v1-13-0-may-1st-2021 "Permalink to this headline")
### Release included packages[#](#id50 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.13.0:
driver : aws-neuron-dkms-1.4.9.0
runtime-server : aws-neuron-runtime-1.4.17.0
k8-plugin : aws-neuron-k8-plugin-1.5.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.5.3.0
runtime-base : aws-neuron-runtime-base-1.4.12.0
tools : aws-neuron-tools-1.5.6.0
compiler : neuron-cc-1.3.7.0
pytorch : torch-neuron-1.5.1.1.3.5.0
pytorch : torch-neuron-1.7.1.1.3.5.0
tensorflow : tensorflow-neuron-1.15.5.1.3.3.0
tensorboard : tensorboard-plugin-neuron-2.0.29.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.3.3.0
mxnet : mxnet_neuron-1.5.1.1.4.4.0
mxnet : mx_neuron-1.8.0.1.1.2.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id51 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.13.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
```
### Dependency Software Supported Versions[#](#id52 "Permalink to this headline")
| Software | Supported |
| --- | --- |
| Python | Python 3.6, Python 3.7, Python 3.8 \[Experimental\] |
| Neuron Conda Packages | torch-neuron-1.7.1.1.3.5.0, tensorflow-neuron 1.15.5.1.3.3.0, mxnet-neuron-1.5.1.1.4.4.0 |
## [Neuron v1.12.2 (Mar 4th 2021)](#id81)[#](#neuron-v1-12-2-mar-4th-2021 "Permalink to this headline")
### Release included packages[#](#id53 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.12.2:
driver : aws-neuron-dkms-1.4.5.0
runtime-server : aws-neuron-runtime-1.4.12.0
k8-plugin : aws-neuron-k8-plugin-1.4.5.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.5.0
runtime-base : aws-neuron-runtime-base-1.4.8.0
tools : aws-neuron-tools-1.4.12.0
compiler : neuron-cc-1.2.7.0
pytorch : torch-neuron-1.5.1.1.2.16.0
pytorch : torch-neuron-1.7.1.1.2.16.0
tensorflow : tensorflow-neuron-1.15.5.1.2.9.0
tensorboard : tensorboard-neuron-1.15.0.1.2.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.9.0
mxnet : mxnet-neuron-1.5.1.1.3.8.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id54 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.12.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
```
### Dependency Software Supported Versions[#](#id55 "Permalink to this headline")
| Software | Supported | Maintenance | End Of Support |
| --- | --- | --- | --- |
| Python | Python 3.6, Python 3.7 | | Python 3.5 (2/24/2021) |
| Neuron Conda Packages | torch-neuron 1.7.1.1.2.16.0, tensorflow-neuron 1.15.5.1.2.9.0, mxnet-neuron 1.5.1.1.3.8.0 | | |
## [Neuron v1.12.1 (Feb 24th 2021)](#id82)[#](#neuron-v1-12-1-feb-24th-2021 "Permalink to this headline")
### Release included packages[#](#id56 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.12.1:
driver : aws-neuron-dkms-1.4.5.0
runtime-server : aws-neuron-runtime-1.4.9.0
k8-plugin : aws-neuron-k8-plugin-1.4.5.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.5.0
runtime-base : aws-neuron-runtime-base-1.4.8.0
tools : aws-neuron-tools-1.4.8.0
compiler : neuron-cc-1.2.7.0
pytorch : torch-neuron-1.5.1.1.2.15.0
pytorch : torch-neuron-1.7.1.1.2.15.0
tensorflow : tensorflow-neuron-1.15.5.1.2.8.0
tensorboard : tensorboard-neuron-1.15.0.1.2.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.8.0
mxnet : mxnet-neuron-1.5.1.1.3.7.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id57 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.12.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
```
### Dependency Software Supported Versions[#](#id58 "Permalink to this headline")
| Software | Supported | Maintenance | End Of Support |
| --- | --- | --- | --- |
| Python | Python 3.6, Python 3.7 | | Python 3.5 (2/24/2021) |
| Neuron Conda Packages | torch-neuron 1.7.1.1.2.15.0, tensorflow-neuron 1.15.5.1.2.8.0, mxnet-neuron 1.5.1.1.3.7.0 | | |
## [Neuron v1.12.0 (Jan 30 2021)](#id83)[#](#neuron-v1-12-0-jan-30-2021 "Permalink to this headline")
### Release included packages[#](#id59 "Permalink to this headline")
```
List of Neuron packages included in Neuron release version 1.12.0:
driver : aws-neuron-dkms-1.4.1.0
runtime-server : aws-neuron-runtime-1.4.3.0
k8-plugin : aws-neuron-k8-plugin-1.4.1.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.1.0
runtime-base : aws-neuron-runtime-base-1.4.2.0
tools : aws-neuron-tools-1.4.2.0
compiler : neuron-cc-1.2.2.0
pytorch : torch-neuron-1.5.1.1.2.3.0
pytorch : torch-neuron-1.7.1.1.2.3.0
tensorflow : tensorflow-neuron-1.15.5.1.2.2.0
tensorboard : tensorboard-neuron-1.15.0.1.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.2.0
mxnet : mxnet-neuron-1.5.1.1.3.7.0
```
See [SDK Maintenance Policy](../../../general/sdk-policy.html#neuron-maintenance-policy) for more information.
### Release supported frameworks[#](#id60 "Permalink to this headline")
```
List of frameworks included in Neuron release version 1.12.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
```
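Release identifiers such as 1.12.2, 1.13.0, and 1.14.0 order numerically field by field, which plain string sorting does not guarantee (lexically, "1.9.0" would sort after "1.14.0"). A minimal sketch for ordering them, assuming purely numeric dotted versions:

```python
def version_key(version: str):
    """Convert a dotted numeric version into a tuple of ints so
    releases compare numerically field by field, not as strings."""
    return tuple(int(part) for part in version.split("."))

releases = ["1.13.0", "1.12.2", "1.9.0", "1.14.0", "1.12.1"]
print(sorted(releases, key=version_key))
# ['1.9.0', '1.12.1', '1.12.2', '1.13.0', '1.14.0']
```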
### Dependency Software Supported Versions[#](#id61 "Permalink to this headline")
| Software | Supported | Maintenance | End Of Support |
| --- | --- | --- | --- |
| Python | Python 3.6, Python 3.7 | | |
| Neuron Conda Packages | Conda-PyTorch 1.5.1, Conda-PyTorch 1.7.1, Conda-TensorFlow 1.5.1, Conda-MXNet 1.5.1 | | |
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Previous Releases’ Content (Neuron 1.x) — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "release-notes/neuron1/prev/content", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
</div>
</div>
</div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
</div>
<div class="header-article__right">
</div>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="previous-releases-content-neuron-1-x">
<span id="pre-n1-release-content"></span><h1>Previous Releases’ Content (Neuron 1.x)<a class="headerlink" href="#previous-releases-content-neuron-1-x" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#neuron-2-5-0-11-23-2022" id="id62">Neuron 2.5.0 (11/23/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-19-2-08-02-2022" id="id63">Neuron 1.19.2 (08/02/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-19-1-05-27-2022" id="id64">Neuron 1.19.1 (05/27/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-19-0-04-29-2022" id="id65">Neuron 1.19.0 (04/29/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-18-0-03-25-2022" id="id66">Neuron 1.18.0 (03/25/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-17-2-02-18-2022" id="id67">Neuron 1.17.2 (02/18/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-17-1-02-16-2022" id="id68">Neuron 1.17.1 (02/16/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-17-0-01-20-2022" id="id69">Neuron 1.17.0 (01/20/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-16-3-01-05-2022" id="id70">Neuron 1.16.3 (01/05/2022)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-16-2-12-15-2021" id="id71">Neuron 1.16.2 (12/15/2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-16-1-11-05-2021" id="id72">Neuron 1.16.1 (11/05/2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-1-16-0-10-27-2021" id="id73">Neuron 1.16.0 (10/27/2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-15-2-september-22-2021" id="id74">Neuron v1.15.2 (September 22 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-15-1-august-30-2021" id="id75">Neuron v1.15.1 (August 30 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-15-0-august-12-2021" id="id76">Neuron v1.15.0 (August 12 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-14-2-july-26-2021" id="id77">Neuron v1.14.2 (July 26 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-14-1-july-2nd-2021" id="id78">Neuron v1.14.1 (July 2nd 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-14-0-may-28th-2021" id="id79">Neuron v1.14.0 (May 28th 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-13-0-may-1st-2021" id="id80">Neuron v1.13.0 (May 1st 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-12-2-mar-4th-2021" id="id81">Neuron v1.12.2 (Mar 4th 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-12-1-feb-24th-2021" id="id82">Neuron v1.12.1 (Feb 24th 2021)</a></p></li>
<li><p><a class="reference internal" href="#neuron-v1-12-0-jan-30-2021" id="id83">Neuron v1.12.0 (Jan 30 2021)</a></p></li>
</ul>
</div>
<div class="section" id="neuron-2-5-0-11-23-2022">
<h2><a class="toc-backref" href="#id62">Neuron 2.5.0 (11/23/2022)</a><a class="headerlink" href="#neuron-2-5-0-11-23-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="release-included-packages">
<h3>Release included packages<a class="headerlink" href="#release-included-packages" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 2.5.0:
driver : aws-neuronx-dkms-2.6.33.0
libnrt : libnrt.so (version 2.10.27.0)
k8-plugin : aws-neuronx-k8-plugin-2.1.12.0
k8-scheduler : aws-neuronx-k8-scheduler-2.1.12.0
tools : aws-neuronx-tools-2.5.19.0
compiler : neuron-cc-1.13.5.0
neuronperf : neuronperf-1.6.1.0
pytorch : torch-neuron-1.7.1.2.5.8.0
pytorch : torch-neuron-1.8.1.2.5.8.0
pytorch : torch-neuron-1.9.1.2.5.8.0
pytorch : torch-neuron-1.10.2.2.5.8.0
pytorch : torch-neuron-1.11.0.2.5.8.0
pytorch : torch-neuron-1.12.1.2.5.8.0
tensorflow : tensorflow-neuron-1.15.5.2.5.6.0
tensorflow : tensorflow-neuron-2.5.3.2.5.6.0
tensorflow : tensorflow-neuron-2.6.5.2.5.6.0
tensorflow : tensorflow-neuron-2.7.3.2.5.6.0
tensorflow : tensorflow-neuron-2.8.2.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-1.15.0.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.5.4.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.6.3.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.7.0.2.5.6.0
tensorflow-model-server : tensorflow-model-server-neuronx-2.8.0.2.5.6.0
tensorboard : tensorboard-plugin-neuron-2.4.6.0
mxnet : mxnet_neuron-1.5.1.1.10.11.0
mxnet : mx_neuron-1.8.0.2.2.43.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
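<p>Each framework package name in the list above encodes two versions: the upstream framework version followed by the Neuron package build version (for example, <code class="docutils literal notranslate"><span class="pre">torch-neuron-1.7.1.2.5.8.0</span></code> pairs PyTorch 1.7.1 with Neuron build 2.5.8.0). The small helper below is an illustration only, not part of the Neuron SDK; it assumes the Neuron build version is always the trailing four dot-separated components, as it is for every framework package in these lists.</p>

```python
def split_neuron_version(package: str) -> tuple[str, str, str]:
    """Split a Neuron framework package name such as
    'torch-neuron-1.7.1.2.5.8.0' into
    (package name, framework version, Neuron build version).

    Assumption (based on the release lists above): the Neuron
    build version is the last four dot-separated components.
    """
    # Everything after the final '-' is the combined version string.
    name, _, versions = package.rpartition("-")
    parts = versions.split(".")
    framework = ".".join(parts[:-4])  # e.g. '1.7.1'
    build = ".".join(parts[-4:])      # e.g. '2.5.8.0'
    return name, framework, build


print(split_neuron_version("torch-neuron-1.7.1.2.5.8.0"))
# ('torch-neuron', '1.7.1', '2.5.8.0')
```

<p>The same split applies to the TensorFlow and MXNet entries (for example, <code class="docutils literal notranslate"><span class="pre">tensorflow-neuron-1.15.5.2.5.6.0</span></code> yields framework 1.15.5 and build 2.5.6.0); it is not meant for the driver or tools packages, whose names carry only a single version.</p>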
</div>
<div class="section" id="release-supported-frameworks">
<h3>Release supported frameworks<a class="headerlink" href="#release-supported-frameworks" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 2.5.0:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
pytorch : pytorch-1.12.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.5
tensorflow : tensorflow-2.7.3
tensorflow : tensorflow-2.8.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
</div>
<div class="section" id="neuron-1-19-2-08-02-2022">
<h2><a class="toc-backref" href="#id63">Neuron 1.19.2 (08/02/2022)</a><a class="headerlink" href="#neuron-1-19-2-08-02-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id1">
<h3>Release included packages<a class="headerlink" href="#id1" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.19.2:
driver : aws-neuron-dkms-2.3.26.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.3.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.7.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id2">
<h3>Release supported frameworks<a class="headerlink" href="#id2" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.19.2:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
</div>
<div class="section" id="neuron-1-19-1-05-27-2022">
<h2><a class="toc-backref" href="#id64">Neuron 1.19.1 (05/27/2022)</a><a class="headerlink" href="#neuron-1-19-1-05-27-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id3">
<h3>Release included packages<a class="headerlink" href="#id3" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.19.1:
driver : aws-neuron-dkms-2.3.11.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.2.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.2.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.4.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id4">
<h3>Release supported frameworks<a class="headerlink" href="#id4" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.19.1:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="dependency-software-supported-versions">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#dependency-software-supported-versions" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><p>Python 3.7</p></td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-19-0-04-29-2022">
<h2><a class="toc-backref" href="#id65">Neuron 1.19.0 (04/29/2022)</a><a class="headerlink" href="#neuron-1-19-0-04-29-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id5">
<h3>Release included packages<a class="headerlink" href="#id5" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.19.0:
driver : aws-neuron-dkms-2.3.3.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.9.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.9.0.0
tools : aws-neuron-tools-2.1.4.0
compiler : neuron-cc-1.11.4.0
neuronperf : neuronperf-1.3.0.0
pytorch : torch-neuron-1.7.1.2.3.0.0
pytorch : torch-neuron-1.8.1.2.3.0.0
pytorch : torch-neuron-1.9.1.2.3.0.0
pytorch : torch-neuron-1.10.2.2.3.0.0
pytorch : torch-neuron-1.11.0.2.3.0.0
tensorflow : tensorflow-neuron-1.15.5.2.3.0.0
tensorflow : tensorflow-neuron-2.5.3.2.3.0.0
tensorflow : tensorflow-neuron-2.6.3.2.3.0.0
tensorflow : tensorflow-neuron-2.7.1.2.3.0.0
tensorflow : tensorflow-neuron-2.8.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.3.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.8.0.2.3.0.0
tensorboard : tensorboard-plugin-neuron-2.4.0.0
mxnet : mxnet_neuron-1.5.1.1.10.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id6">
<h3>Release supported frameworks<a class="headerlink" href="#id6" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.19.0:
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.2
pytorch : pytorch-1.11.0
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
tensorflow : tensorflow-2.8.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id7">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id7" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><p>Python 3.7</p></td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-18-0-03-25-2022">
<h2><a class="toc-backref" href="#id66">Neuron 1.18.0 (03/25/2022)</a><a class="headerlink" href="#neuron-1-18-0-03-25-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id8">
<h3>Release included packages<a class="headerlink" href="#id8" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.18.0:
driver : aws-neuron-dkms-2.2.14.0
libnrt : libnrt.so (version 2.2.51.0)
k8-plugin : aws-neuron-k8-plugin-1.8.2.0
k8-scheduler : aws-neuron-k8-scheduler-1.8.2.0
tools : aws-neuron-tools-2.0.790.0
compiler : neuron-cc-1.10.3.0
neuronperf : neuronperf-1.2.0.0
pytorch : torch-neuron-1.5.1.2.2.0.0
pytorch : torch-neuron-1.7.1.2.2.0.0
pytorch : torch-neuron-1.8.1.2.2.0.0
pytorch : torch-neuron-1.9.1.2.2.0.0
pytorch : torch-neuron-1.10.1.2.2.0.0
tensorflow : tensorflow-neuron-1.15.5.2.2.0.0
tensorflow : tensorflow-neuron-2.5.3.2.2.0.0
tensorflow : tensorflow-neuron-2.6.3.2.2.0.0
tensorflow : tensorflow-neuron-2.7.1.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.4.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.6.3.2.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-2.7.0.2.2.0.0
tensorboard : tensorboard-plugin-neuron-2.3.0.0
mxnet : mxnet_neuron-1.5.1.1.9.0.0
mxnet : mx_neuron-1.8.0.2.2.2.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id9">
<h3>Release supported frameworks<a class="headerlink" href="#id9" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.18.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.5.3
tensorflow : tensorflow-2.6.3
tensorflow : tensorflow-2.7.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id10">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id10" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><p>Python 3.7</p></td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-17-2-02-18-2022">
<h2><a class="toc-backref" href="#id67">Neuron 1.17.2 (02/18/2022)</a><a class="headerlink" href="#neuron-1-17-2-02-18-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id11">
<h3>Release included packages<a class="headerlink" href="#id11" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.17.2:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.14.0
tensorflow : tensorflow-neuron-2.1.4.2.1.14.0
tensorflow : tensorflow-neuron-2.2.3.2.1.14.0
tensorflow : tensorflow-neuron-2.3.4.2.1.14.0
tensorflow : tensorflow-neuron-2.4.3.2.1.14.0
tensorflow : tensorflow-neuron-2.5.2.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.1.14.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.14.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id12">
<h3>Release supported frameworks<a class="headerlink" href="#id12" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.17.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id13">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id13" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-17-1-02-16-2022">
<h2><a class="toc-backref" href="#id68">Neuron 1.17.1 (02/16/2022)</a><a class="headerlink" href="#neuron-1-17-1-02-16-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id14">
<h3>Release included packages<a class="headerlink" href="#id14" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.17.1:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.13.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.2.2.1.13.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.13.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.13.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id15">
<h3>Release supported frameworks<a class="headerlink" href="#id15" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.17.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id16">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id16" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-17-0-01-20-2022">
<h2><a class="toc-backref" href="#id69">Neuron 1.17.0 (01/20/2022)</a><a class="headerlink" href="#neuron-1-17-0-01-20-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id17">
<h3>Release included packages<a class="headerlink" href="#id17" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.17.0:
driver : aws-neuron-dkms-2.2.13.0
libnrt : libnrt.so (version 2.2.31.0)
k8-plugin : aws-neuron-k8-plugin-1.7.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.7.0
tools : aws-neuron-tools-2.0.623.0
compiler : neuron-cc-1.9.1.0
neuronperf : neuronperf-1.1.0.0
pytorch : torch-neuron-1.5.1.2.1.7.0
pytorch : torch-neuron-1.7.1.2.1.7.0
pytorch : torch-neuron-1.8.1.2.1.7.0
pytorch : torch-neuron-1.9.1.2.1.7.0
pytorch : torch-neuron-1.10.1.2.1.7.0
tensorflow : tensorflow-neuron-1.15.5.2.1.6.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.2.2.1.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.1.6.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.3.2.1.6.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.8.0.0
mxnet : mx_neuron-1.8.0.2.1.5.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id18">
<h3>Release supported frameworks<a class="headerlink" href="#id18" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.17.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
pytorch : pytorch-1.10.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.2
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id19">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id19" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-16-3-01-05-2022">
<h2><a class="toc-backref" href="#id70">Neuron 1.16.3 (01/05/2022)</a><a class="headerlink" href="#neuron-1-16-3-01-05-2022" title="Permalink to this headline">#</a></h2>
<div class="section" id="id20">
<h3>Release included packages<a class="headerlink" href="#id20" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.16.3:
driver : aws-neuron-dkms-2.2.8.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.4.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.4.0
tools : aws-neuron-tools-2.0.494.0
compiler : neuron-cc-1.8.5.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.536.0
pytorch : torch-neuron-1.7.1.2.0.536.0
pytorch : torch-neuron-1.8.1.2.0.536.0
pytorch : torch-neuron-1.9.1.2.0.536.0
tensorflow : tensorflow-neuron-1.15.5.2.0.5.0
tensorflow : tensorflow-neuron-2.1.4.2.0.5.0
tensorflow : tensorflow-neuron-2.2.3.2.0.5.0
tensorflow : tensorflow-neuron-2.3.4.2.0.5.0
tensorflow : tensorflow-neuron-2.4.3.2.0.5.0
tensorflow : tensorflow-neuron-2.5.1.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.5.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.5.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.3.0
mxnet : mx_neuron-1.8.0.2.0.290.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id21">
<h3>Release supported frameworks<a class="headerlink" href="#id21" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.16.3:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id22">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id22" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-16-2-12-15-2021">
<h2><a class="toc-backref" href="#id71">Neuron 1.16.2 (12/15/2021)</a><a class="headerlink" href="#neuron-1-16-2-12-15-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id23">
<h3>Release included packages<a class="headerlink" href="#id23" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.16.2:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.327.0
compiler : neuron-cc-1.8.2.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.468.0
pytorch : torch-neuron-1.7.1.2.0.468.0
pytorch : torch-neuron-1.8.1.2.0.468.0
pytorch : torch-neuron-1.9.1.2.0.468.0
tensorflow : tensorflow-neuron-1.15.5.2.0.4.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.1.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.4.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.276.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id24">
<h3>Release supported frameworks<a class="headerlink" href="#id24" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.16.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id25">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id25" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-16-1-11-05-2021">
<h2><a class="toc-backref" href="#id72">Neuron 1.16.1 (11/05/2021)</a><a class="headerlink" href="#neuron-1-16-1-11-05-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id26">
<h3>Release included packages<a class="headerlink" href="#id26" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.16.1:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.18.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.327.0
compiler : neuron-cc-1.7.3.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.392.0
pytorch : torch-neuron-1.7.1.2.0.392.0
pytorch : torch-neuron-1.8.1.2.0.392.0
pytorch : torch-neuron-1.9.1.2.0.392.0
tensorflow : tensorflow-neuron-1.15.5.2.0.4.0
tensorflow : tensorflow-neuron-2.1.4.2.0.4.0
tensorflow : tensorflow-neuron-2.2.3.2.0.4.0
tensorflow : tensorflow-neuron-2.3.4.2.0.4.0
tensorflow : tensorflow-neuron-2.4.3.2.0.4.0
tensorflow : tensorflow-neuron-2.5.1.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.4.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.4.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.276.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id27">
<h3>Release supported frameworks<a class="headerlink" href="#id27" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.16.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id28">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id28" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-1-16-0-10-27-2021">
<h2><a class="toc-backref" href="#id73">Neuron 1.16.0 (10/27/2021)</a><a class="headerlink" href="#neuron-1-16-0-10-27-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id29">
<h3>Release included packages<a class="headerlink" href="#id29" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.16.0:
driver : aws-neuron-dkms-2.2.6.0
libnrt : libnrt.so (version 2.2.15.0)
k8-plugin : aws-neuron-k8-plugin-1.7.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.7.3.0
tools : aws-neuron-tools-2.0.277.0
compiler : neuron-cc-1.7.3.0
neuronperf : neuronperf-1.0.85.0
pytorch : torch-neuron-1.5.1.2.0.318.0
pytorch : torch-neuron-1.7.1.2.0.318.0
pytorch : torch-neuron-1.8.1.2.0.318.0
pytorch : torch-neuron-1.9.1.2.0.318.0
tensorflow : tensorflow-neuron-1.15.5.2.0.3.0
tensorflow : tensorflow-neuron-2.1.4.2.0.3.0
tensorflow : tensorflow-neuron-2.2.3.2.0.3.0
tensorflow : tensorflow-neuron-2.3.4.2.0.3.0
tensorflow : tensorflow-neuron-2.4.3.2.0.3.0
tensorflow : tensorflow-neuron-2.5.1.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.3.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.4.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.3.2.0.3.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.2.2.0.3.0
tensorboard : tensorboard-plugin-neuron-2.2.0.0
mxnet : mxnet_neuron-1.5.1.1.7.0.0
mxnet : mx_neuron-1.8.0.2.0.271.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id30">
<h3>Release supported frameworks<a class="headerlink" href="#id30" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.16.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
pytorch : pytorch-1.9.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.4
tensorflow : tensorflow-2.4.3
tensorflow : tensorflow-2.5.1
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id31">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id31" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-15-2-september-22-2021">
<h2><a class="toc-backref" href="#id74">Neuron v1.15.2 (September 22 2021)</a><a class="headerlink" href="#neuron-v1-15-2-september-22-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id32">
<h3>Release included packages<a class="headerlink" href="#id32" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.15.2:
driver : aws-neuron-dkms-2.1.5.0
runtime-server : aws-neuron-runtime-1.6.24.0
k8-plugin : aws-neuron-k8-plugin-1.6.22.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.22.0
runtime-base : aws-neuron-runtime-base-1.6.21.0
tools : aws-neuron-tools-1.7.25.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.10.0
tensorflow : tensorflow-neuron-2.1.4.1.6.10.0
tensorflow : tensorflow-neuron-2.2.3.1.6.10.0
tensorflow : tensorflow-neuron-2.3.3.1.6.10.0
tensorflow : tensorflow-neuron-2.4.2.1.6.10.0
tensorflow : tensorflow-neuron-2.5.0.1.6.10.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.10.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.10.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id33">
<h3>Release supported frameworks<a class="headerlink" href="#id33" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.15.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id34">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id34" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-15-1-august-30-2021">
<h2><a class="toc-backref" href="#id75">Neuron v1.15.1 (August 30 2021)</a><a class="headerlink" href="#neuron-v1-15-1-august-30-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id35">
<h3>Release included packages<a class="headerlink" href="#id35" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.15.1:
driver : aws-neuron-dkms-2.1.5.0
runtime-server : aws-neuron-runtime-1.6.24.0
k8-plugin : aws-neuron-k8-plugin-1.6.22.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.22.0
runtime-base : aws-neuron-runtime-base-1.6.21.0
tools : aws-neuron-tools-1.7.25.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.8.0
tensorflow : tensorflow-neuron-2.1.4.1.6.8.0
tensorflow : tensorflow-neuron-2.2.3.1.6.8.0
tensorflow : tensorflow-neuron-2.3.3.1.6.8.0
tensorflow : tensorflow-neuron-2.4.2.1.6.8.0
tensorflow : tensorflow-neuron-2.5.0.1.6.8.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.8.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id36">
<h3>Release supported frameworks<a class="headerlink" href="#id36" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.15.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id37">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id37" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-15-0-august-12-2021">
<h2><a class="toc-backref" href="#id76">Neuron v1.15.0 (August 12 2021)</a><a class="headerlink" href="#neuron-v1-15-0-august-12-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id38">
<h3>Release included packages<a class="headerlink" href="#id38" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.15.0:
driver : aws-neuron-dkms-2.0.450.0
runtime-server : aws-neuron-runtime-1.6.19.0
k8-plugin : aws-neuron-k8-plugin-1.6.17.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.17.0
runtime-base : aws-neuron-runtime-base-1.6.16.0
tools : aws-neuron-tools-1.7.20.0
compiler : neuron-cc-1.6.13.0
pytorch : torch-neuron-1.5.1.1.5.21.0
pytorch : torch-neuron-1.7.1.1.5.21.0
pytorch : torch-neuron-1.8.1.1.5.21.0
tensorflow : tensorflow-neuron-1.15.5.1.6.8.0
tensorflow : tensorflow-neuron-2.1.4.1.6.8.0
tensorflow : tensorflow-neuron-2.2.3.1.6.8.0
tensorflow : tensorflow-neuron-2.3.3.1.6.8.0
tensorflow : tensorflow-neuron-2.4.2.1.6.8.0
tensorflow : tensorflow-neuron-2.5.0.1.6.8.0
tensorboard : tensorboard-plugin-neuron-2.1.2.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.1.4.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.2.2.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.3.0.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.4.1.1.6.8.0
tensorflow-model-server : tensorflow-model-server-neuron-2.5.1.1.6.8.0
mxnet : mxnet_neuron-1.5.1.1.6.5.0
mxnet : mx_neuron-1.8.0.1.3.4.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id39">
<h3>Release supported frameworks<a class="headerlink" href="#id39" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.15.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
tensorflow : tensorflow-2.1.4
tensorflow : tensorflow-2.2.3
tensorflow : tensorflow-2.3.3
tensorflow : tensorflow-2.4.2
tensorflow : tensorflow-2.5.0
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id40">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id40" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-14-2-july-26-2021">
<h2><a class="toc-backref" href="#id77">Neuron v1.14.2 (July 26 2021)</a><a class="headerlink" href="#neuron-v1-14-2-july-26-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id41">
<h3>Release included packages<a class="headerlink" href="#id41" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.14.2:
driver : aws-neuron-dkms-2.0.386.0
runtime-server : aws-neuron-runtime-1.6.9.0
k8-plugin : aws-neuron-k8-plugin-1.6.7.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.7.0
runtime-base : aws-neuron-runtime-base-1.6.6.0
tools : aws-neuron-tools-1.7.10.0
compiler : neuron-cc-1.5.5.0
pytorch : torch-neuron-1.5.1.1.5.12.0
pytorch : torch-neuron-1.7.1.1.5.12.0
pytorch : torch-neuron-1.8.1.1.5.12.0
tensorflow : tensorflow-neuron-1.15.5.1.5.1.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.5.1.0
mxnet : mxnet_neuron-1.5.1.1.6.1.0
mxnet : mx_neuron-1.8.0.1.3.0.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id42">
<h3>Release supported frameworks<a class="headerlink" href="#id42" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.14.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id43">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id43" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-14-1-july-2nd-2021">
<h2><a class="toc-backref" href="#id78">Neuron v1.14.1 (July 2nd 2021)</a><a class="headerlink" href="#neuron-v1-14-1-july-2nd-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id44">
<h3>Release included packages<a class="headerlink" href="#id44" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.14.1:
driver : aws-neuron-dkms-1.5.0.0
runtime-server : aws-neuron-runtime-1.6.5.0
k8-plugin : aws-neuron-k8-plugin-1.6.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.0.0
runtime-base : aws-neuron-runtime-base-1.6.1.0
tools : aws-neuron-tools-1.7.4.0
compiler : neuron-cc-1.5.5.0
pytorch : torch-neuron-1.5.1.1.5.12.0
pytorch : torch-neuron-1.7.1.1.5.12.0
pytorch : torch-neuron-1.8.1.1.5.12.0
tensorflow : tensorflow-neuron-1.15.5.1.5.1.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.5.1.0
mxnet : mxnet_neuron-1.5.1.1.6.1.0
mxnet : mx_neuron-1.8.0.1.3.0.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id45">
<h3>Release supported frameworks<a class="headerlink" href="#id45" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.14.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id46">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id46" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-14-0-may-28th-2021">
<h2><a class="toc-backref" href="#id79">Neuron v1.14.0 (May 28th 2021)</a><a class="headerlink" href="#neuron-v1-14-0-may-28th-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id47">
<h3>Release included packages<a class="headerlink" href="#id47" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.14.0:
driver : aws-neuron-dkms-1.5.0.0
runtime-server : aws-neuron-runtime-1.5.0.0
k8-plugin : aws-neuron-k8-plugin-1.6.0.0
k8-scheduler : aws-neuron-k8-scheduler-1.6.0.0
runtime-base : aws-neuron-runtime-base-1.5.1.0
tools : aws-neuron-tools-1.6.1.0
compiler : neuron-cc-1.4.1.0
pytorch : torch-neuron-1.5.1.1.4.1.0
pytorch : torch-neuron-1.7.1.1.4.1.0
pytorch : torch-neuron-1.8.1.1.4.1.0
tensorflow : tensorflow-neuron-1.15.5.1.4.0.0
tensorboard : tensorboard-plugin-neuron-2.1.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.4.0.0
mxnet : mxnet_neuron-1.5.1.1.5.1.0
mxnet : mx_neuron-1.8.0.1.2.1.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id48">
<h3>Release supported frameworks<a class="headerlink" href="#id48" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.14.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
pytorch : pytorch-1.8.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id49">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id49" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-13-0-may-1st-2021">
<h2><a class="toc-backref" href="#id80">Neuron v1.13.0 (May 1st 2021)</a><a class="headerlink" href="#neuron-v1-13-0-may-1st-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id50">
<h3>Release included packages<a class="headerlink" href="#id50" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.13.0:
driver : aws-neuron-dkms-1.4.9.0
runtime-server : aws-neuron-runtime-1.4.17.0
k8-plugin : aws-neuron-k8-plugin-1.5.3.0
k8-scheduler : aws-neuron-k8-scheduler-1.5.3.0
runtime-base : aws-neuron-runtime-base-1.4.12.0
tools : aws-neuron-tools-1.5.6.0
compiler : neuron-cc-1.3.7.0
pytorch : torch-neuron-1.5.1.1.3.5.0
pytorch : torch-neuron-1.7.1.1.3.5.0
tensorflow : tensorflow-neuron-1.15.5.1.3.3.0
tensorboard : tensorboard-plugin-neuron-2.0.29.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.3.3.0
mxnet : mxnet_neuron-1.5.1.1.4.4.0
mxnet : mx_neuron-1.8.0.1.1.2.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id51">
<h3>Release supported frameworks<a class="headerlink" href="#id51" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.13.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
mxnet : mxnet-1.8.0
</pre></div>
</div>
</div>
<div class="section" id="id52">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id52" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
<li><p>Python 3.8 [Experimental]</p></li>
</ul>
</td>
</tr>
<tr class="row-odd"><td><p>Neuron Conda Packages</p></td>
<td><ul class="simple">
<li><p>torch-neuron-1.7.1.1.3.5.0</p></li>
<li><p>tensorflow-neuron 1.15.5.1.3.3.0</p></li>
<li><p>mxnet-neuron-1.5.1.1.4.4.0</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-12-2-mar-4th-2021">
<h2><a class="toc-backref" href="#id81">Neuron v1.12.2 (Mar 4th 2021)</a><a class="headerlink" href="#neuron-v1-12-2-mar-4th-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id53">
<h3>Release included packages<a class="headerlink" href="#id53" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.12.2:
driver : aws-neuron-dkms-1.4.5.0
runtime-server : aws-neuron-runtime-1.4.12.0
k8-plugin : aws-neuron-k8-plugin-1.4.5.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.5.0
runtime-base : aws-neuron-runtime-base-1.4.8.0
tools : aws-neuron-tools-1.4.12.0
compiler : neuron-cc-1.2.7.0
pytorch : torch-neuron-1.5.1.1.2.16.0
pytorch : torch-neuron-1.7.1.1.2.16.0
tensorflow : tensorflow-neuron-1.15.5.1.2.9.0
tensorboard : tensorboard-neuron-1.15.0.1.2.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.9.0
mxnet : mxnet-neuron-1.5.1.1.3.8.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id54">
<h3>Release supported frameworks<a class="headerlink" href="#id54" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.12.2:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
</pre></div>
</div>
</div>
<div class="section" id="id55">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id55" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
<th class="head"><p>Maintenance</p></th>
<th class="head"><p>End Of Support</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
<td></td>
<td><ul class="simple">
<li><p>Python 3.5 (2/24/2021)</p></li>
</ul>
</td>
</tr>
<tr class="row-odd"><td><p>Neuron Conda Packages</p></td>
<td><ul class="simple">
<li><p>torch-neuron 1.7.1.1.2.16.0</p></li>
<li><p>tensorflow-neuron 1.15.5.1.2.9.0</p></li>
<li><p>mxnet-neuron 1.5.1.1.3.8.0</p></li>
</ul>
</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-12-1-feb-24th-2021">
<h2><a class="toc-backref" href="#id82">Neuron v1.12.1 (Feb 24th 2021)</a><a class="headerlink" href="#neuron-v1-12-1-feb-24th-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id56">
<h3>Release included packages<a class="headerlink" href="#id56" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.12.1:
driver : aws-neuron-dkms-1.4.5.0
runtime-server : aws-neuron-runtime-1.4.9.0
k8-plugin : aws-neuron-k8-plugin-1.4.5.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.5.0
runtime-base : aws-neuron-runtime-base-1.4.8.0
tools : aws-neuron-tools-1.4.8.0
compiler : neuron-cc-1.2.7.0
pytorch : torch-neuron-1.5.1.1.2.15.0
pytorch : torch-neuron-1.7.1.1.2.15.0
tensorflow : tensorflow-neuron-1.15.5.1.2.8.0
tensorboard : tensorboard-neuron-1.15.0.1.2.6.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.8.0
mxnet : mxnet-neuron-1.5.1.1.3.7.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id57">
<h3>Release supported frameworks<a class="headerlink" href="#id57" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.12.1:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
</pre></div>
</div>
</div>
<div class="section" id="id58">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id58" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
<th class="head"><p>Maintenance</p></th>
<th class="head"><p>End Of Support</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
<td></td>
<td><ul class="simple">
<li><p>Python 3.5 (2/24/2021)</p></li>
</ul>
</td>
</tr>
<tr class="row-odd"><td><p>Neuron Conda Packages</p></td>
<td><ul class="simple">
<li><p>torch-neuron 1.7.1.1.2.15.0</p></li>
<li><p>tensorflow-neuron 1.15.5.1.2.8.0</p></li>
<li><p>mxnet-neuron 1.5.1.1.3.7.0</p></li>
</ul>
</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="section" id="neuron-v1-12-0-jan-30-2021">
<h2><a class="toc-backref" href="#id83">Neuron v1.12.0 (Jan 30 2021)</a><a class="headerlink" href="#neuron-v1-12-0-jan-30-2021" title="Permalink to this headline">#</a></h2>
<div class="section" id="id59">
<h3>Release included packages<a class="headerlink" href="#id59" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of Neuron packages included in Neuron release version 1.12.0:
driver : aws-neuron-dkms-1.4.1.0
runtime-server : aws-neuron-runtime-1.4.3.0
k8-plugin : aws-neuron-k8-plugin-1.4.1.0
k8-scheduler : aws-neuron-k8-scheduler-1.4.1.0
runtime-base : aws-neuron-runtime-base-1.4.2.0
tools : aws-neuron-tools-1.4.2.0
compiler : neuron-cc-1.2.2.0
pytorch : torch-neuron-1.5.1.1.2.3.0
pytorch : torch-neuron-1.7.1.1.2.3.0
tensorflow : tensorflow-neuron-1.15.5.1.2.2.0
tensorboard : tensorboard-neuron-1.15.0.1.2.0.0
tensorflow-model-server : tensorflow-model-server-neuron-1.15.0.1.2.2.0
mxnet : mxnet-neuron-1.5.1.1.3.7.0
</pre></div>
</div>
<p>See <a class="reference internal" href="../../../general/sdk-policy.html#neuron-maintenance-policy"><span class="std std-ref">SDK Maintenance Policy</span></a> for more information.</p>
</div>
<div class="section" id="id60">
<h3>Release supported frameworks<a class="headerlink" href="#id60" title="Permalink to this headline">#</a></h3>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>List of frameworks included in Neuron release version 1.12.0:
pytorch : pytorch-1.5.1
pytorch : pytorch-1.7.1
tensorflow : tensorflow-1.15.5
mxnet : mxnet-1.5.1
</pre></div>
</div>
</div>
<div class="section" id="id61">
<h3>Dependency Software Supported Versions<a class="headerlink" href="#id61" title="Permalink to this headline">#</a></h3>
<table class="colwidths-auto table">
<thead>
<tr class="row-odd"><th class="head"><p>Software</p></th>
<th class="head"><p>Supported</p></th>
<th class="head"><p>Maintenance</p></th>
<th class="head"><p>End Of Support</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Python</p></td>
<td><ul class="simple">
<li><p>Python 3.6</p></li>
<li><p>Python 3.7</p></li>
</ul>
</td>
<td></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Neuron Conda Packages</p></td>
<td><ul class="simple">
<li><p>Conda-PyTorch 1.5.1, Conda-PyTorch 1.7.1,</p></li>
<li><p>Conda-TensorFlow 1.5.1, Conda-MXNet 1.5.1</p></li>
</ul>
</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html> | 2023-09-29T20:55:34.239Z |
Deploy Neuron Container on EC2 — AWS Neuron Documentation | https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/devflows/inference/dlc-then-ec2-devflow.html | # Deploy Neuron Container on EC2 — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Deploy Neuron Container on EC2[#](#deploy-neuron-container-on-ec2 "Permalink to this headline")
Table of Contents
- [Description](#description)
- [Setup Environment](#setup-environment)
## [Description](#id1)[#](#description "Permalink to this headline")
[](../../../_images/dlc-on-ec2-dev-flow.png)
You can use the Neuron version of the [AWS Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-ec2-tutorials-inference.html) to run inference on Inf1 instances. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI), pull the container image with the Neuron version of the desired framework, and run the container as a server for the already compiled model. This developer flow assumes the model has already been compiled through a [compilation developer flow](dev-flows.html#compilation-flow-target).
## [Setup Environment](#id2)[#](#setup-environment "Permalink to this headline")
1. Launch an Inf1 Instance
- Please follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an Inf1 instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- Select your Amazon Machine Image (AMI) of choice. Please note that Neuron supports the Ubuntu 18 AMI and the Amazon Linux 2 AMI; you can also choose an Ubuntu 18 or Amazon Linux 2 Deep Learning AMI (DLAMI).
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance
2. Once you have your EC2 environment set according to [Tutorial Docker environment setup](../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup), you can build and run a Neuron container using the [Tutorial How to Build and Run a Neuron Container](../../../containers/tutorials/build-run-neuron-container.html#how-to-build-neuron-container) section above.
Note
**Prior to running the container**, make sure that the Neuron runtime on the instance is turned off by running the command:
```
sudo service neuron-rtd stop
```
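As a rough sketch of the build-and-run step above, the pull-and-serve commands generally take the following shape. The ECR registry account, repository name, image tag, and serving port below are illustrative placeholders, not official values; take the real image URI from the AWS Deep Learning Containers documentation. The snippet only composes the `docker run` command and prints it, so you can inspect it before executing:

```shell
# Illustrative placeholders -- substitute the real image URI from the
# AWS Deep Learning Containers image list for your region and framework.
REGION="us-east-1"
IMAGE="<aws-dlc-account-id>.dkr.ecr.${REGION}.amazonaws.com/tensorflow-inference-neuron:<tag>"

# Expose the first Inferentia device to the container and publish the
# serving port; the command is composed and echoed, not executed.
RUN_CMD="docker run -d --device=/dev/neuron0 -p 8500:8500 ${IMAGE}"
echo "${RUN_CMD}"
```

Running the printed command (after `docker pull "${IMAGE}"`) starts the container as a detached model server; add one `--device` flag per Inferentia device you want visible inside the container.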
_This document is relevant for_: `Inf1` | <!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Deploy Neuron Container on EC2 — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "general/devflows/inference/dlc-then-ec2-devflow", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! Check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
PyTorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
TensorFlow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What's New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuronx.html">
TensorFlow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuron.html">
TensorFlow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/mxnet-neuron.html">
MXNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
</div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#description">
Description
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#setup-environment">
Setup Environment
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="deploy-neuron-container-on-ec2">
<span id="dlc-then-ec2-devflow"></span><h1>Deploy Neuron Container on EC2<a class="headerlink" href="#deploy-neuron-container-on-ec2" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of Contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#description" id="id1">Description</a></p></li>
<li><p><a class="reference internal" href="#setup-environment" id="id2">Setup Environment</a></p></li>
</ul>
</div>
<div class="section" id="description">
<h2><a class="toc-backref" href="#id1">Description</a><a class="headerlink" href="#description" title="Permalink to this headline">#</a></h2>
<p><a class="reference internal" href="../../../_images/dlc-on-ec2-dev-flow.png"><img alt="Neuron developer flow for DLC on EC2" class="align-middle" src="../../../_images/dlc-on-ec2-dev-flow.png" style="width: 500px;"></a></p>
<p>You can use the Neuron version of the <a class="reference external" href="https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-ec2-tutorials-inference.html">AWS Deep Learning Containers</a> to run inference on Inf1 instances. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI), pull the container image with the Neuron version of the desired framework, and run the container as a server for the already compiled model. This developer flow assumes the model has already been compiled through a <a class="reference internal" href="dev-flows.html#compilation-flow-target"><span class="std std-ref">compilation developer flow</span></a>.</p>
</div>
<div class="section" id="setup-environment">
<span id="dlc-then-ec2-setenv"></span><h2><a class="toc-backref" href="#id2">Setup Environment</a><a class="headerlink" href="#setup-environment" title="Permalink to this headline">#</a></h2>
<ol class="arabic simple">
<li><dl class="simple">
<dt>Launch an Inf1 Instance</dt><dd><ul class="simple">
<li><p>Follow the instructions at <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance">launch an Amazon EC2 Instance</a> to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select an Inf1 instance type. For more information about Inf1 instance sizes and pricing, see the <a class="reference external" href="https://aws.amazon.com/ec2/instance-types/inf1/">Inf1 web page</a>.</p></li>
<li><p>Select your Amazon Machine Image (AMI) of choice. Note that Neuron supports the Ubuntu 18 AMI and the Amazon Linux 2 AMI; you can also choose the Ubuntu 18 or Amazon Linux 2 Deep Learning AMI (DLAMI).</p></li>
<li><p>After launching the instance, follow the instructions in <a class="reference external" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux">Connect to your instance</a> to connect to the instance.</p></li>
</ul>
</dd>
</dl>
</li>
<li><p>Once you have your EC2 environment set according to <a class="reference internal" href="../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a>, you can build and run a Neuron container using the <a class="reference internal" href="../../../containers/tutorials/build-run-neuron-container.html#how-to-build-neuron-container"><span class="std std-ref">Tutorial How to Build and Run a Neuron Container</span></a> section above.</p></li>
</ol>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p><strong>Prior to running the container</strong>, make sure that the Neuron runtime on the instance is turned off by running the command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sudo<span class="w"> </span>service<span class="w"> </span>neuron-rtd<span class="w"> </span>stop
</pre></div>
</div>
</div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html> | 2023-09-29T20:55:34.621Z |
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/megatron_lm_gpt.rst.txt | ```
.. _megatron-lm-pretraining-tutorial:
Megatron-LM GPT Pretraining Tutorial [End of Support]
======================================================
GPT is a large language model that excels at many natural language
processing (NLP) tasks. It is derived from the decoder part of the
Transformer. `Neuron Reference For Megatron-LM [EOS] <https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm>`__ is a library
that enables large-scale distributed training of language models such as
GPT and is adapted from `Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`__.
This tutorial explains how to run the Neuron reference for Megatron-LM GPT pretraining on Trainium.
The AWS Neuron SDK provides access to Trainium devices through an
extension of PyTorch/XLA - a library that includes the familiar PyTorch
interface along with XLA-specific additions. For Trainium customers,
this means that existing PyTorch training scripts can be executed on
Trn1 instances with minimal code modifications. For additional details
relating to PyTorch/XLA, please refer to the `official PyTorch/XLA
documentation <https://pytorch.org/xla>`__.
To run on Trainium, Neuron Reference For Megatron-LM library includes the following changes:
- GPU devices are replaced with Pytorch/XLA devices.
- Pytorch/XLA distributed backend is used to bridge the PyTorch distributed
APIs to XLA communication semantics.
- Pytorch/XLA MpDeviceLoader is used for the data ingestion pipelines.
Pytorch/XLA MpDeviceLoader helps improve performance by overlapping the three
execution steps: tracing, compilation and data batch loading to the
device.
- CUDA APIs are mapped to generic PyTorch APIs.
- CUDA fused optimizers are replaced with generic PyTorch alternatives.
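The overlap that ``MpDeviceLoader`` provides can be pictured with a plain producer/consumer sketch (illustrative only — a host-side queue stands in for the device loader, and ``prefetching_loader`` and ``depth`` are names invented here, not part of torch_xla):

.. code:: python

   import queue
   import threading

   def prefetching_loader(batches, depth=2):
       """Schematic of MpDeviceLoader-style overlap: a background thread stages
       up to `depth` upcoming batches (standing in for host-to-device transfer)
       while the consumer is still working on the current step."""
       q = queue.Queue(maxsize=depth)
       done = object()  # sentinel marking the end of the stream

       def producer():
           for batch in batches:
               q.put(batch)
           q.put(done)

       threading.Thread(target=producer, daemon=True).start()
       while True:
           batch = q.get()
           if batch is done:
               return
           yield batch

   # Consume five "batches"; staging of batch N+1 overlaps work on batch N.
   result = list(prefetching_loader(range(5)))
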
The GPT example in this tutorial is an adaptation of the original
Megatron-LM GPT example, trained using the Wikipedia dataset.
.. contents:: Table of Contents
:local:
:depth: 3
.. include:: ../note-performance.txt
Install PyTorch Neuron
~~~~~~~~~~~~~~~~~~~~~~
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>`
Please set the instance storage to *512GB* or more if you intend to run multiple experiments and save many checkpoints.
Download Preprocessed Wikipedia Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download the vocabulary file, the merge table file, and the preprocessed Wikipedia dataset using the following commands:
::
export DATA_DIR=~/examples_datasets/gpt2
mkdir -p ${DATA_DIR} && cd ${DATA_DIR}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.bin . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.idx . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/license.txt . --no-sign-request
See section ``Preparing Wikipedia dataset from scratch`` if you would like to recreate the preprocessed dataset from scratch.
Setting up the training environment on trn1.32xlarge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please follow the :ref:`instructions <pytorch-neuron-setup>` to setup Python virtual environment
with Neuron packages.
Install the Python3 development package needed to build the data helper tools. If you are on Amazon Linux, do:
::
sudo yum install -y python3-devel
If you are on Ubuntu, do:
::
sudo apt install -y python3-dev
Clone the AWS Neuron Reference for Megatron-LM package, install dependencies, and build the data helpers tool:
::
cd ~/
git clone https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm.git
pip install pybind11 regex
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd
GPT Pretraining Python Script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The GPT pretraining python script is a wrapper that imports the Megatron-LM
library modules and sets up the pieces needed by the Megatron-LM
trainer: GPT model, loss function, forward pass, data provider.
It is adapted from `pretrain_gpt.py <https://github.com/NVIDIA/Megatron-LM/blob/main/pretrain_gpt.py>`__. The
Neuron changes are:
- Use XLA device
- ``mpu.broadcast_data`` is not used, as it is currently unsupported; instead,
  each worker reads the data in parallel.
- Use int instead of long datatype for token data
The script is available at ``~/aws-neuron-reference-for-megatron-lm/pretrain_gpt.py``
GPT Training Shell Script
~~~~~~~~~~~~~~~~~~~~~~~~~
The GPT training shell script runs the above Python script with the
following model configuration (for the 6.7-billion-parameter model):
- Number of layers: 32
- Hidden size: 4096
- Number of attention heads: 32
- Sequence length: 2048
- Max positional embeddings size: 2048
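A quick back-of-the-envelope count confirms this configuration lands near 6.7 billion parameters (a sketch: the 50,257-entry GPT-2 BPE vocabulary size is an assumption here, and small terms such as layer norms and biases are ignored):

.. code:: python

   layers, hidden = 32, 4096
   vocab = 50257  # assumed: GPT-2 BPE vocabulary size

   # Per transformer layer: ~4*h^2 for attention + ~8*h^2 for the MLP = 12*h^2
   block_params = 12 * layers * hidden**2
   embedding_params = vocab * hidden
   total = block_params + embedding_params

   print(f"{total / 1e9:.2f}B parameters")  # ≈ 6.65B, i.e. the "6.7 billion" above
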
The following training parameters are used:
- The number of gradient accumulation microsteps is 64, with worker
batch size of 1.
- The tensor parallelism degree is 8.
- The data parallelism degree is 4.
- The number of workers is 32.
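These parallelism degrees compose multiplicatively — the worker count is simply the product of the tensor-parallel and data-parallel degrees (a quick arithmetic check on the figures above):

.. code:: python

   tensor_parallel = 8   # degrees from the list above
   data_parallel = 4

   # One worker per NeuronCore: 8 * 4 = 32, matching a trn1.32xlarge.
   workers = tensor_parallel * data_parallel
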
Additionally, the script uses:
- CPU initialization
- AdamW optimizer (default).
- Gradient clipping.
- No CUDA fusions (bias-gelu, masked-softmax, bias-dropout)
- Disabled contiguous buffer in local DDP
- Option ``--distributed-backend xla`` picks the XLA distributed backend
to bridge the Pytorch distributed APIs to XLA
communication semantics.
See `this link <https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py>`__ for a full list of options and their descriptions.
.. note::
Not all options are supported. Currently only tensor-parallel and data-parallel modes
in Neuron Reference For Megatron-LM are supported. We support tensor-parallel degree of 8
and data-parallel degrees of up to 64.
The script for running on a single node is available at
``~/aws-neuron-reference-for-megatron-lm/examples/pretrain_gpt3_6.7B_32layers_bf16.sh``
This shell script expects the dataset files to be located in ``~/examples_datasets/gpt2/``, following the steps above. If you place the dataset files in another location, please update the ``DATA_PATH`` variable in the shell script.
Initiating a Training Job
~~~~~~~~~~~~~~~~~~~~~~~~~
To run the GPT example, first activate the Python virtual environment,
change to the Megatron-LM package location, and allow execute permission on the script:
::
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
chmod +x *.sh
Next, run the parallel compilations of graphs in order to reduce
compilation time during the actual run.
::
neuron_parallel_compile ./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
This command performs a short trial run of the training script to
extract graphs and then do parallel compilations on those graphs before
populating the persistent cache with compiled graphs. This helps reduce
the compilation time during the actual run of the training script.
.. note::
Please ignore the results of the trial run as they are not the actual
execution results.
If some or all the graphs were already compiled and cached in
the persistent cache, then fewer or none of the graphs would need
compilation. To force recompilation, you can remove the cache directory
at ``/var/tmp/neuron-compile-cache/``.
Precompilation is recommended whenever the script changes (for example, batch size, number of layers, or number of workers). Compilation will only happen if the model graph or its parameters/compilation flags change.
Finally, run the script for the actual run:
::
./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
During the run, you will see output like the example below, with lines showing
throughput and loss statistics at every global step.
::
iteration 4873/ 10000 | consumed samples: 311872 | elapsed time per iteration (ms): 8718.9 | learning rate: 1.500E-04 | global batch size: 64 | lm loss: 3.296875E+00 | grad norm: 0.430 | throughput: 7.340
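If you want to track these statistics programmatically (for example, to plot loss over time), a line in this format can be pulled apart with a small regex helper. This is an illustrative sketch — ``parse_megatron_log`` is a name invented here, and the field layout is assumed to match the sample line above:

.. code:: python

   import re

   LOG_LINE = ("iteration     4873/   10000 | consumed samples: 311872 | "
               "elapsed time per iteration (ms): 8718.9 | learning rate: 1.500E-04 | "
               "global batch size: 64 | lm loss: 3.296875E+00 | grad norm: 0.430 | "
               "throughput: 7.340")

   def parse_megatron_log(line):
       """Extract the numeric fields from one Megatron-LM progress line."""
       fields = {}
       m = re.search(r"iteration\s+(\d+)/\s*(\d+)", line)
       if m:
           fields["iteration"] = int(m.group(1))
           fields["total_iterations"] = int(m.group(2))
       patterns = {
           "lm_loss": r"lm loss:\s*([0-9.Ee+-]+)",
           "throughput": r"throughput:\s*([0-9.]+)",
           "ms_per_iter": r"elapsed time per iteration \(ms\):\s*([0-9.]+)",
           "global_batch": r"global batch size:\s*(\d+)",
       }
       for key, pattern in patterns.items():
           m = re.search(pattern, line)
           if m:
               fields[key] = float(m.group(1))
       return fields

   stats = parse_megatron_log(LOG_LINE)
   # Cross-check: throughput (seq/sec) == global batch / seconds per iteration.
   assert abs(stats["global_batch"] / (stats["ms_per_iter"] / 1e3)
              - stats["throughput"]) < 0.1
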
Monitoring Training Job Progress
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using a single Trn1 instance with 32 NeuronCores, the current GPT
pretraining will run for ~81 hours. During this time, you will see the
average loss metric begin at 11 and ultimately converge to ~3.2.
Throughput for the training job will be ~7.3 seq/sec.
Monitoring Training Job Progress using neuron-top
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the training job still running, launch a second SSH connection into
the trn1 instance, and use the ``neuron-top`` command to examine the
aggregate NeuronCore utilization.
Monitoring Training Job Progress using TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The demo includes TensorBoard-compatible logging, which allows the
learning rate and training metrics to be monitored in real-time. By
default, the training script logs metrics to the following TensorBoard
log directory ``~/aws-neuron-reference-for-megatron-lm/tb_*``.
In order to view your training metrics in TensorBoard, first run the
following commands in your SSH session:
::
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
tensorboard --logdir ./
Once TensorBoard is running, open a new SSH connection to the instance and
port-forward TCP port 6006 (e.g. ``-L 6006:127.0.0.1:6006``). Once the tunnel is
established, TensorBoard can then be accessed in a web browser at the
following URL: `http://localhost:6006 <http://localhost:6006/>`__.
Please note that you will not be able to access TensorBoard if you
disconnect your port-forwarding SSH session to the Trainium instance.
Finishing the tutorial
~~~~~~~~~~~~~~~~~~~~~~
Once you are ready, and the
training throughput is as expected, there are a couple of options for
finishing the GPT pretraining demo:
**Allow the training script to run to completion**. If you would like to
observe the training script run to completion, it is recommended to
launch the training script from a terminal multiplexer such as ``tmux``
or ``screen``, and then detach the session so that the training script
can run in the background. With this approach, you can safely let the
training script run unattended, without risk of an SSH disconnection
causing the training job to stop running.
**Stop the training job early**. To stop the training job early, press
CTRL-C in the terminal window in which you launched the training script.
In some cases, if you manually cancel a job using CTRL-C and later
want to run the job again, you might first need to terminate all remaining
Python processes with the command ``killall -9 python3``.
Running a multi-node GPT
~~~~~~~~~~~~~~~~~~~~~~~~
We use SLURM to launch multi-node GPT training jobs. Like single node runs,
we have a precompilation step followed by the actual run. To precompile:
::
sbatch examples/pretrain_gpt3_6.7B_compile.slurm
This will precompile the script ``examples/pretrain_gpt3_6.7B_32layers_bf16_bs1024_slurm.sh``
on all the nodes and populate the caches.
To run the compiled model:
::
sbatch examples/pretrain_gpt3_6.7B.slurm
The number of nodes is currently set to 16, and since the tensor-parallel degree used is
8, the data-parallel degree is automatically computed to be 64, resulting in an 8x64 two-dimensional parallelism mesh.
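The data-parallel degree falls straight out of the cluster size (arithmetic only, using the figures above):

.. code:: python

   nodes = 16
   neuron_cores_per_node = 32  # trn1.32xlarge
   tensor_parallel = 8

   workers = nodes * neuron_cores_per_node        # 512 workers in total
   data_parallel = workers // tensor_parallel     # 8 x 64 mesh
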
The tensorboard logs are written by the last rank and will be in the TensorBoard
log directory ``~/aws-neuron-reference-for-megatron-lm/tb_*``.
Compared to the single-node script, we use an increased batch size of 1024, which
raises throughput to ~98 seq/sec. The number of iterations is also increased, with
corresponding changes to hyperparameters such as the learning rate and weight decay.
Checkpointing GPT Model
~~~~~~~~~~~~~~~~~~~~~~~
A new checkpointing mode that uses tensor serialization and staggered save/load is supported
to alleviate memory pressure. To save the model, add the lines:
::
--save-xser $CHECKPOINT_PATH
--save-interval 1500
This will save a checkpoint to the provided path every 1500 iterations.
.. note::
Please note that each checkpoint saves all the model weights, optimizer and RNG states (~76GB for a
32-layer model). Frequent checkpointing can therefore quickly exhaust disk storage.
Make sure there is enough disk space.
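The ~76GB figure is consistent with a rough accounting of 12 bytes per parameter — an fp32 copy of the weights plus AdamW's two fp32 moment tensors (a back-of-the-envelope sketch, not the exact on-disk layout):

.. code:: python

   params = 6.7e9  # parameter count of the model above

   # fp32 weights (4B) + AdamW exp_avg (4B) + exp_avg_sq (4B) per parameter
   bytes_per_param = 4 + 4 + 4
   checkpoint_gib = params * bytes_per_param / 2**30

   # ~75 GiB, in the same ballpark as the ~76GB quoted above
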
To load the checkpoint, we first need to remove ``--use-cpu-initialization`` from the script
and then add
::
--load-xser $CHECKPOINT_PATH
.. note::
Please note that not removing the ``--use-cpu-initialization`` flag may lead to out-of-memory
errors and result in unstable resumption of training.
Preparing Wikipedia Dataset from Scratch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The process of preparing the Wikipedia dataset follows the original
`Megatron-LM
documentation <https://github.com/NVIDIA/Megatron-LM#user-content-datasets>`__. You
will need a large c5 machine, such as c5n.18xlarge, running the latest Deep
Learning AMI. First, download the Wikipedia dataset. Depending on
network bandwidth, this is expected to take about 65 minutes.
::
export WIKI_DIR=~/examples_datasets/wiki
mkdir -p $WIKI_DIR && cd $WIKI_DIR
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
Download the vocabulary and merge table files for the desired model. This
example uses the GPT-2 model:
::
export DATA_DIR=~/examples_datasets/gpt2
export GPT2_DATA=${DATA_DIR}/gpt2
mkdir -p ${GPT2_DATA} && cd ${GPT2_DATA}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
mkdir -p ${GPT2_DATA}/checkpoint
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O ${GPT2_DATA}/checkpoint/megatron_lm_345m_v0.0.zip
Extract the downloaded data using WikiExtractor (this step takes about 2
hours):
::
git clone https://github.com/attardi/wikiextractor.git /tmp/wikiextractor
cd /tmp/wikiextractor
python -m wikiextractor.WikiExtractor --json ~/examples_datasets/wiki/enwiki-latest-pages-articles.xml.bz2 --output ~/examples_datasets/wiki/text/ -q --processes 70 2>&1 | tee wikiextract.out &
WikiExtractor first preprocesses the templates of all pages
sequentially, followed by a map/reduce process that extracts the pages
and converts them to the loose JSON format required by Megatron-LM.
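The loose JSON format produced by WikiExtractor's ``--json`` flag is one JSON object per line, each with ``id``, ``url``, ``title`` and ``text`` keys. A sketch with hypothetical sample values (not taken from the real dump):

.. code:: python

   import io
   import json

   # Illustrative two-article sample in WikiExtractor --json layout.
   sample = io.StringIO(
       '{"id": "12", "url": "https://en.wikipedia.org/wiki?curid=12", '
       '"title": "Anarchism", "text": "Anarchism is a political philosophy ..."}\n'
       '{"id": "25", "url": "https://en.wikipedia.org/wiki?curid=25", '
       '"title": "Autism", "text": "Autism is a neurodevelopmental condition ..."}\n'
   )

   # Each line is an independent JSON document.
   articles = [json.loads(line) for line in sample]
   titles = [a["title"] for a in articles]
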
Once the extraction completes, we merge the text files with (~2
minutes):
::
conda activate pytorch_latest_p37
cd ~/examples_datasets/wiki
find ~/examples_datasets/wiki/text/ -name wiki* | parallel -m -j 70 "cat {} >> mergedfile.json"
The ``mergedfile.json`` size on disk is 16GB. With it, create the binary
data format for Megatron GPT2. NOTE: Refer to `this
solution <https://github.com/NVIDIA/Megatron-LM/issues/62>`__ if an
``IndexError: list index out of range`` occurs. To create the binary
data, type the following command:
::
python ~/aws-neuron-reference-for-megatron-lm/tools/preprocess_data.py \
--input ~/examples_datasets/wiki/mergedfile.json \
--output-prefix my-gpt2 \
--vocab ~/examples_datasets/gpt2/gpt2-vocab.json \
--dataset-impl mmap \
--tokenizer-type GPT2BPETokenizer \
--merge-file ~/examples_datasets/gpt2/gpt2-merges.txt \
--append-eod \
--workers 70
The files ``my-gpt2_text_document.*`` are generated after about 12 minutes.
Known issues and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No broadcast support
--------------------
Currently, ``mpu.broadcast_data`` is unsupported on Trainium.
No pipeline parallel support
-------------------------------------------
Currently, only tensor parallel and data parallel are supported and there is no
pipeline parallel support in Neuron Reference For Megatron-LM.
Dropout is disabled
-------------------
Currently, dropout is disabled in the example.
"Failed accept4: Too many open files"
-------------------------------------
When running Megatron-LM GPT3 6.7B example above on `Ubuntu Server 20.04 LTS (HVM)` and `Ubuntu Server 22.04 LTS (HVM)` AMIs, you may encounter the following "Failed accept4: Too many open files" error:
.. code:: bash
E0301 08:06:14.272283286 72588 tcp_server_posix.cc:214] Failed accept4: Too many open files
2023-03-01 08:06:15.515834: F tensorflow/libtpu/neuron/neuron_compiler.cc:200] Check failed: fd != -1 Opening lock file failed with errno 24
The reason is that on these AMIs, ``ulimit -n`` is set to 1024, which is low compared to, for example, `Amazon Linux 2 AMI (HVM) - Kernel 5.10`, where it is set to 65535 by default. To work around this issue, please increase ``ulimit -n`` to a higher value, such as 65535, which matches `Amazon Linux 2 AMI (HVM) - Kernel 5.10` and is sufficient for the Megatron-LM GPT3 6.7B example. Additionally, this can be set within the shell script (which is run using the SLURM ``srun`` command) so that it is set for each worker process.
.. code:: bash
ulimit -n 65535
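If you prefer to raise the limit from inside the Python worker itself rather than in the shell script, the standard-library ``resource`` module can do the same (a sketch — ``raise_nofile_limit`` is a helper name invented here; without privileges, the soft limit can only be raised up to the hard limit):

.. code:: python

   import resource

   def raise_nofile_limit(target=65535):
       """Raise the soft open-files limit toward `target`, capped at the hard limit."""
       soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
       ceiling = target if hard == resource.RLIM_INFINITY else min(target, hard)
       if ceiling > soft:
           resource.setrlimit(resource.RLIMIT_NOFILE, (ceiling, hard))
       return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

   new_soft = raise_nofile_limit()
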
Error: cannot import name 'helpers' from 'megatron.data'
--------------------------------------------------------
You may encounter the error "cannot import name 'helpers' from 'megatron.data'" like below:
.. code:: bash
Exception in device=NEURONT:0: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
Traceback (most recent call last):
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 373, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 367, in _start_fn
fn(gindex, *args)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 138, in pretrain_mp
forward_step, args_defaults={'tokenizer_type': 'GPT2BPETokenizer'})
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 162, in pretrain
train_valid_test_dataset_provider)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 1021, in build_train_valid_test_data_iterators
train_val_test_num_samples)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 128, in train_valid_test_datasets_provider
skip_warmup=(not args.mmap_warmup))
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 43, in build_train_valid_test_datasets
seq_length, seed, skip_warmup)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 118, in _build_train_valid_test_datasets
train_dataset = build_dataset(0, 'train')
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 115, in build_dataset
seq_length, seed)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 156, in __init__
num_samples, seq_length, seed)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 274, in _build_index_mappings
from megatron.data import helpers
ImportError: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
To fix this, please go into ``aws-neuron-reference-for-megatron-lm/megatron/data/`` and run ``make``:
.. code:: bash
pip install pybind11
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd
Error: Out of space while checkpointing
--------------------------------------------------------
You may see an error like the following. The model checkpoints are large because they dump all the model weights,
optimizer and RNG states, and frequent checkpointing can exhaust storage quickly.
Please make sure you have enough disk space.
.. code:: bash
Traceback (most recent call last):
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 380, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 604, in _save
zip_file.write_record(name, storage.data_ptr(), num_bytes)
OSError: [Errno 28] No space left on device
Troubleshooting
~~~~~~~~~~~~~~~
See :ref:`pytorch-neuron-traning-troubleshooting`
``` | <html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _megatron-lm-pretraining-tutorial:
Megatron-LM GPT Pretraining Tutorial [End of Support]
======================================================
GPT is a large language model that excels at many natural language
processing (NLP) tasks. It is derived from the decoder part of the
Transformer. `Neuron Reference For Megatron-LM [EOS] <https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm>`__ is a library
that enables large-scale distributed training of language models such as
GPT and is adapted from `Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`__.
This tutorial explains how to run the Neuron reference for Megatron-LM GPT pretraining on Trainium.
The AWS Neuron SDK provides access to Trainium devices through an
extension of PyTorch/XLA - a library that includes the familiar PyTorch
interface along with XLA-specific additions. For Trainium customers,
this means that existing PyTorch training scripts can be executed on
Trn1 instances with minimal code modifications. For additional details
relating to PyTorch/XLA, please refer to the `official PyTorch/XLA
documentation <https://pytorch.org/xla>`__.
To run on Trainium, Neuron Reference For Megatron-LM library includes the following changes:
- GPU devices are replaced with Pytorch/XLA devices.
- Pytorch/XLA distributed backend is used to bridge the PyTorch distributed
APIs to XLA communication semantics.
- Pytorch/XLA MpDeviceLoader is used for the data ingestion pipelines.
Pytorch/XLA MpDeviceLoader helps improve performance by overlapping the three
execution steps: tracing, compilation and data batch loading to the
device.
- CUDA APIs are mapped to generic PyTorch APIs.
- CUDA fused optimizers are replaced with generic PyTorch alternatives.
The GPT example in this tutorial is an adaptation of the original
Megatron-LM GPT example, trained using the Wikipedia dataset.
.. contents:: Table of Contents
:local:
:depth: 3
.. include:: ../note-performance.txt
Install PyTorch Neuron
~~~~~~~~~~~~~~~~~~~~~~
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>`
Please set the storage of instance to *512GB* or more if you intent to run multiple experiments and save many checkpoints.
Download Preprocessed Wikipedia Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download the vocabulary file, the merge table file, and the preprocessed Wikipedia dataset using the following commands:
::
export DATA_DIR=~/examples_datasets/gpt2
mkdir -p ${DATA_DIR} && cd ${DATA_DIR}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.bin . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.idx . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/license.txt . --no-sign-request
See section ``Preparing Wikipedia dataset from scratch`` if you would like to recreate the preprocessed dataset from scratch.
Setting up the training environment on trn1.32xlarge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please follow the :ref:`instructions <pytorch-neuron-setup>` to setup Python virtual environment
with Neuron packages.
Install Python3 development package needed to build the data helpers tools. If you are on Amazon Linux, do:
::
sudo yum install -y python3-devel
If you are on Ubuntu, do:
::
sudo apt install -y python3-dev
Clone the AWS Neuron Reference for Megatron-LM package, install dependencies, and build the data helpers tool:
::
cd ~/
git clone https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm.git
pip install pybind11 regex
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd
GPT Pretraining Python Script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The GPT pretraining python script is a wrapper that imports the Megatron-LM
library modules and sets up the pieces needed by the Megatron-LM
trainer: GPT model, loss function, forward pass, data provider.
It is adapted from `pretrain_gpt.py <https://github.com/NVIDIA/Megatron-LM/blob/main/pretrain_gpt.py>`__. The
Neuron changes are:
- Use XLA device
- Not using mpu.broadcast_data as it is currently unsupported. Instead
each worker reads the data in parallel.
- Use int instead of long datatype for token data
The script is available at ``~/aws-neuron-reference-for-megatron-lm/pretrain_gpt.py``
GPT Training Shell Script
~~~~~~~~~~~~~~~~~~~~~~~~~
The GPT training shell script runs the above python script with
following model configurations (for 6.7 billion parameters model):
- Number of layers: 32
- Hidden size: 4096
- Number attention heads: 32
- Sequence length: 2048
- Max positional embeddings size: 2048
The following training parameters are used:
- The number of gradient accumulation microsteps is 64, with worker
batch size of 1.
- The tensor parallelism degree is 8.
- The data parallelism degree is 4.
- The number of workers is 32.
Additionally, the script uses:
- CPU intitialization
- AdamW optimizer (default).
- Gradient clipping.
- No CUDA fusions (bias-gelu, masked-softmax, bias-dropout)
- Disabled contiguous buffer in local DDP
- Option ``--distributed-backend xla`` picks the XLA distributed backend
to bridge the Pytorch distributed APIs to XLA
communication semantics.
See `this link <https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py>`__ for a full list of options and their descriptions.
.. note::
Not all options are supported. Currently only tensor-parallel and data-parallel modes
in Neuron Reference For Megatron-LM are supported. We support tensor-parallel degree of 8
and data-parallel degree of upto 64.
The script for running on a single node is available at
``~/aws-neuron-reference-for-megatron-lm/examples/pretrain_gpt3_6.7B_32layers_bf16.sh``
This shell script expects dataset files to be located in ~/examples_datasets/gpt2/ following the steps above. If you place the dataset files in another location, please update the DATA_PATH variable in the shell script.
Initiating a Training Job
~~~~~~~~~~~~~~~~~~~~~~~~~
To run the GPT example, first activate the Python virtual environment,
change to the Megatron-LM package location, and allow execute permission on the script:
::
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
chmod +x *.sh
Next, run the parallel compilations of graphs in order to reduce
compilation time during the actual run.
::
neuron_parallel_compile ./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
This command performs a short trial run of the training script to
extract graphs and then do parallel compilations on those graphs before
populating the persistent cache with compiled graphs. This helps reduce
the compilation time during the actual run of the training script.
.. note::
Please ignore the results of the trial run as they are not the actual
execution results.
If some or all the graphs were already compiled and cached in
the persistent cache, then fewer or none of the graphs would need
compilation. To force recompilation, you can remove the cache directory
at ``/var/tmp/neuron-compile-cache/.``
Compilation is recommended if there are some changes in the script (such
as batch size, number of layers, number of workers, etc.). Compilation will only happen if the model graph or its parameters/compilation flags change.
Finally, run the script for the actual run:
::
./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
During the run, you will see outputs like below, some lines showing
throughput and loss statistics every global step.
::
`iteration 4873/ 10000 | consumed samples: 311872 | elapsed time per iteration (ms): 8718.9 | learning rate: 1.500E-04 | global batch size: 64 | lm loss: 3.296875E+00 | grad norm: 0.430 | throughput: 7.340`
Monitoring Training Job Progress
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using a single Trn1 instance with 32 NeuronCores, the current GPT
pretraining will run for ~81 hours. During this time, you will see the
average loss metric begin at 11 and ultimately converge to ~3.2.
Throughput for the training job will be ~7.3 seq/sec.
Monitoring Training Job Progress using neuron-top
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the training job still running, launch a second SSH connection into
the trn1 instance, and use the ``neuron-top`` command to examine the
aggregate NeuronCore utilization.
Monitoring Training Job Progress using TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The demo includes TensorBoard-compatible logging, which allows the
learning rate and training metrics to be monitored in real-time. By
default, the training script logs metrics to the following TensorBoard
log directory ``~/aws-neuron-reference-for-megatron-lm/tb_*``.
In order to view your training metrics in TensorBoard, first run the
following commands in your SSH session:
::
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
tensorboard --logdir ./
Once running, open a new SSH connection to the instance and port-forward
TCP port 6006 (ex: -L 6006:127.0.0.1:6006). Once the tunnel is
established, TensorBoard can then be accessed via web browser at the
following URL: `http://localhost:6006 <http://localhost:6006/>`__.
Please note that you will not be able to access TensorBoard if you
disconnect your port-forwarding SSH session to the Trainium instance.
Finishing the tutorial
~~~~~~~~~~~~~~~~~~~~~~
Once you are ready, and the
training throughput is as expected, there are a couple of options for
finishing the GPT pretraining demo:
**Allow the training script to run to completion**. If you would like to
observe the training script run to completion, it is recommended to
launch the training script from a terminal multiplexer such as ``tmux``
or ``screen``, and then detach the session so that the training script
can run in the background. With this approach, you can safely let the
training script run unattended, without risk of an SSH disconnection
causing the training job to stop running.
**Stop the training job early**. To stop the training job early, press
CTRL-C in the terminal window in which you launched the training script.
In some cases, if you manually cancel a job using CTRL-C and then later
want to run the job again, you might first need to terminate all the
python processes with the command ``killall -9 python3``.
Running a multi-node GPT
~~~~~~~~~~~~~~~~~~~~~~~~
We use SLURM to launch multi-node GPT training jobs. Like single node runs,
we have a precompilation step followed by the actual run. To precompile:
::
sbatch examples/pretrain_gpt3_6.7B_compile.slurm
This will precompile the script ``examples/pretrain_gpt3_6.7B_32layers_bf16_bs1024_slurm.sh``
on all the nodes and populate the caches.
To run the compiled model:
::
sbatch examples/pretrain_gpt3_6.7B.slurm
The number of nodes is currently set to 16, and since the tensor-parallel degree used is
8, the data-parallel degree is automatically computed to be 64, resulting in an 8x64
two-dimensional mesh parallelism.
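The degree arithmetic above can be sketched as follows (illustrative only, not code from the training scripts; 32 workers per trn1 node is an assumption of this setup):

```python
nodes = 16
workers_per_node = 32         # NeuronCore workers per trn1.32xlarge node (assumption)
tensor_parallel_degree = 8

world_size = nodes * workers_per_node                       # 512 workers in total
data_parallel_degree = world_size // tensor_parallel_degree

# Two-dimensional mesh: tensor-parallel x data-parallel
print(tensor_parallel_degree, "x", data_parallel_degree)    # → 8 x 64
```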
The tensorboard logs are written by the last rank and will be in the TensorBoard
log directory ``~/aws-neuron-reference-for-megatron-lm/tb_*``.
Compared to the single-node script, we use an increased batch size of 1024, which gives us
a throughput bump to ~98 seq/sec. The number of iterations is also increased, along with
changes to the hyperparameters pertaining to learning rate and weight decay.
Checkpointing GPT Model
~~~~~~~~~~~~~~~~~~~~~~~
A new mode of checkpointing using serialized tensors and staggered save/load is supported
to alleviate memory pressure. To save the model, add the lines:
::
--save-xser $CHECKPOINT_PATH
--save-interval 1500
This will save a checkpoint to the provided path every 1500 iterations.
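The save cadence implied by ``--save-interval 1500`` can be sketched with a hypothetical helper (not Megatron's actual code):

```python
def should_checkpoint(iteration: int, save_interval: int = 1500) -> bool:
    # Checkpoint at every multiple of the save interval.
    return iteration > 0 and iteration % save_interval == 0

# Iterations at which a checkpoint would be written in the first 4500 steps
print([it for it in range(1, 4501) if should_checkpoint(it)])  # → [1500, 3000, 4500]
```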
.. note::
Please note that the checkpoint saves all the model weights, optimizer and RNG states (~76GB for a
32-layer model). If checkpointed frequently, this can quickly lead to low disk storage.
Make sure there is enough disk space.
To load the checkpoint, we first need to remove ``--use-cpu-initialization`` from the script
and then add
::
--load-xser $CHECKPOINT_PATH
.. note::
Please note that not removing the ``--use-cpu-initialization`` flag may lead to out-of-memory
execution and result in unstable resumption of training.
Preparing Wikipedia Dataset from Scratch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The process of preparing the Wikipedia dataset follows the original
`Megatron-LM
documentation <https://github.com/NVIDIA/Megatron-LM#user-content-datasets>`__. You
will need a large c5 machine like c5n.18xlarge running the latest Deep
Learning AMI. First, download the Wikipedia dataset. Depending on
the network bandwidth, this is expected to take about 65 minutes.
::
export WIKI_DIR=~/examples_datasets/wiki
mkdir -p $WIKI_DIR && cd $WIKI_DIR
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
Download the vocabulary and merge table files for the desired model. This
example uses the GPT-2 model:
::
export DATA_DIR=~/examples_datasets/gpt2
export GPT2_DATA=${DATA_DIR}/gpt2
mkdir -p ${GPT2_DATA} && cd ${GPT2_DATA}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
mkdir -p ${GPT2_DATA}/checkpoint
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O ${GPT2_DATA}/checkpoint/megatron_lm_345m_v0.0.zip
Extract the downloaded data using WikiExtractor (this step takes about 2
hours):
::
git clone https://github.com/attardi/wikiextractor.git /tmp/wikiextractor
cd /tmp/wikiextractor
python -m wikiextractor.WikiExtractor --json ~/examples_datasets/wiki/enwiki-latest-pages-articles.xml.bz2 --output ~/examples_datasets/wiki/text/ -q --processes 70 2>&1 | tee wikiextract.out &
WikiExtractor first preprocesses the templates of all pages
sequentially, followed by a Map/Reduce process that extracts the pages
and converts them to the loose JSON format required by Megatron-LM.
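The "loose JSON" format is simply one JSON object per line rather than a single JSON array. A minimal sketch of producing and reading it (the ``"text"`` key matches the default ``--json-keys`` of ``preprocess_data.py``; treat that as an assumption):

```python
import json

# Two documents in loose JSON: one object per line, not a JSON array.
loose = "\n".join([
    json.dumps({"text": "First Wikipedia article body ..."}),
    json.dumps({"text": "Second Wikipedia article body ..."}),
])

docs = [json.loads(line)["text"] for line in loose.splitlines() if line.strip()]
print(len(docs))  # → 2
```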
Once the extraction completes, we merge the text files (~2
minutes):
::
conda activate pytorch_latest_p37
cd ~/examples_datasets/wiki
find ~/examples_datasets/wiki/text/ -name wiki* | parallel -m -j 70 "cat {} >> mergedfile.json"
The ``mergedfile.json`` size on disk is 16GB. With it, create the binary
data format for Megatron GPT2. NOTE: Refer to `this
solution <https://github.com/NVIDIA/Megatron-LM/issues/62>`__ if an
``IndexError: list index out of range`` occurs. To create the binary
data, type the following command:
::
python ~/aws-neuron-reference-for-megatron-lm/tools/preprocess_data.py \
--input ~/examples_datasets/wiki/mergedfile.json \
--output-prefix my-gpt2 \
--vocab ~/examples_datasets/gpt2/gpt2-vocab.json \
--dataset-impl mmap \
--tokenizer-type GPT2BPETokenizer \
--merge-file ~/examples_datasets/gpt2/gpt2-merges.txt \
--append-eod \
--workers 70
Files ``my-gpt2_text_document.*`` are generated after about 12 minutes.
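With ``--dataset-impl mmap``, the output prefix expands into a data/index file pair (``.bin`` and ``.idx``). As a hedged sketch, a quick existence check before launching training (the prefix path is illustrative):

```python
from pathlib import Path

def dataset_files_present(prefix: str) -> bool:
    # An mmap-indexed Megatron dataset is stored as <prefix>.bin plus <prefix>.idx
    return all(Path(prefix + ext).is_file() for ext in (".bin", ".idx"))
```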
Known issues and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No broadcast support
--------------------
Currently, ``mpu.broadcast_data`` is unsupported on Trainium.
No pipeline parallel support
-------------------------------------------
Currently, only tensor parallel and data parallel are supported and there is no
pipeline parallel support in Neuron Reference For Megatron-LM.
Dropout is disabled
-------------------
Currently, dropout is disabled in the example.
"Failed accept4: Too many open files"
-------------------------------------
When running Megatron-LM GPT3 6.7B example above on `Ubuntu Server 20.04 LTS (HVM)` and `Ubuntu Server 22.04 LTS (HVM)` AMIs, you may encounter the following "Failed accept4: Too many open files" error:
.. code:: bash
E0301 08:06:14.272283286 72588 tcp_server_posix.cc:214] Failed accept4: Too many open files
2023-03-01 08:06:15.515834: F tensorflow/libtpu/neuron/neuron_compiler.cc:200] Check failed: fd != -1 Opening lock file failed with errno 24
The reason is that on these AMIs, the "ulimit -n" is set to 1024, which is too low compared to, for example, `Amazon Linux 2 AMI (HVM) - Kernel 5.10`, where it is set to 65535 by default. To work around this issue, please increase "ulimit -n" to a higher value, such as 65535, which matches `Amazon Linux 2 AMI (HVM) - Kernel 5.10` and is sufficient for the Megatron-LM GPT3 6.7B example. Additionally, this can be set within the shell script (which is run using the SLURM ``srun`` command) so that it is set for each worker process.
.. code:: bash
ulimit -n 65535
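The same limit can also be inspected and raised from inside a Python worker using the standard-library ``resource`` module (a process may raise its own soft limit up to the hard limit without root; going beyond the hard limit still requires privileged configuration such as the ``ulimit`` change above):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Raise the soft limit as far as the hard limit allows (no root required for this).
if soft != hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```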
Error: cannot import name 'helpers' from 'megatron.data'
--------------------------------------------------------
You may encounter the error "cannot import name 'helpers' from 'megatron.data'" as shown below:
.. code:: bash
Exception in device=NEURONT:0: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
Traceback (most recent call last):
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 373, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 367, in _start_fn
fn(gindex, *args)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 138, in pretrain_mp
forward_step, args_defaults={'tokenizer_type': 'GPT2BPETokenizer'})
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 162, in pretrain
train_valid_test_dataset_provider)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 1021, in build_train_valid_test_data_iterators
train_val_test_num_samples)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 128, in train_valid_test_datasets_provider
skip_warmup=(not args.mmap_warmup))
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 43, in build_train_valid_test_datasets
seq_length, seed, skip_warmup)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 118, in _build_train_valid_test_datasets
train_dataset = build_dataset(0, 'train')
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 115, in build_dataset
seq_length, seed)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 156, in __init__
num_samples, seq_length, seed)
File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 274, in _build_index_mappings
from megatron.data import helpers
ImportError: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
To fix this, please go into ``aws-neuron-reference-for-megatron-lm/megatron/data/`` and run ``make``:
.. code:: bash
pip install pybind11
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd
Error: Out of space while checkpointing
--------------------------------------------------------
You may see an error as follows. The model checkpoints are large, as they dump all the model weights,
optimizer and RNG states; if these are checkpointed frequently, the storage can run out fast.
Please make sure you have enough disk space.
.. code:: bash
Traceback (most recent call last):
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 380, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 604, in _save
zip_file.write_record(name, storage.data_ptr(), num_bytes)
OSError: [Errno 28] No space left on device
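Since a single checkpoint for the 32-layer model is on the order of 76GB (see the checkpointing note above), a pre-flight check with the standard-library ``shutil.disk_usage`` can catch this before training starts. The checkpoint size and number to keep below are illustrative assumptions:

```python
import shutil

CHECKPOINT_BYTES = 76 * 1024**3   # ~76GB per checkpoint (32-layer model, assumption)

def enough_space_for_checkpoints(path: str, checkpoints_to_keep: int = 2) -> bool:
    """Return True if the filesystem at `path` has room for the given number of checkpoints."""
    return shutil.disk_usage(path).free >= checkpoints_to_keep * CHECKPOINT_BYTES

print(enough_space_for_checkpoints("."))
```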
Troubleshooting
~~~~~~~~~~~~~~~
See :ref:`pytorch-neuron-traning-troubleshooting`
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuron-v2.rst.txt
.. _tensorflow-modelserver-rn-v2:
TensorFlow-Model-Server-Neuron 2.x Release Notes
================================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the
TensorFlow-Model-Server-Neuron package.
TensorFlow Model Server Neuron 2.x release [2.4.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/23/2022
* Deprecated the NEURONCORE_GROUP_SIZES environment variable.
* Minor bug fixes.
TensorFlow Model Server Neuron 2.x release [2.3.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/29/2022
* Added support for tensorflow-model-serving 2.8.0.
TensorFlow Model Server Neuron 2.x release [2.2.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
* Updated tensorflow-serving 2.5 to 2.5.4.
* Added support for tensorflow-model-serving 2.6 and 2.7.
TensorFlow Model Server Neuron 2.x release [2.1.6.0]
----------------------------------------------------
Date: 01/20/2022
* Updated tensorflow-model-server 2.5 to version 2.5.3
TensorFlow Model Server Neuron 2.x release [2.0.4.0]
----------------------------------------------------
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details here :ref:`neuron-runtime-release-notes`.
TensorFlow Model Server Neuron 2.x release [2.0.3.0]
----------------------------------------------------
Date: 10/27/2021
New in this release
^^^^^^^^^^^^^^^^^^^
* TensorFlow Model Server Neuron 2.x now supports Neuron Runtime 2.x (``libnrt.so`` shared library) only.
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to
migrate your application.
.. _2511680:
TensorFlow Model Server Neuron 2.x release [1.6.8.0]
----------------------------------------------------
Date: 08/12/2021
Summary
^^^^^^^
TensorFlow 2.x - tensorflow-model-server-neuron now supports TensorFlow 2.x: package versions 2.1.4, 2.2.2, 2.3.0, 2.4.1, and 2.5.1 support TensorFlow 2.x.
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/setup-troubleshooting.rst.txt
.. _neuron-setup-troubleshooting:
Neuron Setup Troubleshooting
============================
.. contents:: Table of contents
:local:
:depth: 2
.. _gpg_key_update:
How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation
-----------------------------------------------------------------------------------
Description
^^^^^^^^^^^
The GPG key for the Neuron repository (https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB) is installed on the Ubuntu (Canonical) server. The key was originally uploaded with an expiry date of three (3) years, which expired on 11/10/22.
Any customer of Ubuntu or Debian using the Neuron ``apt`` repository will get the following error:
.. code::
While running an apt-get update command on an AWS deep learning image (us-east-1/ami-01fce297f68912e45) I get this output:
Err:6 https://apt.repos.neuron.amazonaws.com (https://apt.repos.neuron.amazonaws.com/) bionic InRelease
The following signatures were invalid: EXPKEYSIG 5749CAD8646D9185 Amazon AWS Neuron <neuron-maintainers@amazon.com>
Fetched 172 kB in 1s (161 kB/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error:https://apt.repos.neuron.amazonaws.com (https://apt.repos.neuron.amazonaws.com/) bionic InRelease: The following signatures were invalid: EXPKEYSIG 5749CAD8646D9185 Amazon AWS Neuron <neuron-maintainers@amazon.com>
Solution
^^^^^^^^
To solve this issue, you need to run the following commands to fetch the new key before running ``apt-get update``:
.. code::
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -
# Update OS packages
sudo apt-get update -y
``pip install --upgrade`` wouldn't upgrade ``neuron-cc``
--------------------------------------------------------
Description
^^^^^^^^^^^
When trying to upgrade to a newer Neuron release, for example by calling:
``pip install --upgrade torch-neuron neuron-cc[tensorflow] torchvision``
``neuron-cc`` is not upgraded.
This can be a result of a bug in certain ``pip`` versions, for example `pip install upgrade will not upgrade package if extras_require specified <https://github.com/pypa/pip/issues/10173>`_
Solution
^^^^^^^^
To solve this issue, you can either upgrade to a newer ``pip`` version or use ``--force`` when trying to upgrade, for example:
``pip install --force torch-neuron neuron-cc[tensorflow] torchvision``