# Using the NN-API Test Generator

## Prerequisites

- Python3
- Numpy

## Writing a Test Specification

You should create new test specs in `nn/runtime/test/specs/<version>/` and name them with the `.mod.py` suffix, so that other tools can automatically update the unit tests.

### Specifying Operands

#### Syntax

```
OperandType(name, (type, shape, <optional scale, zero point>), <optional initializer>)
```

For example,

```Python
# p1 is a 2-by-2 fp matrix parameter, with value [1, 2; 3, 4]
p1 = Parameter("param", ("TENSOR_FLOAT32", [2, 2]), [1, 2, 3, 4])

# i1 is a quantized input of shape (2, 256, 256, 3), with scale = 0.5, zero point = 128
i1 = Input("input", ("TENSOR_QUANT8_ASYMM", [2, 256, 256, 3], 0.5, 128))

# p2 is an Int32 scalar with value 1
p2 = Int32Scalar("act", 1)
```

#### OperandType

There are currently 10 operand types supported by the test generator.

- Input
- Output
    * IgnoredOutput, will not compare results in the test
- Parameter
    * Int32Scalar, shorthand for a parameter with type INT32
    * Float32Scalar, shorthand for a parameter with type FLOAT32
    * Int32Vector, shorthand for a 1-D TENSOR_INT32 parameter
    * Float32Vector, shorthand for a 1-D TENSOR_FLOAT32 parameter
    * SubgraphReference, shorthand for a SUBGRAPH parameter
- Internal, for models with multiple operations

### Specifying Models

#### Instantiate a model

```Python
# Instantiate a model
model = Model()

# Instantiate a model with a name
model2 = Model("model_name")
```

#### Add an operation

```
model.Operation(optype, i1, i2, ...).To(o1, o2, ...)
```

For example,

```Python
model.Operation("ADD", i1, i2, act).To(o1)
```

#### Use implicit operands

Simple scalar and 1-D vector parameters can be passed directly to the Operation constructor, and the test generator will deduce the operand type from the value provided.
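The kind of deduction involved can be sketched in plain Python. This is only an illustration of the idea, not the generator's actual code; `guess_operand_type` is a hypothetical helper.

```Python
# Hypothetical sketch of implicit-operand type deduction.
# The real test generator's internals may differ.
def guess_operand_type(value):
    """Map a plain Python value to a plausible NN-API operand type."""
    if isinstance(value, bool):          # check bool before int: bool is a subclass of int
        return "BOOL"
    if isinstance(value, int):
        return "INT32"
    if isinstance(value, float):
        return "FLOAT32"
    if isinstance(value, list):
        # A 1-D vector: the element type decides the tensor type.
        if all(isinstance(v, int) for v in value):
            return "TENSOR_INT32"
        if all(isinstance(v, float) for v in value):
            return "TENSOR_FLOAT32"
    raise TypeError("cannot deduce operand type for %r" % (value,))
```

Under this sketch, `[1]` would be treated as a TENSOR_INT32 vector and `0` as an INT32 scalar, which matches how the axis and keep_dims arguments are interpreted in the example below.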
```Python
model.Operation("MEAN", i1, [1], 0)  # axis = [1], keep_dims = 0
```

Note that, for fp values, the initializer should consist entirely of Python fp numbers, e.g. use `1.0` or `1.` instead of `1` for implicit fp operands.

### Specifying Inputs and Expected Outputs

The combination of inputs and expected outputs is called an example for a given model. An example is defined like

```Python
# Example 1, separate dictionaries for inputs and outputs
input1 = {
    i1: [1, 2],
    i2: [3, 4]
}
output1 = {o1: [4, 6]}

# Example 2, combined dictionary
example2_values = {
    i1: [5, 6],
    i2: [7, 8],
    o1: [12, 14]
}

# Instantiate an example
Example((input1, output1), example2_values)
```

By default, examples will be attached to the most recently instantiated model. You can explicitly specify the target model, and optionally the example name, by

```Python
Example((input1, output1), example2_values, model=model, name="example_name")
```

### Specifying Variations

You can add variations to the example so that the test generator can automatically create multiple tests. The following variations are supported:

- DefaultVariation, i.e. no variation
- DataTypeConverter
- DataLayoutConverter
- AxisConverter
- RelaxedModeConverter
- ActivationConverter
- AllOutputsAsInternalCoverter

#### DataTypeConverter

Convert input/parameter/output to the specified type, e.g. float32 -> quant8. The target data type for each operand to transform has to be explicitly specified. It is the spec writer's responsibility to ensure that such a conversion is valid.

```Python
converter = DataTypeConverter(name="variation_name").Identify({
    op1: (target_type, target_scale, target_zero_point),
    op2: (target_type, target_scale, target_zero_point),
    ...
})
```

#### DataLayoutConverter

Convert input/parameter/output between NHWC and NCHW.
The caller needs to provide a list of target operands to transform, and also the data layout parameter to set.

```Python
converter = DataLayoutConverter(target_data_layout, name="variation_name").Identify(
    [op1, op2, ..., layout_parameter]
)
```

#### AxisConverter

Transpose a certain axis in input/output to a target position, and optionally remove some axes. The caller needs to provide a list of target operands to transform, and also the axis parameter to set.

```Python
converter = AxisConverter(originalAxis, targetAxis, dimension, drop=[], name="variation_name").Identify(
    [op1, op2, ..., axis_parameter]
)
```

This model variation is for ops that apply a calculation along a certain axis, such as L2_NORMALIZATION, SOFTMAX, and CHANNEL_SHUFFLE. For example, consider L2_NORMALIZATION with input of shape [2, 3, 4, 5] along the last axis, i.e. axis = -1. The output shape would be the same as the input. We can create a new model which will do the calculation along axis 0 by transposing the input and output shapes to [5, 2, 3, 4] and modifying the axis parameter to 0. Such a converter can be defined as

```Python
toAxis0 = AxisConverter(-1, 0, 4).Identify([input, output, axis])
```

The target axis can also be negative to test negative indexing:

```Python
toAxis0 = AxisConverter(-1, -4, 4).Identify([input, output, axis])
```

Considering the same L2_NORMALIZATION example, we can also create a new model with input/output of 2-D shape [4, 5] by removing the first two dimensions. This is essentially doing `new_input = input[0,0,:,:]` in numpy. Such a converter can be defined as

```Python
toDim2 = AxisConverter(-1, -1, 4, drop=[0, 1]).Identify([input, output, axis])
```

If transposition and removal are specified at the same time, the converter will do the transposition first and then remove the axes. For example, the following converter will result in shape [5, 4] and axis 0.
```Python
toDim2Axis0 = AxisConverter(-1, 2, 4, drop=[0, 1]).Identify([input, output, axis])
```

#### RelaxedModeConverter

Convert the model to enable/disable relaxed computation.

```Python
converter = RelaxedModeConverter(is_relaxed, name="variation_name")
```

#### ActivationConverter

Convert the output by a certain activation; the original activation is assumed to be NONE. The caller needs to provide a list of target operands to transform, and also the activation parameter to set.

```Python
converter = ActivationConverter(name="variation_name").Identify(
    [op1, op2, ..., act_parameter]
)
```

#### AllOutputsAsInternalCoverter

Add a dummy ADD operation after each model output to make it an internal operand. Will skip if the model does not have any output tensor that is compatible with the ADD operation, or if the model has more than one operation.

#### Add variation to example

Each example can have multiple groups of variations, and if so, the generator will take the cartesian product of the groups. For example, suppose we declare a model with two groups of two and three variations respectively: `[[default, nchw], [default, relaxed, quant8]]`. This will result in 6 examples: `[default, default], [default, relaxed], [default, quant8], [nchw, default], [nchw, relaxed], [nchw, quant8]`.

Use `AddVariations` to add a group of variations to the example:

```Python
# Add two groups of variations [default, nchw] and [default, relaxed, quant8]
example.AddVariations(nchw).AddVariations(relaxed, quant8)
```

By default, when you add a group of variations, an unnamed default variation will be automatically included in the list.
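The cartesian-product behaviour of variation groups described above can be sketched with plain Python. The group contents mirror the `[[default, nchw], [default, relaxed, quant8]]` example; this is an illustration of the combination rule, not the generator's code.

```Python
import itertools

# Two variation groups; each group implicitly includes the default variation.
group1 = ["default", "nchw"]
group2 = ["default", "relaxed", "quant8"]

# The generator combines one variation from each group: 2 * 3 = 6 tests.
combinations = list(itertools.product(group1, group2))
for combo in combinations:
    print(list(combo))
```

This produces exactly the six pairs listed above, from `[default, default]` through `[nchw, quant8]`.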
You can name the default variation by

```Python
example.AddVariations(nchw, defaultName="nhwc").AddVariations(relaxed, quant8)
```

Also, you can choose not to include the default by

```Python
# Add two groups of variations [nchw] and [default, relaxed, quant8]
example.AddVariations(nchw, includeDefault=False).AddVariations(relaxed, quant8)
```

The example above will result in 3 examples: `[nchw, default], [nchw, relaxed], [nchw, quant8]`.

#### Default variations

By default, the test generator will apply the following variations automatically.

- **AllTensorsAsInputsConverter:** Convert all constant tensors in the model to model inputs. Will skip if the model does not have any constant tensor, or if the model has more than one operation. If not explicitly disabled, this variation will be automatically applied to all tests.

- **AllInputsAsInternalCoverter:** Add a dummy ADD operation before each model input to make it an internal operand. Will skip if the model does not have any input tensor that is compatible with the ADD operation, or if the model has more than one operation. If not explicitly disabled, this variation will be automatically applied to all tests.

- **DynamicOutputShapeConverter:** Convert the model to enable dynamic output shape tests. If not explicitly disabled, this variation will be automatically applied to all tests introduced in HAL version 1.2 or later.

You can opt out by invoking the corresponding methods on examples.

```Python
# Disable AllTensorsAsInputsConverter and AllInputsAsInternalCoverter.
example.DisableLifeTimeVariation()

# Disable DynamicOutputShapeConverter.
example.DisableDynamicOutputShapeVariation()
```

You may also mark a certain operand as input/const-only so that `AllInputsAsInternalCoverter` will skip converting this operand.
```Python
# "hash" will be converted to a model input when applying AllTensorsAsInputsConverter,
# but will be skipped when further applying AllInputsAsInternalCoverter.
hash = Parameter("hash", "TENSOR_FLOAT32", "{1, 1}", [0.123]).ShouldNeverBeInternal()
```

#### Some helper functions

The test generator provides several helper functions or shorthands to add commonly used groups of variations.

```Python
# Each of the following groups of statements is equivalent.

# DataTypeConverter
example.AddVariations(DataTypeConverter().Identify({op1: "TENSOR_FLOAT16", ...}))
example.AddVariations("float16")  # will apply to every TENSOR_FLOAT32 operand

example.AddVariations(DataTypeConverter().Identify({op1: "TENSOR_INT32", ...}))
example.AddVariations("int32")  # will apply to every TENSOR_FLOAT32 operand

# DataLayoutConverter
example.AddVariations(DataLayoutConverter("nchw").Identify(op_list))
example.AddVariations(("nchw", op_list))
example.AddNchw(*op_list)

# AxisConverter
# The original axis and dim are deduced from the op_list.
example.AddVariations(*[AxisConverter(origin, t, dim).Identify(op_list) for t in targets])
example.AddAxis(targets, *op_list)

example.AddVariations(*[
        AxisConverter(origin, t, dim).Identify(op_list) for t in range(dim)
    ], includeDefault=False)
example.AddAllPositiveAxis(*op_list)

example.AddVariations(*[
        AxisConverter(origin, t, dim).Identify(op_list) for t in range(-dim, dim)
    ], includeDefault=False)
example.AddAllAxis(*op_list)

drop = list(range(dim))
drop.pop(origin)
example.AddVariations(*[
    AxisConverter(origin, origin, dim, drop[0:(dim-i)]).Identify(op_list) for i in dims])
example.AddDims(dims, *op_list)

example.AddVariations(*[
    AxisConverter(origin, origin, dim, drop[0:i]).Identify(op_list) for i in range(dim)])
example.AddAllDims(dims, *op_list)

example.AddVariations(*[
        AxisConverter(origin, j, dim, range(i)).Identify(op_list) \
        for i in range(dim) for j in range(i, dim)
    ], includeDefault=False)
example.AddAllDimsAndPositiveAxis(dims, *op_list)

example.AddVariations(*[
        AxisConverter(origin, k, dim, range(i)).Identify(op_list) \
        for i in range(dim) for j in range(i, dim) for k in [j, j - dim]
    ], includeDefault=False)
example.AddAllDimsAndAxis(dims, *op_list)

# RelaxedModeConverter
example.AddVariations(RelaxedModeConverter(True))
example.AddVariations("relaxed")
example.AddRelaxed()

# ActivationConverter
example.AddVariations(ActivationConverter("relu").Identify(op_list))
example.AddVariations(("relu", op_list))
example.AddRelu(*op_list)

example.AddVariations(
    ActivationConverter("relu").Identify(op_list),
    ActivationConverter("relu1").Identify(op_list),
    ActivationConverter("relu6").Identify(op_list))
example.AddVariations(
    ("relu", op_list),
    ("relu1", op_list),
    ("relu6", op_list))
example.AddAllActivations(*op_list)
```

#### Specifying SUBGRAPH conversions

Converters that support nested control flow models accept the following syntax:

```
converter = DataTypeConverter().Identify({
    ...
    subgraphOperand: DataTypeConverter().Identify({
        ...
    }),
    ...
})
```

### Specifying the Model Version

If not explicitly specified, the minimal required HAL version will be inferred from the path, e.g. the models defined in `nn/runtime/test/specs/V1_0/add.mod.py` will all have version `V1_0`. However, there are several exceptions where an operation is under-tested in a previous version and more tests are added in a later version. In such cases, two methods are provided to set the version manually.

#### Set the version when creating the model

Use `IntroducedIn` to set the version of a model. All variations of the model will have the same version.
```Python
model_V1_0 = Model().IntroducedIn("V1_0")
...
# All variations of model_V1_0 will have the same version V1_0.
Example(example, model=model_V1_0).AddVariations(var0, var1, ...)
```

#### Set the version overrides

Use `Example.SetVersion` to override the model version for specific tests. The target tests are specified by name. This method can also override the version specified by `IntroducedIn`.

```Python
Example.SetVersion(<version>, testName0, testName1, ...)
```

This is useful when only a subset of variations has a different version.

### Specifying model inputs and outputs

Use `Model.IdentifyInputs` and `Model.IdentifyOutputs` to explicitly specify model inputs and outputs. This is particularly useful for models referenced by IF and WHILE operations.

```Python
DataType = ["TENSOR_INT32", [1]]
BoolType = ["TENSOR_BOOL8", [1]]

def MakeConditionModel():
    a = Input("a", DataType)
    b = Input("b", DataType)
    out = Output("out", BoolType)
    model = Model()
    model.IdentifyInputs(a, b)  # "a" is unused by the model.
    model.IdentifyOutputs(out)
    model.Operation("LESS", b, [10]).To(out)
    return model

def MakeBodyModel():
    a = Input("a", DataType)
    b = Input("b", DataType)
    a_out = Output("a_out", DataType)
    b_out = Output("b_out", DataType)
    model = Model()
    model.IdentifyInputs(a, b)  # The order is the same as in the WHILE operation.
    model.IdentifyOutputs(a_out, b_out)
    model.Operation("SUB", b, a, 0).To(a_out)
    model.Operation("ADD", b, [1], 0).To(b_out)
    return model

a = Input("a", DataType)
a_out = Output("a_out", DataType)
cond = MakeConditionModel()
body = MakeBodyModel()
b_init = [1]
Model().Operation("WHILE", cond, body, a, b_init).To(a_out)
```

### Creating negative tests

A negative test, also known as a validation test, supplies an invalid model or request and expects the target framework or driver to fail gracefully. You can use `ExpectFailure` to tag an example as invalid.

```Python
Example.ExpectFailure()
```

### A Complete Example

```Python
# Declare input, output, and parameters
i1 = Input("op1", ("TENSOR_FLOAT32", [1, 3, 4, 1]))
f1 = Parameter("op2", ("TENSOR_FLOAT32", [1, 3, 3, 1]), [1, 4, 7, 2, 5, 8, 3, 6, 9])
b1 = Parameter("op3", ("TENSOR_FLOAT32", [1]), [-200])
act = Int32Scalar("act", 0)
o1 = Output("op4", ("TENSOR_FLOAT32", [1, 3, 4, 1]))

# Instantiate a model and add a CONV_2D operation
# Use implicit parameters for implicit padding and strides
Model().Operation("CONV_2D", i1, f1, b1, 1, 1, 1, act, layout).To(o1)

# Additional data type
quant8 = DataTypeConverter().Identify({
    i1: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
    f1: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
    b1: ("TENSOR_INT32", 0.25, 0),
    o1: ("TENSOR_QUANT8_ASYMM", 1.0, 50)
})

# Instantiate an example
example = Example({
    i1: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
    o1: [0, 0, 0, 0, 35, 112, 157, 0, 0, 34, 61, 0]
})

# Only use NCHW data layout
example.AddNchw(i1, f1, o1, layout, includeDefault=False)

# Add two more groups of variations
example.AddInput(f1, b1).AddVariations("relaxed", quant8).AddAllActivations(o1, act)

# The following variations are added implicitly.
# example.AddVariations(AllTensorsAsInputsConverter())
# example.AddVariations(AllInputsAsInternalCoverter())

# The following variation is added implicitly if this test is introduced in v1.2 or later.
# example.AddVariations(DynamicOutputShapeConverter())
```

The spec above will result in 96 tests if introduced in v1.0 or v1.1, and 192 tests if introduced in v1.2 or later.

## Generate Tests

Once you have your model ready, run

```
$ANDROID_BUILD_TOP/frameworks/ml/nn/runtime/test/specs/generate_all_tests.sh
```

It will update all CTS and VTS tests based on the spec files in `nn/runtime/test/specs/V1_*/*`.

Rebuild with `mma` afterwards.
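As a closing sanity check, the 96 and 192 test counts quoted for the complete example above follow from multiplying the sizes of the variation groups. The per-group counts below are our reading of the spec, not an official breakdown.

```Python
# Plausible accounting of the variation-group sizes in the complete example
# (assumed grouping; the generator's exact bookkeeping may differ):
nchw_only = 1           # AddNchw(..., includeDefault=False): NCHW layout only
add_input = 2           # default + f1/b1 converted to inputs
relaxed_quant8 = 3      # default + relaxed + quant8
activations = 4         # none + relu + relu1 + relu6 (AddAllActivations)
tensors_as_inputs = 2   # implicit AllTensorsAsInputsConverter
inputs_as_internal = 2  # implicit AllInputsAsInternalCoverter

tests = (nchw_only * add_input * relaxed_quant8 * activations
         * tensors_as_inputs * inputs_as_internal)
print(tests)      # 96
print(tests * 2)  # 192, doubled by DynamicOutputShapeConverter for v1.2+
```

The cartesian product of the explicit and implicit groups yields 96 tests, and the implicit dynamic-output-shape variation doubles that to 192 for specs introduced in v1.2 or later.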