1<html><body> 2<style> 3 4body, h1, h2, h3, div, span, p, pre, a { 5 margin: 0; 6 padding: 0; 7 border: 0; 8 font-weight: inherit; 9 font-style: inherit; 10 font-size: 100%; 11 font-family: inherit; 12 vertical-align: baseline; 13} 14 15body { 16 font-size: 13px; 17 padding: 1em; 18} 19 20h1 { 21 font-size: 26px; 22 margin-bottom: 1em; 23} 24 25h2 { 26 font-size: 24px; 27 margin-bottom: 1em; 28} 29 30h3 { 31 font-size: 20px; 32 margin-bottom: 1em; 33 margin-top: 1em; 34} 35 36pre, code { 37 line-height: 1.5; 38 font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; 39} 40 41pre { 42 margin-top: 0.5em; 43} 44 45h1, h2, h3, p { 46 font-family: Arial, sans serif; 47} 48 49h1, h2, h3 { 50 border-bottom: solid #CCC 1px; 51} 52 53.toc_element { 54 margin-top: 0.5em; 55} 56 57.firstline { 58 margin-left: 2 em; 59} 60 61.method { 62 margin-top: 1em; 63 border: solid 1px #CCC; 64 padding: 1em; 65 background: #EEE; 66} 67 68.details { 69 font-weight: bold; 70 font-size: 14px; 71} 72 73</style> 74 75<h1><a href="remotebuildexecution_v2.html">Remote Build Execution API</a> . <a href="remotebuildexecution_v2.actionResults.html">actionResults</a></h1> 76<h2>Instance Methods</h2> 77<p class="toc_element"> 78 <code><a href="#get">get(instanceName, hash, sizeBytes, inlineStdout=None, inlineOutputFiles=None, inlineStderr=None, x__xgafv=None)</a></code></p> 79<p class="firstline">Retrieve a cached execution result.</p> 80<p class="toc_element"> 81 <code><a href="#update">update(instanceName, hash, sizeBytes, body, resultsCachePolicy_priority=None, x__xgafv=None)</a></code></p> 82<p class="firstline">Upload a new execution result.</p> 83<h3>Method Details</h3> 84<div class="method"> 85 <code class="details" id="get">get(instanceName, hash, sizeBytes, inlineStdout=None, inlineOutputFiles=None, inlineStderr=None, x__xgafv=None)</code> 86 <pre>Retrieve a cached execution result. 87 88Implementations SHOULD ensure that any blobs referenced from the 89ContentAddressableStorage 90are available at the time of returning the 91ActionResult and will be 92for some period of time afterwards. The TTLs of the referenced blobs SHOULD be increased 93if necessary and applicable. 94 95Errors: 96 97* `NOT_FOUND`: The requested `ActionResult` is not in the cache. 98 99Args: 100 instanceName: string, The instance of the execution system to operate against. A server may 101support multiple instances of the execution system (with their own workers, 102storage, caches, etc.). The server MAY require use of this field to select 103between them in an implementation-defined fashion, otherwise it can be 104omitted. (required) 105 hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string 106exactly 64 characters long. (required) 107 sizeBytes: string, The size of the blob, in bytes. (required) 108 inlineStdout: boolean, A hint to the server to request inlining stdout in the 109ActionResult message. 110 inlineOutputFiles: string, A hint to the server to inline the contents of the listed output files. 111Each path needs to exactly match one path in `output_files` in the 112Command message. (repeated) 113 inlineStderr: boolean, A hint to the server to request inlining stderr in the 114ActionResult message. 115 x__xgafv: string, V1 error format. 116 Allowed values 117 1 - v1 error format 118 2 - v2 error format 119 120Returns: 121 An object of the form: 122 123 { # An ActionResult represents the result of an 124 # Action being run. 125 "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. 126 "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. 127 "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. 128 "queuedTimestamp": "A String", # When was the action added to the queue. 129 "worker": "A String", # The name of the worker which ran the execution. 130 "executionStartTimestamp": "A String", # When the worker started executing the action command. 131 "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. 132 "workerStartTimestamp": "A String", # When the worker received the action. 133 "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. 134 "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. 135 "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. 136 }, 137 "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those 138 # may be links to other output files, or input files, or even absolute paths 139 # outside of the working directory, if the server supports 140 # SymlinkAbsolutePathStrategy.ALLOWED. 141 # For each output file requested in the `output_files` field of the Action, 142 # if the corresponding file existed after 143 # the action completed, a single entry will be present either in this field, 144 # or in the `output_files` field, if the file was not a symbolic link. 145 # 146 # If an output symbolic link of the same name was found, but its target 147 # type was not a regular file, the server will return a FAILED_PRECONDITION. 148 # If the action does not produce the requested output, then that output 149 # will be omitted from the list. The server is free to arrange the output 150 # list as desired; clients MUST NOT assume that the output list is sorted. 151 { # An `OutputSymlink` is similar to a 152 # Symlink, but it is used as an 153 # output in an `ActionResult`. 154 # 155 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 156 "path": "A String", # The full path of the symlink relative to the working directory, including the 157 # filename. The path separator is a forward slash `/`. Since this is a 158 # relative path, it MUST NOT begin with a leading forward slash. 159 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 160 # The target path can be relative to the parent directory of the symlink or 161 # it can be an absolute path starting with `/`. Support for absolute paths 162 # can be checked using the Capabilities 163 # API. The canonical form forbids the substrings `/./` and `//` in the target 164 # path. `..` components are allowed anywhere in the target path. 165 }, 166 ], 167 "stderrDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard error of the action, which 168 # can be retrieved from the 169 # ContentAddressableStorage. 170 # and its hash. The hash algorithm to use is defined by the server, but servers 171 # SHOULD use SHA-256. 172 # 173 # The size is considered to be an integral part of the digest and cannot be 174 # separated. That is, even if the `hash` field is correctly specified but 175 # `size_bytes` is not, the server MUST reject the request. 176 # 177 # The reason for including the size in the digest is as follows: in a great 178 # many cases, the server needs to know the size of the blob it is about to work 179 # with prior to starting an operation with it, such as flattening Merkle tree 180 # structures or streaming it to a worker. Technically, the server could 181 # implement a separate metadata store, but this results in a significantly more 182 # complicated implementation as opposed to having the client specify the size 183 # up-front (or storing the size along with the digest in every message where 184 # digests are embedded). This does mean that the API leaks some implementation 185 # details of (what we consider to be) a reasonable server implementation, but 186 # we consider this to be a worthwhile tradeoff. 187 # 188 # When a `Digest` is used to refer to a proto message, it always refers to the 189 # message in binary encoded form. To ensure consistent hashing, clients and 190 # servers MUST ensure that they serialize messages according to the following 191 # rules, even if there are alternate valid encodings for the same message: 192 # 193 # * Fields are serialized in tag order. 194 # * There are no unknown fields. 195 # * There are no duplicate fields. 196 # * Fields are serialized according to the default semantics for their type. 197 # 198 # Most protocol buffer implementations will always follow these rules when 199 # serializing, but care should be taken to avoid shortcuts. For instance, 200 # concatenating two messages to merge them may produce duplicate fields. 201 "sizeBytes": "A String", # The size of the blob, in bytes. 202 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 203 # exactly 64 characters long. 204 }, 205 "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline 206 # stdout unless requested by the client in the 207 # GetActionResultRequest 208 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 209 # would cause the response to exceed message size limits. 210 "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline 211 # stderr unless requested by the client in the 212 # GetActionResultRequest 213 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 214 # would cause the response to exceed message size limits. 215 "stdoutDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard output of the action, which 216 # can be retrieved from the 217 # ContentAddressableStorage. 218 # and its hash. The hash algorithm to use is defined by the server, but servers 219 # SHOULD use SHA-256. 220 # 221 # The size is considered to be an integral part of the digest and cannot be 222 # separated. That is, even if the `hash` field is correctly specified but 223 # `size_bytes` is not, the server MUST reject the request. 224 # 225 # The reason for including the size in the digest is as follows: in a great 226 # many cases, the server needs to know the size of the blob it is about to work 227 # with prior to starting an operation with it, such as flattening Merkle tree 228 # structures or streaming it to a worker. Technically, the server could 229 # implement a separate metadata store, but this results in a significantly more 230 # complicated implementation as opposed to having the client specify the size 231 # up-front (or storing the size along with the digest in every message where 232 # digests are embedded). This does mean that the API leaks some implementation 233 # details of (what we consider to be) a reasonable server implementation, but 234 # we consider this to be a worthwhile tradeoff. 235 # 236 # When a `Digest` is used to refer to a proto message, it always refers to the 237 # message in binary encoded form. To ensure consistent hashing, clients and 238 # servers MUST ensure that they serialize messages according to the following 239 # rules, even if there are alternate valid encodings for the same message: 240 # 241 # * Fields are serialized in tag order. 242 # * There are no unknown fields. 243 # * There are no duplicate fields. 244 # * Fields are serialized according to the default semantics for their type. 245 # 246 # Most protocol buffer implementations will always follow these rules when 247 # serializing, but care should be taken to avoid shortcuts. For instance, 248 # concatenating two messages to merge them may produce duplicate fields. 249 "sizeBytes": "A String", # The size of the blob, in bytes. 250 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 251 # exactly 64 characters long. 252 }, 253 "outputFiles": [ # The output files of the action. For each output file requested in the 254 # `output_files` field of the Action, if the corresponding file existed after 255 # the action completed, a single entry will be present either in this field, 256 # or the `output_file_symlinks` field if the file was a symbolic link to 257 # another file. 258 # 259 # If an output of the same name was found, but was a directory rather 260 # than a regular file, the server will return a FAILED_PRECONDITION. 261 # If the action does not produce the requested output, then that output 262 # will be omitted from the list. The server is free to arrange the output 263 # list as desired; clients MUST NOT assume that the output list is sorted. 264 { # An `OutputFile` is similar to a 265 # FileNode, but it is used as an 266 # output in an `ActionResult`. It allows a full file path rather than 267 # only a name. 268 "path": "A String", # The full path of the file relative to the working directory, including the 269 # filename. The path separator is a forward slash `/`. Since this is a 270 # relative path, it MUST NOT begin with a leading forward slash. 271 "isExecutable": True or False, # True if file is executable, false otherwise. 272 "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline 273 # file contents unless requested by the client in the 274 # GetActionResultRequest 275 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 276 # would cause the response to exceed message size limits. 277 "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the file's content. 278 # and its hash. The hash algorithm to use is defined by the server, but servers 279 # SHOULD use SHA-256. 280 # 281 # The size is considered to be an integral part of the digest and cannot be 282 # separated. That is, even if the `hash` field is correctly specified but 283 # `size_bytes` is not, the server MUST reject the request. 284 # 285 # The reason for including the size in the digest is as follows: in a great 286 # many cases, the server needs to know the size of the blob it is about to work 287 # with prior to starting an operation with it, such as flattening Merkle tree 288 # structures or streaming it to a worker. Technically, the server could 289 # implement a separate metadata store, but this results in a significantly more 290 # complicated implementation as opposed to having the client specify the size 291 # up-front (or storing the size along with the digest in every message where 292 # digests are embedded). This does mean that the API leaks some implementation 293 # details of (what we consider to be) a reasonable server implementation, but 294 # we consider this to be a worthwhile tradeoff. 295 # 296 # When a `Digest` is used to refer to a proto message, it always refers to the 297 # message in binary encoded form. To ensure consistent hashing, clients and 298 # servers MUST ensure that they serialize messages according to the following 299 # rules, even if there are alternate valid encodings for the same message: 300 # 301 # * Fields are serialized in tag order. 302 # * There are no unknown fields. 303 # * There are no duplicate fields. 304 # * Fields are serialized according to the default semantics for their type. 305 # 306 # Most protocol buffer implementations will always follow these rules when 307 # serializing, but care should be taken to avoid shortcuts. For instance, 308 # concatenating two messages to merge them may produce duplicate fields. 309 "sizeBytes": "A String", # The size of the blob, in bytes. 310 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 311 # exactly 64 characters long. 312 }, 313 }, 314 ], 315 "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other 316 # directories. Those may be links to other output directories, or input 317 # directories, or even absolute paths outside of the working directory, 318 # if the server supports 319 # SymlinkAbsolutePathStrategy.ALLOWED. 320 # For each output directory requested in the `output_directories` field of 321 # the Action, if the directory existed after the action completed, a 322 # single entry will be present either in this field, or in the 323 # `output_directories` field, if the directory was not a symbolic link. 324 # 325 # If an output of the same name was found, but was a symbolic link to a file 326 # instead of a directory, the server will return a FAILED_PRECONDITION. 327 # If the action does not produce the requested output, then that output 328 # will be omitted from the list. The server is free to arrange the output 329 # list as desired; clients MUST NOT assume that the output list is sorted. 330 { # An `OutputSymlink` is similar to a 331 # Symlink, but it is used as an 332 # output in an `ActionResult`. 333 # 334 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 335 "path": "A String", # The full path of the symlink relative to the working directory, including the 336 # filename. The path separator is a forward slash `/`. Since this is a 337 # relative path, it MUST NOT begin with a leading forward slash. 338 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 339 # The target path can be relative to the parent directory of the symlink or 340 # it can be an absolute path starting with `/`. Support for absolute paths 341 # can be checked using the Capabilities 342 # API. The canonical form forbids the substrings `/./` and `//` in the target 343 # path. `..` components are allowed anywhere in the target path. 344 }, 345 ], 346 "outputDirectories": [ # The output directories of the action. For each output directory requested 347 # in the `output_directories` field of the Action, if the corresponding 348 # directory existed after the action completed, a single entry will be 349 # present in the output list, which will contain the digest of a 350 # Tree message containing the 351 # directory tree, and the path equal exactly to the corresponding Action 352 # output_directories member. 353 # 354 # As an example, suppose the Action had an output directory `a/b/dir` and the 355 # execution produced the following contents in `a/b/dir`: a file named `bar` 356 # and a directory named `foo` with an executable file named `baz`. Then, 357 # output_directory will contain (hashes shortened for readability): 358 # 359 # ```json 360 # // OutputDirectory proto: 361 # { 362 # path: "a/b/dir" 363 # tree_digest: { 364 # hash: "4a73bc9d03...", 365 # size: 55 366 # } 367 # } 368 # // Tree proto with hash "4a73bc9d03..." and size 55: 369 # { 370 # root: { 371 # files: [ 372 # { 373 # name: "bar", 374 # digest: { 375 # hash: "4a73bc9d03...", 376 # size: 65534 377 # } 378 # } 379 # ], 380 # directories: [ 381 # { 382 # name: "foo", 383 # digest: { 384 # hash: "4cf2eda940...", 385 # size: 43 386 # } 387 # } 388 # ] 389 # } 390 # children : { 391 # // (Directory proto with hash "4cf2eda940..." and size 43) 392 # files: [ 393 # { 394 # name: "baz", 395 # digest: { 396 # hash: "b2c941073e...", 397 # size: 1294, 398 # }, 399 # is_executable: true 400 # } 401 # ] 402 # } 403 # } 404 # ``` 405 # If an output of the same name was found, but was not a directory, the 406 # server will return a FAILED_PRECONDITION. 407 { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a 408 # directory's full contents rather than a single file. 409 "path": "A String", # The full path of the directory relative to the working directory. The path 410 # separator is a forward slash `/`. Since this is a relative path, it MUST 411 # NOT begin with a leading forward slash. The empty string value is allowed, 412 # and it denotes the entire working directory. 413 "treeDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the encoded 414 # Tree proto containing the 415 # directory's contents. 416 # and its hash. The hash algorithm to use is defined by the server, but servers 417 # SHOULD use SHA-256. 418 # 419 # The size is considered to be an integral part of the digest and cannot be 420 # separated. That is, even if the `hash` field is correctly specified but 421 # `size_bytes` is not, the server MUST reject the request. 422 # 423 # The reason for including the size in the digest is as follows: in a great 424 # many cases, the server needs to know the size of the blob it is about to work 425 # with prior to starting an operation with it, such as flattening Merkle tree 426 # structures or streaming it to a worker. Technically, the server could 427 # implement a separate metadata store, but this results in a significantly more 428 # complicated implementation as opposed to having the client specify the size 429 # up-front (or storing the size along with the digest in every message where 430 # digests are embedded). This does mean that the API leaks some implementation 431 # details of (what we consider to be) a reasonable server implementation, but 432 # we consider this to be a worthwhile tradeoff. 433 # 434 # When a `Digest` is used to refer to a proto message, it always refers to the 435 # message in binary encoded form. To ensure consistent hashing, clients and 436 # servers MUST ensure that they serialize messages according to the following 437 # rules, even if there are alternate valid encodings for the same message: 438 # 439 # * Fields are serialized in tag order. 440 # * There are no unknown fields. 441 # * There are no duplicate fields. 442 # * Fields are serialized according to the default semantics for their type. 443 # 444 # Most protocol buffer implementations will always follow these rules when 445 # serializing, but care should be taken to avoid shortcuts. For instance, 446 # concatenating two messages to merge them may produce duplicate fields. 447 "sizeBytes": "A String", # The size of the blob, in bytes. 448 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 449 # exactly 64 characters long. 450 }, 451 }, 452 ], 453 "exitCode": 42, # The exit code of the command. 454 }</pre> 455</div> 456 457<div class="method"> 458 <code class="details" id="update">update(instanceName, hash, sizeBytes, body, resultsCachePolicy_priority=None, x__xgafv=None)</code> 459 <pre>Upload a new execution result. 460 461In order to allow the server to perform access control based on the type of 462action, and to assist with client debugging, the client MUST first upload 463the Action that produced the 464result, along with its 465Command, into the 466`ContentAddressableStorage`. 467 468Errors: 469 470* `INVALID_ARGUMENT`: One or more arguments are invalid. 471* `FAILED_PRECONDITION`: One or more errors occurred in updating the 472 action result, such as a missing command or action. 473* `RESOURCE_EXHAUSTED`: There is insufficient storage space to add the 474 entry to the cache. 475 476Args: 477 instanceName: string, The instance of the execution system to operate against. A server may 478support multiple instances of the execution system (with their own workers, 479storage, caches, etc.). The server MAY require use of this field to select 480between them in an implementation-defined fashion, otherwise it can be 481omitted. (required) 482 hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string 483exactly 64 characters long. (required) 484 sizeBytes: string, The size of the blob, in bytes. (required) 485 body: object, The request body. (required) 486 The object takes the form of: 487 488{ # An ActionResult represents the result of an 489 # Action being run. 490 "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. 491 "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. 492 "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. 493 "queuedTimestamp": "A String", # When was the action added to the queue. 494 "worker": "A String", # The name of the worker which ran the execution. 495 "executionStartTimestamp": "A String", # When the worker started executing the action command. 496 "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. 497 "workerStartTimestamp": "A String", # When the worker received the action. 498 "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. 499 "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. 500 "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. 501 }, 502 "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those 503 # may be links to other output files, or input files, or even absolute paths 504 # outside of the working directory, if the server supports 505 # SymlinkAbsolutePathStrategy.ALLOWED. 506 # For each output file requested in the `output_files` field of the Action, 507 # if the corresponding file existed after 508 # the action completed, a single entry will be present either in this field, 509 # or in the `output_files` field, if the file was not a symbolic link. 510 # 511 # If an output symbolic link of the same name was found, but its target 512 # type was not a regular file, the server will return a FAILED_PRECONDITION. 513 # If the action does not produce the requested output, then that output 514 # will be omitted from the list. The server is free to arrange the output 515 # list as desired; clients MUST NOT assume that the output list is sorted. 516 { # An `OutputSymlink` is similar to a 517 # Symlink, but it is used as an 518 # output in an `ActionResult`. 519 # 520 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 521 "path": "A String", # The full path of the symlink relative to the working directory, including the 522 # filename. The path separator is a forward slash `/`. Since this is a 523 # relative path, it MUST NOT begin with a leading forward slash. 524 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 525 # The target path can be relative to the parent directory of the symlink or 526 # it can be an absolute path starting with `/`. Support for absolute paths 527 # can be checked using the Capabilities 528 # API. The canonical form forbids the substrings `/./` and `//` in the target 529 # path. `..` components are allowed anywhere in the target path. 530 }, 531 ], 532 "stderrDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard error of the action, which 533 # can be retrieved from the 534 # ContentAddressableStorage. 535 # and its hash. The hash algorithm to use is defined by the server, but servers 536 # SHOULD use SHA-256. 537 # 538 # The size is considered to be an integral part of the digest and cannot be 539 # separated. That is, even if the `hash` field is correctly specified but 540 # `size_bytes` is not, the server MUST reject the request. 541 # 542 # The reason for including the size in the digest is as follows: in a great 543 # many cases, the server needs to know the size of the blob it is about to work 544 # with prior to starting an operation with it, such as flattening Merkle tree 545 # structures or streaming it to a worker. Technically, the server could 546 # implement a separate metadata store, but this results in a significantly more 547 # complicated implementation as opposed to having the client specify the size 548 # up-front (or storing the size along with the digest in every message where 549 # digests are embedded). This does mean that the API leaks some implementation 550 # details of (what we consider to be) a reasonable server implementation, but 551 # we consider this to be a worthwhile tradeoff. 552 # 553 # When a `Digest` is used to refer to a proto message, it always refers to the 554 # message in binary encoded form. To ensure consistent hashing, clients and 555 # servers MUST ensure that they serialize messages according to the following 556 # rules, even if there are alternate valid encodings for the same message: 557 # 558 # * Fields are serialized in tag order. 559 # * There are no unknown fields. 560 # * There are no duplicate fields. 561 # * Fields are serialized according to the default semantics for their type. 562 # 563 # Most protocol buffer implementations will always follow these rules when 564 # serializing, but care should be taken to avoid shortcuts. For instance, 565 # concatenating two messages to merge them may produce duplicate fields. 566 "sizeBytes": "A String", # The size of the blob, in bytes. 567 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 568 # exactly 64 characters long. 569 }, 570 "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline 571 # stdout unless requested by the client in the 572 # GetActionResultRequest 573 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 574 # would cause the response to exceed message size limits. 575 "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline 576 # stderr unless requested by the client in the 577 # GetActionResultRequest 578 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 579 # would cause the response to exceed message size limits. 580 "stdoutDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard output of the action, which 581 # can be retrieved from the 582 # ContentAddressableStorage. 583 # and its hash. The hash algorithm to use is defined by the server, but servers 584 # SHOULD use SHA-256. 585 # 586 # The size is considered to be an integral part of the digest and cannot be 587 # separated. That is, even if the `hash` field is correctly specified but 588 # `size_bytes` is not, the server MUST reject the request. 589 # 590 # The reason for including the size in the digest is as follows: in a great 591 # many cases, the server needs to know the size of the blob it is about to work 592 # with prior to starting an operation with it, such as flattening Merkle tree 593 # structures or streaming it to a worker. Technically, the server could 594 # implement a separate metadata store, but this results in a significantly more 595 # complicated implementation as opposed to having the client specify the size 596 # up-front (or storing the size along with the digest in every message where 597 # digests are embedded). This does mean that the API leaks some implementation 598 # details of (what we consider to be) a reasonable server implementation, but 599 # we consider this to be a worthwhile tradeoff. 600 # 601 # When a `Digest` is used to refer to a proto message, it always refers to the 602 # message in binary encoded form. To ensure consistent hashing, clients and 603 # servers MUST ensure that they serialize messages according to the following 604 # rules, even if there are alternate valid encodings for the same message: 605 # 606 # * Fields are serialized in tag order. 607 # * There are no unknown fields. 608 # * There are no duplicate fields. 609 # * Fields are serialized according to the default semantics for their type. 610 # 611 # Most protocol buffer implementations will always follow these rules when 612 # serializing, but care should be taken to avoid shortcuts. For instance, 613 # concatenating two messages to merge them may produce duplicate fields. 614 "sizeBytes": "A String", # The size of the blob, in bytes. 615 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 616 # exactly 64 characters long. 617 }, 618 "outputFiles": [ # The output files of the action. For each output file requested in the 619 # `output_files` field of the Action, if the corresponding file existed after 620 # the action completed, a single entry will be present either in this field, 621 # or the `output_file_symlinks` field if the file was a symbolic link to 622 # another file. 623 # 624 # If an output of the same name was found, but was a directory rather 625 # than a regular file, the server will return a FAILED_PRECONDITION. 626 # If the action does not produce the requested output, then that output 627 # will be omitted from the list. The server is free to arrange the output 628 # list as desired; clients MUST NOT assume that the output list is sorted. 629 { # An `OutputFile` is similar to a 630 # FileNode, but it is used as an 631 # output in an `ActionResult`. It allows a full file path rather than 632 # only a name. 633 "path": "A String", # The full path of the file relative to the working directory, including the 634 # filename. The path separator is a forward slash `/`. Since this is a 635 # relative path, it MUST NOT begin with a leading forward slash. 636 "isExecutable": True or False, # True if file is executable, false otherwise. 637 "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline 638 # file contents unless requested by the client in the 639 # GetActionResultRequest 640 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 641 # would cause the response to exceed message size limits. 642 "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the file's content. 643 # and its hash. The hash algorithm to use is defined by the server, but servers 644 # SHOULD use SHA-256. 645 # 646 # The size is considered to be an integral part of the digest and cannot be 647 # separated. That is, even if the `hash` field is correctly specified but 648 # `size_bytes` is not, the server MUST reject the request. 649 # 650 # The reason for including the size in the digest is as follows: in a great 651 # many cases, the server needs to know the size of the blob it is about to work 652 # with prior to starting an operation with it, such as flattening Merkle tree 653 # structures or streaming it to a worker. Technically, the server could 654 # implement a separate metadata store, but this results in a significantly more 655 # complicated implementation as opposed to having the client specify the size 656 # up-front (or storing the size along with the digest in every message where 657 # digests are embedded). This does mean that the API leaks some implementation 658 # details of (what we consider to be) a reasonable server implementation, but 659 # we consider this to be a worthwhile tradeoff. 660 # 661 # When a `Digest` is used to refer to a proto message, it always refers to the 662 # message in binary encoded form. To ensure consistent hashing, clients and 663 # servers MUST ensure that they serialize messages according to the following 664 # rules, even if there are alternate valid encodings for the same message: 665 # 666 # * Fields are serialized in tag order. 667 # * There are no unknown fields. 668 # * There are no duplicate fields. 669 # * Fields are serialized according to the default semantics for their type. 670 # 671 # Most protocol buffer implementations will always follow these rules when 672 # serializing, but care should be taken to avoid shortcuts. For instance, 673 # concatenating two messages to merge them may produce duplicate fields. 674 "sizeBytes": "A String", # The size of the blob, in bytes. 675 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 676 # exactly 64 characters long. 677 }, 678 }, 679 ], 680 "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other 681 # directories. Those may be links to other output directories, or input 682 # directories, or even absolute paths outside of the working directory, 683 # if the server supports 684 # SymlinkAbsolutePathStrategy.ALLOWED. 685 # For each output directory requested in the `output_directories` field of 686 # the Action, if the directory existed after the action completed, a 687 # single entry will be present either in this field, or in the 688 # `output_directories` field, if the directory was not a symbolic link. 689 # 690 # If an output of the same name was found, but was a symbolic link to a file 691 # instead of a directory, the server will return a FAILED_PRECONDITION. 692 # If the action does not produce the requested output, then that output 693 # will be omitted from the list. The server is free to arrange the output 694 # list as desired; clients MUST NOT assume that the output list is sorted. 695 { # An `OutputSymlink` is similar to a 696 # Symlink, but it is used as an 697 # output in an `ActionResult`. 698 # 699 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 700 "path": "A String", # The full path of the symlink relative to the working directory, including the 701 # filename. The path separator is a forward slash `/`. Since this is a 702 # relative path, it MUST NOT begin with a leading forward slash. 703 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 704 # The target path can be relative to the parent directory of the symlink or 705 # it can be an absolute path starting with `/`. Support for absolute paths 706 # can be checked using the Capabilities 707 # API. The canonical form forbids the substrings `/./` and `//` in the target 708 # path. `..` components are allowed anywhere in the target path. 709 }, 710 ], 711 "outputDirectories": [ # The output directories of the action. For each output directory requested 712 # in the `output_directories` field of the Action, if the corresponding 713 # directory existed after the action completed, a single entry will be 714 # present in the output list, which will contain the digest of a 715 # Tree message containing the 716 # directory tree, and the path equal exactly to the corresponding Action 717 # output_directories member. 718 # 719 # As an example, suppose the Action had an output directory `a/b/dir` and the 720 # execution produced the following contents in `a/b/dir`: a file named `bar` 721 # and a directory named `foo` with an executable file named `baz`. Then, 722 # output_directory will contain (hashes shortened for readability): 723 # 724 # ```json 725 # // OutputDirectory proto: 726 # { 727 # path: "a/b/dir" 728 # tree_digest: { 729 # hash: "4a73bc9d03...", 730 # size: 55 731 # } 732 # } 733 # // Tree proto with hash "4a73bc9d03..." and size 55: 734 # { 735 # root: { 736 # files: [ 737 # { 738 # name: "bar", 739 # digest: { 740 # hash: "4a73bc9d03...", 741 # size: 65534 742 # } 743 # } 744 # ], 745 # directories: [ 746 # { 747 # name: "foo", 748 # digest: { 749 # hash: "4cf2eda940...", 750 # size: 43 751 # } 752 # } 753 # ] 754 # } 755 # children : { 756 # // (Directory proto with hash "4cf2eda940..." and size 43) 757 # files: [ 758 # { 759 # name: "baz", 760 # digest: { 761 # hash: "b2c941073e...", 762 # size: 1294, 763 # }, 764 # is_executable: true 765 # } 766 # ] 767 # } 768 # } 769 # ``` 770 # If an output of the same name was found, but was not a directory, the 771 # server will return a FAILED_PRECONDITION. 772 { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a 773 # directory's full contents rather than a single file. 774 "path": "A String", # The full path of the directory relative to the working directory. The path 775 # separator is a forward slash `/`. Since this is a relative path, it MUST 776 # NOT begin with a leading forward slash. The empty string value is allowed, 777 # and it denotes the entire working directory. 778 "treeDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the encoded 779 # Tree proto containing the 780 # directory's contents. 781 # and its hash. The hash algorithm to use is defined by the server, but servers 782 # SHOULD use SHA-256. 783 # 784 # The size is considered to be an integral part of the digest and cannot be 785 # separated. That is, even if the `hash` field is correctly specified but 786 # `size_bytes` is not, the server MUST reject the request. 787 # 788 # The reason for including the size in the digest is as follows: in a great 789 # many cases, the server needs to know the size of the blob it is about to work 790 # with prior to starting an operation with it, such as flattening Merkle tree 791 # structures or streaming it to a worker. Technically, the server could 792 # implement a separate metadata store, but this results in a significantly more 793 # complicated implementation as opposed to having the client specify the size 794 # up-front (or storing the size along with the digest in every message where 795 # digests are embedded). This does mean that the API leaks some implementation 796 # details of (what we consider to be) a reasonable server implementation, but 797 # we consider this to be a worthwhile tradeoff. 798 # 799 # When a `Digest` is used to refer to a proto message, it always refers to the 800 # message in binary encoded form. To ensure consistent hashing, clients and 801 # servers MUST ensure that they serialize messages according to the following 802 # rules, even if there are alternate valid encodings for the same message: 803 # 804 # * Fields are serialized in tag order. 805 # * There are no unknown fields. 806 # * There are no duplicate fields. 807 # * Fields are serialized according to the default semantics for their type. 808 # 809 # Most protocol buffer implementations will always follow these rules when 810 # serializing, but care should be taken to avoid shortcuts. For instance, 811 # concatenating two messages to merge them may produce duplicate fields. 812 "sizeBytes": "A String", # The size of the blob, in bytes. 813 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 814 # exactly 64 characters long. 815 }, 816 }, 817 ], 818 "exitCode": 42, # The exit code of the command. 819 } 820 821 resultsCachePolicy_priority: integer, The priority (relative importance) of this content in the overall cache. 822Generally, a lower value means a longer retention time or other advantage, 823but the interpretation of a given value is server-dependent. A priority of 8240 means a *default* value, decided by the server. 825 826The particular semantics of this field is up to the server. In particular, 827every server will have their own supported range of priorities, and will 828decide how these map into retention/eviction policy. 829 x__xgafv: string, V1 error format. 830 Allowed values 831 1 - v1 error format 832 2 - v2 error format 833 834Returns: 835 An object of the form: 836 837 { # An ActionResult represents the result of an 838 # Action being run. 839 "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. 840 "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. 841 "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. 842 "queuedTimestamp": "A String", # When was the action added to the queue. 843 "worker": "A String", # The name of the worker which ran the execution. 844 "executionStartTimestamp": "A String", # When the worker started executing the action command. 845 "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. 846 "workerStartTimestamp": "A String", # When the worker received the action. 847 "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. 848 "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. 849 "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. 850 }, 851 "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those 852 # may be links to other output files, or input files, or even absolute paths 853 # outside of the working directory, if the server supports 854 # SymlinkAbsolutePathStrategy.ALLOWED. 855 # For each output file requested in the `output_files` field of the Action, 856 # if the corresponding file existed after 857 # the action completed, a single entry will be present either in this field, 858 # or in the `output_files` field, if the file was not a symbolic link. 859 # 860 # If an output symbolic link of the same name was found, but its target 861 # type was not a regular file, the server will return a FAILED_PRECONDITION. 862 # If the action does not produce the requested output, then that output 863 # will be omitted from the list. The server is free to arrange the output 864 # list as desired; clients MUST NOT assume that the output list is sorted. 865 { # An `OutputSymlink` is similar to a 866 # Symlink, but it is used as an 867 # output in an `ActionResult`. 868 # 869 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 870 "path": "A String", # The full path of the symlink relative to the working directory, including the 871 # filename. The path separator is a forward slash `/`. Since this is a 872 # relative path, it MUST NOT begin with a leading forward slash. 873 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 874 # The target path can be relative to the parent directory of the symlink or 875 # it can be an absolute path starting with `/`. Support for absolute paths 876 # can be checked using the Capabilities 877 # API. The canonical form forbids the substrings `/./` and `//` in the target 878 # path. `..` components are allowed anywhere in the target path. 879 }, 880 ], 881 "stderrDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard error of the action, which 882 # can be retrieved from the 883 # ContentAddressableStorage. 884 # and its hash. The hash algorithm to use is defined by the server, but servers 885 # SHOULD use SHA-256. 886 # 887 # The size is considered to be an integral part of the digest and cannot be 888 # separated. That is, even if the `hash` field is correctly specified but 889 # `size_bytes` is not, the server MUST reject the request. 890 # 891 # The reason for including the size in the digest is as follows: in a great 892 # many cases, the server needs to know the size of the blob it is about to work 893 # with prior to starting an operation with it, such as flattening Merkle tree 894 # structures or streaming it to a worker. Technically, the server could 895 # implement a separate metadata store, but this results in a significantly more 896 # complicated implementation as opposed to having the client specify the size 897 # up-front (or storing the size along with the digest in every message where 898 # digests are embedded). This does mean that the API leaks some implementation 899 # details of (what we consider to be) a reasonable server implementation, but 900 # we consider this to be a worthwhile tradeoff. 901 # 902 # When a `Digest` is used to refer to a proto message, it always refers to the 903 # message in binary encoded form. To ensure consistent hashing, clients and 904 # servers MUST ensure that they serialize messages according to the following 905 # rules, even if there are alternate valid encodings for the same message: 906 # 907 # * Fields are serialized in tag order. 908 # * There are no unknown fields. 909 # * There are no duplicate fields. 910 # * Fields are serialized according to the default semantics for their type. 911 # 912 # Most protocol buffer implementations will always follow these rules when 913 # serializing, but care should be taken to avoid shortcuts. For instance, 914 # concatenating two messages to merge them may produce duplicate fields. 915 "sizeBytes": "A String", # The size of the blob, in bytes. 916 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 917 # exactly 64 characters long. 918 }, 919 "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline 920 # stdout unless requested by the client in the 921 # GetActionResultRequest 922 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 923 # would cause the response to exceed message size limits. 924 "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline 925 # stderr unless requested by the client in the 926 # GetActionResultRequest 927 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 928 # would cause the response to exceed message size limits. 929 "stdoutDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest for a blob containing the standard output of the action, which 930 # can be retrieved from the 931 # ContentAddressableStorage. 932 # and its hash. The hash algorithm to use is defined by the server, but servers 933 # SHOULD use SHA-256. 934 # 935 # The size is considered to be an integral part of the digest and cannot be 936 # separated. That is, even if the `hash` field is correctly specified but 937 # `size_bytes` is not, the server MUST reject the request. 938 # 939 # The reason for including the size in the digest is as follows: in a great 940 # many cases, the server needs to know the size of the blob it is about to work 941 # with prior to starting an operation with it, such as flattening Merkle tree 942 # structures or streaming it to a worker. Technically, the server could 943 # implement a separate metadata store, but this results in a significantly more 944 # complicated implementation as opposed to having the client specify the size 945 # up-front (or storing the size along with the digest in every message where 946 # digests are embedded). This does mean that the API leaks some implementation 947 # details of (what we consider to be) a reasonable server implementation, but 948 # we consider this to be a worthwhile tradeoff. 949 # 950 # When a `Digest` is used to refer to a proto message, it always refers to the 951 # message in binary encoded form. To ensure consistent hashing, clients and 952 # servers MUST ensure that they serialize messages according to the following 953 # rules, even if there are alternate valid encodings for the same message: 954 # 955 # * Fields are serialized in tag order. 956 # * There are no unknown fields. 957 # * There are no duplicate fields. 958 # * Fields are serialized according to the default semantics for their type. 959 # 960 # Most protocol buffer implementations will always follow these rules when 961 # serializing, but care should be taken to avoid shortcuts. For instance, 962 # concatenating two messages to merge them may produce duplicate fields. 963 "sizeBytes": "A String", # The size of the blob, in bytes. 964 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 965 # exactly 64 characters long. 966 }, 967 "outputFiles": [ # The output files of the action. For each output file requested in the 968 # `output_files` field of the Action, if the corresponding file existed after 969 # the action completed, a single entry will be present either in this field, 970 # or the `output_file_symlinks` field if the file was a symbolic link to 971 # another file. 972 # 973 # If an output of the same name was found, but was a directory rather 974 # than a regular file, the server will return a FAILED_PRECONDITION. 975 # If the action does not produce the requested output, then that output 976 # will be omitted from the list. The server is free to arrange the output 977 # list as desired; clients MUST NOT assume that the output list is sorted. 978 { # An `OutputFile` is similar to a 979 # FileNode, but it is used as an 980 # output in an `ActionResult`. It allows a full file path rather than 981 # only a name. 982 "path": "A String", # The full path of the file relative to the working directory, including the 983 # filename. The path separator is a forward slash `/`. Since this is a 984 # relative path, it MUST NOT begin with a leading forward slash. 985 "isExecutable": True or False, # True if file is executable, false otherwise. 986 "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline 987 # file contents unless requested by the client in the 988 # GetActionResultRequest 989 # message. The server MAY omit inlining, even if requested, and MUST do so if inlining 990 # would cause the response to exceed message size limits. 991 "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the file's content. 992 # and its hash. The hash algorithm to use is defined by the server, but servers 993 # SHOULD use SHA-256. 994 # 995 # The size is considered to be an integral part of the digest and cannot be 996 # separated. That is, even if the `hash` field is correctly specified but 997 # `size_bytes` is not, the server MUST reject the request. 998 # 999 # The reason for including the size in the digest is as follows: in a great 1000 # many cases, the server needs to know the size of the blob it is about to work 1001 # with prior to starting an operation with it, such as flattening Merkle tree 1002 # structures or streaming it to a worker. Technically, the server could 1003 # implement a separate metadata store, but this results in a significantly more 1004 # complicated implementation as opposed to having the client specify the size 1005 # up-front (or storing the size along with the digest in every message where 1006 # digests are embedded). This does mean that the API leaks some implementation 1007 # details of (what we consider to be) a reasonable server implementation, but 1008 # we consider this to be a worthwhile tradeoff. 1009 # 1010 # When a `Digest` is used to refer to a proto message, it always refers to the 1011 # message in binary encoded form. To ensure consistent hashing, clients and 1012 # servers MUST ensure that they serialize messages according to the following 1013 # rules, even if there are alternate valid encodings for the same message: 1014 # 1015 # * Fields are serialized in tag order. 1016 # * There are no unknown fields. 1017 # * There are no duplicate fields. 1018 # * Fields are serialized according to the default semantics for their type. 1019 # 1020 # Most protocol buffer implementations will always follow these rules when 1021 # serializing, but care should be taken to avoid shortcuts. For instance, 1022 # concatenating two messages to merge them may produce duplicate fields. 1023 "sizeBytes": "A String", # The size of the blob, in bytes. 1024 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 1025 # exactly 64 characters long. 1026 }, 1027 }, 1028 ], 1029 "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other 1030 # directories. Those may be links to other output directories, or input 1031 # directories, or even absolute paths outside of the working directory, 1032 # if the server supports 1033 # SymlinkAbsolutePathStrategy.ALLOWED. 1034 # For each output directory requested in the `output_directories` field of 1035 # the Action, if the directory existed after the action completed, a 1036 # single entry will be present either in this field, or in the 1037 # `output_directories` field, if the directory was not a symbolic link. 1038 # 1039 # If an output of the same name was found, but was a symbolic link to a file 1040 # instead of a directory, the server will return a FAILED_PRECONDITION. 1041 # If the action does not produce the requested output, then that output 1042 # will be omitted from the list. The server is free to arrange the output 1043 # list as desired; clients MUST NOT assume that the output list is sorted. 1044 { # An `OutputSymlink` is similar to a 1045 # Symlink, but it is used as an 1046 # output in an `ActionResult`. 1047 # 1048 # `OutputSymlink` is binary-compatible with `SymlinkNode`. 1049 "path": "A String", # The full path of the symlink relative to the working directory, including the 1050 # filename. The path separator is a forward slash `/`. Since this is a 1051 # relative path, it MUST NOT begin with a leading forward slash. 1052 "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. 1053 # The target path can be relative to the parent directory of the symlink or 1054 # it can be an absolute path starting with `/`. Support for absolute paths 1055 # can be checked using the Capabilities 1056 # API. The canonical form forbids the substrings `/./` and `//` in the target 1057 # path. `..` components are allowed anywhere in the target path. 1058 }, 1059 ], 1060 "outputDirectories": [ # The output directories of the action. For each output directory requested 1061 # in the `output_directories` field of the Action, if the corresponding 1062 # directory existed after the action completed, a single entry will be 1063 # present in the output list, which will contain the digest of a 1064 # Tree message containing the 1065 # directory tree, and the path equal exactly to the corresponding Action 1066 # output_directories member. 1067 # 1068 # As an example, suppose the Action had an output directory `a/b/dir` and the 1069 # execution produced the following contents in `a/b/dir`: a file named `bar` 1070 # and a directory named `foo` with an executable file named `baz`. Then, 1071 # output_directory will contain (hashes shortened for readability): 1072 # 1073 # ```json 1074 # // OutputDirectory proto: 1075 # { 1076 # path: "a/b/dir" 1077 # tree_digest: { 1078 # hash: "4a73bc9d03...", 1079 # size: 55 1080 # } 1081 # } 1082 # // Tree proto with hash "4a73bc9d03..." and size 55: 1083 # { 1084 # root: { 1085 # files: [ 1086 # { 1087 # name: "bar", 1088 # digest: { 1089 # hash: "4a73bc9d03...", 1090 # size: 65534 1091 # } 1092 # } 1093 # ], 1094 # directories: [ 1095 # { 1096 # name: "foo", 1097 # digest: { 1098 # hash: "4cf2eda940...", 1099 # size: 43 1100 # } 1101 # } 1102 # ] 1103 # } 1104 # children : { 1105 # // (Directory proto with hash "4cf2eda940..." and size 43) 1106 # files: [ 1107 # { 1108 # name: "baz", 1109 # digest: { 1110 # hash: "b2c941073e...", 1111 # size: 1294, 1112 # }, 1113 # is_executable: true 1114 # } 1115 # ] 1116 # } 1117 # } 1118 # ``` 1119 # If an output of the same name was found, but was not a directory, the 1120 # server will return a FAILED_PRECONDITION. 1121 { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a 1122 # directory's full contents rather than a single file. 1123 "path": "A String", # The full path of the directory relative to the working directory. The path 1124 # separator is a forward slash `/`. Since this is a relative path, it MUST 1125 # NOT begin with a leading forward slash. The empty string value is allowed, 1126 # and it denotes the entire working directory. 1127 "treeDigest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the encoded 1128 # Tree proto containing the 1129 # directory's contents. 1130 # and its hash. The hash algorithm to use is defined by the server, but servers 1131 # SHOULD use SHA-256. 1132 # 1133 # The size is considered to be an integral part of the digest and cannot be 1134 # separated. That is, even if the `hash` field is correctly specified but 1135 # `size_bytes` is not, the server MUST reject the request. 1136 # 1137 # The reason for including the size in the digest is as follows: in a great 1138 # many cases, the server needs to know the size of the blob it is about to work 1139 # with prior to starting an operation with it, such as flattening Merkle tree 1140 # structures or streaming it to a worker. Technically, the server could 1141 # implement a separate metadata store, but this results in a significantly more 1142 # complicated implementation as opposed to having the client specify the size 1143 # up-front (or storing the size along with the digest in every message where 1144 # digests are embedded). This does mean that the API leaks some implementation 1145 # details of (what we consider to be) a reasonable server implementation, but 1146 # we consider this to be a worthwhile tradeoff. 1147 # 1148 # When a `Digest` is used to refer to a proto message, it always refers to the 1149 # message in binary encoded form. To ensure consistent hashing, clients and 1150 # servers MUST ensure that they serialize messages according to the following 1151 # rules, even if there are alternate valid encodings for the same message: 1152 # 1153 # * Fields are serialized in tag order. 1154 # * There are no unknown fields. 1155 # * There are no duplicate fields. 1156 # * Fields are serialized according to the default semantics for their type. 1157 # 1158 # Most protocol buffer implementations will always follow these rules when 1159 # serializing, but care should be taken to avoid shortcuts. For instance, 1160 # concatenating two messages to merge them may produce duplicate fields. 1161 "sizeBytes": "A String", # The size of the blob, in bytes. 1162 "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string 1163 # exactly 64 characters long. 1164 }, 1165 }, 1166 ], 1167 "exitCode": 42, # The exit code of the command. 1168 }</pre> 1169</div> 1170 1171</body></html>