Stackdriver Logging API . projects . metrics

Instance Methods

create(parent, body, x__xgafv=None)

Creates a logs-based metric.

delete(metricName, x__xgafv=None)

Deletes a logs-based metric.

get(metricName, x__xgafv=None)

Gets a logs-based metric.

list(parent, pageToken=None, x__xgafv=None, pageSize=None)

Lists logs-based metrics.

list_next(previous_request, previous_response)

Retrieves the next page of results.

update(metricName, body, x__xgafv=None)

Creates or updates a logs-based metric.
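
A typical way to call these methods is through the discovery-based Python client. The following is a sketch that assumes the google-api-python-client and google-auth packages are installed and that Application Default Credentials are available; the scope shown is one commonly used for managing logs-based metrics.

  import google.auth
  from googleapiclient import discovery

  # Build a Logging v2 client with Application Default Credentials.
  credentials, _ = google.auth.default(
      scopes=["https://www.googleapis.com/auth/logging.admin"])
  service = discovery.build("logging", "v2", credentials=credentials)

  # The methods documented below are called on the projects().metrics() resource.
  metrics = service.projects().metrics()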

Method Details

create(parent, body, x__xgafv=None)
Creates a logs-based metric.

Args:
  parent: string, The resource name of the project in which to create the metric:
"projects/[PROJECT_ID]"
The new metric must be provided in the request. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
    "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
    "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
    "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
      "description": "A String", # A detailed description of the metric, which can be used in documentation.
      "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
      "name": "A String", # The resource name of the metric descriptor.
      "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
      "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
      "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
        { # A description of a label.
          "valueType": "A String", # The type of data that can be assigned to the label.
          "description": "A String", # A human-readable description for the label.
          "key": "A String", # The label key.
        },
      ],
      "launchStage": "A String", # Optional. The launch stage of the metric definition.
      "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
          # "custom.googleapis.com/invoice/paid/amount"
          # "external.googleapis.com/prometheus/up"
          # "appengine.googleapis.com/http/server/response_latencies"
      "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
          # bit bit
          # By byte
          # s second
          # min minute
          # h hour
          # d dayPrefixes (PREFIX)
          # k kilo (10**3)
          # M mega (10**6)
          # G giga (10**9)
          # T tera (10**12)
          # P peta (10**15)
          # E exa (10**18)
          # Z zetta (10**21)
          # Y yotta (10**24)
          # m milli (10**-3)
          # u micro (10**-6)
          # n nano (10**-9)
          # p pico (10**-12)
          # f femto (10**-15)
          # a atto (10**-18)
          # z zepto (10**-21)
          # y yocto (10**-24)
          # Ki kibi (2**10)
          # Mi mebi (2**20)
          # Gi gibi (2**30)
          # Ti tebi (2**40)GrammarThe grammar also includes these connectors:
          # / division (as an infix operator, e.g. 1/s).
          # . multiplication (as an infix operator, e.g. GBy.d)The grammar for a unit is as follows:
          # Expression = Component { "." Component } { "/" Component } ;
          #
          # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
          #           | Annotation
          #           | "1"
          #           ;
          #
          # Annotation = "{" NAME "}" ;
          # Notes:
          # Annotation is just a comment if it follows a UNIT and is  equivalent to 1 if it is used alone. For examples,  {requests}/s == 1/s, By{transmitted}/s == By/s.
          # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
          # 1 represents dimensionless value 1, such as in 1/s.
          # % represents dimensionless value 1/100, and annotates values giving  a percentage.
      "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
        "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
        "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
        "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
      },
    },
    "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
      "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
        "scale": 3.14, # Must be greater than 0.
        "growthFactor": 3.14, # Must be greater than 1.
        "numFiniteBuckets": 42, # Must be greater than 0.
      },
      "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
        "width": 3.14, # Must be greater than 0.
        "numFiniteBuckets": 42, # Must be greater than 0.
        "offset": 3.14, # Lower bound of the first bucket.
      },
      "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
        "bounds": [ # The values must be monotonically increasing.
          3.14,
        ],
      },
    },
    "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
      "a_key": "A String",
    },
    "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
        # "resource.type=gae_app AND severity>=ERROR"
        # The maximum length of the filter is 20000 characters.
    "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
    "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
    "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
    "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
      "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
      "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
      "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
        "description": "A String", # A detailed description of the metric, which can be used in documentation.
        "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
        "name": "A String", # The resource name of the metric descriptor.
        "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
        "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
        "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
          { # A description of a label.
            "valueType": "A String", # The type of data that can be assigned to the label.
            "description": "A String", # A human-readable description for the label.
            "key": "A String", # The label key.
          },
        ],
        "launchStage": "A String", # Optional. The launch stage of the metric definition.
        "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
            # "custom.googleapis.com/invoice/paid/amount"
            # "external.googleapis.com/prometheus/up"
            # "appengine.googleapis.com/http/server/response_latencies"
        "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
            # bit bit
            # By byte
            # s second
            # min minute
            # h hour
            # d dayPrefixes (PREFIX)
            # k kilo (10**3)
            # M mega (10**6)
            # G giga (10**9)
            # T tera (10**12)
            # P peta (10**15)
            # E exa (10**18)
            # Z zetta (10**21)
            # Y yotta (10**24)
            # m milli (10**-3)
            # u micro (10**-6)
            # n nano (10**-9)
            # p pico (10**-12)
            # f femto (10**-15)
            # a atto (10**-18)
            # z zepto (10**-21)
            # y yocto (10**-24)
            # Ki kibi (2**10)
            # Mi mebi (2**20)
            # Gi gibi (2**30)
            # Ti tebi (2**40)GrammarThe grammar also includes these connectors:
            # / division (as an infix operator, e.g. 1/s).
            # . multiplication (as an infix operator, e.g. GBy.d)The grammar for a unit is as follows:
            # Expression = Component { "." Component } { "/" Component } ;
            #
            # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
            #           | Annotation
            #           | "1"
            #           ;
            #
            # Annotation = "{" NAME "}" ;
            # Notes:
            # Annotation is just a comment if it follows a UNIT and is  equivalent to 1 if it is used alone. For examples,  {requests}/s == 1/s, By{transmitted}/s == By/s.
            # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
            # 1 represents dimensionless value 1, such as in 1/s.
            # % represents dimensionless value 1/100, and annotates values giving  a percentage.
        "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
          "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
          "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
          "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
        },
      },
      "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
        "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
          "scale": 3.14, # Must be greater than 0.
          "growthFactor": 3.14, # Must be greater than 1.
          "numFiniteBuckets": 42, # Must be greater than 0.
        },
        "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
          "width": 3.14, # Must be greater than 0.
          "numFiniteBuckets": 42, # Must be greater than 0.
          "offset": 3.14, # Lower bound of the first bucket.
        },
        "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
          "bounds": [ # The values must be monotonically increasing.
            3.14,
          ],
        },
      },
      "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
        "a_key": "A String",
      },
      "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
          # "resource.type=gae_app AND severity>=ERROR"
          # The maximum length of the filter is 20000 characters.
      "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
      "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
      "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
      "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
    }
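
For example, a simple counter metric can be created as follows. This is a sketch reusing the "service" client from the introduction; the project ID, metric name, and filter are placeholders, and metricDescriptor is omitted so the default DELTA/INT64 counter descriptor is used.

  body = {
      "name": "error_count",  # client-assigned metric identifier
      "description": "Count of ERROR-severity log entries.",
      "filter": "resource.type=gae_app AND severity>=ERROR",
  }
  created = service.projects().metrics().create(
      parent="projects/my-project", body=body).execute()
  print(created["name"], created.get("createTime"))
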
delete(metricName, x__xgafv=None)
Deletes a logs-based metric.

Args:
  metricName: string, The resource name of the metric to delete:
"projects/[PROJECT_ID]/metrics/[METRIC_ID]"
 (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance:
      # service Foo {
      #   rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      # }
      # The JSON representation for Empty is empty JSON object {}.
  }
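
For example (a sketch reusing the "service" client from the introduction; the metric name is a placeholder):

  service.projects().metrics().delete(
      metricName="projects/my-project/metrics/error_count").execute()
  # A successful delete returns an empty object ({}).
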
get(metricName, x__xgafv=None)
Gets a logs-based metric.

Args:
  metricName: string, The resource name of the desired metric:
"projects/[PROJECT_ID]/metrics/[METRIC_ID]"
 (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
      "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
      "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
      "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
        "description": "A String", # A detailed description of the metric, which can be used in documentation.
        "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
        "name": "A String", # The resource name of the metric descriptor.
        "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
        "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
        "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
          { # A description of a label.
            "valueType": "A String", # The type of data that can be assigned to the label.
            "description": "A String", # A human-readable description for the label.
            "key": "A String", # The label key.
          },
        ],
        "launchStage": "A String", # Optional. The launch stage of the metric definition.
        "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
            # "custom.googleapis.com/invoice/paid/amount"
            # "external.googleapis.com/prometheus/up"
            # "appengine.googleapis.com/http/server/response_latencies"
        "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
            # bit bit
            # By byte
            # s second
            # min minute
            # h hour
            # d dayPrefixes (PREFIX)
            # k kilo (10**3)
            # M mega (10**6)
            # G giga (10**9)
            # T tera (10**12)
            # P peta (10**15)
            # E exa (10**18)
            # Z zetta (10**21)
            # Y yotta (10**24)
            # m milli (10**-3)
            # u micro (10**-6)
            # n nano (10**-9)
            # p pico (10**-12)
            # f femto (10**-15)
            # a atto (10**-18)
            # z zepto (10**-21)
            # y yocto (10**-24)
            # Ki kibi (2**10)
            # Mi mebi (2**20)
            # Gi gibi (2**30)
            # Ti tebi (2**40)GrammarThe grammar also includes these connectors:
            # / division (as an infix operator, e.g. 1/s).
            # . multiplication (as an infix operator, e.g. GBy.d)The grammar for a unit is as follows:
            # Expression = Component { "." Component } { "/" Component } ;
            #
            # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
            #           | Annotation
            #           | "1"
            #           ;
            #
            # Annotation = "{" NAME "}" ;
            # Notes:
            # Annotation is just a comment if it follows a UNIT and is  equivalent to 1 if it is used alone. For examples,  {requests}/s == 1/s, By{transmitted}/s == By/s.
            # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
            # 1 represents dimensionless value 1, such as in 1/s.
            # % represents dimensionless value 1/100, and annotates values giving  a percentage.
        "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
          "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
          "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
          "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
        },
      },
      "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
        "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
          "scale": 3.14, # Must be greater than 0.
          "growthFactor": 3.14, # Must be greater than 1.
          "numFiniteBuckets": 42, # Must be greater than 0.
        },
        "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
          "width": 3.14, # Must be greater than 0.
          "numFiniteBuckets": 42, # Must be greater than 0.
          "offset": 3.14, # Lower bound of the first bucket.
        },
        "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
          "bounds": [ # The values must be monotonically increasing.
            3.14,
          ],
        },
      },
      "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
        "a_key": "A String",
      },
      "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
          # "resource.type=gae_app AND severity>=ERROR"
          # The maximum length of the filter is 20000 characters.
      "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
      "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
      "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
      "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
    }
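
For example (a sketch reusing the "service" client from the introduction; the metric name is a placeholder):

  metric = service.projects().metrics().get(
      metricName="projects/my-project/metrics/error_count").execute()
  print(metric["filter"])
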
list(parent, pageToken=None, x__xgafv=None, pageSize=None)
Lists logs-based metrics.

Args:
  parent: string, Required. The name of the project containing the metrics:
"projects/[PROJECT_ID]"
 (required)
  pageToken: string, Optional. If present, then retrieve the next batch of results from the preceding call to this method. pageToken must be the value of nextPageToken from the previous response. The values of other method parameters should be identical to those in the previous call.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
  pageSize: integer, Optional. The maximum number of results to return from this request. Non-positive values are ignored. The presence of nextPageToken in the response indicates that more results might be available.

Returns:
  An object of the form:

    { # Result returned from ListLogMetrics.
    "metrics": [ # A list of logs-based metrics.
      { # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
          "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
          "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
          "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
            "description": "A String", # A detailed description of the metric, which can be used in documentation.
            "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
            "name": "A String", # The resource name of the metric descriptor.
            "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
            "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
            "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
              { # A description of a label.
                "valueType": "A String", # The type of data that can be assigned to the label.
                "description": "A String", # A human-readable description for the label.
                "key": "A String", # The label key.
              },
            ],
            "launchStage": "A String", # Optional. The launch stage of the metric definition.
            "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
                # "custom.googleapis.com/invoice/paid/amount"
                # "external.googleapis.com/prometheus/up"
                # "appengine.googleapis.com/http/server/response_latencies"
            "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
                # bit bit
                # By byte
                # s second
                # min minute
                # h hour
                # d dayPrefixes (PREFIX)
                # k kilo (10**3)
                # M mega (10**6)
                # G giga (10**9)
                # T tera (10**12)
                # P peta (10**15)
                # E exa (10**18)
                # Z zetta (10**21)
                # Y yotta (10**24)
                # m milli (10**-3)
                # u micro (10**-6)
                # n nano (10**-9)
                # p pico (10**-12)
                # f femto (10**-15)
                # a atto (10**-18)
                # z zepto (10**-21)
                # y yocto (10**-24)
                # Ki kibi (2**10)
                # Mi mebi (2**20)
                # Gi gibi (2**30)
                # Ti tebi (2**40)GrammarThe grammar also includes these connectors:
                # / division (as an infix operator, e.g. 1/s).
                # . multiplication (as an infix operator, e.g. GBy.d)The grammar for a unit is as follows:
                # Expression = Component { "." Component } { "/" Component } ;
                #
                # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
                #           | Annotation
                #           | "1"
                #           ;
                #
                # Annotation = "{" NAME "}" ;
                # Notes:
                # Annotation is just a comment if it follows a UNIT and is  equivalent to 1 if it is used alone. For examples,  {requests}/s == 1/s, By{transmitted}/s == By/s.
                # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
                # 1 represents dimensionless value 1, such as in 1/s.
                # % represents dimensionless value 1/100, and annotates values giving  a percentage.
            "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
              "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
              "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
              "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
            },
          },
          "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
            "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
              "scale": 3.14, # Must be greater than 0.
              "growthFactor": 3.14, # Must be greater than 1.
              "numFiniteBuckets": 42, # Must be greater than 0.
            },
            "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
              "width": 3.14, # Must be greater than 0.
              "numFiniteBuckets": 42, # Must be greater than 0.
              "offset": 3.14, # Lower bound of the first bucket.
            },
            "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
              "bounds": [ # The values must be monotonically increasing.
                3.14,
              ],
            },
          },
          "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
            "a_key": "A String",
          },
          "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
              # "resource.type=gae_app AND severity>=ERROR"
              # The maximum length of the filter is 20000 characters.
          "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
          "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
          "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
          "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
        },
    ],
    "nextPageToken": "A String", # If there might be more results than appear in this response, then nextPageToken is included. To get the next set of results, call this method again using the value of nextPageToken as pageToken.
  }
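
For example, a single page of metrics can be fetched as follows (a sketch reusing the "service" client from the introduction; pageSize is optional):

  response = service.projects().metrics().list(
      parent="projects/my-project", pageSize=100).execute()
  for metric in response.get("metrics", []):
      print(metric["name"])
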
list_next(previous_request, previous_response)
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
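
For example, all pages can be consumed with the usual list/list_next loop (a sketch reusing the "service" client from the introduction):

  request = service.projects().metrics().list(parent="projects/my-project")
  while request is not None:
      response = request.execute()
      for metric in response.get("metrics", []):
          print(metric["name"])
      # list_next returns None when there are no further pages.
      request = service.projects().metrics().list_next(request, response)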
    
update(metricName, body, x__xgafv=None)
Creates or updates a logs-based metric.

Args:
  metricName: string, The resource name of the metric to update:
"projects/[PROJECT_ID]/metrics/[METRIC_ID]"
The updated metric must be provided in the request and its name field must be the same as [METRIC_ID]. If the metric does not exist in [PROJECT_ID], then a new metric is created. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
    "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
    "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
    "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
      "description": "A String", # A detailed description of the metric, which can be used in documentation.
      "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
      "name": "A String", # The resource name of the metric descriptor.
      "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
      "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
      "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
        { # A description of a label.
          "valueType": "A String", # The type of data that can be assigned to the label.
          "description": "A String", # A human-readable description for the label.
          "key": "A String", # The label key.
        },
      ],
      "launchStage": "A String", # Optional. The launch stage of the metric definition.
      "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
          # "custom.googleapis.com/invoice/paid/amount"
          # "external.googleapis.com/prometheus/up"
          # "appengine.googleapis.com/http/server/response_latencies"
      "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
          # bit bit
          # By byte
          # s second
          # min minute
          # h hour
          # d day
          # Prefixes (PREFIX)
          # k kilo (10**3)
          # M mega (10**6)
          # G giga (10**9)
          # T tera (10**12)
          # P peta (10**15)
          # E exa (10**18)
          # Z zetta (10**21)
          # Y yotta (10**24)
          # m milli (10**-3)
          # u micro (10**-6)
          # n nano (10**-9)
          # p pico (10**-12)
          # f femto (10**-15)
          # a atto (10**-18)
          # z zepto (10**-21)
          # y yocto (10**-24)
          # Ki kibi (2**10)
          # Mi mebi (2**20)
          # Gi gibi (2**30)
          # Ti tebi (2**40)
          # Grammar
          # The grammar also includes these connectors:
          # / division (as an infix operator, e.g. 1/s).
          # . multiplication (as an infix operator, e.g. GBy.d)
          # The grammar for a unit is as follows:
          # Expression = Component { "." Component } { "/" Component } ;
          #
          # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
          #           | Annotation
          #           | "1"
          #           ;
          #
          # Annotation = "{" NAME "}" ;
          # Notes:
          # Annotation is just a comment if it follows a UNIT and is equivalent to 1 if it is used alone. For example, {requests}/s == 1/s, By{transmitted}/s == By/s.
          # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
          # 1 represents dimensionless value 1, such as in 1/s.
          # % represents dimensionless value 1/100, and annotates values giving  a percentage.
      "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
        "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
        "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
        "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
      },
    },
    "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
      "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
        "scale": 3.14, # Must be greater than 0.
        "growthFactor": 3.14, # Must be greater than 1.
        "numFiniteBuckets": 42, # Must be greater than 0.
      },
      "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
        "width": 3.14, # Must be greater than 0.
        "numFiniteBuckets": 42, # Must be greater than 0.
        "offset": 3.14, # Lower bound of the first bucket.
      },
      "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
        "bounds": [ # The values must be monotonically increasing.
          3.14,
        ],
      },
    },
    "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
      "a_key": "A String",
    },
    "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
        # "resource.type=gae_app AND severity>=ERROR"
        # The maximum length of the filter is 20000 characters.
    "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
    "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
    "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
    "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval.Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options.
      "updateTime": "A String", # Output only. The last update timestamp of the metric.This field may not be present for older metrics.
      "name": "A String", # Required. The client-assigned metric identifier. Examples: "error_count", "nginx/requests".Metric identifiers are limited to 100 characters and can include only the following characters: A-Z, a-z, 0-9, and the special characters _-.,+!*',()%/. The forward-slash character (/) denotes a hierarchy of name pieces, and it cannot be the first character of the name.The metric identifier in this field must not be URL-encoded (https://en.wikipedia.org/wiki/Percent-encoding). However, when the metric identifier appears as the [METRIC_ID] part of a metric_name API parameter, then the metric identifier must be URL-encoded. Example: "projects/my-project/metrics/nginx%2Frequests".
      "metricDescriptor": { # Defines a metric type and its schema. Once a metric descriptor is created, deleting or altering it stops data collection and makes the metric type's existing data unusable. # Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the filter expression.The name, type, and description fields in the metric_descriptor are output only, and is constructed using the name and description field in the LogMetric.To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a value_extractor expression in the LogMetric.Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the label_extractors map.The metric_kind and value_type fields in the metric_descriptor cannot be updated once initially configured. New labels can be added in the metric_descriptor, but existing labels cannot be modified except for their description.
        "description": "A String", # A detailed description of the metric, which can be used in documentation.
        "displayName": "A String", # A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota.
        "name": "A String", # The resource name of the metric descriptor.
        "metricKind": "A String", # Whether the metric records instantaneous values, changes to a value, etc. Some combinations of metric_kind and value_type might not be supported.
        "valueType": "A String", # Whether the measurement is an integer, a floating-point number, etc. Some combinations of metric_kind and value_type might not be supported.
        "labels": [ # The set of labels that can be used to describe a specific instance of this metric type. For example, the appengine.googleapis.com/http/server/response_latencies metric type has a label for the HTTP response code, response_code, so you can look at latencies for successful responses or just for responses that failed.
          { # A description of a label.
            "valueType": "A String", # The type of data that can be assigned to the label.
            "description": "A String", # A human-readable description for the label.
            "key": "A String", # The label key.
          },
        ],
        "launchStage": "A String", # Optional. The launch stage of the metric definition.
        "type": "A String", # The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name custom.googleapis.com or external.googleapis.com. Metric types should use a natural hierarchical grouping. For example:
            # "custom.googleapis.com/invoice/paid/amount"
            # "external.googleapis.com/prometheus/up"
            # "appengine.googleapis.com/http/server/response_latencies"
        "unit": "A String", # The unit in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The supported units are a subset of The Unified Code for Units of Measure (http://unitsofmeasure.org/ucum.html) standard:Basic units (UNIT)
            # bit bit
            # By byte
            # s second
            # min minute
            # h hour
            # d day
            # Prefixes (PREFIX)
            # k kilo (10**3)
            # M mega (10**6)
            # G giga (10**9)
            # T tera (10**12)
            # P peta (10**15)
            # E exa (10**18)
            # Z zetta (10**21)
            # Y yotta (10**24)
            # m milli (10**-3)
            # u micro (10**-6)
            # n nano (10**-9)
            # p pico (10**-12)
            # f femto (10**-15)
            # a atto (10**-18)
            # z zepto (10**-21)
            # y yocto (10**-24)
            # Ki kibi (2**10)
            # Mi mebi (2**20)
            # Gi gibi (2**30)
            # Ti tebi (2**40)
            # Grammar
            # The grammar also includes these connectors:
            # / division (as an infix operator, e.g. 1/s).
            # . multiplication (as an infix operator, e.g. GBy.d)
            # The grammar for a unit is as follows:
            # Expression = Component { "." Component } { "/" Component } ;
            #
            # Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ]
            #           | Annotation
            #           | "1"
            #           ;
            #
            # Annotation = "{" NAME "}" ;
            # Notes:
            # Annotation is just a comment if it follows a UNIT and is equivalent to 1 if it is used alone. For example, {requests}/s == 1/s, By{transmitted}/s == By/s.
            # NAME is a sequence of non-blank printable ASCII characters not  containing '{' or '}'.
            # 1 represents dimensionless value 1, such as in 1/s.
            # % represents dimensionless value 1/100, and annotates values giving  a percentage.
        "metadata": { # Additional annotations that can be used to guide the usage of a metric. # Optional. Metadata which can be used to guide usage of the metric.
          "launchStage": "A String", # Deprecated. Please use the MetricDescriptor.launch_stage instead. The launch stage of the metric definition.
          "ingestDelay": "A String", # The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors.
          "samplePeriod": "A String", # The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period.
        },
      },
      "bucketOptions": { # BucketOptions describes the bucket boundaries used to create a histogram for the distribution. The buckets can be in a linear sequence, an exponential sequence, or each bucket can be specified explicitly. BucketOptions does not include the number of values in each bucket.A bucket has an inclusive lower bound and exclusive upper bound for the values that are counted for that bucket. The upper bound of a bucket must be strictly greater than the lower bound. The sequence of N buckets for a distribution consists of an underflow bucket (number 0), zero or more finite buckets (number 1 through N - 2) and an overflow bucket (number N - 1). The buckets are contiguous: the lower bound of bucket i (i > 0) is the same as the upper bound of bucket i - 1. The buckets span the whole range of finite values: lower bound of the underflow bucket is -infinity and the upper bound of the overflow bucket is +infinity. The finite buckets are so-called because both bounds are finite. # Optional. The bucket_options are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values.
        "exponentialBuckets": { # Specifies an exponential sequence of buckets that have a width that is proportional to the value of the lower bound. Each bucket represents a constant relative uncertainty on a specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): scale * (growth_factor ^ i).  Lower bound (1 <= i < N): scale * (growth_factor ^ (i - 1)). # The exponential buckets.
          "scale": 3.14, # Must be greater than 0.
          "growthFactor": 3.14, # Must be greater than 1.
          "numFiniteBuckets": 42, # Must be greater than 0.
        },
        "linearBuckets": { # Specifies a linear sequence of buckets that all have the same width (except overflow and underflow). Each bucket represents a constant absolute uncertainty on the specific value in the bucket.There are num_finite_buckets + 2 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): offset + (width * i).  Lower bound (1 <= i < N): offset + (width * (i - 1)). # The linear bucket.
          "width": 3.14, # Must be greater than 0.
          "numFiniteBuckets": 42, # Must be greater than 0.
          "offset": 3.14, # Lower bound of the first bucket.
        },
        "explicitBuckets": { # Specifies a set of buckets with arbitrary widths.There are size(bounds) + 1 (= N) buckets. Bucket i has the following boundaries:Upper bound (0 <= i < N-1): boundsi  Lower bound (1 <= i < N); boundsi - 1The bounds field must contain at least one element. If bounds has only one element, then there are no finite buckets, and that single element is the common boundary of the overflow and underflow buckets. # The explicit buckets.
          "bounds": [ # The values must be monotonically increasing.
            3.14,
          ],
        },
      },
      "labelExtractors": { # Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the value_extractor field.The extracted value is converted to the type defined in the label descriptor. If the either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label its 0, and for a boolean label its false.Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project.
        "a_key": "A String",
      },
      "filter": "A String", # Required. An advanced logs filter which is used to match log entries. Example:
          # "resource.type=gae_app AND severity>=ERROR"
          # The maximum length of the filter is 20000 characters.
      "valueExtractor": "A String", # Optional. A value_extractor is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: EXTRACT(field) or REGEXP_EXTRACT(field, regex). The argument are:  1. field: The name of the log entry field from which the value is to be  extracted.  2. regex: A regular expression using the Google RE2 syntax  (https://github.com/google/re2/wiki/Syntax) with a single capture  group to extract data from the specified log entry field. The value  of the field is converted to a string before applying the regex.  It is an error to specify a regex that does not include exactly one  capture group.The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution.Example: REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")
      "version": "A String", # Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed.
      "createTime": "A String", # Output only. The creation timestamp of the metric.This field may not be present for older metrics.
      "description": "A String", # Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters.
    }
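
As a minimal sketch of this method, the following builds a distribution logs-based metric and upserts it with update(); it assumes the metrics collection object from the earlier examples, and the metric ID request_quantity, the jsonPayload field names, and the bucket parameters are illustrative placeholders rather than values required by the API.

    body = {
        'name': 'request_quantity',
        'description': 'Quantity extracted from matching request log entries.',
        'filter': 'resource.type=gae_app AND severity>=ERROR',
        # Record a distribution of the extracted value rather than a simple count.
        'valueExtractor': 'REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\\d+).*")',
        'labelExtractors': {
            'response_code': 'EXTRACT(jsonPayload.response_code)',
        },
        'metricDescriptor': {
            'metricKind': 'DELTA',
            'valueType': 'DISTRIBUTION',
            'unit': '1',
            'labels': [
                {'key': 'response_code', 'valueType': 'STRING',
                 'description': 'HTTP response code of the matched request.'},
            ],
        },
        'bucketOptions': {
            'exponentialBuckets': {
                'numFiniteBuckets': 16,
                'growthFactor': 2.0,
                'scale': 1.0,
            },
        },
    }

    # update() creates the metric if it does not already exist in the project.
    result = metrics.update(
        metricName='projects/my-project/metrics/request_quantity',
        body=body).execute()
    print(result['name'], result.get('updateTime'))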