Searched refs:attention_scores (Results 1 – 2 of 2) sorted by relevance
/external/tensorflow/tensorflow/python/keras/layers/

multi_head_attention.py
  417  def _masked_softmax(self, attention_scores, attention_mask=None):
  424    for _ in range(len(attention_scores.shape) - len(attention_mask.shape)):
  427    return self._softmax(attention_scores, attention_mask)
  461    attention_scores = special_math_ops.einsum(self._dot_product_equation, key,
  464    attention_scores = self._masked_softmax(attention_scores, attention_mask)
  469        attention_scores, training=training)
  474    return attention_output, attention_scores
  499    attention_output, attention_scores = self._compute_attention(
  504    return attention_output, attention_scores
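Read in source order, these matches trace the life cycle of attention_scores inside MultiHeadAttention: the einsum dot product produces the raw scores (461), _masked_softmax normalizes them under an optional mask (417-427, applied at 464), dropout consumes them (469), and both _compute_attention (474) and call (499-504) hand them back to the caller. A minimal sketch of exercising that path through the public layer; the shapes, mask, and prints below are illustrative assumptions, not taken from the search output:

    import tensorflow as tf

    # Illustrative shapes (assumed): batch=2, Tq=4, Tv=6, dim=8.
    query = tf.random.normal((2, 4, 8))
    value = tf.random.normal((2, 6, 8))

    layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=8)

    # A broadcastable (batch, Tq, Tv) boolean mask; _masked_softmax (417-427)
    # expands it with extra axes in the loop at 424 until its rank matches
    # the per-head attention_scores.
    mask = tf.ones((2, 4, 6), dtype=tf.bool)

    # return_attention_scores=True surfaces the post-softmax scores that
    # _compute_attention returns at 474 and call() passes through at 504.
    output, scores = layer(query, value, attention_mask=mask,
                           return_attention_scores=True)

    print(output.shape)  # (2, 4, 8)
    print(scores.shape)  # (2, 2, 4, 6): (batch, num_heads, Tq, Tv)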
dense_attention.py
  174    result, attention_scores = self._apply_scores(
  181    return result, attention_scores
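dense_attention.py holds the single-head counterpart: in the base class shared by Attention and AdditiveAttention, call() gets the weights from _apply_scores (174) and returns them alongside the attended result (181). A minimal sketch using the Attention layer defined in this file, again with illustrative shapes:

    import tensorflow as tf

    # Illustrative shapes (assumed).
    query = tf.random.normal((2, 4, 8))   # (batch, Tq, dim)
    value = tf.random.normal((2, 6, 8))   # (batch, Tv, dim)

    layer = tf.keras.layers.Attention()

    # With return_attention_scores=True, call() hands back the weights
    # produced by _apply_scores at 174 together with the result (181).
    result, scores = layer([query, value], return_attention_scores=True)

    print(result.shape)  # (2, 4, 8)
    print(scores.shape)  # (2, 4, 6): one weight per (query, value) position pair

Unlike the MultiHeadAttention scores above, these have no heads axis: the base class computes a single (batch, Tq, Tv) weight matrix.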