Why is key_masks equal to tf.not_equal(keys[:,:,0], 0)? #111

Open
hsgui opened this issue Aug 21, 2024 · 1 comment

hsgui commented Aug 21, 2024

Could anyone explain a detail in the following code?

import tensorflow as tf
from tensorflow.keras.layers import Layer


class AttentionPoolingLayer(Layer):
    def __init__(self, att_hidden_units=(256, 128, 64)):
        super(AttentionPoolingLayer, self).__init__()
        self.att_hidden_units = att_hidden_units
        # LocalActivationUnit is defined elsewhere in the repo
        self.local_att = LocalActivationUnit(self.att_hidden_units)

    def call(self, inputs):
        # keys: B x len x emb_dim, queries: B x 1 x emb_dim
        queries, keys = inputs

        # Build the mask for the behavior-sequence embeddings:
        # positions whose embedding is non-zero are marked True.
        key_masks = tf.not_equal(keys[:, :, 0], 0)  # B x len

key_masks = tf.not_equal(keys[:,:,0], 0) # B x len — why is the value of tf.not_equal(keys[:,:,0], 0) used as key_masks? I can't figure it out.
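For reference, here is a minimal sketch (not from this repo) of the assumption behind that line: the behavior sequence is right-padded with id 0, and the embedding row for id 0 is an all-zero vector, so looking at just the first coordinate of each embedding already separates real items from padding.

    import numpy as np
    import tensorflow as tf

    emb_dim = 4
    # Hypothetical embedding table: row 0 is the padding id and is all zeros,
    # every other row is non-zero (deterministic values for a predictable output).
    table = np.arange(1, 41, dtype="float32").reshape(10, emb_dim)
    table[0] = 0.0

    # Two behavior sequences, right-padded with id 0.
    seq_ids = tf.constant([[3, 5, 0, 0],
                           [7, 2, 9, 0]])          # B x len
    keys = tf.nn.embedding_lookup(table, seq_ids)  # B x len x emb_dim

    # Padded positions are all-zero vectors, so checking a single coordinate
    # (the first one) is enough to tell real items from padding.
    key_masks = tf.not_equal(keys[:, :, 0], 0)     # B x len
    print(key_masks.numpy())
    # [[ True  True False False]
    #  [ True  True  True False]]

If a real embedding could happen to have its first coordinate exactly 0, a more robust check is tf.reduce_any(tf.not_equal(keys, 0), axis=-1), which looks at the whole vector instead of one coordinate.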

hsgui commented Aug 21, 2024

Is it because the masked (padded) vectors are all zeros, while a real embedding vector is certainly not all zeros, so the first element of the embedding is used as a stand-in check? Is that the idea?
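And here is a sketch of how such a mask is typically consumed afterwards in DIN-style attention pooling (assumed, not necessarily this repo's exact code): padded positions are pushed to a very negative score before the softmax so they receive roughly zero attention weight.

    import tensorflow as tf

    def masked_attention_pool(att_scores, keys, key_masks):
        """att_scores: B x 1 x len, keys: B x len x emb_dim, key_masks: B x len (bool)."""
        paddings = tf.ones_like(att_scores) * (-2 ** 31 + 1)
        mask = tf.expand_dims(key_masks, axis=1)            # B x 1 x len
        att_scores = tf.where(mask, att_scores, paddings)   # mask out padded items
        att_weights = tf.nn.softmax(att_scores, axis=-1)    # B x 1 x len
        return tf.matmul(att_weights, keys)                 # B x 1 x emb_dim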
