Off topic: ValueError: Cannot convert a partially known TensorShape to a Tensor: (?, 1024)
rsafavi · 5.0 years ago

I am trying to apply a beta variational autoencoder (β-VAE) to 1D data. I found the code online, but it was written for image data, and I am having trouble adapting it to 1D. I get ValueError: Cannot convert a partially known TensorShape to a Tensor: (?, 1024), and I suspect it comes from the way the loss is calculated. The relevant parts of the code are below.

def call(self, x):
    mean = x[0]
    stddev = x[1]
    print('mean = ', mean)
    print('stddev = ', stddev)
    if self.reg == 'bvae':
        # KL divergence against a standard normal prior
        # (stddev is used where the standard formula has the log-variance):
        latent_loss = -0.5 * K.mean(1 + stddev
                                    - K.square(mean)
                                    - K.exp(stddev), axis=-1)
        # use beta to force less usage of the vector space,
        # and aim to use <capacity> dimensions of the space:
        print("latent_loss", latent_loss)
        latent_loss = self.beta * K.abs(latent_loss - self.capacity / self.shape.as_list()[1])
        print("latent_loss", latent_loss)
        self.add_loss(latent_loss, x)
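
For comparison, here is a minimal sketch of how the KL term is usually written when the latent dimension is a known constant, reduced to a scalar before being added as a loss (compute_kl_loss, z_mean, z_log_var and latent_dim are illustrative names, not part of the code above):

    import keras.backend as K

    def compute_kl_loss(z_mean, z_log_var, beta=1.0, capacity=0.0, latent_dim=1024):
        # KL(q(z|x) || N(0, I)) per sample, written against the log-variance
        kl = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        # beta-VAE style penalty around the capacity target, reduced to a scalar
        return K.mean(beta * K.abs(kl - capacity / latent_dim))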

def Build(self):
    # create the input layer for feeding the network
    inLayer = Input(shape=(16889,))
    net = Dense(1024, activation='relu', kernel_initializer='glorot_uniform')(inLayer)
    net = BatchNormalization()(net)
    net = Activation('relu')(net)
    # two parallel dense heads for the latent mean and (log-)stddev
    mean = Dense(1024, name='mean')(net)
    stddev = Dense(1024, name='std')(net)
    sample = SampleLayer(self.latentConstraints, self.beta,
                         self.latentCapacity, self.randomSample)([mean, stddev])
    return Model(inputs=inLayer, outputs=sample)
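
For context, the sampling step in most Keras VAE examples is done with a Lambda layer and the reparameterization trick; a minimal sketch under that assumption (the sampling helper below is illustrative and not the SampleLayer used above):

    import keras.backend as K
    from keras.layers import Lambda

    def sampling(args):
        # reparameterization trick: z = mean + exp(0.5 * log_var) * eps
        z_mean, z_log_var = args
        # K.shape returns the dynamic shape, so the batch dimension does not
        # need to be known statically
        eps = K.random_normal(shape=K.shape(z_mean))
        return z_mean + K.exp(0.5 * z_log_var) * eps

    # usage inside Build(), in place of the custom sample layer:
    # sample = Lambda(sampling, name='z')([mean, stddev])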
  
keras tensorflow autoencoder