First layer weights for transfer learning with new input tensor in keras.applications models?
For the pre-implemented models in Keras (VGG16 etc.) it is specified that we can change the shape of the model inputs and still load the pretrained ImageNet weights.
What I am confused about is what then happens to the first layer's weights. If the input tensor has a different shape, the number of weights will differ from the pretrained model's. So, more granular questions:
If there are fewer weights, are they discarded at random?
If there are more weights, are they randomly initialised?
Should we always set the first layer as trainable when doing transfer learning and changing the input tensor shape?
Here is the implementation of the Keras VGG16 model for reference.
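To see concretely what the first layer's weights look like under a different input shape, here is a minimal sketch using keras.applications (built with weights=None so nothing is downloaded; the layer indices assume the standard VGG16 topology, where layer 0 is the input layer):

```python
from tensorflow.keras.applications import VGG16

# Build VGG16 (without the dense top) at two different input shapes
# and compare the first convolutional layer's kernel shapes.
m1 = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
m2 = VGG16(weights=None, include_top=False, input_shape=(100, 150, 3))

# layers[1] is block1_conv1; get_weights()[0] is its kernel tensor.
print(m1.layers[1].get_weights()[0].shape)  # (3, 3, 3, 64)
print(m2.layers[1].get_weights()[0].shape)  # (3, 3, 3, 64)
```

Both models report the same first-layer kernel shape, which is why the pretrained ImageNet weights still fit.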
The first layers are convolutional and pooling layers.
For the convolutional layers, the only weights are the kernels and the biases; they have fixed sizes (e.g. 3×3×3, 5×5×3) and do not depend on the input tensor shape.
The pooling layers do not have weights at all.
That's why you can reuse the weights independently from the input tensor shape.
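You can verify this directly (a minimal sketch assuming TensorFlow's Keras; `conv_param_count` is a hypothetical helper name): a Conv2D layer's parameter count is the same regardless of the spatial input size.

```python
from tensorflow import keras

def conv_param_count(input_shape):
    # Conv2D weights: a (kernel_h, kernel_w, in_channels, filters) kernel
    # plus one bias per filter -- no dependence on the spatial size H x W.
    layer = keras.layers.Conv2D(64, (3, 3))
    layer.build((None,) + input_shape)  # build() fixes the weight shapes
    return layer.count_params()

# 3*3*3*64 kernel weights + 64 biases = 1792 in both cases:
print(conv_param_count((224, 224, 3)))  # 1792
print(conv_param_count((100, 150, 3)))  # 1792
```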
With the dense layers (i.e. the final layers), the shapes do need to match, so you cannot reuse the pretrained weights if they do not.
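By contrast, a Dense layer's weight matrix has shape (input_size, units), so its parameter count changes with the flattened input size (a sketch assuming TensorFlow's Keras; `dense_param_count` is a hypothetical helper name):

```python
from tensorflow import keras

def dense_param_count(flat_size):
    # Dense weights: a (flat_size, units) matrix plus one bias per unit,
    # so the weight shape changes whenever the input size changes.
    layer = keras.layers.Dense(10)
    layer.build((None, flat_size))
    return layer.count_params()

print(dense_param_count(512))   # 512*10 + 10 = 5130
print(dense_param_count(1024))  # 1024*10 + 10 = 10250 -- pretrained weights won't fit
```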
2017-12-04 14:39:37