Is there an intuitive explanation for why some neural networks have more than one fully connected layer?

I have searched online, but am still not satisfied with answers like this and this.

My intuition is that fully connected layers are completely linear. That means no matter how many FC layers are used, the expressiveness is always limited to linear combinations of the previous layer. But mathematically, one FC layer should already be able to learn the weights to produce exactly the same behavior. Then why do we need more? Did I miss something here?
There is a nonlinear activation function in between these fully connected layers. Thus the resulting function is not simply a linear combination of the nodes in the previous layer.
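This is easy to verify numerically. The sketch below (a hypothetical example with made-up layer sizes, using NumPy) shows that two linear layers with no activation collapse into a single weight matrix, while inserting a ReLU in between breaks that collapse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two fully connected layers, 4 -> 3 -> 2 (sizes chosen arbitrarily).
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((2, 3))
x = rng.standard_normal(4)

# Without an activation, stacking FC layers is still one linear map:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so a single layer with weights W2 @ W1 suffices.
two_linear = W2 @ (W1 @ x)
one_linear = (W2 @ W1) @ x
assert np.allclose(two_linear, one_linear)

# With a ReLU in between, the composition is no longer a linear map of x,
# so it cannot in general be collapsed into a single weight matrix.
relu = lambda v: np.maximum(v, 0.0)
nonlinear = W2 @ relu(W1 @ x)
print(np.allclose(nonlinear, one_linear))  # generally False
```

So the questioner's premise is correct for a stack of purely linear layers; it is the interleaved nonlinearity that gives additional FC layers extra expressive power.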