Erratic performance of Adam optimizer on object segmentation task
I've taken a pre-trained model (FCN8s) and fine-tuned it on my data for a very challenging instance segmentation task. I've tried many optimizers from the Caffe library, but only Adam seems to be able to avoid bad saddle points (I understand that "local minimum" is not really the right term in deep learning).
The problem is that its behavior is hard to understand. What I mean is: when I take, for example, SGD or Adagrad and look at their performance after 10K, 15K, 20K, etc. iterations, they seem to keep moving in the same direction (not always a good one, of course), but you can more or less see the convergence. So when I run the model on the test data, a 20K snapshot usually outperforms a 10K one, and so on.
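To make the comparison concrete, this is roughly how I evaluate the snapshots (a minimal sketch; the deploy prototxt name, snapshot paths, the output blob name `score`, the class count, and the `test_set` loading are placeholders for my actual setup):

```python
import numpy as np
import caffe

caffe.set_mode_gpu()

# Placeholder paths / constants standing in for my real setup.
DEPLOY = 'fcn8s_deploy.prototxt'
SNAPSHOTS = {it: 'snapshots/fcn8s_iter_%d.caffemodel' % it
             for it in (5000, 10000, 12000, 15000, 20000)}
NUM_CLASSES = 21

def mean_iou(pred, label):
    """Mean intersection-over-union over the classes present in the image."""
    ious = []
    for c in range(NUM_CLASSES):
        union = np.logical_or(pred == c, label == c).sum()
        if union:
            inter = np.logical_and(pred == c, label == c).sum()
            ious.append(float(inter) / union)
    return np.mean(ious)

def evaluate_snapshot(weights, test_set):
    """Run one snapshot over (image, label) pairs and return the average mean IoU."""
    net = caffe.Net(DEPLOY, weights, caffe.TEST)
    scores = []
    for img, label in test_set:                # img is C x H x W, already preprocessed
        net.blobs['data'].reshape(1, *img.shape)
        net.blobs['data'].data[...] = img
        net.forward()
        pred = net.blobs['score'].data[0].argmax(axis=0)   # 'score' = FCN output blob
        scores.append(mean_iou(pred, label))
    return np.mean(scores)

# test_set: list of (image, label) pairs, loaded elsewhere.
for iters in sorted(SNAPSHOTS):
    print('%6d iterations: %.4f' % (iters, evaluate_snapshot(SNAPSHOTS[iters], test_set)))
```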
I don't have the same clarity with Adam. Although the training error goes down overall, when I compare results after (say) 5K and 15K iterations of training, they are truly baffling: after 15K the model can do much worse than after, say, 12K, and then all of a sudden improve after 3K more iterations. There does not seem to be any clear pattern of convergence.
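For reference, this is my mental model of the update Adam applies (a plain NumPy sketch of the standard Adam rule; the hyper-parameter names follow the original paper, not my actual solver settings):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter vector theta.

    m, v are the running first/second moment estimates, t is the
    1-based iteration counter. Returns the updated (theta, m, v).
    """
    m = beta1 * m + (1.0 - beta1) * grad          # biased first moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # biased second moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

As I understand it, the effective per-parameter step is `lr * m_hat / (sqrt(v_hat) + eps)`, so it does not have to shrink monotonically the way SGD with a decaying schedule does, but I'd like to understand whether that alone explains behavior this erratic.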