You’re free to have this opinion. I don’t see how it could possibly be justified.
> that requires data of still huge amount
This is true for some problem spaces, but not true in general. If your exposure to Deep Learning is relatively casual, then I can see why you would think this, so it's not a totally unfair criticism. But if you're in a problem space with lots of data, and you have a method that performs well under those circumstances, then you'll have to do more work to convince me it's a bad idea to use it.
> and still has terrible failure modes. It is not yet ready, not until we can reason about those failure modes better than using highest end statistics.
This just feels like parroting others' criticisms. Yes, our primary methods for understanding the failure modes of stochastic function approximators are statistical, just as they are for stochastic processes. Statistics is precisely the tool used to rigorously describe behaviour in the aggregate that cannot be well explained case by case.
It also ignores the fact that there is a huge amount of theory currently being developed around deep learning. You won't see it linked on HN, and you likely won't find many in the typical software engineering crowd who know about it (which is fine! - software engineers are highly skilled specialists, who should not be expected to closely follow the mathematical literature), but it does exist, and several of my friends who remained in academia are building careers on developing it.
As a general aside, I have to say that the glibness of the responses objecting to my comment really does speak volumes.