Usually I'm a harsh critic of text like this, because casual language and what look like casual words but are actually strictly defined, domain-specific technical terms are utterly indistinguishable.
However! This book back-references its appendix with grey underlines, thus signaling that something has a technical definition and is jargon.
For instance, "loss" is really common English. In ML it's a loss function. In the text you can clearly see it's a special word in this world. Other examples include weights, capacity, and channel.
I don't have to sit there confused trying to guess which English words are being used in special ways.
This discipline is fantastic. In some texts such terms might be italicized, but that practice seems to have fallen out of use.
As someone who isn't a professional mathematician, hints like these help greatly.
I fully agree with your general critique. It is a common bad habit in science to use common words with an uncommon meaning without any explanation. I remember sitting in a lecture about cryptology and wondering how that "I can prove I know a SECRET without revealing it" is supposed to work. I just did not realize that SECRET merely meant "random-looking string" instead of something like "I know where the money is hidden".
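For what it's worth, here is a toy sketch of the kind of protocol that statement refers to, a Schnorr-style identification round. This is my own illustration with unrealistically small, simplified parameters, not anything from the lecture, but it shows how the "secret" is literally just a number x that the verifier never sees.

    import secrets

    # Toy Schnorr-style identification (illustrative only; parameters far too
    # small and simplified for real security). The "secret" is just a number x.
    p = 2**64 - 59                 # a prime modulus
    g = 2                          # public base
    x = secrets.randbelow(p - 1)   # the prover's secret: a random-looking number
    y = pow(g, x, p)               # public key derived from the secret

    # One round of the protocol
    r = secrets.randbelow(p - 1)   # prover's one-time random nonce
    t = pow(g, r, p)               # commitment sent to the verifier
    c = secrets.randbelow(p - 1)   # verifier's random challenge
    s = (r + c * x) % (p - 1)      # prover's response

    # The verifier never sees x, yet can check that the prover must know it:
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("proof of knowledge of x accepted, x never revealed")

The check passes only if the prover actually knows x, yet the exchanged values t, c, s don't hand x over; that is the whole puzzle the lecturer was gesturing at.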
If it's a map to the hidden treasure (as in the exact bits of the image of the map), then it's kind of the same thing, right?
But yes, the non-uniqueness of expressing similar ideas can make things difficult there.
You say this about exactness, but my experience with much of science and ML in particular is the opposite.
People still haven't decided what the difference between AI and ML is, for instance, but somehow still insist on treating the two as obviously separate in conversations.
My experience has been that the naming problem is alive and well in the sciences, and it's far more problematic than in programming and variable naming.
> AI and ML ... still insist on treating the two as obviously separate
Automated problem solving does not imply "learning". E.g. clustering does not. Also, expert systems are pretty static, and it is not really the "machine" that learns (a flowchart can hardly be said to "learn").
> decided
It is not a matter of deciding ("de-cidere", cutting, and fuzziness do not really marry); it is just a matter of looking at the terms with some historical awareness.
The "brain" works through fuzzy patterns, and of course the concepts relevant to said model do too. You don't cut along a blurry line, but separate points in space still remain distant.
The way you present the distinction is really clear, but in the physical sciences it’s very common to use the terms AI and ML interchangeably. I think it’s partly because people in those fields are not quite sure of the distinction between the two terms.
>People still haven't decided what the difference between AI and ML is, for instance, but somehow still insist on treating the two as obviously separate in conversations.
Strange, in most of the papers and books about this they define AI and ML and their relation.
Yes, but the definitions tend not to be consistent.
The best operational definition I arrived at was: it's ML when it involves a "machine" (possibly defined in software) that tries to 'optimise' some objective on behalf of a user, whereas AI involves some sort of Agent, which needs to demonstrate intelligence and which involves particular environments, states, contexts, etc.
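Under that reading, the "machine optimising an objective on behalf of a user" can be as small as the sketch below: plain gradient descent nudging two weights to reduce a squared-error loss on the user's data. This is my own minimal illustration of the definition above, not anyone's canonical formulation.

    import numpy as np

    # The user's data: we want a line y ~= w * x + b through it
    X = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])

    w, b = 0.0, 0.0              # weights: what the "machine" is allowed to adjust
    lr = 0.05                    # learning rate (step size)

    for _ in range(2000):
        pred = w * X + b
        error = pred - y
        loss = np.mean(error ** 2)          # the objective being optimised
        grad_w = 2 * np.mean(error * X)     # gradient of the loss w.r.t. w
        grad_b = 2 * np.mean(error)         # gradient of the loss w.r.t. b
        w -= lr * grad_w                    # adjust weights to reduce the loss
        b -= lr * grad_b

    print(w, b, loss)   # should end up near w=2, b=1 with a tiny loss

Whether you call that ML, optimisation, or just curve fitting is exactly the terminological fuzziness being complained about.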
But not everyone defines it like this. To many people AI is rapidly becoming "Neural Networks, particularly Deep ones", with ML becoming "Anything in scikit-learn that isn't a neural network". Which isn't really a useful definition, if you ask me.
In 1956 (or '57) the perceptron was made. The public mood was ecstatic. Generals were dreaming of robot armies, the finance world was dreaming of automation, and the whole world was dreaming.
In 1964 (or '66?) a bestselling book brought on the AI Winter. It explained in everyday English how the whole thing was a farce (at that time). Funding dried up. It wasn't sexy to say AI anymore, so people started saying ML. For funding purposes they were doing ML, not AI.
Nowadays it is a top-down vs. bottom-up distinction. One is AI and the other is ML (I don't know which; it doesn't matter).