Machine Learning by Shalmali Joshi
My Causal Inference class had a Turing quote: "What we want is a machine that can learn from experience." One of Dr. Joshi's slides mentioned a Herbert Simon quote about learning itself: that learning is improving performance from experience. Gave me something to think about.
ML is how you improve performance (P) at some task (T) with experience (E). A tuple of sorts. Typically, when we use computers to perform tasks:

Inputs/Data <----(operate upon)---- Programs ----(produce)----> Output
This is still the case in ML, but the outputs are not data in the usual sense. Rather, the output is a 'model' which you can use to operate upon other/new input data to yield insights, make predictions, and generate knowledge. ML is a good fit when human expertise doesn't yet exist.
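That data-to-model flow can be sketched in a few lines. This is a minimal illustration, not anything from the lecture: it assumes a toy 1-D linear relationship, "learns" a model (slope and intercept) from experience via least squares, and then applies that model to a new input.

```python
# Sketch of the ML flow: data goes in, a "model" comes out,
# and the model then operates on new inputs.

def fit(xs, ys):
    # Learn slope and intercept by ordinary least squares (the "model").
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    # The learned model operating on a new input.
    slope, intercept = model
    return slope * x + intercept

model = fit([1, 2, 3, 4], [2, 4, 6, 8])  # experience E
print(predict(model, 5))                 # prediction on unseen data -> 10.0
```

The point is the shape of the pipeline: the program's output is `model`, and `model` is what does the useful work afterward.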
Learning itself in ML comes in various flavors: Supervised (involves experts), Unsupervised, a blend of the two, and Reinforcement (which involves experts handing the machine cookies when it's doing a good job). At the end of the day, and under the covers, it's statistics, and so these learning flavors can be applied in descriptive (what is), prescriptive (what should be), predictive (what will be), and generative (new stuff) contexts.
The lecture introduced these ML flavors in some depth and ended with a discussion of the Hot New Thing: GPTs.