The quote in the image here (from Kathy Hepinstall Parks) is one I came across this week; it originates in the FAQ of a writers' workshop. For my purposes I could paraphrase it: why should we learn physics (or any other science or engineering discipline) when a machine already knows the formalism and the answers? On some level, this has been a serious question since the real advent of search engines. The sum total of human knowledge is available at a few keystrokes. Teaching students just rote recall of facts is approaching pointless (though proficiency can be hugely important in some circumstances - I want a doctor who can diagnose and treat ailments without having to google a list of my symptoms).
My answer to this question is layered. First, I would argue that beyond factual content we are teaching students how to think and reason. This is, and I believe will remain, important, even in an era when AI tools are more capable and reliable than they are at present. I like to think that there is some net good in training your brain to work hard, to reason your way through complicated problems (in the case of physics, formulating and then solving and testing models of reality). It's hard for me to believe that this is a poor long-term strategy. Second, while maybe not as evocative as the way creative expression is described in the quote, there is a real sense of accomplishment (in your soul?) in actually learning something yourself. A huge number of people are better at playing music than I am, but that doesn't mean it wasn't worthwhile for me to play the trumpet growing up. Overworked as referencing Feynman is, the pleasure of finding things out is real.
AI/LLMs can be great tools for teachers. There are several applet-style demos that I've put off making for years because of how long it would take me to code them up nicely. With these modern capabilities, I've now been able to make some of these, in far less time than it would otherwise have taken, and students will get the chance to play with them. Still, the creativity involved in deciding which demos to make and how they should look and act was mine, based on knowledge and experience. People still have a lot to bring to the process, and I don't think that's going to change for a very long time.
1 comment:
I had a professor in grad school (15 years ago) who had a very interesting set of homework assignments. He’d create about 10-15 professionally written fake paragraphs that could have been in a journal article. We would have to be the “reviewer” and determine if what had been written was correct, or if the “author” was pulling a fast one over us.
It was very humbling to see how easily we could be fooled, and how identifying the errors required subtlety and nuanced understanding.
I’ve been wondering if gen AI and LLMs might offer new opportunities to build on this concept for designing homework and exams. The emphasis would shift away from just knowing how to answer well-posed questions, toward understanding that science is truly the belief in the ignorance of experts: the leaders in a field are not infallible, but the mistakes they make simply appear at a higher level than those of beginners.