I know that teachers have heard all too often that technology is changing everything, and that they need to adjust to these changes. But this one is for real.
The introduction of ChatGPT in the last few weeks has created a seismic shift in education. And if teachers, administrators, and “thought leaders” don’t know that, well, they’re not paying attention. This really does change everything. A conversational AI that can write English or history essays, solve math and computer science problems, even create art, poetry, and music! Are you kidding me?
Nope. It’s here to stay. On a very basic level, I can tell you from teaching middle school for 20 years that every middle school student today knows that ChatGPT can do their homework for them and has probably already tried it! I’m not saying that they understand the material. But this is a “shortcut” only dreamed of before — a computer can do my homework? I’m all in!
I suppose some teachers get what this means and are either shrieking in horror and trying to figure out ways to stop it from happening (good luck with that), or reflecting on what this really means in their classes and planning how to adjust. And by adjust, I don’t mean somehow punishing students who use AI.
The amazing thing about this is that it exposes the fatal flaw in so many current educational systems. In so many schools, what passes for learning is really just information transfer from teacher to student, with little attention to critical thinking and little relevance to student interests. Of course, there are amazing schools and teachers who don’t teach like that! I applaud them. But I’m pretty sure that’s not the case in most schools.
So where do we go from here? I don’t see it clearly enough to give The Answer. We’ll have to see how this all unfolds. But I have a few thoughts already.
First of all, project-based learning. If you follow my blog, you knew that was coming! But I’ll give you a great example. I saw a university CS professor recently offer up a classic coding problem he traditionally gave his students, followed by the ChatGPT answer. Which he got in about 5 seconds. And he said, “We’re in trouble.” Here’s my response:
If that’s the way you teach, yes, you’re in trouble. If your view of education is giving students problems to solve that have no relevance to them or their lives, yes, you’re in trouble. If the students who traditionally do the best in your courses are the ones who can solve those problems, yes, you’re in trouble. Sure, it’s better to solve problems than to parrot back memorized information. It shows that you can apply the basic concepts in a new context, and that does mean something.
But I like to apply the “so what” test to everything I teach. I did this when I taught English and history, and I do it now that I teach computer science. So what? The Union won the Battle of Gettysburg in July 1863. So what? I was able to make an array in Python and loop through it. So what? And if you, the teacher, can’t give a convincing answer to that question, you’re failing your students. And by “a convincing answer”, I don’t mean “This will be useful someday.”
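To put that last “so what” in concrete terms, here is roughly the kind of array-and-loop exercise I have in mind (a minimal sketch of my own, not any specific assignment), the sort of thing ChatGPT will spit out in seconds:

```python
# The whole "make an array in Python and loop through it" exercise,
# start to finish -- ChatGPT produces something like this instantly.
scores = [88, 92, 75, 64, 99]

total = 0
for score in scores:
    total += score

print(f"Average score: {total / len(scores):.1f}")
```

Technically correct, and technically meaningless to the student, unless someone can answer the “so what.”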
I’ll end this with an example from the computer science field. The school where I taught middle school CS has an amazing CS program. Its high school classes are packed, and students have to apply to be in the program. A majority of them are female in a school with a majority-male population. Plus, students have to give up a spare period to be in the program.
In the junior year, students spend a whole semester learning to code VR using Unity. There are a few introductory “lessons” to familiarize them with VR, C#, and the Unity editor. But students spend 90% of the semester building their own app. They choose an app that solves a problem that means something to them. There is no final exam. Instead, they give a shark tank presentation one evening, and judges are brought in from business, technology, and other fields to hear their pitches. At the end of the evening, the judges distribute “funny money” to the students based on how much they support each venture, and the results are counted up and announced.
I was a judge for 2 evenings of the shark tank this month. I got a chance to hear student pitches, where they explained why they built the app they did and demonstrated it. I had time to ask questions, ask them to walk me through their code, or whatever else interested me. Since I’m a CS guy, I always want to see the code and have them explain it. But as a former humanities teacher, I also want to know what their personal connection to the project is. One student built an app to help people face their fears. She has had crippling phobias her whole life. She talked to psychologists about fear and got their input on the best ways VR could help people confront their fears. And there was a whole room full of students with similar projects based on their own interests and passions. They all started knowing nothing about VR or C#. And they finished with some very impressive creations.
Now you tell me — how does giving students problems to solve in coding stack up against learning the same material in this way? I don’t think there’s any comparison. I’ll talk more in a future post about why this works and how it can work in other subjects besides CS. Comments welcome!