Mary, Karl, and ChatGPT

Well, perhaps a misleading subject line, but it (kinda) relates to my point.

I’ve been playing with ChatGPT and toying with ideas for how to use it in teaching (both constructively, as a way to learn with it, and defensively, to let the students know I’m paying attention). I played with some documents from my upcoming Acadie/Mi’kma’ki course and it was very clear that it struggled with (a) a fairly obscure topic, and (b) one where context really mattered. For example, it discussed Mi’kmaq treaties with the British, but blurred that into negotiations for the treaties of Utrecht (1713) and Paris (1763), and discussed Mi’kmaw roles at Paris, when they were not even there. (And we should note that it will actually make things up: last week, I asked it to review a book, and when I told it that its review bore no resemblance to the book I’d read, it responded that it wasn’t a review of the book, but what a review of the book might look like! Honest, if not at all transparent.)

For teaching we can’t always find obscure things (well, some of us more than others, but …), so the most obvious idea is to make the assignment very specific as a way to get around AI’s tendency to offer not much more than reworked Wikipedia pages (that’s an exaggeration – it’s better than that, but not much better).

So, just now, I followed up on an exercise from the Atlantic World course I teach with Mike Driedger and Trudy Tattersall, where we ask students to read an excerpt from Wollstonecraft’s A Vindication of the Rights of Woman and then compare it with Voyant visualisations of the entire text. In Voyant, the main (Cirrus) output highlights some obvious key terms:

And the “Links” output shows the most common thematic associations, where a political philosopher (or even a student) might see an outline of her thinking. You can see the live output here.

These outputs clearly highlight some important terms: (1) mind/reason and (2) education, before moving on to slipperier, more complex words like virtue, nature, children, heart, and so on. Thus, a good student using Voyant would emphasise two themes – the intellect and education – and a better student would get those and then sort through the other terms, where they might arrive at gender/gender norms.
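If you’re curious what Voyant is actually counting, here is a rough Python sketch of the same two calculations: raw term frequencies (roughly the Cirrus cloud) and sentence-level co-occurrences among the top terms (roughly the Links graph). The filename and the tiny stop-word list are placeholders of my own, not anything Voyant itself uses.

```python
# A rough approximation of what Voyant's Cirrus and Links panels compute:
# term frequencies and simple co-occurrence counts.
import re
from collections import Counter
from itertools import combinations

# A deliberately tiny, illustrative stop-word list (Voyant's is much larger).
STOPWORDS = {"the", "and", "of", "to", "a", "in", "that", "is", "be", "it",
             "as", "which", "for", "not", "with", "by", "but", "or", "they",
             "i", "her", "she", "this", "their", "have", "are", "an", "on", "from"}

# Assumes a plain-text copy of A Vindication of the Rights of Woman
# saved locally as "vindication.txt" (a hypothetical filename).
with open("vindication.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = [w for w in re.findall(r"[a-z']+", text) if w not in STOPWORDS]

# "Cirrus": the most frequent terms across the whole text.
freq = Counter(words)
print(freq.most_common(10))  # e.g. mind, reason, women, virtue, ...

# "Links": how often pairs of frequent terms appear in the same sentence,
# a crude stand-in for Voyant's collocation graph.
top_terms = {w for w, _ in freq.most_common(15)}
pairs = Counter()
for sentence in re.split(r"[.;!?]", text):
    present = sorted(top_terms & set(re.findall(r"[a-z']+", sentence)))
    pairs.update(combinations(present, 2))
print(pairs.most_common(10))
```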

How does ChatGPT deal with Wollstonecraft? I asked it to write a short essay identifying three key themes and it did so, choosing education, gender roles, and rationality (it didn’t rank them – each is “a theme”). That’s all predictable – Wikipedia-esque, but turned into an essay – and so I then sought a way to get it to do something less obvious. I asked it to include “property” as a theme.

The link is below, and if you look you’ll see that it added a discussion of property, but just tacked it onto the end as a fourth theme. So I then asked it to integrate “property” into its analysis. It took two tries, but it eventually came up with an essay arguing that, for Wollstonecraft, limits on wealth and property were limits on education (not just as a matter of class, but that restrictions on women’s property were a particular limit on most women’s capacity to learn), and so on.

In other words, ChatGPT wrote a Marxist analysis of Wollstonecraft! Ok, yes, I’m still exaggerating – Marxism-very-lite! – but it did write an essay that put property at the centre of its analysis.
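For what it’s worth, if you wanted to script that back-and-forth instead of typing it into the chat window (I didn’t; I used the free web interface), it would look something like this with OpenAI’s Python client. The model name and the exact prompts here are illustrative only, not what produced the conversation linked below.

```python
# A hypothetical sketch of the same multi-turn exchange, scripted against the
# OpenAI Python client rather than the free web interface I actually used.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "user",
             "content": "Write a short essay identifying three key themes in "
                        "Wollstonecraft's A Vindication of the Rights of Woman."}]

follow_ups = [
    "Add 'property' as a theme.",
    "Don't just append property as a fourth theme; integrate it into the "
    "analysis of education and gender.",
]

# First call uses the initial prompt; each follow-up is appended to the
# running conversation so the model revises its own earlier essay.
for prompt in [None] + follow_ups:
    if prompt:
        messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})

print(answer)  # the final, property-centred version of the essay
```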

So, it was better than I’d hoped: channeling the assignment into a particular dimension of analysis still produced a pretty good result. There are still plenty of tell-tale signs – its voice/style is very recognisable, it still sucks at context – but the lesson for me is that even this entry-level AI does a pretty good job of following your lead and adapting what it says to the criteria you give it. I’d hoped that this might produce bad results (like my Mi’kmaw example above), but it’s actually pretty good.

Of course, this is but a few months into this technology, and I’m using the cheap-o free version. The results are probably better in the pay-walled versions, and they’ll no doubt improve rapidly. Our ability to keep up will be limited. Seminars, and just talking to students, may become our best guides to teaching and assessing student progress. I keep thinking of the near moral panics around calculators and Wikipedia, but this is very different. There’s no need for panic, certainly not the kind of moral panic we’ve seen before, but our teaching will require a lot of rethinking – and a lot more talking. I think much of that conversation will lead us to the simple fact that we need to work with AI, not against it. Working with students, engaging with the tool, and discussing its ethical use and their own learning promises greater returns than trying to circumvent these new technologies.

The entire ChatGPT conversation can be read here: https://chat.openai.com/share/f5b884c2-8004-4d71-8590-f6961bfe76af
