
Training the new electricity: Teachers and students crafting bespoke GPTs


Creating generative-AI agents just became more accessible for teachers and students.

Let's start by introducing the idea of "agents". An agent is a device which exploits the power of AI to achieve a specific task. If generative AI is a new form of electricity, agents are the washing machines and toasters.

If, like me, you have dabbled with ChatGPT as a teaching tool, you probably found fairly quickly that it is a blunt tool. I tried to use it to probe and correct students' misconceptions around Newton's 3 laws of motion. This area of physics is probably the least well understood: even tertiary physics students revert to misconceptions when presented with novel contexts. Using raw ChatGPT, I found it was more likely to reinforce misconceptions than correct them! It was certainly not nuanced enough to appreciate how underlying misconceptions are reflected in student language when responding to scenarios. This is demonstrated in the chat below, where ChatGPT (3.5) congratulated the student (me), when in fact I had described a significant and common misconception.

Enter the agent.

At the simplest level, an agent has three components: a library of training data that you provide, explicit instructions about what you want it to achieve and how it should go about that, and some sundry settings.
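If it helps to picture those pieces, here is a rough sketch in Python of how the three components might be organised. This is not playlab.ai's or OpenAI's actual format (those platforms hide all of this behind a form), just an illustration of how little structure is really involved.

```python
# A hypothetical sketch of an agent's three components. The names and values
# are illustrative only; real platforms present these as fields in a web form.
newtons_tutor = {
    # 1. The library: reference documents you write or upload, in plain English
    "library": ["common_newtonian_misconceptions.docx"],

    # 2. The instructions: what to achieve and how to go about it
    "instructions": (
        "You are an expert secondary physics teacher. Probe students aged "
        "14 to 16 for misconceptions about Newton's 3 laws of motion, then "
        "gently challenge and correct them."
    ),

    # 3. Sundry settings
    "settings": {"response_length": "short", "tone": "encouraging"},
}
```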

Here is the beauty of platforms like playlab.ai and now OpenAI's GPTs (paid version): you can create an agent with no coding at all! The only computer "language" you need to master is your native tongue.

Allow me to demonstrate briefly with an example: creating an agent to probe and correct students' misconceptions about Newtonian physics. For the "library", I drew on 30 years of experience teaching physics to write, in plain English, a document which outlined the common student misconceptions, how they manifest in students' responses, and how to gently challenge and correct those misconceptions. Quite apart from this project, it was a cathartic document to write! Because that document was written in my "voice", the resulting agent also sounded disturbingly like me. Maybe that is better than the default vanilla American voice?

The second component was the instructions. I have written a number of computer applications in a variety of languages, from C++ to Python. The skill lies in translating what you want to achieve into a set of syntactically perfect, logically correct instructions for a dumb processor to follow. By harnessing the LLM, you can go from the design brief to implementation without the programming step. In my example, the instructions took a while to perfect, and I needed to "debug" the agent by trying it out multiple times. For instance, I noticed that its responses were too long for my students to read, so I simply adjusted the instructions with, "Keep each of your responses to less than 120 words." When I found it stopped probing after a few questions and defaulted to the standard "Let me know if I can be of further assistance", I simply added, "Keep probing until the student asks you to stop." It is that simple.
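For the curious, here is a minimal sketch of what the same idea looks like if you do go via code, using OpenAI's Python library. The model name, prompt wording and student message are placeholders, not my actual setup; the point is that the two "bug fixes" above are nothing more than extra plain-English sentences appended to the instructions.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

instructions = (
    "You are an expert secondary physics teacher focused on detecting and "
    "correcting misconceptions about Newton's 3 laws of motion. "
    # The two "debugging" fixes described above, appended as plain English:
    "Keep each of your responses to less than 120 words. "
    "Keep probing with new scenarios until the student asks you to stop."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model would do
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "When I push on a wall, the wall doesn't push back because it doesn't move."},
    ],
)
print(response.choices[0].message.content)
```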

I found, by trial and error, that the best instructions had a few key parts (a sketch of how they fit together follows this list).

  • How you want it to use the items in your library, e.g., "Draw primarily on documents x and y in your detection of student misconceptions and for scenarios you pose"

  • The role you want it to play, for instance, "You are an expert secondary science physics teacher, with a focus on detecting and correcting common misconceptions relating to Newton's 3 laws of motion. Your audience is students aged 14 to 16 years old."

  • What you want it to achieve, plus details about the language to use, the length of responses, the opening question, etc. In my example this grew to nearly a full page of writing. I did find that wording this so it paid attention to all the instructions took some trial and error.
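To make that concrete, here is an illustrative template assembling those three parts into a single block of instructions. The wording is a simplified stand-in, not the exact Newton's Tutor prompt.

```python
# An illustrative (not verbatim) instruction block combining the three parts.
instructions = """
ROLE
You are an expert secondary science physics teacher, with a focus on detecting
and correcting common misconceptions relating to Newton's 3 laws of motion.
Your audience is students aged 14 to 16 years old.

LIBRARY
Draw primarily on the uploaded misconceptions document when diagnosing student
responses and when choosing the scenarios you pose.

GOAL AND STYLE
Open with a short everyday scenario and ask the student to predict what happens.
Keep each of your responses to less than 120 words, use plain language, and
keep probing with new scenarios until the student asks you to stop.
"""
```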

Here is my first iteration of this agent, Newton's Tutor.

This was my very first agent (I was a complete novice when I made this) and I set a budget of 2 hours to reflect what is realistic for busy teachers. It is far from perfect, but it does detect and correct common misconceptions, unlike the "raw electricity".

What a novel piece of homework: have your students chat for 5 minutes with this agent and record the transcript. Having students analyse the responses in groups next class could help consolidate their understanding. This activity could not only collect information for you to improve your agent, but also challenge some of your more able students to create a better one! Game on!

Don't forget to imagine an agent to assist you, the teacher. What's a laborious task that you could create an agent to generate the first draft of...?



by Roger Kennett, Learning Forge


Warning: Never put any confidential information into an LLM or agent that you are not certain is protected.
