ChatGPT tackles the propaedeutic year of Fontys HBO-ICT
The rise of generative artificial intelligence (AI) is occupying education, and ICT education perhaps most of all. The transformative power of AI is forcing teachers to rethink how they assess student work, especially portfolios and argumentation. Lecturers Ruud Huijts and Koen Suilen decided to test whether an AI could make it through the propaedeutic year of HBO-ICT.
"Is it possible to pass the first year of HBO-ICT with only AI-generated professional products?" asked ICT lecturers Ruud Huijts and Koen Suilen. After locking themselves in a conference room for several hours, they had to conclude that this was indeed the case. Assessing the software and documentation generated using ChatGPT 3.5 against the learning outcomes formulated for the first two semesters, they had no choice but to award the AI its propaedeutic HBO-ICT.
Large Language Model
For the experiment, ChatGPT was used as a Large Language Model (LLM). An LLM is a type of artificial intelligence that uses deep learning, neural networks and large data sets to understand, summarise, generate and predict new content. They gave the LLM the literal text of the assignment as it was prepared for the students, with the only addition being the programming language in which the AI had to generate the code (in this case C#, R or Python). Koen stresses that good instruction was essential in this study. Had a first-year student submitted this work, the result would have been judged outstanding.
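The setup described above — the unmodified assignment text plus one added instruction naming the target language — can be sketched as a simple prompt builder. The function name and the exact wording of the instruction are illustrative, not the lecturers' actual prompt:

```python
def build_prompt(assignment_text: str, language: str) -> str:
    """Combine the literal assignment text with the one addition
    the lecturers made: the programming language to generate in."""
    return (
        f"Complete the following first-year assignment.\n"
        f"Generate all code in {language}.\n\n"
        f"{assignment_text}"
    )

# The assignment is passed verbatim; only the language is added.
prompt = build_prompt("Build a console app that manages a to-do list.", "C#")
```

The point of this shape is that nothing in the assignment itself is rewritten for the AI — which is exactly why the result says something about the assessment, not about clever prompt engineering.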
Is it worrying news that ChatGPT can tackle the propaedeutic phase? Ruud and Koen don't think so, but it does mean that the role of the teacher is changing. The lessons both drew from this experiment are the following:
1. It is more important than ever to have a meaningful conversation with students about the decisions they make when designing their code and the trade-offs in their engineering choices and documentation, and to give feedback on the quality of the solution. Blindly judging the final product makes little sense, as it could just as easily have been generated by an LLM.
2. Validating output is crucial: with good instructions and the ability to validate the result properly, an LLM's output can be used effectively. Here too, dialogue, feedback and testing of the process are key.
3. A meaningful (innovative) context is needed in the learning process to challenge students and make them learn, because it is precisely in this area that a human being is more 'handy' than a machine.
4. We may also start setting higher standards for output and quality, now that products can be generated and refined in this way.
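Lesson 2 — validating output rather than trusting it — can be made concrete with a small sketch. The generated function below is hypothetical; the essential part is that the acceptance checks are written by the assessor, not by the LLM:

```python
# A function as an LLM might generate it (hypothetical example).
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Validation step: the generated code is only accepted once it passes
# checks the assessor wrote independently of the AI's output.
def validate() -> bool:
    cases = {3: "Fizz", 5: "Buzz", 15: "FizzBuzz", 7: "7"}
    return all(fizzbuzz(n) == expected for n, expected in cases.items())
```

The same pattern scales up to real assignments: the student (or teacher) supplies the tests and the dialogue about design choices, and the generated code is merely a candidate until it survives both.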