ChatGPT: The Good, The Bad and LearnerShape

March 3, 2023

So much has already been written about ChatGPT that it may be either foolhardy or irrelevant to wade into the fray, but we still want to jump in briefly with thoughts about (1) whether ChatGPT is a good thing or a bad thing for education and (2) what ChatGPT means for LearnerShape.

ChatGPT: Good or Bad?

The question of whether ChatGPT is good or bad for education reminds us of the answer given by Chinese premier Zhou Enlai when asked in the early 1970s for his comments on the French Revolution: “Too early to say.” In fact, this exchange is apocryphal – Zhou was actually referring to the Paris riots of 1968 – but you get the point. There is no way to know yet the likely impact of ChatGPT, or of the future advances in AI and machine learning that will certainly follow (more on that in the LearnerShape discussion below).

Accepting this uncertainty, two initial points appear central:

LearnerShape and Large Language Models

When LearnerShape was established in 2018, our planning assumed that we would need extensive training data for our machine learning models, as was the norm for convolutional neural networks and the other technologies that were then state-of-the-art. However, with the release of the BERT LLM by Google in that same year, we quickly realized (ahead of the pack) that relationships between skills can be modeled through how those skills relate in language, which is exactly what LLMs like BERT capture. This allowed us to substantially reduce our data requirements, both by fine-tuning BERT on limited data and by relying on base LLMs without any fine-tuning (so-called ‘zero-shot’ learning).
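To make the zero-shot idea concrete, here is a minimal sketch that compares skills using nothing but a pre-trained BERT-style encoder, with no fine-tuning at all. The library and model name (sentence-transformers with a MiniLM encoder) are illustrative assumptions rather than a description of our actual stack.

```python
# Minimal zero-shot sketch: relate skills purely through a pre-trained
# language model's embeddings, with no task-specific training data.
# The encoder chosen here is illustrative, not LearnerShape's actual model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-style encoder works

skills = ["machine learning", "statistical modeling", "public speaking"]
embeddings = model.encode(skills, convert_to_tensor=True)

# Cosine similarity between embeddings approximates how closely two skills relate.
scores = util.cos_sim(embeddings, embeddings)
print(f"machine learning vs statistical modeling: {float(scores[0][1]):.2f}")
print(f"machine learning vs public speaking:      {float(scores[0][2]):.2f}")
```

The first pair should score noticeably higher than the second, and that difference is the “relationship in language” signal we rely on.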

It also quickly became obvious to us that LLMs and similar technologies would continue to advance rapidly, to our significant benefit. These advances allow us to focus on how to apply AI and machine learning to skills and learning, rather than requiring the much larger investments needed to develop the underlying technologies.

For instance, we have been working on the concept of ‘skills extraction’ – i.e. identifying which skills are associated with a set of textual resources, for purposes such as determining the skills requirements of a company. Four years ago we called this our ‘moonshot’, but it is becoming increasingly doable with a straightforward application of evolving LLM technologies.
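As a rough illustration of how skills extraction can be framed with off-the-shelf LLM embeddings, the sketch below treats it as a retrieval problem: embed a resource’s text and a list of candidate skills, then keep the skills whose embeddings sit closest to the text. The model, candidate list, and threshold are all hypothetical.

```python
# Hypothetical skills-extraction sketch: rank candidate skills by how close
# their embeddings are to a resource's text. Names and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

resource = (
    "This course covers building REST APIs in Python, writing unit tests, "
    "and deploying services with Docker."
)
candidate_skills = ["Python", "Docker", "unit testing", "graphic design", "REST APIs"]

resource_emb = model.encode(resource, convert_to_tensor=True)
skill_embs = model.encode(candidate_skills, convert_to_tensor=True)

# One similarity score per candidate skill.
scores = util.cos_sim(resource_emb, skill_embs)[0]
ranked = sorted(
    zip(candidate_skills, (float(s) for s in scores)),
    key=lambda pair: pair[1],
    reverse=True,
)
extracted = [skill for skill, score in ranked if score > 0.3]  # illustrative cutoff
print(extracted)  # skills judged relevant to the resource
```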

ChatGPT itself so far appears to be of limited usefulness for LearnerShape (because we do not require extensive text generation), but the underlying models such as GPT-3, and other technologies recently released by ChatGPT’s developer OpenAI (such as a new embeddings service), are directly relevant to what we do.
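For readers curious what using that embeddings service looks like, here is a rough sketch with the openai Python library as it stood around the time of writing; the model name and interface shown are assumptions and may change as OpenAI’s API evolves.

```python
# Rough sketch of embedding skill descriptions with OpenAI's embeddings
# endpoint. Model name and the pre-1.0 `openai` library interface are
# assumptions based on what was available at the time of writing.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=["data visualization", "dashboard design"],
)
vectors = [np.array(item["embedding"]) for item in response["data"]]

# Cosine similarity between the two skill embeddings.
similarity = vectors[0] @ vectors[1] / (
    np.linalg.norm(vectors[0]) * np.linalg.norm(vectors[1])
)
print(f"similarity: {similarity:.2f}")
```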

It is very exciting for us to watch this continued evolution of AI technologies, and to apply them in building our open source learning infrastructure.

Maury Shenk, Founder & CEO, LearnerShape
Dr Jonathan Street, Head of Data Science, LearnerShape