Vignette 8 | Credential fluency: measuring human ability in a brave new world

James Keevy and Kelly Shiohira

In a new era of machine learning, algorithms and strong artificial intelligence (AI) that may soon be able to pass the infamous ‘Turing test’, a measure of whether a machine or programme exhibits behaviour indistinguishable from human behaviour, measuring human ability has become all the more important (Shiohira et al, 2021). Of course, the intent to measure and recognise ability has preoccupied humanity since long before the advent of AI. Specialised career structures and craft guilds dating back to the Middle Ages, and even to earlier ancient civilisations, together with the early development of university structures in the eleventh century, could be described as the antecedents of contemporary concepts of competency, lifelong learning, qualifications, and also of jobs and careers (Keevy et al, 2021). As in Huxley’s classic 1932 novel, we use the metaphor of a brave new world as we look to the post-pandemic future. In this future, we argue, emerging technologies will, for the first time in human history, make it possible to measure and recognise human ability in a more seamless and articulated manner than has been possible until now (Spencer-Keyse, Luksha & Cubista, 2020; UIL, 2020).

This contribution draws on a wide range of current initiatives that, in one way or another, touch on the application of technology to the measurement of human ability. The perspective is largely African in its origin, but the scope and scale of collaboration are global. The contribution is presented as a meta review of three main initiatives in which the authors are directly involved, rather than as new empirical research. The first initiative included in our review is the development of the African Continental Qualifications Framework (ACQF) (Keevy et al, 2021), with its potential to become a new generation framework, in some ways following the emergent thinking embraced in the Credential Framework for the United States (Rein 2019, cited in Keevy, Rein, Chakroun & Foster, 2019). The ambitious undertaking in South Africa to develop a fully interoperable data ecosystem for the post-schooling sector (Rajab et al, 2020), currently in its fourth year, is a second source. The third initiative we draw on is an international collaboration on new qualifications and competencies that started as recently as December 2020 and builds on similar work within the European context (UNEVOC, 2020; Euler, 2021). Additional research and tangential publications are also considered. Our intention is not to repeat the findings of these initiatives, nor to present condensed summaries, as these can be found in the source publications. What is of interest to us is how the initiatives come together on a new canvas to paint a landscape that we will broadly refer to as ‘credential fluency’.

Drawing on the findings of our meta review, we will argue that in order to achieve credential fluency, a more interdisciplinary approach is needed (Herrera, Teras, & Gougoulakis, 2020), an approach that builds on the current interplay between trends in three sectors: education and training, proxied through notions of learning outcomes and qualifications; technical and vocational education and training (TVET), proxied through notions of competencies and, more recently, new skills (Independent Commission on the College of the Future, 2020); and the world of work, proxied through workforce readiness and data-driven recruitment. In some ways, the three initiatives included in our meta review map onto these three sectors, although we would be the first to point out that the last strand, the world of work, is also the weakest. From our work in South Africa and across Africa, the paucity of research from an employer perspective is a limitation, but this seems to be a concern globally (Knox, Wang & Gallagher, 2019). An important cross-cutting concept, although not defined in the same manner across the three sectors, is that of human ability.

In brief, we view credential fluency as a future state wherein human ability is measured and recognised in a seamless and digital manner. In our view, credential fluency has the potential to build on the interplay between new thinking in education, TVET and the world of work and, as such, presents a new, unified model. Such an emerging interdisciplinary model will be able to draw on the tenets being developed in each separate discipline; but, as ancient Greek philosophy reminds us, the whole can be greater than the sum of its parts. In our view, the development of new technologies, specifically AI, makes it possible to have this conversation for the first time. The foundational constructs that need to be in place to move towards increased credential fluency, as identified in our meta review, are listed and briefly elaborated below; each is discussed in more detail in the full paper.

The first construct we have identified is interoperability, specifically semantic interoperability. As noted by Shiohira and Dale-Jones (2019:22), drawing on Morales and Orrell (2018), ‘interoperability is the ability of discrete computer systems or software to exchange and make meaningful use of shared data or other resources’. In practice, this means that data can be used and processed in different applications by different users. Various frameworks exist for interoperability, the most well-known and most widely used being the Data Commons Framework (Goldstein, Gasser & Budish, 2018), which captures ‘both the technical and the societal elements of an interoperable system, as well as the relationships between them, and serves as an important tool for understanding the hierarchies implicit in the development (infrastructure, standards and semantic architecture) and use (interactions between organisations, policy, broader society and individual users) of an interoperable system’ (Shiohira & Dale-Jones, 2019:23). Semantic interoperability provides a more focused interpretation of the construct. Drawing on the linguistic notion of semantics, the emphasis in this case is on consistency of meaning, on relations between items and, generally, on the ability of metadata to be read more consistently and reliably by machines (ibid.). In the context of this contribution and its focus on credential fluency, semantic interoperability is a necessary precondition for more seamless and digital ways to measure and recognise human ability; importantly, this approach must focus on both technical and societal elements.
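
To make the idea concrete, the minimal sketch below shows how a shared vocabulary lets two systems with different local field names exchange credential data that machines can still read consistently. It illustrates the principle only, not any of the cited frameworks; all field names, vocabulary terms and records are hypothetical.

```python
# Minimal sketch of semantic interoperability. Two systems label the same
# credential data differently; a shared vocabulary (the semantic layer)
# lets either record be read consistently by machines. All field names,
# terms and records below are hypothetical.

# Canonical terms both parties have agreed on.
SHARED_VOCABULARY = {"holder_name", "credential_title", "awarding_body", "award_date"}

# Each system publishes a context mapping its local field names to shared terms.
CONTEXT_SYSTEM_A = {
    "studentName": "holder_name",
    "qualification": "credential_title",
    "institution": "awarding_body",
    "dateAwarded": "award_date",
}
CONTEXT_SYSTEM_B = {
    "name": "holder_name",
    "award": "credential_title",
    "issuer": "awarding_body",
    "issued": "award_date",
}

def to_shared(record: dict, context: dict) -> dict:
    """Translate a local record into the shared vocabulary, dropping unknown fields."""
    translated = {context[field]: value for field, value in record.items() if field in context}
    missing = SHARED_VOCABULARY - translated.keys()
    if missing:
        raise ValueError(f"Record is missing shared terms: {missing}")
    return translated

# The same qualification, recorded under two different local schemas.
record_a = {"studentName": "T. Mokoena", "qualification": "Diploma in Welding",
            "institution": "TVET College X", "dateAwarded": "2021-06-30"}
record_b = {"name": "T. Mokoena", "award": "Diploma in Welding",
            "issuer": "TVET College X", "issued": "2021-06-30"}

# Once translated, the two records are interchangeable for any downstream user.
assert to_shared(record_a, CONTEXT_SYSTEM_A) == to_shared(record_b, CONTEXT_SYSTEM_B)
```

The published contexts carry the technical element of interoperability; the agreed vocabulary itself is the societal element, since it only exists where organisations have negotiated shared meaning.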

A move towards greater interoperability may, in our view, very well be a precondition for credential fluency, but the construct itself also raises ethical concerns about data privacy and ownership. This, we argue, is the second foundational construct that must be understood and further developed. In modern-day society, citizens have limited control over their own data. Third-party applications, both private and public, aggregate data into massive data lakes which are, paradoxically, centrally owned, lack portability and offer limited interoperability (Dale-Jones and Keevy, 2021). In our view, and drawing on an increasingly large research base, the future trend is towards decentralised, peer-to-peer data exchange systems wherein ‘no one party controls the relationship with the other’ (Preukschat & Reed 2019, cited in Dale-Jones & Keevy, 2021:4). The notion of self-sovereign identity (SSI) is often used as an example of such an emerging trend (US Chamber of Commerce Foundation, 2020). Building on this thinking, there is a definite emergence of a rights-based approach to data ownership that in some cases even extends to the proposition that connectivity in itself should be a human right (Murahwi & Ntuli, 2021). Considering the huge global inequalities in internet access, which deepened during the pandemic as online learning and working became more mainstream, continental and global efforts to address data privacy and ownership concerns are critically important (Dale-Jones et al, 2020).
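
The sketch below illustrates the SSI pattern in its simplest form: an issuer signs a credential once, the holder keeps and presents it, and a verifier checks it against the issuer’s published public key without any central database mediating the exchange. It is a toy illustration under stated assumptions, namely that the Python cryptography package is installed and that all names and payloads are invented; it does not describe any production SSI stack.

```python
# Toy sketch of the self-sovereign identity (SSI) pattern: the issuer signs
# a credential once; the holder stores and presents it; a verifier checks it
# against the issuer's published public key, with no central database
# mediating the relationship. Assumes the 'cryptography' package is
# installed; all names and payloads are invented.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (e.g. a college) holds a private key; only the public key is shared.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# The credential is a plain, portable document that the holder keeps.
credential = {"holder": "T. Mokoena", "credential_title": "Diploma in Welding",
              "awarding_body": "TVET College X", "award_date": "2021-06-30"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier (e.g. an employer) checks the signature peer-to-peer: no call back
# to the issuer's systems, and the holder decides what to share.
issuer_public_key.verify(signature, payload)  # raises InvalidSignature if tampered with
print("Credential verified against the issuer's public key.")
```

The design point is portability: because verification needs only the payload, the signature and a public key, the credential travels with the person rather than residing in a centrally owned data lake.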

The third and final precondition for credential fluency is a common taxonomy for the measurement and recognition of human ability. Such a taxonomy does not have to replace well-developed nomenclatures and concepts across education, training and the world of work; but, as a global interdisciplinary community, we need at least a translation device. In the education and training context, qualifications frameworks represent an attempt, over the past three decades or more, to leverage the increased use of learning outcomes to describe knowledge, skills and competencies. In the world of work, we note conversations about workforce readiness and, increasingly, the use of technology to match humans with jobs, not to mention with jobs that do not yet exist. The construct of common taxonomies aligns closely with semantic interoperability, and also with the international discourse on data privacy and ownership. Moves towards developing consensus on some of these constructs, such as the Beijing Consensus on AI and Education (UNESCO, 2019) and also, from an African perspective, the ACQF (Keevy et al, 2021), are pointing in the right direction, but will, in all likelihood, require many years to consolidate into something as ambitious as global agreement on connectivity as a human right.
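
As a concrete illustration of such a translation device, the sketch below encodes a crosswalk that maps levels on one qualifications framework onto another, and refuses to guess where no mapping has been agreed. The level correspondences are invented for illustration and do not reflect any actual referencing of national frameworks to the ACQF or to any other framework.

```python
# Illustrative sketch of a 'translation device' between qualification
# taxonomies: a crosswalk mapping levels on one framework onto another,
# which refuses to guess where no mapping has been agreed. The level
# correspondences are invented and reflect no actual referencing exercise.

# Hypothetical agreed mappings: (source framework, level) -> (target framework, level).
CROSSWALK = {
    ("national_framework_a", 4): ("continental_framework", 4),
    ("national_framework_a", 5): ("continental_framework", 5),
    ("national_framework_a", 6): ("continental_framework", 6),
}

def translate_level(framework: str, level: int) -> tuple:
    """Return the agreed equivalent level on the target framework."""
    try:
        return CROSSWALK[(framework, level)]
    except KeyError:
        # The crosswalk is a translation aid, not a substitute for agreement:
        # unmapped levels are flagged for human review rather than guessed.
        raise LookupError(f"No agreed mapping for {framework} level {level}.")

print(translate_level("national_framework_a", 5))  # ('continental_framework', 5)
```

The essential design choice is that the crosswalk translates between existing nomenclatures rather than replacing them, mirroring the role we envisage for a common taxonomy.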

In summary, our contribution attempts to navigate uncharted waters as we imagine a new world wherein human ability is measured and recognised more seamlessly and more digitally. We argue that this can only be realised through an interdisciplinary approach, and in a context wherein semantic interoperability, data privacy and ownership, and more common taxonomies are prioritised, jointly developed and agreed upon. In this brave new world, the Turing test may matter less, as machines and algorithms better serve humanity by measuring and recognising human ability in ways not possible before.
