Large Language Models versus Foundation Models for Assessing the Future-Readiness of Skills

Citation

Süsstrunk, Norman, Weichselbraun, Albert and Waldvogel, Roger. (2023). Large Language Models versus Foundation Models for Assessing the Future-Readiness of Skills. 17th International Symposium on Information Science (ISI 2023), Chur, Switzerland

Abstract

Automation, offshoring and the emerging gig economy further accelerate changes in the job market, leading to significant shifts in required skills. As automation and technology continue to advance, new technical proficiencies such as data analysis, artificial intelligence, and machine learning become increasingly valuable. Recent research, for example, estimates that 60% of occupations contain a significant portion of automatable skills. The Future of Work project uses scientific literature, experts and deep learning to estimate the automatability and offshorability of skills, which are assumed to impact their future-readiness. This article investigates the performance of two deep learning methods for propagating expert and literature assessments of automatability and offshorability to yet unseen skills: (i) a Large Language Model (ChatGPT) with few-shot learning and a heuristic that maps results to the target variables, and (ii) foundation models (BERT, DistilBERT) trained on a gold standard dataset. An evaluation on expert data provides initial insights into the systems' performance and outlines the strengths and weaknesses of both approaches.
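The first approach described above, few-shot prompting combined with a heuristic that maps the model's free-text answer to the target variables, could be sketched roughly as follows. This is a minimal illustration only: the example skills, assessment wording, and numeric label mapping are assumptions for the sketch, not the authors' actual prompt or gold standard data.

```python
# Illustrative sketch of few-shot prompting plus a heuristic label mapping.
# All example skills and the low/medium/high scale are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("assembly line work", "high automatability, low offshorability"),
    ("psychotherapy", "low automatability, low offshorability"),
    ("software testing", "medium automatability, high offshorability"),
]

def build_prompt(skill: str) -> str:
    """Assemble a few-shot prompt asking the LLM to assess a new skill."""
    lines = ["Rate the automatability and offshorability of each skill."]
    for example_skill, assessment in FEW_SHOT_EXAMPLES:
        lines.append(f"Skill: {example_skill}\nAssessment: {assessment}")
    lines.append(f"Skill: {skill}\nAssessment:")
    return "\n\n".join(lines)

# Heuristic mapping from the LLM's textual answer to numeric targets.
LEVELS = {"low": 0.0, "medium": 0.5, "high": 1.0}

def map_to_targets(llm_answer: str) -> dict:
    """Map a free-text assessment to numeric automatability/offshorability scores."""
    scores = {}
    for part in llm_answer.lower().split(","):
        for level, value in LEVELS.items():
            if level in part:
                if "automat" in part:
                    scores["automatability"] = value
                elif "offshor" in part:
                    scores["offshorability"] = value
    return scores
```

In this sketch `build_prompt` would produce the text sent to the LLM, and `map_to_targets` converts its answer into the two target variables that the fine-tuned BERT/DistilBERT models predict directly.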

Downloads and Resources

  1. Reference (BibTeX)
  2. Full Article