AI Upskilling Framework

AI Courses at LinkedIn Learning

Are you interested in learning more about GenAI and how it can support your work? Check out the LinkedIn Learning Upskilling Framework to help guide your learning choices. Here is a summary of the opportunities:

Level One
Learning type: Understanding
Featured skills: AI literacy, generative AI, responsible AI
Who will benefit? All professionals, including leaders/managers. Includes responsible AI, GenAI fluency, and awareness.

Level Two
Learning type: Applying
Featured skills: Prompt engineering, strategy, copilots and AI pair programming, AI productivity
Who will benefit? All professionals, including leaders/managers. Includes prompt engineering and copilots.

Level Three
Learning type: Building
Featured skills: No/low-code GenAI, GPTs, APIs, databases
Who will benefit? Power users and developers building AI-powered applications and solutions. Includes low-code methodologies and hands-on practice with APIs.

Level Four
Learning type: Training and Maintaining Models
Featured skills: Building ML models, deep learning, neural networks, NLP, AI tools and libraries
Who will benefit? Engineers implementing AI/machine learning. Includes technical applications of AI and ML models, deep learning, and fine-tuning models.

Level Five
Learning type: Deeply Specializing
Featured skills: AIOps, MLOps, LLMOps, AI security, AI cloud solutions
Who will benefit? Tech specialists and R&D roles, including DevOps engineers, ML researchers, cybersecurity specialists, and data scientists.

AI Upskilling Framework: Abbreviations Glossary

AI: Artificial Intelligence — machines or systems that simulate human intelligence (learning, reasoning, decision-making).
GenAI: Generative Artificial Intelligence — a subset of AI that can generate new content (text, images, audio) based on patterns learned from data.
ML: Machine Learning — a subset of AI where systems improve at tasks over time by learning from data rather than being explicitly programmed.
DL: Deep Learning — a branch of ML using neural networks with many ("deep") layers to model complex patterns in data.
NLP: Natural Language Processing — the field of AI that focuses on the interaction between computers and human (natural) language: understanding, interpreting, generating.
NLG: Natural Language Generation — a sub-area of NLP concerned specifically with generating coherent, human-like text from structured or unstructured data.
LLM: Large Language Model — a language model (often transformer-based) trained on massive text corpora, used for tasks like generation, summarization, and translation.
GPT: Generative Pre-trained Transformer — a type of LLM developed by OpenAI; "pre-trained" on large text data, then fine-tuned for specific tasks.
FM: Foundation Model — a large, versatile model (like an LLM) trained on broad data, which can be adapted (fine-tuned) for many downstream tasks.
MLOps: Machine Learning Operations — practices and tools to manage the deployment, monitoring, governance, and lifecycle of ML models in production.
LLMOps: Large Language Model Operations — specialized operations practices for managing LLMs (training, serving, fine-tuning, monitoring) in production.
LoRA: Low-Rank Adaptation — a parameter-efficient fine-tuning technique that adapts large models by training a small number of additional parameters (see the sketch after this glossary).
PEFT: Parameter-Efficient Fine-Tuning — methods such as LoRA that adapt large models with minimal parameter updates, making fine-tuning more efficient.
RAG: Retrieval-Augmented Generation — a technique where an LLM retrieves external knowledge (documents, databases) to produce more accurate or up-to-date outputs (see the sketch after this glossary).
RLHF: Reinforcement Learning from Human Feedback — a training method where humans provide feedback on model outputs, and the model is fine-tuned to align more closely with human preferences.
Prompt Engineering: The practice of designing and structuring inputs ("prompts") to guide AI models toward better, more accurate responses (a sketch combining prompt design with an API call follows this glossary).
XAI: Explainable AI — methods and approaches that make AI system decisions transparent, interpretable, and understandable to humans.
CI/CD: Continuous Integration / Continuous Deployment — software development practices that help teams integrate code frequently and deploy updates reliably, often used in MLOps.
API: Application Programming Interface — a set of protocols for building and interacting with software applications; used by developers to call AI services.
Copilot: AI assistant tools (e.g., GitHub Copilot, Microsoft Copilot) that help users by generating content or code and completing tasks, often via prompts.
DevOps: Development + Operations — a culture and set of practices that unify software development and IT operations; relevant in AI engineering for deploying and maintaining systems.
AI Governance: Frameworks, policies, and processes to ensure AI is used responsibly, ethically, and in compliance with regulation.
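
The Prompt Engineering, API, and Copilot entries above come together in practice when a developer calls a generative AI service over HTTP with a carefully structured prompt. The Python sketch below is a minimal illustration only: the endpoint URL, environment variable, and model name are placeholders rather than a specific vendor's product, and the request/response shape follows the widely used chat-completions convention; adapt it to whatever AI service your organization provides.

```python
# Minimal sketch: structuring a prompt and calling a GenAI service via an HTTP API.
# The endpoint URL, API key variable, and model name are placeholders (assumptions);
# the payload follows the common "chat completions" shape used by many services.
import os
import requests

API_URL = "https://example.com/v1/chat/completions"   # placeholder endpoint
API_KEY = os.environ.get("GENAI_API_KEY", "")          # placeholder credential

def summarize(text: str) -> str:
    """Ask the model for a short, structured summary using a deliberately engineered prompt."""
    payload = {
        "model": "example-model",                      # placeholder model name
        "messages": [
            # Prompt engineering: give the model a role, constraints, and an output format.
            {"role": "system", "content": "You are a concise assistant. Answer in plain language."},
            {"role": "user", "content": (
                "Summarize the text below in exactly three bullet points "
                "for a non-technical audience.\n\n" + text
            )},
        ],
        "temperature": 0.2,                            # lower temperature for more predictable output
    }
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize("Generative AI can draft emails, summarize documents, and answer questions."))
```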
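
The RAG entry describes pairing retrieval with generation. The sketch below shows only the pattern: a toy word-overlap retriever pulls the most relevant snippets from a small in-memory knowledge base and folds them into the prompt that would be sent to an LLM. Production systems typically use embeddings and a vector database, and the final generation step (omitted here) would call an actual model.

```python
# Toy illustration of Retrieval-Augmented Generation (RAG).
# Retrieval here is simple word overlap over an in-memory list; production systems
# typically use embeddings plus a vector database. The generation step is omitted.

KNOWLEDGE_BASE = [
    "The travel policy requires manager approval for trips over 3 days.",
    "Expense reports must be submitted within 30 days of travel.",
    "The office is closed on national holidays.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved context before sending it to an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # In a real pipeline this augmented prompt would be sent to an LLM API; here we print it.
    print(build_prompt("When do I need to submit my expense report?"))
```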
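
The LoRA and PEFT entries rest on a simple idea: rather than updating a large weight matrix W directly, train a small low-rank correction B × A on top of it. The NumPy sketch below is a conceptual illustration only (the dimensions and rank are arbitrary, and no training loop is shown); it demonstrates why the approach is parameter-efficient by comparing parameter counts.

```python
# Conceptual illustration of Low-Rank Adaptation (LoRA) / parameter-efficient fine-tuning.
# Instead of updating a full d_out x d_in weight matrix W, LoRA trains two small
# matrices B (d_out x r) and A (r x d_in) and applies W + B @ A at inference time.
import numpy as np

d_out, d_in, r = 1024, 1024, 8           # illustrative layer size and a small adapter rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weights (never updated)
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B starts at zero, so W + B @ A == W initially

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank correction applied on top of the frozen weights."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
y = adapted_forward(x)                   # usable immediately; training would update only A and B

full_params = W.size                     # what full fine-tuning would update
lora_params = A.size + B.size            # what LoRA actually trains
print(f"full fine-tuning parameters: {full_params:,}")
print(f"LoRA trainable parameters:   {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full)")
```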
Last modified: Nov 25, 2025