Understand Your AI Skills. Learn What Matters Next.
A practical self-assessment aligned with Singapore’s national AI upskilling initiatives. Find your current capability level and explore recommended SkillsFuture courses.
Time needed
~10–20 mins
Format
MCQ + scenarios
Report
Email report
Next step
Suggested courses
About the Assessment
Artificial Intelligence is becoming part of everyday work across many industries. This assessment helps individuals understand their current level of AI readiness and identify practical next steps for learning and skills development.
Under the National AI Impact Programme, the Ministry of Digital Development and Information (MDDI) and the Infocomm Media Development Authority (IMDA) introduced three AI user archetypes (AI Aware, AI Literate, and AI Fluent) to describe progressive levels of AI understanding in the workforce.
Drawing on this national reference and synthesising capability expectations from the relevant SkillsFuture Singapore (SSG) Skills Framework for Infocomm Technology, this assessment translates these archetypes into four practical capability dimensions to help learners better understand their current strengths and learning opportunities.
Levels And Domains
Your overall score is normalised on a 0–100 scale. Levels are interpreted as: Yet to be AI Aware (0–18.5), AI Aware (18.6–40.5), AI Literate (40.6–79.5), and AI Fluent (79.6–100).
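For readers who prefer a concrete view, the band logic above can be sketched in Python. This is an illustration only; the function name and boundary handling are assumptions, not the tool's actual implementation, though the cut-offs match the published bands:

```python
def archetype_for_score(score: float) -> str:
    """Map a normalised 0-100 score to an AI user archetype band.

    Bands follow the published cut-offs: Yet to be AI Aware (0-18.5),
    AI Aware (18.6-40.5), AI Literate (40.6-79.5), AI Fluent (79.6-100).
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be on the normalised 0-100 scale")
    if score <= 18.5:
        return "Yet to be AI Aware"
    if score <= 40.5:
        return "AI Aware"
    if score <= 79.5:
        return "AI Literate"
    return "AI Fluent"
```

For example, a normalised score of 45 would fall in the AI Literate band, while 85 would fall in AI Fluent.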
| Dimension | AI Aware | AI Literate | AI Fluent |
| --- | --- | --- | --- |
| Understanding of AI - GenAI Principles and Applications | Understands AI as a black box. Assessment focus: recognize basic AI concepts and simple examples. | Understands generative vs. discriminative AI. Assessment focus: interpret outputs and limitations using core concepts. | Understands AI fundamentals and parameter effects. Assessment focus: analyze outputs and implications in context (e.g. model limitations such as overfitting, prompt sensitivity, or hallucinations). |
| Knowledge of AI Tools | Familiar with common tools (ChatGPT, Gemini, Copilot). Assessment focus: identify tools and their basic functions. | Can access and navigate AI tools. Assessment focus: apply tools to specific tasks and compare outputs. | Uses domain-specific tools for business use cases. Assessment focus: select, combine, and justify tools for complex workflows. |
| Responsible Use and Ethics | Knows common risks and misuse patterns. Assessment focus: recognize ethical concerns in AI use. | Understands and applies privacy and ethical guidelines. Assessment focus: recognize ethical concerns and risks in AI (e.g. bias, privacy risks, or misleading outputs). | Recognizes misuse patterns and mitigation options. Assessment focus: evaluate misuse risks and implement mitigation strategies. |
| Application in Context | Limited contextual application. Assessment focus: not a primary focus at this level. | Uses available AI tools in context. Assessment focus: prompt effectively to achieve specific goals. | Defines use cases, redesigns workflows, and evaluates outputs. Assessment focus: apply AI solutions to business problems and evaluate relevance. |
How It Works
Three simple steps, designed to be quick, accessible, and useful.
1. Answer everyday scenarios and short questions about how AI tools are used in practical situations.
2. Get a clear summary of your current level and strengths across the four assessment domains.
3. Use your profile to identify suitable SkillsFuture learning options and practical next steps.
Ready to get started? This is a screening assessment for learning guidance, not a certification exam. Tip: use a laptop or desktop for the best experience.
Your Report
After you complete the assessment, you will receive a short email report with your overall level, a clear breakdown across key domains, and practical suggestions for next steps.
- Overall result (score & level): You will receive an overall score (0–100) and your corresponding AI user archetype level: AI Aware, AI Literate, or AI Fluent.
- Breakdown (strengths across key domains): The report includes a domain-level breakdown (in %) across the four assessment dimensions, so you can quickly see what you are doing well and where to focus next.
- Actionable guidance (feedback & next steps): You will receive brief feedback on your open-ended questions to support improvement. The report also includes a simple summary chart and suggested SkillsFuture courses.
Using Your Results
A simple way to turn assessment results into practical action.
1. Think about what you want to improve at work, then identify where AI can support those goals in a safe and practical way.
2. Strengthen the domains that matter most for your role to build a solid foundation before moving to advanced use cases.
3. Use recommended SkillsFuture options as a guide, and track progress over time as your responsibilities evolve.
FAQ
Quick answers on purpose, interpretation, and how to use the results.
How long does it take?
Most participants complete the assessment in about 10–20 minutes. It is designed to be short and accessible while still providing useful insights.
Is this a certification exam?
No. This is a screening assessment intended to support learning and course recommendations. It is not a formal certification of competency.
How should I interpret the results?
Use the results as a starting point. Consider your role and work context, then focus on the recommended learning areas that best support your goals.
Can teams and organisations use this?
Yes. Aggregated insights can support role-based training plans and help structure an organisation’s learning roadmap. Organisations interested in exploring team-based assessments may contact us at skills.assessment@SingaporeTech.edu.sg.
Feedback
- "This AI screening tool helped me to learn the different terminologies of AI and reflect on my usage of AI." – WenHui
- "The self-assessment revealed hidden gaps in my assumptions and knowledge, and pushed me to improve." – Anthony
- "This screening tool has made me aware of my current knowledge in basic AI use and potential areas for me to build competency amidst a growing technological landscape!" – Raferty
Contact
For partnerships, deployment enquiries, or feedback.
Behind the Assessment
How the assessment was designed and validated.
The AI capability assessment tool was developed as a light-touch, time-efficient screening instrument to support Singapore's national AI upskilling efforts. Its primary purpose is not certification, but rapid sense-making: to help individuals understand their current level of AI capability and to guide them towards appropriate, differentiated training pathways.
The design was informed by close consultation with SkillsFuture Singapore (SSG) and anchored to relevant Skills Framework references, ensuring coherence with national workforce development priorities. The tool is designed to minimise time burden (even short-answer responses are expected to be concise), to be accessible to the general public, and yet still to yield meaningful distinctions in capability that can inform course recommendations.
To achieve this balance, the assessment was intentionally designed around everyday, work-relevant scenarios rather than technical knowledge tests. This approach reflects a core design principle: for broad-based AI upskilling, what matters first is how people understand, use, and judge AI in context, not whether they can describe algorithms or architectures. The tool also went through multiple iterations, including validation with individuals across different levels of AI proficiency. Feedback from these pilots was used to refine scenario framing, calibrate difficulty, and improve the clarity and discrimination of assessment items.
We welcome feedback from users and partners as the assessment continues to evolve. As AI becomes increasingly embedded in everyday work and life, improving how people understand and use AI responsibly will remain an ongoing effort.