Faculty Futures 2035: Global Roadmap for Human-Centred AI Education
Detailed Summary
1. Welcome and Panel Framing
- Moderator’s welcome – Dr. Aigerim Shilibekova (founding dean) welcomed attendees, underscored that the future of higher education is shaped by faculty, and highlighted the presence of Dr. T. P. Singh as a symbol of the high‑level commitment the initiative requires.
- Panel composition – The moderator explained that the panel brings together three institutions operating under distinct governance regimes (Canada, India, Australia) and that the session will move from a global framing to country‑specific reflections, ending with a synthesis of convergent themes.
2. Origin of Faculty Futures 2035
- Video introduction (Dr. Aigerim Shilibekova)
- Genesis – The initiative began with an informal online conversation between Dr. Aigerim Shilibekova and Dr. Amandeep Sehmi via the Oxford University AI‑in‑Education hub.
- Catalyst question – “If AI reshapes higher education, who is preparing the faculty that hold the system together?”
- Two‑stage proposal – (1) a pre‑summit workshop series delivered by the Centre for Teaching Excellence, University Canada West, in partnership with the other two institutions; (2) this panel, which moves from local conversations to a global roadmap.
- Scope – Over a hundred faculty members from the three institutions participated in structured foresight conversations to explore the evolving role of faculty in AI‑augmented higher‑education ecosystems.
- Conceptual framing
- Human‑centred AI – Defined not as a rhetorical label but as a design orientation that preserves human accountability for academic standards while leveraging generative systems.
- Readiness reframed – Readiness is not merely a set of individual skills (AI literacy, prompt design, etc.) but an alignment across three layers: individual faculty, collective professional development, and institutional coordination. Misalignment in any layer creates strain; alignment yields stability amid rapid technological change.
- Strategic horizon – Three guiding questions (to 2035) were identified: (1) Coordination – how institutions align policies, resources, and governance; (2) Redesign – how teaching, assessment, and research practices must evolve; (3) Capability – what new faculty competencies are required.
3. Methodology: Rapid Foresight Labs
- Design rationale – The labs were deliberately not surveys or hypothesis‑testing exercises; they were time‑boxed, structured workshops meant to surface strategic sense‑making under pressure.
- Structure –
- Six prompts (two per strategic question) were placed on six tables.
- Each table hosted a note‑taker and a facilitator to keep discussion on‑topic.
- Faculty groups rotated every ten minutes, moving to the next prompt. This rotation ensured cross‑pollination of ideas while preserving continuity in note‑taking.
- Prompt categories – Aligned with the three layers of the Faculty Futures model:
- Individual faculty perspective – Nature of faculty work, required capabilities, shifting authority.
- Collective professional development – How faculty collectively build and sustain capability amid ever‑changing tools.
- Institutional responsibility – Structural conditions that enable or impede faculty agency, and redesign needs for assessment and governance.
- Time‑boxing effect – The ten‑minute limit forced participants to focus on core challenges rather than peripheral frustrations, fostering concise, actionable insights.
4. Country‑Specific Findings
4.1 Canada (University Canada West) – Report by Dr. Sana Jamil
- Institutional context
- 254 faculty across two schools, 30+ course families, hybrid delivery model predating widespread AI use.
- Operates under British Columbia’s Quality Assurance Framework, which includes legacy assessment policies not originally designed for AI integration.
- Observed faculty dynamics
- High curiosity, but uneven adoption – Faculty voluntarily pursued AI learning because of classroom challenges, not because of top‑down mandates.
- Professional calibration – Faculty were less worried about AI tools per se and more about preserving evaluative judgment (distinguishing credible reasoning from AI‑generated hallucinations).
- Three key insights
- Individual level – Evaluative judgment – Faculty emphasized the need to assess the reasoning behind AI outputs, not just their surface fluency.
- Collective level – Shared clarity – A demand for discipline‑specific guidance that sets defensible boundaries without stifling pedagogical autonomy.
- Institutional level – Assessment redesign – Traditional exams and assignments are under pressure; faculty seek consistent, institution‑wide recalibration of expectations.
- Strategic implications for UCW
- Embed AI capability in ongoing professional learning – Move beyond one‑off workshops to sustained, discipline‑specific experimentation cycles.
- Develop nuanced, actionable AI policy – Provide clear, flexible guidance that respects faculty judgment while safeguarding academic integrity.
- Recognise AI literacy as a core competency – Reflect this in workload planning, promotion criteria, and strategic budgeting.
4.2 India (Chandigarh University, Uttar Pradesh) – Report by Prof. T. P. Singh
- Institutional snapshot
- Newly established multidisciplinary university (IRF rank 19) with ~150 faculty, 40 programmes, ~2,700 students.
- Faculty representation spanned law, liberal arts, engineering, management, computing, pharmacy, and health sciences.
- Observations from the Indian foresight lab (≈100 participants)
- AI as embedded, not emergent – AI is already present across disciplines (e.g., forensic science, healthcare, management).
- Domain expertise remains paramount – AI cannot replace deep disciplinary knowledge; faculty must become curators and critical validators of AI‑generated insights.
- New literacies needed – Data literacy, basic programming, analytical thinking, interdisciplinary curiosity, and robust ethical awareness.
- Three overarching themes
- Domain expertise + new literacies – Faculty must blend subject mastery with AI fluency to guide students responsibly.
- Lifelong learning as structural necessity – Continuous upskilling is no longer aspirational; it is required for institutional relevance.
- Ethical “human‑in‑the‑loop” governance – Faculty must model transparent AI use, verify outputs, attribute sources, and avoid “intellectual delegation” (over‑reliance on machines).
- Systemic constraints highlighted
- Semi‑digital infrastructure – Fragmented knowledge systems, limited computational resources, and heavy administrative loads hamper faculty capacity to experiment with AI.
- Five recommended institutional actions
- Infrastructure enablement – Dedicated AI labs, GPU resources, and tool access.
- Mandated, structured training – Prompt‑engineering modules, ethical AI workshops, and continuous skill‑loops.
- Workload rationalisation – Reduce non‑teaching burdens to free time for AI experimentation.
- Pedagogical transformation – Redesign curricula and assessments to integrate AI responsibly.
- Ethical & governance frameworks – Formal policies that embed human oversight and protect academic integrity.
- Closing vision (Dr. Amit Mishra) – Emphasised the diversity of participants (medicine, engineering, government, industry) and distilled the lab’s recommendations into three cornerstones of Indian higher‑education readiness: AI infrastructure, ethical oversight, and ecosystem‑level integration.
4.3 Australia (Canterbury Institute of Management) – Video Summary (presented by moderator)
- Institutional profile
- Private higher‑education provider offering undergraduate, postgraduate, and research degrees; faculty comprise both industry professionals and PhD‑holding academics.
- Key observations from Australian workshops
- Faculty development focus – Shift from treating AI as a “tool” to cultivating judgment and ethical competence.
- Assessment redesign imperative – Faculty need frameworks that enable responsible AI use while preserving academic rigor.
- Leadership & governance – Strong AI governance structures and policy leadership are required to guide sustainable adoption.
- Strategic vision for the next ten years
- AI‑forward learning ecosystem – Embed AI across curriculum, assessment, research, and industry partnership.
- Continuous professional development – Invest in AI‑focused PD that nurtures pedagogical innovation and leadership capacity.
- Ethical framework & inclusive practice – Develop policies that protect academic integrity, ensure equity for a culturally diverse student body, and prevent unconscious outsourcing of thinking to machines.
5. Convergence Across the Three Contexts
- Structural similarity – Despite divergent regulatory environments, faculty in all three countries reported the same three layers of tension: (1) Individual faculty judgment, (2) Collective professional development, (3) Institutional coordination & redesign.
- Shared challenges
- Assessment pressure – Traditional evaluation models are being destabilised by AI‑generated outputs.
- Need for nuanced policy – Broad AI‑ethics statements are insufficient; discipline‑specific guidance is demanded.
- Infrastructure gaps – Both Canada and India highlighted limited computational resources; Australia stressed governance infrastructure.
- Common strategic direction – A global roadmap is emerging that is not a universal policy prescription but a template adaptable to local contexts, anchored in the three strategic questions (coordination, redesign, capability).
6. The Emerging Global Roadmap
- Template nature – The roadmap will provide a framework rather than a fixed blueprint, allowing institutions to co‑create solutions that respect local regulations while addressing the universal structural pressures identified.
- Iterative development – It will be refined through ongoing cross‑context dialogues, additional workshops, and continuous feedback loops.
- Core pillars (derived from the convergence):
- Coordinated governance – Establish shared AI oversight bodies, standardised yet locally adaptable policies, and transparent accountability mechanisms.
- Pedagogical redesign – Redefine learning outcomes, assessment modalities, and curriculum maps to integrate AI as an augmentative collaborator rather than a substitute.
- Capability ecosystems – Build longitudinal AI‑literacy pathways for faculty, embed AI skill development in promotion criteria, and provide dedicated infrastructure (labs, GPU clusters, tool licences).
- Call to action – The moderator urged participants to continue the conversation, expand the network of collaborating institutions, and treat the roadmap as a living document that evolves with technological and pedagogical change.
7. Closing Remarks
- The moderator thanked the panelists, the audience, and the summit organisers, reiterating that the future of higher education will be co‑authored by visionary institutions, empowered faculty, responsible governance, and ethically guided AI.
Key Takeaways
- Human‑centred AI is a design orientation that insists on preserving human accountability for academic standards while leveraging generative tools.
- Faculty readiness is a three‑layer construct – individual competence, collective professional development, and institutional coordination must be aligned; misalignment creates systemic strain.
- Rapid foresight labs (time‑boxed, rotating prompts) proved effective for surfacing deep, structural insights across diverse contexts.
- Across Canada, India, and Australia, the same structural pressures emerged: need for evaluative judgment, demand for discipline‑specific guidance, and pressure on traditional assessment models.
- Strategic implications:
- Embed AI capability in ongoing, discipline‑specific professional learning (not one‑off workshops).
- Create nuanced, flexible AI policies that give faculty clear ethical boundaries while preserving pedagogical autonomy.
- Recognise AI literacy as a core faculty competency influencing workload planning, promotion, and budgeting.
- Institutional recommendations (from India, echoed elsewhere): build AI infrastructure, mandate structured training, rationalise workload, redesign pedagogy/assessment, and enact ethical governance frameworks.
- The global roadmap is a shared template, not a universal rulebook; it will be iteratively refined through continued cross‑country collaboration.
- Future faculty work will shift from content delivery to curation, critical validation, and ethical stewardship of AI‑generated knowledge.
- Equity, inclusion, and “human‑in‑the‑loop” governance are essential to prevent unconscious outsourcing of intellectual work to machines.