Experimental Course

AI-Native Solution Engineering

A course designed for students who will co-evolve with AI throughout their careers—developing the judgment, agency, and integration skills that define human value in an AI-augmented world.

16 weeks / 4 sprints · CSUMB Computer Science · Spring 2026 Pilot

Rethinking Foundational Knowledge in the Age of AI

Traditional education is built on decades of experience determining what foundational knowledge students need—largely the same foundations their instructors developed over their careers. This knowledge emerged before AI could write code, explain concepts, or iterate on solutions. But today's students face a different reality: they will co-evolve with AI tools from the beginning of their learning journey.

This raises a fundamental question: would bolting AI onto our current approach to teaching and learning suffice to bootstrap the judgment students need to direct AI? Or does this shift require rethinking how teaching and learning are designed and delivered?

This course takes the position that rethinking is worth exploring, and that the questions go deeper than curriculum. They are epistemological: Does human value shift from knowing how to do things toward knowing whether something was done well and what purpose it serves? If execution can be delegated, what does it mean to develop judgment without the traditional runway of hands-on experience?

We don't claim to have answered these questions. This course is designed as an experiment—a hypothesis that the nature of valuable knowledge may be shifting, and an attempt to test what education looks like if that hypothesis is correct.

"The goal isn't to make students AI-dependent or AI-resistant. It's to explore what it means to develop judgment in an age when execution can be delegated."

Why does this course introduce new terminology?

Language shapes thinking. Familiar terms carry assumptions that can limit how we see new possibilities. Terms like "critical thinking" and "problem solving" have become so broad that they have lost the precision needed to describe specific capabilities.

This course introduces terms designed to be more precise. Here are two examples:

Symbiotic Thinking (instead of "using AI")

Highlights that working with AI is fundamentally different from using a tool — it's a partnership that requires ongoing calibration.

Integrative Solver (instead of "problem solver")

Emphasizes connecting across domains, stakeholders, and contexts — not just solving puzzles in isolation.

These are just examples. Throughout the course, you'll encounter other terms that attempt similar expansions of familiar concepts. None are arbitrary jargon — each is grounded in the meaning of its words. And they're meant to be examined, not accepted uncritically. If a term isn't making thinking more precise and useful, it should evolve.

Superagency + Human Value Proposition

This course is built on two complementary ideas that together define success in AI-augmented problem-solving:

Superagency

The ability to attempt problems you wouldn't have tackled before. AI expands what's possible for individuals—but only if they can identify worthwhile problems, break them down effectively, and maintain direction through complexity.

Human Value Proposition

Superagency without human value is just delegation. Students learn to articulate what they specifically contribute—the judgment, taste, context, and integration that AI can't provide. This isn't about competing with AI; it's about understanding where human direction is essential.

By semester end, students must answer two questions with evidence:

1. "What problems are now within my reach that I would not have attempted before?"

2. "What would be worse about my solutions if I had simply handed the problems to AI?"

Self-Determination Theory as Design Principle

The course structure draws on Self-Determination Theory (SDT), which identifies three psychological needs essential for intrinsic motivation and effective learning: autonomy, competence, and relatedness.

Why SDT Matters for AI-Native Education

AI tools can easily undermine autonomy (by providing answers before students form their own thinking), competence (by making students feel their skills are obsolete), and relatedness (by isolating learners from peers and mentors). Our course design deliberately counters these risks.

Autonomy

Students choose their own problems. Productive reflections precede AI interaction. Sprint progression increases self-direction.

Competence

Explicit capability tracking (SDL, IS, AB). Human Value Statements affirm unique contributions. Progressive challenge scaffolding.

Relatedness

Weekly peer conversations. Stakeholder interviews. Sprint 2 builds for someone the student knows personally.

Bootstrapping Foundational Knowledge Differently

Rather than front-loading traditional content, we adopt a "just-in-time" approach where foundational knowledge emerges from attempting real problems. The three meta-habits—Slow Down, Know Yourself, Take the Lead—replace rote content with meta-cognitive skills.

Students learn to recognize what they need to learn (not what we prescribe), develop strategies to acquire it rapidly with AI assistance, and understand when depth matters versus when breadth suffices. This meta-learning capability may prove more durable than any specific technical knowledge.

SDL · Self-Directed Learner: Meta-learning architecture for rapid expertise acquisition

IS · Integrative Solver: Operating at intersections where human value concentrates

AB · Adaptive Builder: Executing through cycles of building, testing, and adapting

Four Sprints of Progressive Autonomy

Each sprint increases student autonomy while decreasing instructor scaffolding. Problems become more ambiguous; stakeholders become harder to access; solutions require more integration across domains.

Sprint 1 · Foundation: Superagency Over Self

Students build for themselves—understanding their own challenges, learning the frameworks, and experiencing the full cycle with maximum support.

Sprint 2 · Mirror: Learning Through Others

Students build for someone they know (family, friend, colleague)—learning to understand someone else's domain and validate solutions against real feedback.

Sprint 3 · Complexity: Navigating Ambiguity

Students work on a shared challenge with limited stakeholder access—building understanding from indirect sources and navigating team dynamics.

Sprint 4 · Mastery: Full Autonomy

Students identify their own problem, stakeholders, and approach—demonstrating the full capability set with minimal instructor guidance.

This Course Is an Experiment

We're not claiming to have definitive answers. We're testing hypotheses about how to prepare students for a world where AI capabilities will continue to expand throughout their careers. Our approach is:

  • Evidence-based: Grounded in Self-Determination Theory, deliberate practice research, and T-shaped expertise models
  • Iterative: Designed to evolve based on what we learn from students
  • Honest: Explicitly acknowledging uncertainty while providing structured support
  • Transferable: Focused on developing capabilities that will remain valuable as specific tools change

Interested in Collaborating?

We welcome conversations with educators, researchers, and funders interested in AI-native education.

snarayanan@csumb.edu · About Dr. Sathya Narayanan