Communications of the ACM - Artificial Intelligence
They Can Include AI, But Should They?

 

The article argues that in the age of AI, the core value of technology education lies in teaching students to judge whether something should be built, not merely how to build it. Through a redesigned systems analysis course, the author found that confronting students with real clients and genuine uncertainty prompted them to think actively about AI's actual value and where it applies. This practice shifted students from focusing on technical tools toward analytical thinking, improving their ability to identify needs, evaluate technology fit, and make informed decisions. The article urges educators to cultivate this judgment by introducing ambiguity, working with real stakeholders, assessing reasoning, treating students as advisors, and using reflection.

🤔 Core idea: In the AI era, the focus of technology education should shift from "how to implement" to "whether to implement." Cultivating students' judgment in applying technology matters more than technical skill alone.

💡 Teaching experiment: The author redesigned a systems analysis course across three cohorts, moving from a chatbot simulation with fixed requirements, to a fictional banking app with modest ambiguity, and finally to AI projects with real clients. Facing real clients and uncertainty, students' thinking changed markedly.

🤝 Uncertainty and client accountability: Introducing uncertainty, letting students decide whether AI fits the problem, and partnering with real clients raised students' engagement and sense of responsibility, prompting deeper thinking about technology fit and business value.

🔄 Shift in thinking: In practice, students moved from completing tasks to solving problems, and from absolute terms like "always" and "definitely" to more flexible language like "might" and "depends." They began to think and act like professional consultants.

✅ Recommendations: The article offers five low-cost changes, including introducing ambiguity, bringing in real stakeholders, assessing reasoning, treating students as advisors, and using reflection, to help educators develop students' decision-making ability.

I’ve come to believe the most valuable skill we can teach in technology education isn’t how to implement something. It’s how to decide whether something should be implemented at all. That question is especially urgent in the age of AI. We’ve trained our students to use tools like JIRA, write user stories, and diagram processes. But this kind of analytical reasoning—asking “Is this the right problem?” or “Does this solution make sense here?”—isn’t easily taught through lectures or textbooks or even abstract cases. It must be experienced and evolved through practice.

Nowhere is this gap more visible than in Systems Analysis and Design courses. These classes often focus on documentation, not judgment. They focus on what to build, not whether we should. As AI hype floods our classrooms and boardrooms, we risk producing graduates who can specify requirements for machine learning features but can’t explain why AI adds value to the problem they’re solving.

Can We Teach What Supposedly Cannot Be Taught?

Over the past three years, I redesigned my undergraduate Systems Analysis course across three cohorts:

• In 2023, students completed a chatbot simulation with fixed requirements.

• In 2024, they worked on a fictional banking app with modest ambiguity.

• In 2025, they partnered with real clients (student entrepreneurs) on three different projects. Two projects had AI already built into the concept, while one was "AI-agnostic," requiring students to determine if AI belonged at all.

The contrast between these AI approaches proved important. Students working on the AI-agnostic project had to start from first principles: Does AI solve an actual problem here? Those with AI-embedded projects still had uncertainty, but of a different kind: What specific AI functionality made sense, and how should it be implemented?

Each week, students posted short reflections on what they were learning. These microblogs, along with their project artifacts, offered a window into how their thinking evolved.

The difference was striking. The 2025 students, faced with real clients and genuine uncertainty, didn't just complete assignments. They framed business problems. They questioned assumptions. They made decisions that resembled real-world consulting.

One student wrote:

“The moment Business Analysis clicked was during our first meeting with the client. We weren’t there to take notes. We were there to understand the business context and propose solutions that could actually help.”

That wasn’t just a shift in skill. It was a shift in identity.

Why Uncertainty Mattered

The content across the three years was identical. What changed was the structure of the problem. Specifically, two elements had the greatest impact:

1. Structured Uncertainty

In 2025, students working on the AI-agnostic project weren’t told whether AI belonged in their solutions. They had to decide. Even those with AI-embedded projects had to determine which specific applications made sense. They had process guidance, but the outcome remained open.

Interestingly, the AI-agnostic project teams showed the most dramatic shifts in their thinking. When students had to justify AI from scratch rather than implement predetermined functionality, they engaged in deeper reasoning about technology fit and business value.

2. Client Accountability

The stakes were real. Students weren’t designing for a grade. They were advising a client who might actually use their recommendation. That created urgency and focus. They wanted to be right for reasons that went beyond school.

Together, these factors created an environment where judgment wasn’t an extra. It was the work.

From Tools to Thinking

What changed was not just what students did, but how they saw their role. Their language shifted. They used fewer certainty terms like “always” and “definitely” and more flexible language like “might” and “depends.” They started using first-person statements like “as an analyst.” They moved from checklist logic to professional reasoning.

One team, working on the AI-agnostic community-building app, reflected:

“We analyzed the non-functional requirements. Performance, scalability, reliability. We concluded that some features warranted AI, others didn’t. We proposed a roadmap that balanced value, cost, and complexity.”

That’s not just competent. It’s thoughtful. And thoughtful is what real analysts need to be.
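The trade-off the students describe, weighing each feature's value against its cost and complexity before recommending AI, can be sketched as a simple scoring exercise. The feature names, scores, and threshold below are illustrative assumptions, not details from the course:

```python
# Hypothetical sketch of a value/cost/complexity trade-off for deciding
# which features warrant AI. All features, scores, and the threshold are
# illustrative, not taken from the students' actual analysis.

FEATURES = {
    # feature: (value, cost, complexity), each on a 1-5 scale
    "personalized recommendations": (5, 3, 4),
    "event scheduling":             (2, 1, 1),
    "content moderation":           (4, 2, 3),
}

def recommend_ai(value, cost, complexity, threshold=1.5):
    """Recommend AI only when expected value clears the combined burden."""
    return value - 0.5 * (cost + complexity) >= threshold

for name, (v, c, x) in FEATURES.items():
    verdict = "use AI" if recommend_ai(v, c, x) else "skip AI"
    print(f"{name}: {verdict}")
```

A rubric like this is no substitute for judgment, but making the weights explicit forces exactly the kind of justification the AI-agnostic teams had to produce.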

What Educators Can Do Differently

Systems Analysis education doesn’t need a total overhaul. But it does need to move beyond applying techniques.

Here are five low-cost changes that made a difference in our course:

• Introduce ambiguity. Don’t decide the tech in advance. Let students determine if AI fits the problem.

• Bring in real stakeholders. Student entrepreneurs or campus orgs work fine. What matters is that the interaction feels real.

• Assess reasoning, not just artifacts. Ask: Why this solution? How does it create value?

• Treat students as advisors. Position them as consultants, not requirement scribes.

• Use reflection to build identity. Prompt students to articulate what they’re learning, not just about tools, but about themselves.

What They Learned to See

The biggest shift wasn’t in the wireframes students submitted. It was in what they noticed.

At the start of the semester, one student described business analysis as “gathering requirements and documenting them correctly.” By the end, they wrote:

“We’re not just documenting what clients say. We’re interpreting what they mean, identifying needs they haven’t voiced, and making judgment calls about what technologies actually fit.”

That’s the shift we should aim for. From task completion to judgment. From execution to reasoning. And that’s the kind of thinking we need in a world where AI systems are easy to build but hard to justify.

As educators, we must help students learn not just to build, but to decide why something should be built at all.

Shawn Ogunseye is Assistant Professor of Computer Information Systems at Bentley University, Waltham, MA. His work sits at the intersection of enterprise systems architecture, AI strategy, and data governance—where the hardest choices shape enduring advantage.
