China-US Innovation Times (中美创新时报)

Is using AI cheating? In many schools, teachers are left to figure it out on their own.

[China-US Innovation Times, compiled October 25, 2024] (Compiled by reporter Wen Youping 温友平) Two years after ChatGPT became widely available, states have been slow to roll out guidance on the use of artificial intelligence. Boston Globe reporters Steph Machado and Camilo Fonseca filed the following report.


The original English report follows:

Is using AI cheating? In many schools, teachers are left to figure it out on their own.

Two years after ChatGPT became widely available, states have been slow to roll out guidance on the use of artificial intelligence

By Steph Machado and Camilo Fonseca, Globe Staff. Updated October 24, 2024

OpenAI, the developer of ChatGPT, launched the product in November 2022. Since then, generative AI bots have popped up all over, including embedded in Meta products and kid-friendly apps such as SnapChat. Gabby Jones/Bloomberg

When Patrick Wygant assigned his ninth-grade world history students a paper on the Renaissance and Reformation last winter, one submission caught his eye. A student who typically has low reading and writing levels turned in a “brilliantly written” report.

Wygant, a teacher at Rogers High School in Newport, R.I., confronted the student, who quickly owned up to using artificial intelligence to write the paper.

Wygant and a fellow history teacher asked the rest of their ninth-graders if they, too, had used AI on the assignment. Roughly 70 out of 158 admitted to using the technology to generate their papers.

“Most of them were honest and said, ‘Oh yeah, I typed it into an AI and copy-and-pasted it,’ ” Wygant said. “They owned it right away.”

The teachers could have flunked them for plagiarism. But the student handbook did not have any guidance on the use of AI, and some students seemed unaware that what they did was wrong.

“We tried to use it more as a learning opportunity,” said Wygant, who now teaches a unit on AI and has adopted a clear policy in his classroom about how it can and cannot be used.

Uncertainty over AI in schools is widespread. In Massachusetts, parents of a high school student sued the Hingham public school district after the student was punished for using AI to help research and outline a history paper, arguing that the student handbook did not prohibit using the technology for those purposes at the time. The district contends the student’s actions amounted to plagiarism, and the teacher, Susan Petrie, testified in court Tuesday that the student did not cite the use of AI in his work. (Hingham has since amended its code of discipline to clarify that “unauthorized” use of AI amounts to cheating.)

Two years since the public launch of ChatGPT — one of the most popular generative AI tools — state education departments across New England have been slow to issue clear guidance to K-12 school districts. That leaves many teachers and schools to grapple with AI on their own.

“The thing that maybe we’re all suffering from is a lack of guidance from administrators,” said Chris Camille, a science teacher at Joyce Middle School in Woburn. “What should we be doing about this? I know some school districts have some policies, but they’re pretty vague and pretty unclear.”

To date, neither Massachusetts nor Rhode Island’s education departments have issued guidance about how to use (or not use) the technology in classrooms, though Rhode Island is crafting recommendations, and Massachusetts may do so in the future, spokespeople for both states said.

Some states are starting to offer training: New Hampshire’s Department of Education said it is piloting professional development statewide through the program Khanmigo. Vermont has also offered training through the International Society for Technology in Education, and the state’s Agency of Education said it hopes to do so again. (A survey by the state agency last year found just three districts had AI policies; an updated report is expected in December.) Connecticut is also working on guidance and policies, according to the state Department of Education, but doesn’t have any yet.

In the meantime, teachers are forced to answer a key question for themselves: When is using AI considered cheating?

Wygant, for example, adopted a policy from a University of Rhode Island writing class that makes clear that copying and pasting from a chatbot is plagiarism. But he allows students to use AI as a jumping-off point for research, as long as they fact-check the results with primary sources and clearly cite how AI was used.

“It’s like a fancy Google, a fancy Wikipedia,” Wygant said. “We want to hear what the experts have to say, not just what pops up on their computer screen. We want to vet our information, make sure it’s actually accurate.”

He shows students how the technology can be useful, but also that it has drawbacks: his lesson includes chatbots found to have racial bias, and students must test the accuracy of information the bot spits out.

“It’s a way to highlight for students that this is not a perfect technology,” Wygant said.

Related: Big AI on campus: ChatGPT is back for the fall semester

Some school districts in New England have banned AI altogether, while others are crafting policies on its use. But many have yet to tackle the subject.

That’s a mistake, argue Eric Klopfer and Daniella DiPaola of the MIT Media Lab.

“It’s harder to deal with a problem after the fact,” said Klopfer, a professor and director of the teacher education program. “So the first thing is to have really clear policies about where you can use it and where you can’t use it.”

In a recent policy brief on AI in K-12 schools, the MIT experts said there is “little to no” federal guidance on how schools should implement the technology, and also little consensus among states and across school districts.

Education policy, by design, is mostly controlled by local officials. But with AI, Klopfer said, the technology moves so fast it’s hard for individual districts to keep up.

“This is not the kind of thing where you set up an acceptable-use policy for the internet and you can sort of set it and forget it, maybe revisit it in 10 years,” Klopfer said. “People just don’t have the expertise and awareness to make those decisions and update those decisions at the frequency which they need to be updated.”

Lincoln, R.I., is developing its own AI policy in the absence of guidance from the state. Superintendent Lawrence Filippelli said that until the policy is finalized, the district — like many others — blocks ChatGPT on its network.

“If you’re going to be sending these kids off into the workforce, no matter what they do, they’re going to be touching AI,” Filippelli said. “To shut the faucet off completely is not really the way to go.”

The MIT researchers, who favor mandatory AI literacy in schools, note that age is a factor. Just as younger students must learn to add and subtract by hand before they are given calculators as a tool, they should have to learn proper writing and research skills before being introduced to time-saving AI tools.

Training for teachers is also key. Students may already know more about AI, with the tool embedded in popular apps including SnapChat. A quarter of teachers surveyed by the Pew Research Center last year said they believed AI does more harm than good.

“Kids largely are the first ones to jump into these things, so they’re trying things out,” said Merve Lapus, vice president of education outreach for Common Sense Media, which promotes AI literacy. “If there’s not that guidance or those policies in place, we start to run into some of these kinds of unfortunate hiccups.”

Without training, Lapus said, teachers who instinctively crack down on AI use could mistake genuine work for artificially generated text.

Some experts warn against teachers using AI detection software to catch cheaters. “Those have a really high rate of false positives,” said DiPaola, a PhD student at the MIT Media Lab. “There are students who are being told that they’re using AI or ChatGPT and they haven’t actually used it.”

Instead of relying on detection software, Wygant, who has been teaching for 12 years, said he has students complete handwritten outlines, making it more difficult to copy and paste from a bot. And they write essays in Google Docs, which allows teachers to review the revision history.

“If I go and check the version history of a student’s paper and the whole paper was written in three minutes and the only thing that happened was somebody hit paste and all of a sudden seven paragraphs showed up, that’s a big red flag,” he said.

In the end, MIT’s Klopfer notes, academic dishonesty isn’t new — it’s just more widely accessible now.

“ChatGPT didn’t create cheating,” he said. “ChatGPT democratized cheating.”
