IE’s Statement on AI Is A Good First Step, But More Work Is Needed

“We sit on the cusp of a revolution on the scale of the development of the steam engine or the launch of the internet”, declares IE University’s new statement on AI. Even though we’re still in the early stages of the AI revolution, developments in the field are already impacting students’ professional futures and school lives. 40% of IE graduates from degrees such as the Master in Computer Science and Business Technologies are already finding themselves working in AI. Meanwhile, a quick poll of students in my class revealed that 100% of respondents have used generative AI tools like ChatGPT to help with coursework in the past semester.

To keep up with the scope of technological change, IE must anticipate the impact AI will have on different fields and prepare graduates to face it. One of the first areas to be disrupted will be computer science and data. Overall, I think IE’s AI policy tackles this change effectively. The creation of programs such as the Bachelor in Computer Science and Artificial Intelligence is both necessary and praiseworthy. Giving computing students the opportunity to specialise in AI early is crucial to their career development, considering how quickly such tools will transform their field.

Content on AI is not limited to technical, computing-based degrees. According to IE’s new manifesto, law students will be taught how AI is being regulated, while students taking the Master in Applied Economics will be instructed on topics like big data and machine learning. It is also admirable that professors such as Brendan Anglin are incorporating AI-related content into their classes, for example by teaching students about the differences between research done by humans and research done by artificial intelligence tools. However, these changes are not enough.

All students polled for this article believed that professors didn’t do enough to promote and encourage the use of AI tools in class. There could be plenty of reasons for this. Firstly, although all students use AI during their coursework, we have not received any formal instruction on how to do so, and few classes regularly incorporate AI to help students learn. This is a shame, because there is no shortage of tools that could improve students’ experience and work quality. For example, otter.ai can transcribe audio and video automatically and in real time, before delivering a summary of the content covered at the end. If this tool were regularly used in class, nobody would struggle to get their hands on good-quality notes before exams.

Moreover, AI-powered search tools can scan research repositories such as arXiv using natural language processing, which can help students create literature reviews quickly. Meanwhile, ProWritingAid can help students improve their writing skills by providing editing suggestions. Instruction in how to take advantage of these tools should be offered in courses targeted at most IE students. For example, a lesson on using AI to improve pieces of writing and map out essay structures could be incorporated into the Writing Skills class. Research Methods could also teach students how to collect papers and conduct literature reviews with AI. This would not only help students throughout their academic career at IE but would also teach them skills that will be valuable in their professional futures.

Another reason students might feel AI isn’t being adequately promoted is how IE has responded to the threat of AI-facilitated cheating. This response has been both too vague and too severe. The new statement on AI does little to correct either of these shortcomings. I was surprised that no update to the IE Code of Ethical Conduct was included in the statement. Currently, the Code forbids plagiarism, but there is no mention of what counts as plagiarism when it comes to generative AI. Are citations needed to quote or paraphrase it? If a portion of an essay was generated using AI without citation, should that count as an instance of plagiarism? 40% of students polled for this article admitted that they’d used tools such as ChatGPT to create first drafts which they later proofread and polished. These students desperately need guidance on what types of AI assistance are permissible, and how to avoid unwitting ethics violations.

Secondly, the shift to closed-book Respondus exams has widely been attributed to fears over AI. This shift is somewhat understandable. According to the BBC, ChatGPT and similar tools can score nearly 100% on multiple-choice tests and can generate human-like text. However, there are alternatives to bans. For example, students could copy-paste their prompts and the chatbot’s responses into an appendix at the end of their work, so that professors can see which parts are the student’s and which are the AI’s. This method is already used by some British teachers and, while it’s not perfect, it could be a good alternative to the very unpopular Respondus policy.

IE’s new statement on AI is a great first step, but we can’t stop here. Currently, students are largely left to figure out how to use AI by themselves. At the same time, rules over when and how AI can be used remain worryingly unclear. Not only could this deter students from taking full advantage of the new technologies at their disposal, but it also opens the door to unwitting ethics violations by well-meaning students. To give its students a professional advantage, IE needs to promote the ethical use of AI in class. It should do so by updating its Code of Ethics and teaching all students how to use AI.

Featured image: Pixabay

Sabina Narvaez
Originally from Mexico, but mostly grew up abroad and has Spanish nationality. Studies Philosophy, Politics, Law and Economics and mostly writes about these topics. Also interested in sustainability.
