Natasha Govender-Ropert, Head of AI for Financial Crimes at Rabobank, joined TNW founder Boris Veldhuijzen van Zanten to discuss AI ethics, bias, and whether we are outsourcing our brains to machines. You can watch the full interview, filmed in Kia’s all-electric EV9, here:
[Video Link]
One question that should concern us: as we increasingly rely on generative AI for answers, what impact might this have on our own intelligence?
A recent MIT study explored this question. Researchers gave 54 Boston-area students an essay-writing task. One group used ChatGPT, another used Google Search (without AI assistance), and the third wrote entirely on their own. Brain activity was measured during writing using EEG electrodes.
After three sessions, the brain-only group showed the highest levels of neural connectivity and the ChatGPT users the lowest, indicating that participants assisted by AI were less mentally engaged. In a fourth session, the roles were reversed: the brain-only group used ChatGPT, while the AI group wrote solo. The former improved; the latter struggled.
Overall, the study found that over four months, the brain-only participants outperformed the others at the neural, linguistic, and behavioral levels. Those using ChatGPT spent less time on their essays, often simply copying and pasting content.
English teachers noted a lack of original thought and “soul” in the ChatGPT-assisted essays. While concerning, these findings point to mental shortcuts rather than brain decay: over-relying on LLMs can reduce mental engagement, but thoughtful use mitigates the risks. The study was also too small to support definitive conclusions.
The researchers emphasized that sensationalized headlines misrepresent the findings. They created a website with an FAQ page urging reporters not to use inaccurate or sensational language.
Two conclusions can safely be drawn from this study:
1. More research into using LLMs in educational settings is crucial.
2. Students, reporters, and the public need to critically assess information, whether from media or generative AI.
Researchers from Vrije Universiteit Amsterdam warn that increasing reliance on LLMs may erode critical thinking—our ability to question and change social norms. Students might defer to authoritative AI output, overlooking underlying biases and assumptions.
These risks highlight a deeper issue in AI: when we take outputs at face value, we can overlook embedded biases and unchallenged assumptions. Addressing this requires critical reflection on what bias means.
Natasha Govender-Ropert noted that bias is subjective and needs to be defined for each individual and company. Social norms and biases are not fixed but evolve over time. We must remain critical of information from both humans and machines to build a more just and equitable society.