Grammarly’s new AI-powered “expert review” feature, which offers users tailored writing advice “inspired by” subject matter experts, has sparked controversy after reports revealed that it uses the identities of real people without their explicit permission.
What is Grammarly’s Expert Review Feature?
Grammarly recently introduced a feature aimed at enhancing its writing assistance by generating feedback “inspired by” experts across various fields. It allows users to receive advice that mimics the style and expertise of professionals such as professors and specialists, effectively producing AI-generated critiques tailored to specific subject matter.
The company promotes this as a way to bring deeper insight into writing improvement by simulating the perspective of recognized experts. However, the new service has raised ethical concerns regarding the use of real individuals’ identities.
Controversy Over Identity Use Without Permission
Reports surfaced indicating that the AI models behind the expert reviews utilize the names and personas of actual experts—sometimes including those who have recently passed away—without their consent or that of their estates. This approach has drawn criticism for potentially infringing on personal rights and intellectual property.
Some users testing the feature discovered surprising matches, including professionals currently working at their own companies, raising questions about privacy and consent. Legal and tech commentators have described the use of real identities without prior approval as a problematic practice.
Potential Ethical and Legal Implications
The practice of generating AI content based on real individuals’ identities could expose Grammarly to legal challenges related to rights of publicity and defamation, especially if the AI-generated advice does not accurately represent the expert’s views. Furthermore, ethical concerns revolve around transparency, consent, and the possible distortion of an individual’s reputation.
Experts in AI ethics argue that companies deploying such capabilities should implement safeguards and clear disclosures to protect individual rights and maintain user trust. Without these protections, platforms risk losing credibility and inviting regulatory scrutiny.
Grammarly’s Response and Industry Impact
Grammarly has not yet released a detailed public statement addressing the backlash or clarifying the steps it plans to take to mitigate these issues. The company’s silence has left users and industry observers awaiting reassurances regarding privacy protections and the ethical use of AI-generated content.
This incident highlights a broader challenge in the AI ecosystem, where innovation advances rapidly while regulatory frameworks and ethical standards lag behind. It serves as a case study in balancing technological capability with respect for individual rights in AI applications.
Future Outlook for AI-Powered Writing Tools
The development of AI tools that simulate expert feedback marks significant progress in enhancing digital writing assistance. However, this controversy underscores the critical need for clear guidelines around data usage, consent, and transparency to ensure ethical adoption.
Going forward, companies in the AI writing space must navigate the complexities of legal rights and ethical considerations while continuing to innovate. User trust and responsible AI practices will likely become pivotal factors shaping the evolution of such technologies.
