On Sunday, 21-year-old Chungin “Roy” Lee revealed that his controversial startup, Cluely, has raised $5.3 million in seed funding. Backed by Abstract Ventures and Susa Ventures, Cluely has drawn attention—and criticism—for developing an AI tool that allows users to “cheat on everything.”
Cluely was born out of controversy: Lee and co-founder Neel Shanmugam were suspended from Columbia University after developing an AI-powered assistant that helped software engineering candidates cheat during technical interviews. What began as a project called “Interview Coder” has since evolved into a broader platform aimed at helping users gain an edge in job interviews, exams, and even sales calls.
The San Francisco-based startup’s AI tool runs in a hidden browser window, invisible to exam proctors or interviewers, and feeds users real-time responses. According to the company, this allows users to gain an edge in situations where traditional preparation may not suffice. Unsurprisingly, this technology has sparked ethical debates.
A Manifesto of Disruption
Cluely’s founders are leaning into the controversy. The startup published a manifesto comparing its technology to innovations like the calculator and spellcheck, arguing that those tools were once considered cheating but are now universally accepted. Cluely’s core message is that AI should be integrated seamlessly into human processes—even if that means disrupting long-standing norms.
To support their message, Cluely released a provocative marketing video showing CEO Roy Lee on a date, secretly using AI to lie about his age and feign knowledge of art. Though meant to be humorous, the video proved polarizing: some viewers applauded its creativity, while others found it unsettling, comparing it to an episode of the dystopian TV series Black Mirror.
Despite the backlash, Lee is unapologetic, and the business appears to be growing: Cluely reportedly passed $3 million in annual recurring revenue (ARR) in early April. For a company that has yet to celebrate its first anniversary, those figures are impressive, though they also raise serious concerns about how far users will go to gain an edge in professional settings.
The Founders and Their Origins
Both Roy Lee and Neel Shanmugam were students at Columbia University before launching Cluely. They’ve since dropped out, citing their disciplinary issues and the overwhelming momentum behind their startup. Columbia University declined to comment on their suspensions, citing student privacy laws.
The original product, Interview Coder, targeted developers preparing for technical interviews. Specifically, it helped users navigate platforms like LeetCode, which is known for its algorithm-focused coding challenges. Many software engineers view these assessments as outdated and irrelevant to real-world programming. Cluely’s founders argue that using AI to “cheat” on them is not dishonest—it’s efficient.
Lee even claims that he secured an internship at Amazon using Cluely’s technology. Although Amazon did not comment on Lee’s situation directly, the company did reiterate that all candidates must agree not to use unauthorized tools during the recruitment process.
A Broader AI Movement
Cluely isn’t the only AI startup making waves this month. A prominent AI researcher recently announced a venture with the bold goal of automating all human labor. That announcement stirred intense debate on social media, particularly on X (formerly Twitter), where some hailed it as visionary and others dismissed it as dystopian.
The rise of startups like Cluely is sparking a critical conversation about the boundaries of artificial intelligence. Should AI be used to help people navigate systems deemed flawed or outdated? Or are these tools simply giving users an unfair advantage?
Cluely’s backers seem to believe it’s the former. Abstract Ventures and Susa Ventures are betting that users will embrace tools that save time and effort, especially if the alternative is months of studying for interviews or certifications.
Still, critics argue that normalizing these practices could erode trust in educational institutions, hiring processes, and professional interactions. If everyone uses AI to cheat, how can we measure real knowledge or capability?
What’s clear is that Cluely’s model challenges traditional norms. By promoting a system where people can rely on AI to navigate complex or high-pressure situations, the startup may reshape how society views intelligence, preparation, and fairness.
Whether Cluely becomes a long-term player in the AI landscape or simply sparks a short-lived controversy remains to be seen. But one thing is certain: it’s forcing institutions, companies, and individuals to confront some difficult questions about what it means to be qualified, competent, and honest in the age of artificial intelligence.