AI chatbots like OpenAI’s ChatGPT have repeatedly been found to fabricate sources and facts and to deliver incorrect answers with unwarranted confidence, misleading users in the process. Because of these issues, many educators approach AI tools with skepticism. Yet, despite these concerns, OpenAI and its competitors are aggressively targeting college campuses, promoting their services to students without hesitation.
According to a report by the New York Times, OpenAI is currently driving a major initiative to embed ChatGPT deeply into college life, aiming to replace many traditional elements of the college experience with AI-driven alternatives. The company envisions every student receiving a “personalized AI account” upon arrival at campus—akin to receiving a school email address. ChatGPT is expected to serve multiple roles, including that of a personal tutor, teaching assistant, and career advisor to help students navigate their studies and plan their futures.
Several universities have already begun adopting these AI tools despite earlier resistance, and even outright bans, in educational settings. Institutions such as the University of Maryland, Duke University, and California State University have subscribed to OpenAI’s premium ChatGPT Edu service and are integrating the chatbot into various educational activities. OpenAI isn’t alone in this space: Elon Musk’s xAI provided free access to its chatbot Grok during exam periods, and Google is offering its Gemini AI suite free to students through the 2025-26 academic year. Unlike those limited promotions, however, OpenAI is striving to embed its tools directly within the core infrastructure of higher education.

The shift from initial skepticism to embracing AI in universities is concerning for many educators, and growing evidence suggests that AI may actually hinder deep learning and the retention of accurate information. A study published earlier this year found that over-reliance on AI tools can diminish critical thinking abilities, and other research indicates that users tend to offload challenging cognitive tasks to AI, using it as a shortcut rather than engaging deeply with material. This runs counter to the fundamental purpose of higher education: to cultivate analytical thinking. Beyond cognitive impacts, the risk of misinformation is significant. For instance, researchers who tested various AI models on a patent law casebook found that the models frequently generated false information, invented non-existent cases, and made numerous errors. They reported that OpenAI’s GPT model produced answers deemed “unacceptable” and “harmful for learning” about 25% of the time, an alarming rate in academic contexts.
Moreover, as OpenAI and others push to embed chatbots into every facet of student life, additional drawbacks arise. Over-dependence on AI can negatively affect students’ social skills. University investments in AI may inadvertently reduce funding and focus on initiatives that foster human interaction. For example, meeting with a tutor involves social engagement, emotional intelligence, and building trust, all of which contribute to a supportive learning community. In contrast, a chatbot simply delivers an answer—whether accurate or not—without creating connection or nurturing belonging.