AI Agents: Guidelines and Concerns

Person interacting with Agentic AI interface on laptop

You may have heard the terms agentic AI or agentic browser. These refer to applications or web browsers (such as Comet, Atlas, Neon and Nanobrowser) that use AI to execute complex tasks or workflows. This goes beyond simple capabilities. The idea is that an agent can act autonomously or semi-autonomously on your behalf to automate routine.

Whereas you might ask a chatbot to write an email for you, you might ask the AI to draft the email, send it and monitor for replies. These are extremely powerful tools, but they come with limitations, vulnerabilities and implications for higher education. For now, they are found in standalone apps, as tools in some chatbots and as web browsers. These capabilities are being built into operating systems.

They suffer from the usual limitations of AI. They may hallucinate, producing wrong information. They can be slow. They may get stuck. On the other hand, they can automate routine work, help integrate the use of different applications, generally augment us in our work and can even take tests and complete some kinds of assignments.

Guidelines

  • Do not open Canvas and related tools with an unapproved agentic browser or agentic tool.
  • Do not use these tools with your University account without consulting with the Division of IT.
  • Consider how this may affect your course's AI policy.
    • If you allow AI for some purposes, you may wish to explicitly ban these more capable tools.
    • If you wish students to use them, please ask students not to use them with Canvas or related tools. They may be capable of picking up otherwise restricted information, such as the class roster in People.
  • Consider using Honorlock for purely online tests or LockDown Browser for in-seat tests with computers.
  • For face-to-face and hybrid courses, revert to paper quizzes and tests.
  • Use a flipped classroom so that assignments are completed in class, preferably in-seat where possible.
  • For online assignments in smaller courses, consider adding short audio or video components in which students explain aspects of their work focusing on process.
    • This might include explanations of how they researched for a writing assignment or how they worked through a program, problem or case.
    • This can also be used for assessments by adding follow-on assignments in which students pick a question or two and explain what challenged them, why they picked a wrong answer, etc.
    • Keep these recordings to under five minutes.
  • Where appropriate and practical, create assignments and assessments that focus on process, not product.

Concerns

Vulnerabilities: Privacy and security

As with other AI tools, there are substantial privacy concerns. In order to complete their tasks, AI agents need almost full access to your browser, your computers and your files in the cloud. This includes personal information, credit cards and accounts, your research and FERPA- and HIPAA-protected data. For this reason, your campus IT department may block installation of these tools on University-owned machines.

Because they have access to your accounts and can interact with applications on your computer or device, as well as online, and respond to other applications, they are open to hacking while also capable of doing more damage than traditional browsers or applications. This is particularly the case with agentic browsers. The main vulnerability is called a prompt injection attack. An agent may read hidden text in an email or on a webpage that instructs it to take specific actions circumventing security safeguards.

Agents and teaching

There have been several demonstrations that agents or agentic browsers can perform tasks in Canvas and other Learning Management Systems (LMS). In one example, by David Wiley, OpenAI's agent logged into Canvas, checked the To-Do list, took a quiz and wrote a short submission for an assignment. Anna Mills has also tried Comet and Atlas with quizzes and discussions in Canvas. Tim Mousel has demonstrated that this type of tool can log in, find assignments that are due and complete them. In another case, the Comet browser is said to have logged into Canvas and then begun grading and writing feedback without being asked to.

For purely online courses, we can no longer be certain that the work is being done by a human, let alone by the student submitting it. In such an environment, paranoia is at least as much a threat to teaching and an instructor's relationship with students as AI itself. It is important to maintain a balance. Whether you allow or disallow agentic AI, communicate with your students early and frequently about the tools, their limitations or dangers and the effects of using or not using them on their learning and cognition.

If you use AI yourself, be aware of privacy and security concerns, but also about how your students may view it. Even before agentic AI, there have been cases around the country of students dropping classes and asking for refunds because their instructors used generative AI to create course materials, grade or give feedback. Please check your campus or departmental policies on AI use before using agentic AI in your courses.

We are still at the beginning of the age of agentic AI and agentic browsers. In a sense, this is very much like the early part of 2023 when awareness of generative AI and its consequences were spreading, limitations were largely unknown and panic began to spread in academe. This time around, we have enough experience of AI tools to react more appropriately and deliberately.