Initial Howard University Guidelines for use of Generative AI Tools

Overview

Generative AI technologies are often conflated with Artificial General Intelligence (AGI): the construction of an autonomous, sentient intelligence like the androids depicted in science fiction tales such as the Terminator saga or the android Data on Star Trek: The Next Generation. Rather, Generative AI refers to computer systems that produce various forms of human expression, such as written summaries, composite images of humans, animals, and human-designed structures, videos, music, or vocal mimicry. Because these technologies will continue to develop, it is important to understand the complexities of their usage and their historical function as technological aids, many of which we are already familiar with through word-processing spelling and grammar checkers, as well as other automated guides built into everyday technologies like cellphones and tablets.

It is imperative, then, to organize and affirm values, guidelines, and assumptions about these tools because writing and voice production remain important modes of critical, artistic, and sense-based learning vital to understanding and cognitive development. Tools such as ChatGPT, an AI writing tool, represent a concerted shift in our learning practices and definitions of integrity, and thus require informed engagement. Below are information and recommendations for learning about and engaging these tools within academic settings by faculty, students, and other University stakeholders.

Core Values and Mitigating AI Bias

Within our Core Values, “Howard’s aim is to forward the development of scholars and professionals who drive change and engage in scholarship that provides solutions to contemporary global problems, particularly ones impacting the African Diaspora.” Artificial intelligence models and other content-generating tools fail to grasp the diverse nuances of human opinion, thought, and language, especially within the African Diaspora. Frequently, AI technologies echo the dominant viewpoints of certain groups, spread misinformation, or simply present inaccurate or fabricated information as fact. As scholars and learners, all stakeholders should make mitigating such bias a major priority. Howard University prepares students to be leaders, and our faculty engage and prepare students to be the architects and builders of new technologies, not solely their users. To that end, the University expects students and faculty to adhere to the highest ethical and moral standards of conduct.

Protect confidential data

Privacy remains an important consideration for all of these technologies, as the collection and use of user information and habits have become a default setting within most of these tools. One should assume that information shared with generative AI tools is not private, and one should be mindful of what sensitive information may become accessible through use of these AI tools and programs.

Prioritize understanding AI tools and their effects

Since such tools are being developed and integrated into common academic applications such as Microsoft 365 and Google Docs, you should spend time developing an awareness of how these programs operate and familiarize yourself with the processes that enable generative AI technologies to produce artifacts and documents.

Expand AI literacy

The popularity of generative AI tools demands an expansive set of AI literacies to be developed across colleges, disciplines, departments, and programs. The ethical implications of how AI models are developed and deployed must be areas faculty and students explore together.

Promote an ethic of transparency around any use of AI text

Schools, colleges, instructors, and students should all participate in creating policies and guidance materials that emphasize the value of rigorous intellectual processes for learning and that provide ethical guidelines for using such technologies as resources rather than replacements for intellectual work.

Take active responsibility for any research or information developed using AI tools

AI tools tend to “scrape,” or take, information from a variety of unnamed sources to produce content or products. Users should be aware of this because AI tools can produce false, unverifiable, or confusing content, images, or products; users are responsible for correcting such misleading information and for providing verified sources for any documents or products developed using these tools.

Adhere to current policies on academic integrity

Schools and colleges, as well as departments and faculty, should develop their own policies regarding the use of AI tools in accordance with the research, writing, and citation expectations of the different disciplines housed within each school or college. There will likely be no one-size-fits-all approach for deciding how AI technologies should or can be used for particular forms of research or learning. But there should be a clear and understood set of parameters and expectations for the use of such tools, one that clearly outlines the penalties for violating those expectations regarding academic work within the classroom, department, college, and university.

Understand the ethical and labor implications of AI Writing tools

Since academe relies on credible citation practices to measure the reliability and clarity of academic research and knowledge, users should be aware that many AI research tools do not offer the verification of source material expected within academic scholarship and culture. Because AI tools do not adhere to such expectations and can actively distort information or fabricate it outright, users of this technology should understand the limits of what they can ethically do with the information or products these technologies generate.

Fairness in Policing AI

Academic dishonesty is a clear concern related to such technologies. According to the Academic Code of Student Conduct, it is a cause for concern when students submit work for assessment as their own that has been substantially created using artificial intelligence or other content-generating tools without obtaining permission from the instructor. Any instructor has the right to inquire about a student's authorship of AI-related work, and students likewise have the right to defend themselves against such allegations. Fairness and transparency are key to fostering an innovative and safe learning environment.

Be alert for AI-enabled phishing

Generative AI has made it easier for malicious actors to create sophisticated scams at a far greater scale. Cybersecurity remains a top priority for the University. Continue to follow security best practices and report suspicious messages to huhelpdesk@howard.edu.
 

Conclusion

The implications of Artificial Intelligence and Generative AI tools for academic integrity at Howard University are broad and wide-ranging. These technologies are relatively new to the general public and should be subject to responsible exploration and experimentation by both instructors and students within classroom environments. However, it should be noted that Generative AI tools are not a substitute for critical thinking or serious academic rigor, though they may be useful adjunctive tools in the conduct of academic work.