AI is changing how we work — is it time to change how we credit AI’s involvement?
A new IBM Research tool helps users describe how AI contributed to their work. It’s an attempt to create a voluntary, detailed attribution standard to make generative AI more transparent.
A recent survey by Microsoft and LinkedIn found that three out of four knowledge workers around the world are offloading some tasks to AI. While human writers are expected to acknowledge the work and ideas of other humans, how to credit an AI's efforts remains an open question.
Some organizations now have AI disclosure policies, but they typically stop short of asking for details on how AI was used, and to what extent. This one-size-fits-all approach overlooks the different ways that people are using generative AI to augment their creativity and productivity, said Jessica He, an IBM researcher specializing in user experience design and human-AI collaboration.
When He and her colleagues surveyed workers to understand their thinking around AI attribution, they found that people weighed AI’s contributions differently depending on the task.
“It depends on how much the AI is contributing, and whether the AI is proactively making these contributions or being explicitly asked,” said He. “This told us that we need new standards for attributing AI co-creative work, and it should be more granular than what we have now.”
IBM Research’s AI Attribution Toolkit is a first pass at formulating what a voluntary reporting standard might look like. Released this week, the experimental toolkit makes it easy for users to write an AI attribution statement that explains precisely how they used AI in their work.
Voluntary disclosure, of course, works on the honor system. It depends on users being willing to admit they used AI, and to accurately report how. The potential benefits include claiming ownership where they see fit and giving their audience greater confidence in the quality of their work.
The field is also pursuing more technical solutions to protect copyrighted material and discourage the misuse of AI-generated content. Methods for embedding invisible watermarks within AI-generated text, images, and tabular data are evolving, along with tools for detecting AI-generated content in the wild.
The legal lines around AI authorship are still coming into focus. The U.S. Copyright Office recently clarified in a widely anticipated report that some forms of AI-generated content could receive copyright protection if a human materially contributed to the content or changed it.
Norms around everyday AI use are even less defined. IBM researchers grew interested in the topic after surveying software developers at IBM about their experiences using watsonx Code Assistant.
“You’re supposed to include tags in the code to mark where you used the code assistant, but we found that developers were reluctant to do that,” said Justin Weisz, a senior research scientist at IBM and proponent of human-centered AI. “In some cases, they felt like they’d be embarrassed if their peers knew they’d used the code assistant instead of just writing the code themselves.”
To learn more about the psychology behind AI attribution, the researchers designed a separate study to look at how workers perceived AI’s contributions in different scenarios. In a survey of 155 workers across IBM, researchers found that it mattered how much the AI contributed, and in what ways.
The quality of the AI’s work and its originality were also considered relevant. With their colleague Stephanie Houde, He and Weisz recently presented their study at the CHI conference on Human Factors in Computing Systems in Yokohama, Japan.
The researchers found that study participants generally attributed more credit to the human than to the AI for equivalent work. They hypothesize that this bias may stem, among other reasons, from the perception that cognitive work is more taxing for humans.
“If the AI wrote the article, and the human reworded it, researched elements, and referenced it, it would just be like anything else on the internet,” one participant commented. “I think it comes down to the effort that was put in by the human.”
The researchers integrated the findings of their survey into the AI Attribution Toolkit. “We designed the framework to start to push for standards at a more granular level,” said He.
The AI Attribution Toolkit is essentially a questionnaire that asks users to describe their work in a standardized format reminiscent of the Creative Commons license. First released in 2002, in the early days of blogging, Creative Commons licenses made it easier for artists, photographers, and others to set the legal terms for how their work could be used. Today, an estimated 2.5 billion works online are covered by a Creative Commons license.
The IBM toolkit prompts users with three main questions: How much work did the human do relative to the AI? What were the AI’s contributions? Who reviewed and approved the final work? With a click, the user can then generate a formal attribution statement in short or long form.
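To make that flow concrete, here is a minimal sketch of how answers to those three questions could be assembled into a short-form statement. The field names, answer options, and statement grammar below are illustrative assumptions for this article, not the toolkit's actual schema.

```python
# Illustrative sketch only: the toolkit's real fields and statement
# grammar may differ from the hypothetical ones used here.
from dataclasses import dataclass

@dataclass
class AttributionAnswers:
    human_share: str              # e.g. "mostly human", "equal", "mostly AI"
    ai_contributions: list[str]   # e.g. ["drafting", "editing", "research"]
    reviewed_by: str              # e.g. "human", "AI", "both"

def short_statement(answers: AttributionAnswers, version: str = "v1.0") -> str:
    """Compose a compact, Creative-Commons-style attribution string."""
    roles = ", ".join(answers.ai_contributions) or "none"
    return (f"AI contributions: {roles}; balance of work: {answers.human_share}; "
            f"reviewed and approved by: {answers.reviewed_by} (AIA {version})")

print(short_statement(AttributionAnswers(
    human_share="mostly human",
    ai_contributions=["drafting", "editing"],
    reviewed_by="human",
)))
```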
Many professions have tightened their attribution standards to improve accountability, as well as to create a more collaborative, collegial culture by preventing any one person from claiming sole credit. Many news outlets now routinely name the people without bylines who assisted on stories behind the scenes.
Scientists also have a system for divvying up credit for published work: the Contributor Roles Taxonomy, or CRediT. Most major journals now require each co-author to outline their precise contributions before a paper can be published. The taxonomy was another source of inspiration for IBM researchers in designing the AI Attribution Toolkit.
The toolkit is still in its first version, as the researchers note in their attribution statement appended to their CHI paper: “This work was produced without AI assistance (AIA No AI v1.0).”
“There will be cases where these attribution statements don’t quite capture how a creator incorporated AI into their work,” said Weisz. “But having some form of detailed AI disclosure is important, and we hope the community will have suggestions on how the framework can be improved.”
Whether reducing the friction of AI attribution will encourage people to be more forthcoming about their use of AI assistants remains to be seen.
“There’s this fear that disclosing AI use may overshadow our own contributions, and make it seem like we don’t have the skills to produce the content,” Weisz added. “But by making the creative process more transparent, people may feel empowered to be more forthcoming.”
This article was written without AI assistance (AIA No AI v1.0).