GitHub was forced into a hasty retreat after a wave of protests: Copilot was inserting "advice" into pull requests written by humans, which many perceived as disguised advertising. The trigger was a message recommending that users "open Copilot Agent from anywhere via Raycast", complete with an installation link, which appeared in a PR after a simple spelling correction.
The controversy began with Australian developer Zach Manson, who, while searching GitHub, noticed the same text on more than 11,400 PRs. Other variants of these insertions, all attributed to Copilot, were spotted by inspecting the generated code and the blocks where the tool had intervened. The annoyance escalated when it emerged that Copilot was modifying descriptions and comments without any explicit action by the PR's author.
Martin Woodward, VP of Developer Relations at GitHub, publicly acknowledged the problem, stating that "GitHub has not and does not intend to add advertising to GitHub" and admitting that this change in behavior was poorly calibrated.
Tim Rogers, Head of Product for GitHub Copilot, clarified on Hacker News that the initial objective was to help developers discover new uses of the agent within their workflow. After the outcry, he conceded that letting Copilot modify manually written PRs without their authors' knowledge was a bad decision, and announced that these "tips" would be limited to PRs created or touched by Copilot.
A product governance misstep, not an advertising test
Fundamentally, the episode illustrates a problem of agent governance in collaborative environments: expanding the scope of action of a tool capable of writing and editing project text, without safeguards, blurs responsibility and attribution. That the message pointed to Raycast, and so read as a promotion, made the perception worse. The rapid withdrawal limits the damage, but trust now depends on fine-grained controls: explicit opt-in, traceability of modifications, limited editing scopes, and simple deactivation at the organization level.
Source: ITHome