The Commonwealth Attorney-General, Mark Dreyfus, yesterday announced that the Government will form a Copyright and Artificial Intelligence reference group “to better prepare for future copyright challenges emerging from AI.”
The Attorney-General and his department have held a number of roundtables over the course of the year to consult on a range of issues, including those arising from the use of AI tools.
According to the Media Release:
AI gives rise to a number of important copyright issues, including the material used to train AI models, transparency of inputs and outputs, the use of AI to create imitative works, and whether and when AI-generated works should receive copyright protection.
The reference group will be a standing mechanism for ongoing engagement with stakeholders across a wide range of sectors, including the creative, media and technology sectors, to consider issues in a careful and consultative way.
Engagement with a broad range of stakeholders and sectors will help Australia harness AI opportunities, while continuing to support the vitality of our creative sector.
The Media Release notes that the reference group will complement “other AI-related Government initiatives, including the work being led by the Minister for Industry and Science Ed Husic on the safe and responsible use of AI.”
The Media Release notes that further details, in addition to outcomes from the Roundtables, will be made available through the Attorney-General’s Department’s website in due course.
Some very quick thoughts
One would hope that complementing other agencies’ work will involve fairly close co-operation on at least some issues, since the question of authorship for copyright purposes raises similar issues to identifying the designer for registered designs or the inventor for patents – all three being predicated on an assumption of human agency.
It seems pretty clear under our law, following Telstra v PDC, that works generated by one or two simple “prompts” will not qualify for copyright protection as original works in Australia. The situation where the material results from much more detailed instructions is much more up in the air – both here and overseas.
In the USA, the Copyright Office’s Review Board has rejected a claim to copyright in a work resulting from 624 prompts and further ‘adjustments’ by the human ‘operator’ / claimant, Mr Allen:
There is increasing commentary likening the generation of materials through detailed prompts to the basis on which copyright is recognised as subsisting in photographs. According to the Review Board, however, Mr Allen’s arguments based on the inputting of detailed prompts did not establish authorship:
As the Office has explained, “Midjourney does not interpret prompts as specific instructions to create a particular expressive result,” because “Midjourney does not understand grammar, sentence structure, or words like humans.” It is the Office’s understanding that, because Midjourney does not treat text prompts as direct instructions, users may need to attempt hundreds of iterations before landing upon an image they find satisfactory. This appears to be the case for Mr. Allen, who experimented with over 600 prompts before he “select[ed] and crop[ped] out one ‘acceptable’ panel out of four potential images … (after hundreds were previously generated).” As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.” And because the authorship in the Midjourney Image is more than de minimis, Mr. Allen must exclude it from his claim. Because Mr. Allen has refused to limit his claim to exclude its non-human authorship elements, the Office cannot register the Work as submitted. (Footnotes and citations omitted)
Whether the US courts or, for that matter, an Australian court will follow that approach remains to be seen. Judge Howell, in rejecting Dr Thaler’s attempt to register copyright in “A Recent Entrance to Paradise” on purely administrative review grounds, outlined the argument in obiter:
A camera may generate only a “mechanical reproduction” of a scene, but does so only after the photographer develops a “mental conception” of the photograph, which is given its final form by that photographer’s decisions like “posing the [subject] in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation” crafting the overall image. Human involvement in, and ultimate creative control over, the work at issue was key to the conclusion that the new type of work fell within the bounds of copyright.
The position on the treatment of inputs is also up in the air. The Authors Guild and others have brought a number of cases against various AI operators, including OpenAI and Meta (developer of the LLaMA models), on the basis that training these LLMs involved the wholesale copying of the authors’ works into the operators’ databases.
A number of commentators argue these cases are likely to fail, however, in light of the Second Circuit’s ruling that the Google Books Project, in which Google scanned thousands of in-copyright books to create a searchable digital database, did not infringe copyright as a “fair use”.
Arguably, however, the nature and purpose of the uses are different and it will be interesting to see if the US Supreme Court’s decision in Andy Warhol Foundation v Goldsmith with its emphasis on the balancing nature of the inquiry will lead to a different outcome.
On the other hand, if the conduct is found to be a non-infringing use in the USA, Australian law does not have a corresponding, broadly based “fair use” defence. Can one argue that the AI is engaged in “research or study”? If not, what will the policy ramifications be for Australia? Will anyone develop AIs in Australia if training an AI in Australia does infringe copyright while it is not an infringement in, say, the United States? If it’s open slather, though, how will authors and publishers get paid?
Then there’s the question of infringement. It seems possible in at least some cases to find out what an LLM has been trained on – though how long that will remain the case must be a question. Ordinarily, a copyright owner under our law would approach this by demonstrating a close degree of resemblance to a copyright work together with the potential for access; a court would then consider whether the alleged infringer can explain how it developed the material independently (or whether some other defence applies).
We do have judicial statements that there is no infringement in copying style or ideas. Successful claims against emulation of style are pretty rare, but I guess the point of asking an AI to produce something “in the style of …” is that the AI will produce something new rather than something merely copied. Ultimately, that will depend on comparing what is produced with one (or, much less likely, more) copyright works.
Apart from the uncertainties about how our law will deal with these issues, it seems clear that careful consideration of how things are developing overseas is required and, in Dr Pangloss’ world, the development of uniform approaches.