
An AI is not an inventor after all (or yet)

A strong Full Bench of the Federal Court of Australia has ruled that DABUS, an artificial intelligence, is not an inventor for the purposes of patent law. So, Dr Thaler’s application for DABUS’ patent has been rejected.[1] No doubt the robot will be back again[2] and we can expect that an application for special leave will be pending soon.


Dr Thaler had applied for a patent, No. 2019363177 entitled “Food container and devices and methods for attracting enhanced attention”, naming DABUS – an acronym for ‘device for the autonomous bootstrapping of unified sentience’ – as the inventor.

The Commissioner had rejected the application under reg 3.2C(2)(aa) for failure to identify the inventor. That rejection was overturned by Beach J on appeal from the Commissioner. This was the Full Court's decision on the Commissioner's appeal from Beach J.

Essentially, the Full Court ruled that an inventor for the purposes of patent law must be a natural person, not an artificial intelligence.

The Full Court held that identification of the “inventor” was central to the scheme of the Act. This is because, under s 15, only the inventor or someone claiming through the inventor is entitled to a patent.

Under the legislation before the 1990 Act, their Honours considered that an ‘actual inventor’ could be only a person with legal personality. At [98], their Honours summarised:

In each of these provisions, the ability of a person to make an application for a patent was predicated upon the existence of an “actual inventor” from whom the entitlement to the patent was directly or indirectly derived. Paragraphs (a), (c) and (e) describe the actual inventor as, respectively, a person, one that is deceased and has a legal representative (which must be a person), and one that is not resident in Australia. Paragraphs (b), (d), (f) and (fa) all contemplate an assignment happening between the patent applicant and the actual inventor. It is clear from these provisions that only a person with a legal personality could be the “actual inventor” under this legislative scheme.

This scheme, and its consequences, did not materially change under the 1990 Act.

Acknowledging that none of the case law had had to consider whether an AI could be an inventor, the Full Court noted that the ‘entitlement’ cases proceeded on the basis that ‘inventor’ meant the ‘actual inventor’. Their Honours considered the cases interpreting this expression were all premised on the ‘actual inventor’ – the person whose mind devised the claimed invention – being a natural person. At [105] and [106], their Honours explained:

None of the cases cited in the preceding five paragraphs confronted the question that arose before the primary judge of whether or not the “inventor” could include an artificial intelligence machine. We do not take the references in those cases to “person” to mean, definitively, that an inventor under the Patents Act and Regulations must be a human. However, it is plain from these cases that the law relating to the entitlement of a person to the grant of a patent is premised upon an invention for the purposes of the Patents Act arising from the mind of a natural person or persons. Those who contribute to, or supply, the inventive concept are entitled to the grant. The grant of a patent for an invention rewards their ingenuity.

Where s 15(1)(a) provides that a patent for an invention may only be granted to “a person who is an inventor”, the reference to “a person” emphasises, in context, that this is a natural person. …. (emphasis supplied)

Given that conclusion, and the structure of s 15, Dr Thaler’s argument that he was entitled on the basis of ownership of the output of DABUS’ efforts was to no avail. At [113]:

… having regard to the view that we have taken to the construction of s 15(1) and reg 3.2C(2)(aa) [i]t is not to the point that Dr Thaler may have rights to the output of DABUS. Only a natural person can be an inventor for the purposes of the Patents Act and Regulations. Such an inventor must be identified for any person to be entitled to a grant of a patent under ss 15(1)(b)-(d). (emphasis supplied)

The Full Court then drew support from the High Court’s reasoning in D’Arcy v Myriad, especially at [6], in which the majority emphasised that patentable subject matter had to be the product of “human action”.

Although not put in this way, it is apparent that policy considerations played a significant role in their Honours’ conclusion. At [119] to [120], their Honours pointed out:

in filing the application, Dr Thaler no doubt intended to provoke debate as to the role that artificial intelligence may take within the scheme of the Patents Act and Regulations. Such debate is important and worthwhile. However, in the present case it clouded consideration of the prosaic question before the primary judge, which concerned the proper construction of s 15 and reg 3.2C(2)(aa). In our view, there are many propositions that arise for consideration in the context of artificial intelligence and inventions. They include whether, as a matter of policy, a person who is an inventor should be redefined to include an artificial intelligence. If so, to whom should a patent be granted in respect of its output? The options include one or more of: the owner of the machine upon which the artificial intelligence software runs, the developer of the artificial intelligence software, the owner of the copyright in its source code, the person who inputs the data used by the artificial intelligence to develop its output, and no doubt others. If an artificial intelligence is capable of being recognised as an inventor, should the standard of inventive step be recalibrated such that it is no longer judged by reference to the knowledge and thought processes of the hypothetical uninventive skilled worker in the field? If so, how? What continuing role might the ground of revocation for false suggestion or misrepresentation have, in circumstances where the inventor is a machine?

Those questions and many more require consideration. Having regard to the agreed facts in the present case, it would appear that this should be attended to with some urgency. However, the Court must be cautious about approaching the task of statutory construction by reference to what it might regard as desirable policy, imputing that policy to the legislation, and then characterising that as the purpose of the legislation …. (emphasis supplied)

Finally, in this quick reaction, it can be noted that the Full Court recognised that their Honours’ decision was consistent with the English Court of Appeal’s decision on the counterpart application. Their Honours considered, however, that there were sufficient differences in the legislative schemes that a wholly autochthonous solution should be essayed.

Commissioner of Patents v Thaler [2022] FCAFC 62 (Allsop CJ, Nicholas, Yates, Moshinsky and Burley JJ)


  1. Patent application No. 2019363177 entitled “Food container and devices and methods for attracting enhanced attention”.
  2. With apologies to you know who.

Thaler: the robots have arrived DownUnder

In what may well be a world first,[1] Beach J has upheld Thaler’s appeal from the Commissioner, ruling that an AI can be an inventor (or at least that someone who derives title to the invention from an AI can be an entitled person).

Stephen L. Thaler applied for a patent, AU 2019363177, entitled “Food container and devices and methods for attracting enhanced attention”.[2] The application named the inventor as:

DABUS, The invention was autonomously generated by an artificial intelligence

The application was made through the PCT and, as a result, reg 3.2C(2)(aa) required the applicant to provide the name of the inventor. The Commissioner relied on the identification provided to reject the application on the basis that an “inventor” must be a natural person, which an AI obviously was not.

Beach J rejected this approach. At [10], his Honour summarised his conclusions:

in my view an artificial intelligence system can be an inventor for the purposes of the Act. First, an inventor is an agent noun; an agent can be a person or thing that invents. Second, so to hold reflects the reality in terms of many otherwise patentable inventions where it cannot sensibly be said that a human is the inventor. Third, nothing in the Act dictates the contrary conclusion.

There are a number of strands to his Honour’s reasoning. Perhaps the most striking feature is his Honour’s emphasis on the role of patents in encouraging technological development and the importance of not impeding progress by shutting out new developments.

DABUS

Dr Thaler was the owner of the copyright in DABUS’ source code. He was also responsible for and the operator of the computer on which DABUS operated. He did not, however, produce the claimed invention.

As the description of the “inventor” indicates, the claimed invention was an output from the operation of DABUS.

Beach J did not propose to offer a definition of “artificial intelligence”. His Honour considered that DABUS, at least as described in Dr Thaler’s evidence, was not just a “brute force computational tool”. Instead, it was appropriate to describe DABUS as semi-autonomous.[3]

DABUS itself consisted of multiple neural networks. At first, the connections between these networks were trained with human assistance by presentation of fundamental concepts. As the system accumulated knowledge, the networks connected themselves into increasingly longer chains. DABUS then became capable of generating notions itself – the unsupervised (by humans) generative learning phase. Once in this phase, DABUS was capable of random generation of concepts and of identifying those which were novel. In addition, it was trained or programmed to identify significant concepts, which continued operation further reinforced. At [41], Beach J summarised:

The upshot of all of this, which I accept for present purposes, is that DABUS could be described as self-organising as a cumulative result of algorithms collaboratively generating complexity. DABUS generates novel patterns of information rather than simply associating patterns. Further, it is capable of adapting to new scenarios without additional human input. Further, the artificial intelligence’s software is self-assembling. So, it is not just a human generated software program that then generates a spectrum of possible solutions to a problem combined with a filtering algorithm to optimise the outcome.
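For readers who find it easier to see the loop than to read about it, the sequence described above – generate candidate concepts, filter for novelty, reinforce the salient ones – can be caricatured in a few lines of Python. This is purely an illustrative toy under invented assumptions (the feature names, scoring rule and numbers are all made up); it says nothing about DABUS’s actual neural architecture, which Dr Thaler’s evidence describes quite differently:

```python
import random

def novelty(concept, memory):
    """Toy novelty score: fraction of the concept's features not already in memory."""
    seen = set().union(*memory) if memory else set()
    return len(set(concept) - seen) / len(concept)

def generative_phase(feature_pool, rounds, threshold, seed=0):
    """Repeatedly combine features into candidate 'concepts', keep the novel ones,
    and reinforce the features of kept concepts so later sampling favours them."""
    rng = random.Random(seed)
    memory = []                            # retained (novel) concepts
    weights = {f: 1.0 for f in feature_pool}
    for _ in range(rounds):
        # sample a 3-feature candidate, biased toward previously reinforced features
        concept = tuple(rng.choices(list(weights), weights=list(weights.values()), k=3))
        if novelty(concept, memory) >= threshold:
            memory.append(set(concept))
            for f in concept:              # reinforcement step
                weights[f] += 0.5
    return memory

ideas = generative_phase(["lid", "pit", "fractal", "rim", "wall"],
                         rounds=50, threshold=0.3)
```

The point of the caricature is only that generation, novelty filtering and reinforcement are separable steps; the legal question is what follows when no human performs any of them.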

Beach J accepted at [42] Dr Thaler’s evidence that:

DABUS, and its underlying neural paradigm, represents a paradigm shift in machine learning since it is based upon the transient chaining topologies formed among associative memories, rather than activation patterns of individual neurons appearing within static architectures ….

Beach J’s reasons

At [118], Beach J pointed out that no specific provision in the Act precluded an artificial intelligence from being an “inventor”.

Secondly, Beach J considered that patent law is different from copyright law, which specifically requires human authors and recognises moral rights.

Thirdly, as it was not defined in the Act, the term “inventor” should be given its ordinary meaning. Dictionary definitions did not help with this as more was required “than mere resort to old millennium usages of that word.”[4] Instead, at [120]:

the word “inventor” is an agent noun. In agent nouns, the suffix “or” or “er” indicates that the noun describes the agent that does the act referred to by the verb to which the suffix is attached. “Computer”, “controller”, “regulator”, “distributor”, “collector”, “lawnmower” and “dishwasher” are all agent nouns. As each example demonstrates, the agent can be a person or a thing. Accordingly, if an artificial intelligence system is the agent which invents, it can be described as an “inventor”.

Importantly, Beach J took into account the purpose of the patents system. At [121], his Honour explained:

in considering the scheme of the Act, it has been said that a widening conception of “manner of manufacture” is a necessary feature of the development of patent law in the twentieth and twenty-first centuries as scientific discoveries inspire new technologies.[5]

Accordingly, it made little sense to apply a flexible conception of subject matter – “manner of (new) manufacture” under s 18(1)(a) – but a restrictive interpretation of “inventor”. This would mean an otherwise patentable invention could not be patented because there was no “inventor”.

Beach J’s purposive approach was reinforced by reference to the newly enacted objects clause:

The object of this Act is to provide a patent system in Australia that promotes economic wellbeing through technological innovation and the transfer and dissemination of technology. In doing so, the patent system balances over time the interests of producers, owners and users of technology and the public.

It was not necessary to identify an ambiguity before resorting to the objects clause. The objects clause should always be considered when construing legislation.

It was consistent with the object of the Act to interpret “inventor” in a way which promoted technological innovation. Allowing “computer inventorship” would incentivise computer scientists to develop creative machines. Moreover, the object of the Act would not be advanced if the owners of creative computers resorted to trade secret protection instead of the patent system.

At least, with respect, his Honour’s approach shows that s 2A can operate in favour of technological development by encouraging patenting rather than, as many feared, a cat’s paw justifying resort to an ex post analysis[6] in favour of ‘user rights’.

In a further bold development, Beach J considered ([135] – [145]) that the Act is “really concerned” with inventive step. Under s 7(2), the issue was whether or not the patent claimed a sufficient technological advance over what had gone before to warrant the grant of a monopoly. How that advance was made was not relevant.

Descending to the detail of s 15, Beach J first pointed out at [157] that s 15 is directed to who may be granted a patent and does not define who is an inventor. Beach J accepted that DABUS was not a person and so, under s 15, could not be a person entitled to the grant of a patent. Even though s 15(1)(a) identified “a person who is the inventor” as the first person entitled to the patent, this was not determinative; it was directed to a different issue from the definition of “inventor”.

Although DABUS could not be an entitled person, Beach J found that Dr Thaler qualified as an entitled person at least on the basis of s 15(1)(b). At [167], his Honour explained:

Dr Thaler is the owner, programmer and operator of DABUS, the artificial intelligence system that made the invention; in that sense the invention was made for him. On established principles of property law, he is the owner of the invention. In that respect, the ownership of the work of the artificial intelligence system is analogous to ownership of the progeny of animals or the treatment of fruit or crops produced by the labour and expense of the occupier of the land (fructus industrialis), which are treated as chattels with separate existence to the land.

By this means, his Honour neatly side-stepped philosophical questions, some of which he had adverted to earlier. For example, at [131] Beach J asked:

If the output of an artificial intelligence system is said to be the invention, who is the inventor? And if a human is required, who? The programmer? The owner? The operator? The trainer? The person who provided input data? All of the above? None of the above? ….

Beach J returned to this aspect of the problem at [194]:

more generally there are various possibilities for patent ownership of the output of an artificial intelligence system. First, one might have the software programmer or developer of the artificial intelligence system, who no doubt may directly or via an employer own copyright in the program in any event. Second, one might have the person who selected and provided the input data or training data for and trained the artificial intelligence system. Indeed, the person who provided the input data may be different from the trainer. Third, one might have the owner of the artificial intelligence system who invested, and potentially may have lost, their capital to produce the output. Fourth, one might have the operator of the artificial intelligence system. But in the present case it would seem that Dr Thaler is the owner.

As Dr Thaler combined in the one person the roles of owner, programmer and operator, he was entitled to the fruits of its operation. Would different problems arise if, instead of being embodied in the one person, each of the functions identified lay in a different person?

Returning to [131], Beach J continued immediately following the extract quoted above, saying:

…. In my view, in some cases it may be none of the above. In some cases, the better analysis, which is consistent with the s 2A object, is to say that the system itself is the inventor. That would reflect the reality. And you would avoid otherwise uncertainty. And indeed that may be the case if the unit embodying the artificial intelligence has its own autonomy. What if it is free to trawl the internet to obtain its own input or training data? What about a robot operating independently in a public space, having its own senses, learning from the environment, and making its own decisions? ….

If one can start with the AI as the inventor, that arguably simplifies the analysis in terms of entitlement as the person claiming to be the entitled person will need to show some claim over the results of the operation of the machine.

One final point. It is not clear from the reasons the extent to which Beach J was referred to the controversies around the world arising from Dr Thaler’s applications. It didn’t matter.

At [220], his Honour pointed out that they were irrelevant to the task before him: the interpretation of the words of the Australian Act.

I am not at all sure that “the world” has reached a settled position about the question whether AIs can be inventors. One would hope, however, Australian law does not head down yet another path where “we” are granting patents for things which are not patentable in their ‘home’ countries. It will, for example, be another 12 years or so before we are finally in something like parity on inventive step with the “rest” of the world and have escaped the constraints of the pre-Raising the Bar tests of inventive step.

Two questions

This short summary cannot do justice to the detailed arguments developed over 228 paragraphs.

As discussed above, Beach J accepted that DABUS could not be granted a patent as it was not a person and s 15 specifies that the grantee of a patent must be a person. The Commissioner had proceeded on the basis that the terms of s 15 were predicated on the “old millennium” understanding that title to an invention flowed from the person who was the inventor, just as subsistence and ownership of copyright flow from the person who is the author of the work. Beach J sidestepped that on the basis that s 15 is concerned only with entitlement, not with defining who is an inventor. Nonetheless, one might think s 15 was drafted in this way on the basis that entitlement flowed from the inventor. It does also seem somewhat curious that an AI can be an inventor but not entitled to a patent because it is not a person.

Secondly, much of the controversy overseas has been about whether Dr Thaler’s creation acts autonomously or semi-autonomously or is just an exercise in brute computing. See for example Rose Hughes’ report on IPKat. Beach J appears to have had some evidence from Dr Thaler about how DABUS was designed and worked. Thus at [43], his Honour accepted Dr Thaler’s “assertion” that:

DABUS, and its underlying neural paradigm, represents a paradigm shift in machine learning since it is based upon the transient chaining topologies formed among associative memories, rather than activation patterns of individual neurons appearing within static architectures. From an engineering perspective, the use of network resonances to drive the formation of chaining topologies, spares programmers the ordeal of matching the output nodes of one [artificial neural network] with the input nodes of others, as in deep learning schemes. In effect, complex neural architectures autonomously wire themselves together using only scalar resonances.

Reinforcement or weakening of such chains takes place when they appropriate special hot button nets containing memories of salient consequences. Therefore, instead of following error gradients, as in traditional artificial neural net training, conceptual chains are reinforced in proportion to the numbers and significances of advantages offered. Classification is not in terms of human defined categories, but via the consequence chains branching organically from any given concept, effectively providing functional definitions of it. Ideas form as islands of neural modules aggregate through simple learning rules, the semantic portions thereof, being human readable as pidgin language.

Later his Honour asked at [127] – [128]:

Who sets the goal for the system? The human programmer or operator? Or does the system set and define its own goal? Let the latter be assumed. Further, even if the human programmer or operator sets the goal, does the system have free choice in choosing between various options and pathways in order to achieve the goal? Let that freedom also be assumed. Further, who provides or selects the input data? Let it be assumed that the system can trawl for and select its own data. Further, the larger the choice for the system in terms of the algorithms and iterations developed for the artificial neural networks and their interaction, the more autonomous the system. Let it be assumed that one is dealing with a choice of the type that DABUS has in the sense that I have previously described.

Making all of these assumptions, can it seriously be said that the system is just a brute force computational tool? Can it seriously be said that the system just manifests automation rather than autonomy? ….

If by “assumptions” his Honour is referring to the evidence from Dr Thaler at [42] and [43] which his Honour accepted, that is one thing. It may be quite another thing if they were in fact assumptions.

Where to now?

At the time of writing, the Commissioner is understood to be considering whether or not to appeal.

Thaler v Commissioner of Patents [2021] FCA 879


  1. The EPO refused the corresponding patent application, with the appeal to be heard on 21 December 2021. Apparently, the corresponding patent has been granted in South Africa which, I am given to understand, effectively does not operate a substantive examination system.
  2. For an interesting discussion, see “The first AI inventor – IPKat searches for the facts behind the hype” and the later report on the USPTO’s consultations, “Is it time to move on from the AI inventor debate?”
  3. At [19] – [29], Beach J provides an overview of how his Honour understands artificial neural networks work.
  4. At [15]. For the lawyers, [148] – [154] set out the technical arguments for the limitations of dictionaries in some detail.
  5. Diplomatically (and consistently with precedent) citing D’Arcy v Myriad Genetics Inc (2015) 258 CLR 334 at [18], perhaps the single biggest retreat from the teleological approach declared in NRDC.
  6. For example, Mark A Lemley, ‘Ex Ante versus Ex Post Justifications for Intellectual Property’.