When Garry Kasparov lost to Deep Blue in May 1997, the cultural perception of artificial intelligence changed almost overnight. Nearly two decades later, AlphaGo defeated Lee Sedol with its "Move 37," a single stone placed on the fifth line; no human commentator could explain its significance in the moment, yet it proved decisive many hours later. Both events were presented as evidence that computers had entered a domain previously reserved for the "creative": moves no human would have made, produced by systems without conscious intention, yet correct in a way that registered, after the fact, as elegant.
Ever since, the art world has grappled with the implications. Institutions exhibit generative image systems; auction houses list and sell machine-made works for millions; collectors who laughed at the idea of buying machine-generated work in 2018 now commission algorithmic editions in 2026. Yet whether the machine produces anything that can reasonably be called "creative" remains an open question, much as the analogous questions about chess and Go were set aside with surprising ease.
What these game-playing systems actually demonstrated is narrower than the public perception of their results. Within a closed world, where every board position is observable and every legal move enumerable, search and self-play can identify strategies no human had found. The creativity, if that is the right word, came from exploring the vastness of a finite set. Move 37 was not intuition or aesthetic judgment; it was the result of examining vastly more possibilities than any human could, and selecting the one that survived the most rigorous counter-examination.
That is a distinctive kind of creativity, and it does not translate easily to art.
The Closed-World Problem at the Root of Everything
Chess, Go, and shogi are closed-world games. The full state of the game is visible to both players; the constraints are explicit; the goal is unambiguous. Within that frame, computational systems can generate extraordinary novelty because they can search the space at a depth and breadth humans can only approximate with heuristics.
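The closed-world property can be made concrete with a toy example: a minimal minimax solver for tic-tac-toe, sketched below in Python (the board encoding and helper names are illustrative, not drawn from any system discussed here). Because every position is observable and every legal move enumerable, the solver can visit the entire game tree and prove the game a draw under perfect play. Chess and Go differ from this only in scale.

```python
# Exhaustive game-tree search over a tiny closed world: tic-tac-toe.
# The board is a 9-character string; "." marks an empty cell.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value with best play: +1 = X wins, 0 = draw, -1 = O wins."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0  # board full, no winner: draw
    values = [minimax(board[:m] + player + board[m + 1:],
                      "O" if player == "X" else "X")
              for m in moves]
    # X maximizes the value, O minimizes it.
    return max(values) if player == "X" else min(values)

# Solving from the empty board enumerates every reachable position.
value = minimax("." * 9, "X")
```

Run on the empty board, the search confirms the familiar result that tic-tac-toe is a draw with perfect play; nothing "intuitive" happens, only enumeration and selection.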
Art works differently. The boundaries of success for a work are not given in advance. The relevant context (the artist's history, the period of the work's making, the wall it hangs on, the conversation it enters) is part of the work and is not quantifiable. There is no objective function. There is no checkmate. The rules by which a piece succeeds are partly set by the piece itself, and they go on evolving as audiences, critics, and the wider culture engage with it.
Generative systems trained on millions of existing images can produce surfaces that resemble artworks. But by design they cannot participate in the cultural conversation that gives a work its meaning. They are exceptional mimics of visual objects, fully detached from any discursive context. An AlphaGo for painting would need to model not only brushstrokes on canvas but the evolving responses of audiences to those strokes, and of critics to both the audience and the work, recursively, to many levels: precisely the kind of problem that, as the engineering literature on adaptive decision systems makes clear, current models handle poorly.
Hidden-Information Problem
The distinction sharpens when you consider decision domains that machines mastered only recently. Negotiation is one classic example; competitive intelligence and adversarial security analysis are others. In each, every participant sees only part of the relevant state. Strategy means estimating what the other party believes you believe they know, and what they may choose to withhold. Computer science calls this reasoning under incomplete knowledge, specifically imperfect information. The first systems capable of it appeared in the late 2010s, built on algorithms fundamentally different from the exhaustive search that conquered chess.
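To show how different those algorithms are from exhaustive search, here is a toy sketch of regret matching, the core update behind counterfactual regret minimization, the family underlying the late-2010s imperfect-information systems. The game (rock-paper-scissors against a biased opponent), the opponent distribution, and the iteration count are all illustrative assumptions, not details from any deployed system.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return {0: 0, 1: 1, 2: -1}[(a - b) % 3]

def strategy(regret_sum):
    """Play each action in proportion to its positive cumulative regret."""
    pos = [max(r, 0.0) for r in regret_sum]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iterations, opponent, seed=0):
    """Learn from repeated play against a fixed (hidden) mixed strategy."""
    rng = random.Random(seed)
    regret_sum = [0.0] * ACTIONS
    strat_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy(regret_sum)
        for a in range(ACTIONS):
            strat_sum[a] += strat[a]
        mine = rng.choices(range(ACTIONS), weights=strat)[0]
        theirs = rng.choices(range(ACTIONS), weights=opponent)[0]
        # Regret: how much better each alternative would have scored
        # against the move the opponent actually revealed.
        for a in range(ACTIONS):
            regret_sum[a] += payoff(a, theirs) - payoff(mine, theirs)
    total = sum(strat_sum)
    return [s / total for s in strat_sum]

# Against an opponent who overplays rock, the average strategy
# drifts toward paper, the best response.
avg = train(20000, opponent=[0.8, 0.1, 0.1])
```

Nothing here enumerates a game tree. The system never sees the opponent's strategy, only sampled outcomes, and a good strategy emerges from accumulated regret rather than from search.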
This matters for art because every exchange between artist and viewer is inherently an exchange under hidden information. The artist does not know what the viewer knows. The viewer does not know what the artist intended. A work succeeds when something crosses that gap that could not have been conveyed directly. Generative systems that learn from the surfaces of prior art can mimic the forms and appearance of past works, but the gap itself (what is withheld, what the viewer infers) is not in their training data, because it is a property of long-running interactions between artists and viewers, not of individual images.
A machine that wins at chess through exhaustive search is impressive engineering on a closed problem. A machine that creates a portrait registering as truthful would have to join a centuries-long dialogue about what makes portraiture believable. The first has been demonstrated. The second has not, and may not be the kind of problem that yields to current methods at all.
What Collectors & Curators Are Actually Acquiring
None of this is a dismissal of generative art. The works are real artifacts; they have aesthetic properties; they already sit in museum collections. But the framing matters. When collectors buy generative works, they are mostly acquiring records of interesting computational processes, i.e., documentation of how a particular system behaves given particular prompts and parameters. Part of the "creativity" being purchased is the collector's own curatorial skill in spotting which outputs deserve display. Part is the technical novelty of the generative systems themselves. Part is the cultural currency of working in a new medium.
What collectors are not acquiring is a self-aware creative agent. The difference between Move 37 and a great late-career portrait is not a difference in computing power; it is a difference in kind. One is an unprecedented move within a closed world. The other is a contribution to an open-ended conversation that the work itself largely constitutes.
An honest account of generative art over the next decade will likely acknowledge that machines give artists powerful new tools, and that artists who use them well will produce work that is recognizably creative in its selection and direction. The instrument itself is not the artist. It is, in a way, the camera of 1880: a tool that disrupted the discipline, generated enormous new output, and ultimately did not displace the human practice it threatened. Systems trained in adversarial environments will keep teaching engineers about other kinds of decision problems, ones that cannot be solved by search alone, but the boundary of those problems lies further out than recent news reports suggest.
Ultimately, what chess and Go truly demonstrated is that closed-world problems eventually succumb to compute. The open-world problems of art will take longer.