Uncovering Forged Artwork with Neural Networks

A new program can identify an artist’s unique style with 70 percent accuracy

26 August 2015

This post is part of a series highlighting the latest research from the IEEE Xplore Digital Library

Forged art has become a major criminal activity, with estimated annual losses in the billions of dollars, according to the FBI. Forgeries are everywhere. A curator at the Guangzhou Academy of Fine Arts in Guangdong, China, admitted in July to stealing more than 140 paintings by Chinese masters and replacing them with his own forgeries. Knoedler Gallery, New York City’s oldest art gallery, settled lawsuits in August that claimed it had sold US $60 million in fake Abstract Expressionist art.

Such forgeries, however, might be detected in the future with PigeoNET, a convolutional neural network capable of learning the characteristics of specific artists and attributing artworks to them with more than 70 percent accuracy. This type of neural network adapts its filters to respond to the presence of an artist’s characteristic features in an image, such as painting techniques or the materials used. The process is explained in “Toward Discovery of the Artist’s Style,” published in the July issue of IEEE Signal Processing Magazine and available in the IEEE Xplore Digital Library.

Convolutional neural networks, the researchers note, have outperformed other learning algorithms on a variety of challenging image-classification tasks. Nanne van Noord is a Ph.D. candidate working on the Reassessing Vincent van Gogh project at the Tilburg Center for Cognition and Communication, at Tilburg University, the Netherlands. Eric Postma is a professor of artificial intelligence at the center. Ella Hendriks is a senior conservator at the Van Gogh Museum, in Amsterdam. She is also an associate professor of conservation and restoration at the University of Amsterdam Training Programme for Conservation and Restoration of Cultural Heritage.


Art experts acquire their knowledge by studying a vast number of works and reading descriptions of each artist’s visual features. When examining a painting, an art expert can usually determine its style and genre, the artist, and the period to which it belongs. For example, characteristic features of the Dutch painter Vincent van Gogh’s later French period include the outlines painted around objects, complementary colors, and rhythmic brush strokes.

Computers and high-resolution digital reproductions of artwork have been used to automate the attribution process for years. Today, machine learning and feature engineering, the practice of using knowledge of the data to create features that make machine-learning algorithms work, are the tools of the trade. This approach has its drawbacks, however, because art experts’ proficiency rests on intuition, which is difficult to capture in code.

Instead, the researchers believe feature learning holds greater promise than feature engineering because it takes advantage of deep architectures, which are machine-learning methods inspired by biological neural networks. One example of a deep architecture is the convolutional neural network, which, when combined with a powerful learning algorithm, is capable of discovering visual features on its own.

The feasibility of image-based automatic artist attribution is supported by biological studies. Pigeons have been successfully trained to discriminate between artists and, in tests, were able to correctly attribute photographs and videos of paintings by Monet and Picasso 90 percent of the time, according to the researchers. The researchers named their network PigeoNET because it performs a similar task, attributing artists based solely on visual characteristics. Its architecture is based on the Caffe implementation and consists of five convolutional layers and three fully connected layers.
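The article gives only the layer counts of the Caffe-based architecture. As a rough illustration, the sketch below traces how the spatial size of the response maps shrinks through a five-convolution stack; every kernel size, stride, and padding value here is an assumption modeled on the Caffe reference network, not a figure from the paper.

```python
# Sketch (assumed AlexNet-like hyperparameters, not values from the paper):
# trace response-map sizes through five convolutional layers.

def out_size(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling window."""
    return (size + 2 * pad - kernel) // stride + 1

size = 227  # input image side in pixels (assumed)
layers = [
    ("conv1", 11, 4, 0), ("pool1", 3, 2, 0),
    ("conv2",  5, 1, 2), ("pool2", 3, 2, 0),
    ("conv3",  3, 1, 1),
    ("conv4",  3, 1, 1),
    ("conv5",  3, 1, 1), ("pool5", 3, 2, 0),
]
for name, kernel, stride, pad in layers:
    size = out_size(size, kernel, stride, pad)
    print(f"{name}: {size}x{size}")  # conv1: 55x55 ... pool5: 6x6
```

The final 6-by-6 response maps are what the three fully connected layers would then flatten into per-artist scores.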

To recognize an artist’s traits, a convolutional neural network adapts filters to respond to specific features in an image. It does so by adjusting the filters’ parameters, or weights, until a suitable configuration is found. A learning algorithm called back-propagation finds the proper weights, requiring no prior knowledge other than the images themselves and the names of the artists who created them.
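A minimal sketch of that idea, not the authors’ code: gradient descent adjusts the weights of a single convolutional filter until its response map matches a target, which is the essence of learning filter weights by back-propagation. The image values and the hidden “true” filter below are toy data.

```python
# Toy sketch: learn a 2x2 convolutional filter's weights by gradient descent.

def convolve(image, kernel):
    """Valid 2-D cross-correlation (what CNN 'convolution' layers compute)."""
    k = len(kernel)
    n = len(image) - k + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(n)]
            for i in range(n)]

def loss(resp, target):
    """Sum of squared differences between response and target maps."""
    return sum((r - t) ** 2
               for rr, tt in zip(resp, target) for r, t in zip(rr, tt))

image = [[0.1, 0.5, 0.2, 0.4],
         [0.3, 0.6, 0.1, 0.2],
         [0.5, 0.2, 0.4, 0.3],
         [0.2, 0.4, 0.6, 0.1]]
true_kernel = [[1.0, 0.0], [0.0, -1.0]]     # hidden filter we want to recover
target = convolve(image, true_kernel)

kernel = [[0.0, 0.0], [0.0, 0.0]]           # start from zero weights
before = loss(convolve(image, kernel), target)
for _ in range(300):                        # gradient descent on the weights
    resp = convolve(image, kernel)
    for a in range(2):
        for b in range(2):
            grad = sum(2 * (resp[i][j] - target[i][j]) * image[i + a][j + b]
                       for i in range(3) for j in range(3))
            kernel[a][b] -= 0.02 * grad
after = loss(convolve(image, kernel), target)
print(before, "->", after)                  # the error shrinks as weights adapt
```

In a real network the same weight updates are propagated backward through every layer at once, driven by the attribution error rather than a known target filter.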

The filters are grouped in layers: the first layer is applied directly to the image, and subsequent layers are applied to the responses generated by the layers before them. This stacking creates a multilayer architecture whose filters respond to increasingly complex features with each subsequent layer. Because convolution is the operation used to apply the filters to an image or to a previous layer’s responses, the layers are called convolutional layers.
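A toy sketch of that stacking, with hand-picked filters chosen purely for illustration (in PigeoNET the filters are learned, not hand-picked): the first-layer filter responds to horizontal contrast, and a second-layer filter responds where those edge responses line up vertically, a more complex composite feature.

```python
# Toy sketch of two stacked convolutional layers with illustrative filters.

def convolve(image, kernel):
    """Valid 2-D cross-correlation with a rectangular kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)]
            for i in range(oh)]

def relu(m):
    """Nonlinearity applied between layers."""
    return [[max(0.0, v) for v in row] for row in m]

image = [[0.0, 1.0, 1.0, 0.0],
         [0.0, 1.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]

layer1 = relu(convolve(image, [[1.0, -1.0]]))   # responds to horizontal contrast
layer2 = relu(convolve(layer1, [[1.0], [1.0]])) # responds where edges stack vertically

print(layer2)  # -> [[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]]
```

The strongest layer-2 response (2.0) marks the spot where the bright block’s edge persists across two rows, a feature neither single filter could detect alone.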


The researchers used more than 112,000 digital photographic reproductions of artworks by 6,600 artists exhibited in the Rijksmuseum, in Amsterdam. The data set contained more than 1,800 different types of artwork and 406 annotated materials, such as paper, canvas, porcelain, iron, and wood. The objective of artist attribution is to identify the correct artist of each artwork in the test set.

To learn a mapping from the filter responses to a certain artist, the convolutional layers are typically followed by a number of fully connected layers that translate the presence and intensity of the filter responses into a single certainty score per artist. The certainty score for an artist is high when the responses of filters corresponding to that artist are strong, and low when those responses are weak or absent. An unseen artwork can therefore be attributed to the artist for whom the certainty score is highest. The researchers say PigeoNET can be further improved by expanding the data sets, and they believe the system is a “fruitful approach for future computer-supported examination of artworks.”
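That final mapping can be sketched as a single fully connected (linear) layer followed by picking the highest score. The artist names, filter responses, and weights below are invented purely for illustration; in the real network the weights are learned and there are thousands of candidate artists.

```python
# Sketch: a fully connected layer turns filter responses into one
# certainty score per artist; the artwork is attributed to the argmax.
# All names and numbers here are hypothetical.

def certainty_scores(responses, weights, biases):
    """Linear layer: score_k = sum_i weights[k][i] * responses[i] + biases[k]."""
    return [sum(w * r for w, r in zip(row, responses)) + b
            for row, b in zip(weights, biases)]

artists = ["Van Gogh", "Rembrandt", "Vermeer"]  # hypothetical labels
responses = [0.9, 0.1, 0.4]                     # assumed filter responses
weights = [[2.0, 0.1, 0.5],   # filter 0 ties strongly to Van Gogh
           [0.2, 1.5, 0.3],
           [0.1, 0.4, 1.2]]
biases = [0.0, 0.0, 0.0]

scores = certainty_scores(responses, weights, biases)
attribution = artists[scores.index(max(scores))]
print(attribution)  # -> Van Gogh
```

Because filter 0 fires strongly and its weight toward the first artist is large, that artist’s certainty score dominates and wins the attribution.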
