Texture synthesis

From Wikipedia, the free encyclopedia

Texture synthesis is the process of algorithmically constructing a large digital image from a small digital sample image by taking advantage of its structural content. It is an object of research in computer graphics and is used in many fields, amongst others digital image editing, 3D computer graphics and post-production of films.

Texture synthesis can be used to fill in holes in images (as in inpainting), create large non-repetitive background images and expand small pictures.[1]

Contrast with procedural textures

Procedural texturing is a related technique that synthesises textures from scratch, with no source material. By contrast, texture synthesis refers to techniques in which some source image is matched or extended.

Textures

"Texture" is an ambiguous word and in the context of texture synthesis may have one of the following meanings:

  1. In common speech, the word "texture" is used as a synonym for "surface structure". In the psychology of perception, texture has been described by five properties: coarseness, contrast, directionality, line-likeness and roughness.[1]
  2. In 3D computer graphics, a texture is a digital image applied to the surface of a three-dimensional model by texture mapping to give the model a more realistic appearance. Often, the image is a photograph of a "real" texture, such as wood grain.
  3. In image processing, every digital image composed of repeated elements is called a "texture."
A mix of photographs and generated images, illustrating the texture spectrum

Texture can be arranged along a spectrum going from regular to stochastic, connected by a smooth transition:[2]

  • Regular textures. These textures have a largely regular pattern. Examples of structured textures are a stone wall or a floor tiled with paving stones.
  • Stochastic textures. Images of stochastic textures look like noise: colour dots scattered randomly over the image, characterized only by attributes such as minimum and maximum brightness and average colour. Many textures appear stochastic when viewed from a distance. An example of a stochastic texture is roughcast.

Goal

Texture synthesis algorithms are intended to create an output image that meets the following requirements:

  • The output should have the size given by the user.
  • The output should be as similar as possible to the sample.
  • The output should not have visible artifacts such as seams, blocks and misfitting edges.
  • The output should not repeat, i.e. the same structures should not appear in multiple places in the output image.

Like most algorithms, those for texture synthesis should be efficient in computation time and in memory use.

Methods

The following methods and algorithms have been researched or developed for texture synthesis:

Tiling

The simplest way to generate a large image from a sample image is to tile it: multiple copies of the sample are simply placed side by side. The result is rarely satisfactory. Except in rare cases, there will be visible seams between the tiles and the image will be highly repetitive.
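As an illustration, tiling can be sketched in a few lines of NumPy (the function name and array conventions here are our own):

```python
import numpy as np

def tile_texture(sample: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Fill an out_h x out_w image by repeating the sample side by side."""
    reps_y = -(-out_h // sample.shape[0])  # ceiling division
    reps_x = -(-out_w // sample.shape[1])
    # Repeat along the two spatial axes (extra axes, e.g. colour, repeat once).
    tiled = np.tile(sample, (reps_y, reps_x) + (1,) * (sample.ndim - 2))
    return tiled[:out_h, :out_w]  # crop to the requested size
```

Because the output is an exact periodic repetition of the sample, both failure modes described above (seams at tile borders and obvious repetition) are visible directly in the result.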

Stochastic texture synthesis

Stochastic texture synthesis methods produce an image by randomly choosing colour values for each pixel, only influenced by basic parameters like minimum brightness, average colour or maximum contrast. These algorithms perform well with stochastic textures only, otherwise they produce completely unsatisfactory results as they ignore any kind of structure within the sample image.

Single purpose structured texture synthesis

Algorithms of this family use a fixed procedure to create an output image, i.e. they are limited to a single kind of structured texture. They can therefore only be applied to structured textures, and only to textures with a very similar structure. For example, a single-purpose algorithm could produce high-quality texture images of stone walls, yet it is very unlikely to produce viable output when given a sample image showing pebbles.

Chaos mosaic

This method, proposed by the Microsoft group for internet graphics, is a refined version of tiling and performs the following three steps:

  1. The output image is filled completely by tiling. The result is a repetitive image with visible seams.
  2. Randomly sized parts of the sample are selected at random and pasted at random positions onto the output image. The result is a much less repetitive image, still with visible seams.
  3. The output image is filtered to smooth edges.

The result is an acceptable texture image, which is not too repetitive and does not contain too many artifacts. Still, this method is unsatisfactory because the smoothing in step 3 makes the output image look blurred.
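The three steps can be sketched as follows, for a grayscale image; the patch count, patch-size range and the 3×3 box filter are illustrative choices, not the published parameters:

```python
import numpy as np

def chaos_mosaic(sample, out_h, out_w, n_patches=20, rng=None):
    rng = np.random.default_rng(rng)
    sh, sw = sample.shape
    # Step 1: fill the output completely by tiling the sample.
    reps = (-(-out_h // sh), -(-out_w // sw))
    out = np.tile(sample, reps)[:out_h, :out_w].astype(float)
    # Step 2: paste randomly sized pieces of the sample at random positions.
    for _ in range(n_patches):
        ph = int(rng.integers(sh // 4, sh // 2 + 1))
        pw = int(rng.integers(sw // 4, sw // 2 + 1))
        sy = int(rng.integers(0, sh - ph + 1))
        sx = int(rng.integers(0, sw - pw + 1))
        ty = int(rng.integers(0, out_h - ph + 1))
        tx = int(rng.integers(0, out_w - pw + 1))
        out[ty:ty + ph, tx:tx + pw] = sample[sy:sy + ph, sx:sx + pw]
    # Step 3: smooth with a 3x3 box filter to soften the seams.
    padded = np.pad(out, 1, mode="edge")
    return sum(padded[dy:dy + out_h, dx:dx + out_w]
               for dy in range(3) for dx in range(3)) / 9.0
```

The blur introduced in step 3 is exactly the weakness noted above: the averaging that hides the seams also softens the texture everywhere else.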

Pixel-based texture synthesis

These methods, using Markov fields,[3] non-parametric sampling,[4] tree-structured vector quantization[5] and image analogies,[6] are some of the simplest and most successful general texture synthesis algorithms. They typically synthesize a texture in scan-line order by finding and copying the pixel whose local neighbourhood is most similar to that of the pixel being synthesized. These methods are very useful for image completion, and can be constrained, as in image analogies, to perform many interesting tasks. They are typically accelerated with some form of approximate nearest-neighbour method, since the exhaustive search for the best pixel is slow. The synthesis can also be performed at multiple resolutions, such as through a noncausal nonparametric multiscale Markov random field.[7]
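A greatly simplified sketch in the spirit of this family: pixels are generated in scan-line order, each copied from the sample position whose causal (above and to the left) neighbourhood matches the already-synthesized pixels best. The exhaustive search shown here is the slow part that real implementations replace with approximate nearest-neighbour search; boundary pixels are simply left at their random initialization:

```python
import numpy as np

def synthesize_scanline(sample, out_h, out_w, win=1, rng=None):
    """Greedy scan-line synthesis from a grayscale sample (toy version)."""
    rng = np.random.default_rng(rng)
    sh, sw = sample.shape
    # Initialize every output pixel with a random sample pixel.
    out = sample[rng.integers(0, sh, (out_h, out_w)),
                 rng.integers(0, sw, (out_h, out_w))].astype(float)
    for y in range(win, out_h):
        for x in range(win, out_w - win):
            best, best_d = out[y, x], np.inf
            # Exhaustive search over all valid sample positions.
            for sy in range(win, sh):
                for sx in range(win, sw - win):
                    # Compare the L-shaped causal neighbourhood.
                    d = np.sum((out[y - win:y, x - win:x + win + 1]
                                - sample[sy - win:sy, sx - win:sx + win + 1]) ** 2)
                    d += np.sum((out[y, x - win:x]
                                 - sample[sy, sx - win:sx]) ** 2)
                    if d < best_d:
                        best_d, best = d, sample[sy, sx]
            out[y, x] = best
    return out
```

Every output pixel is a copy of some sample pixel, which is why these methods preserve the local appearance of the sample so well.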

Patch-based texture synthesis

Patch-based texture synthesis creates a new texture by copying and stitching together textures at various offsets, similar to the use of the clone tool to manually synthesize a texture. Image quilting[8] and graphcut textures[9] are the best known patch-based texture synthesis algorithms. These algorithms tend to be more effective and faster than pixel-based texture synthesis methods.
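A simplified quilting-style sketch: patches are placed in raster order, and each new patch is the random candidate whose overlap region best matches what is already in place. (The minimum-error seam cut of real image quilting, which hides the remaining overlap mismatch, is omitted here; all names and parameters are illustrative.)

```python
import numpy as np

def quilt(sample, patches_y, patches_x, patch=8, overlap=2,
          n_candidates=30, rng=None):
    """Raster-order patch placement from a grayscale sample (toy version)."""
    rng = np.random.default_rng(rng)
    step = patch - overlap
    out_h = patch + (patches_y - 1) * step
    out_w = patch + (patches_x - 1) * step
    out = np.zeros((out_h, out_w))
    sh, sw = sample.shape
    for i in range(patches_y):
        for j in range(patches_x):
            y, x = i * step, j * step
            best, best_d = None, np.inf
            for _ in range(n_candidates):
                sy = int(rng.integers(0, sh - patch + 1))
                sx = int(rng.integers(0, sw - patch + 1))
                cand = sample[sy:sy + patch, sx:sx + patch]
                d = 0.0
                if j > 0:  # error against the left neighbour's overlap
                    d += np.sum((out[y:y + patch, x:x + overlap]
                                 - cand[:, :overlap]) ** 2)
                if i > 0:  # error against the top neighbour's overlap
                    d += np.sum((out[y:y + overlap, x:x + patch]
                                 - cand[:overlap, :]) ** 2)
                if d < best_d:
                    best_d, best = d, cand
            out[y:y + patch, x:x + patch] = best
    return out
```

Copying whole patches instead of single pixels is why these methods are faster: one neighbourhood search places `patch × patch` pixels at once.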

Deep Learning and Neural Network Approaches

More recently, deep learning methods have been shown to be a powerful, fast, data-driven, parametric approach to texture synthesis. The work of Leon Gatys[10] is a milestone: he and his co-authors showed that filters from a discriminatively trained deep neural network can be used as effective parametric image descriptors, leading to a novel texture synthesis method.
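The parametric descriptor in this method is the Gram matrix of a layer's feature maps, i.e. the spatially averaged correlations between feature channels; synthesis then optimizes an image until its Gram matrices match those of the sample at several layers. The descriptor itself is straightforward, shown here on stand-in activations with NumPy rather than on real CNN features:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Channel-by-channel correlation matrix of one layer's activations.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) matrix, averaged over spatial positions.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)      # flatten spatial dimensions
    return f @ f.T / (h * w)            # inner products between channels
```

Because the Gram matrix discards spatial position and keeps only channel statistics, two images with the same texture but different layouts have near-identical descriptors, which is exactly what a texture summary should do.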

Another recent development is the use of generative models for texture synthesis. The Spatial GAN[11] method showed for the first time the use of fully unsupervised GANs for texture synthesis. In a subsequent work,[12] the method was extended further: PSGAN can learn both periodic and non-periodic images in an unsupervised way, from single images or large datasets of images. In addition, flexible sampling in the noise space allows the creation of novel textures of potentially infinite output size, and smooth transitions between them. This makes PSGAN unique with respect to the types of images a texture synthesis method can create.

Implementations

Some texture synthesis implementations exist as plug-ins for the free image editor GIMP, including both a pixel-based and a patch-based implementation. Deep generative texture synthesis with PSGAN has been implemented in Python with Lasagne + Theano.

Literature

Several of the earliest and most referenced papers in this field include:

  • Popat in 1993 - "Novel cluster-based probability model for texture synthesis, classification, and compression".
  • Heeger-Bergen in 1995 - "Pyramid based texture analysis/synthesis".
  • Paget-Longstaff in 1998 - "Texture synthesis via a noncausal nonparametric multiscale Markov random field"
  • Efros-Leung in 1999 - "Texture Synthesis by Non-parametric Sampling".
  • Wei-Levoy in 2000 - "Fast Texture Synthesis using Tree-structured Vector Quantization"

although there was also earlier work on the subject, such as

  • Gagalowicz and Song De Ma in 1986, "Model driven synthesis of natural textures for 3-D scenes",
  • Lewis in 1984, "Texture synthesis for digital painting".

(The latter algorithm has some similarities to the Chaos Mosaic approach).

The non-parametric sampling approach of Efros-Leung was the first that could easily synthesize most types of texture, and it has inspired hundreds of follow-on papers in computer graphics. Since then, the field of texture synthesis has rapidly expanded with the introduction of 3D graphics accelerator cards for personal computers. According to Efros, however, Scott Draves first published the patch-based version of this technique, along with GPL code, in 1993.

References

  1. ^ "SIGGRAPH 2007 course on Example-based Texture Synthesis"
  2. ^ "Near-regular Texture Analysis and Manipulation". Yanxi Liu, Wen-Chieh Lin, and James Hays. SIGGRAPH 2004
  3. ^ "Texture synthesis via a noncausal nonparametric multiscale Markov random field." Paget and Longstaff, IEEE Trans. on Image Processing, 1998
  4. ^ "Texture Synthesis by Non-parametric Sampling." Efros and Leung, ICCV, 1999
  5. ^ "Fast Texture Synthesis using Tree-structured Vector Quantization" Wei and Levoy SIGGRAPH 2000
  6. ^ "Image Analogies" Hertzmann et al. SIGGRAPH 2001.
  7. ^ "Texture synthesis via a noncausal nonparametric multiscale Markov random field." Paget and Longstaff, IEEE Trans. on Image Processing, 1998
  8. ^ "Image Quilting." Efros and Freeman. SIGGRAPH 2001
  9. ^ "Graphcut Textures: Image and Video Synthesis Using Graph Cuts." Kwatra et al. SIGGRAPH 2003
  10. ^ Gatys, Leon A.; Ecker, Alexander S.; Bethge, Matthias (2015-05-27). "Texture Synthesis Using Convolutional Neural Networks". arXiv:1505.07376 [cs.CV].
  11. ^ Jetchev, Nikolay; Bergmann, Urs; Vollgraf, Roland (2016-11-24). "Texture Synthesis with Spatial Generative Adversarial Networks". arXiv:1611.08207 [cs.CV].
  12. ^ Bergmann, Urs; Jetchev, Nikolay; Vollgraf, Roland (2017-05-18). "Learning Texture Manifolds with the Periodic Spatial GAN". arXiv:1705.06566 [cs.CV].