Patchwork

Technique Basics

Patchwork is a data hiding technique developed by Bender et al. and published in the IBM Systems Journal in 1996. It is based on a pseudorandom, statistical model.

It works by invisibly embedding a specific statistic, with a Gaussian distribution, into the host image. Two sets of pixels, or patches, are chosen from the image: the first called A and the second B. The algorithm then slightly brightens the points in A, while darkening those in B by the same amount.

The points to modify are chosen by a pseudorandom number generator, seeded with a secret key shared by the transmitter and the receiver, because the decoding algorithm has to visit exactly the same points, in the same order, to extract the embedded data.
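As a concrete sketch (not code from the original papers), the embedding step could look like this in Python, assuming a grayscale image stored as a NumPy uint8 array; the function name, the parameters n and delta (the brightening amount, matching the formulas below), and the use of NumPy's keyed generator are illustrative choices:

    import numpy as np

    def patchwork_embed(image, key, n=10000, delta=2):
        """Brighten n pseudorandom points (patch A) and darken n others (patch B)."""
        img = image.astype(np.int16)           # widen so +/- delta cannot wrap around
        rng = np.random.default_rng(key)       # keyed PRNG, shared with the receiver
        # 2*n distinct pixel positions: the first n form patch A, the last n patch B
        idx = rng.choice(img.size, size=2 * n, replace=False)
        flat = img.reshape(-1)                 # view into img, so writes go through
        flat[idx[:n]] += delta                 # a[i] = a[i] + delta
        flat[idx[n:]] -= delta                 # b[i] = b[i] - delta
        return np.clip(img, 0, 255).astype(np.uint8)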

In brief: in an unmarked image, the expected value of the sum

S(n) = sum (a[i] - b[i], i = 1..n)

where a[i] and b[i] are the i-th points of patch A and patch B respectively, is zero.

After performing, for each i from 1 to n, the operations

a[i] = a[i] + delta
b[i] = b[i] - delta

the expected value of S(n) is shifted to the right by

2 x delta x n

with respect to the unmarked average of 0.

So if n is big enough and we compute S(n) for an image, we can tell with high confidence whether or not the image was watermarked by patchwork.
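Continuing the sketch above, a matching decoder just recomputes S(n) with the same key; the threshold of delta x n, halfway between the two expected values, is an illustrative choice:

    import numpy as np

    def patchwork_detect(image, key, n=10000, delta=2):
        """Recompute S(n): near 0 for an unmarked image, near 2*delta*n for a marked one."""
        img = image.astype(np.int64)
        rng = np.random.default_rng(key)       # same key => same points, in the same order
        idx = rng.choice(img.size, size=2 * n, replace=False)
        flat = img.reshape(-1)
        s = int(flat[idx[:n]].sum() - flat[idx[n:]].sum())
        return s, s > delta * n                # threshold halfway between 0 and 2*delta*n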

In fact, we can assign a "certainty level", related to n, to the coding process, representing the trust we can place in the decoder's answers. [GB98] The problem with increasing n to get a stronger mark is that the more we repeat the process, the more likely the watermark is to become visible. [Ben96]
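One rough way to compute such a certainty level (the exact formulation in [GB98] may differ) is a Gaussian tail probability; the per-pixel standard deviation sigma is an assumed, image-dependent parameter:

    import math

    def certainty_level(s, n, sigma=30.0):
        """Confidence that an observed sum s did not arise by chance.

        With no mark present, S(n) is roughly Gaussian with mean 0 and
        standard deviation sigma*sqrt(2*n), where sigma is the standard
        deviation of a single pixel value (assumed here, image-dependent).
        """
        z = s / (sigma * math.sqrt(2 * n))
        p_chance = 0.5 * math.erfc(z / math.sqrt(2))   # P(S(n) >= s | no mark)
        return 1.0 - p_chance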

Data Rate

We have seen that the only answer this technique can give, given the stego-image and the key used for the pseudorandom number generator, is whether or not the image was encoded with a particular patch. It follows immediately that patchwork has an extremely low bit rate: essentially one bit per key.

However, multiple applications of the algorithm usually do not interfere with one another, because patches generated from different keys are almost orthogonal. [GB98]
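For instance, reusing the hypothetical sketches above on some grayscale array image (keys are arbitrary):

    marked = patchwork_embed(image, key=111)       # first mark
    marked = patchwork_embed(marked, key=222)      # second, independent mark
    print(patchwork_detect(marked, key=111)[1])    # True: first mark still readable
    print(patchwork_detect(marked, key=222)[1])    # True: second mark readable too
    print(patchwork_detect(marked, key=333)[1])    # False: wrong key, S(n) stays near 0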

Robustness

One of the most important characteristics of patchwork is its resistance to cropping and to gamma and tone scale corrections.

The former leads only to a logarithmic reduction in accuracy as the picture size diminishes. The latter usually change the luminance of the pixels in a regular way, which does not disturb patchwork, since it works on differences between points.

Patchwork is destroyed by any affine transformation, such as translation, rotation or scaling. [Ben96]

Successful attacks against it were carried out by Anderson and Petitcolas using a "synchronization breaking" attack: they slightly displaced the image data, changing it in an invisible way while shifting the positions of the points sampled by patchwork. [AP97]

Ease of detection/extraction

Patchwork is nearly undetectable, since the key and the pseudorandom function are needed to locate the changed points. However, if the same key is used to encode a large number of identically sized images, the patch becomes visible by averaging them. [Ben96]
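A sketch of that averaging attack, assuming a list of same-sized grayscale images all marked with the same key:

    import numpy as np

    def average_attack(marked_images):
        """The image content averages toward a smooth background, while the
        fixed +delta/-delta pattern adds up coherently, so patches A and B
        emerge as bright and dark specks in the residual."""
        mean = np.mean([im.astype(np.float64) for im in marked_images], axis=0)
        return mean - np.mean(mean)            # residual to inspect for the patch pattern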

Suitability for steganography or watermarking

Given its fair robustness and its extremely low data rate, patchwork is clearly suited to digital watermarking.

The usual procedure for applying it in copyright protection systems is to first embed a very strong bit (for instance with a 99.9999% certainty level) that works as a "marker", telling whether the image was watermarked at all. Other keys are then used to embed the remaining bits, but in a much weaker way, so that the watermark does not degrade the image too much.

The decoding algorithm will look first for the strong bit, and only if it finds it will it go on decoding the rest of the watermark. [GB98]
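One plausible realization of this protocol, reusing the earlier sketches; deriving per-bit keys as master_key + i + 1, embedding only the 1-bits, and the n values are all illustrative assumptions:

    def embed_mark(image, master_key, bits, marker_n=50000, bit_n=5000):
        img = patchwork_embed(image, master_key, n=marker_n)  # strong "marker" bit
        for i, bit in enumerate(bits):
            if bit:                                           # a 0-bit is simply not embedded
                img = patchwork_embed(img, master_key + i + 1, n=bit_n)
        return img

    def extract_mark(image, master_key, nbits, marker_n=50000, bit_n=5000):
        if not patchwork_detect(image, master_key, n=marker_n)[1]:
            return None                                       # no marker: not watermarked
        return [patchwork_detect(image, master_key + i + 1, n=bit_n)[1]
                for i in range(nbits)]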

Problems and possible solutions

The first problem is that, using single points as the units of the patches, the noise we add to the image lies in the high frequencies. These are likely to be filtered out or disturbed by many processes, from simple lossy compression to DA/AD conversions. A solution is to use patches made of small areas instead of points, with a distribution of the deltas that spreads the noise over a wide band or moves it to low frequencies, making it less likely to be filtered out.
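In the terms of the earlier sketch, such area-based patches could look like this; the square shape and the parameters are assumptions for illustration:

    import numpy as np

    def patchwork_embed_areas(image, key, n=2000, delta=2, r=2):
        """Spread each delta over a (2r+1)x(2r+1) square instead of a single
        point, moving the added noise toward lower spatial frequencies."""
        img = image.astype(np.int16)
        rng = np.random.default_rng(key)
        h, w = img.shape
        ys = rng.integers(r, h - r, size=2 * n)   # patch centres, kept off the borders
        xs = rng.integers(r, w - r, size=2 * n)
        for i, (y, x) in enumerate(zip(ys, xs)):
            sign = 1 if i < n else -1             # first n centres: patch A (+), rest: B (-)
            img[y - r:y + r + 1, x - r:x + r + 1] += sign * delta
        return np.clip(img, 0, 255).astype(np.uint8)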

Another issue is the high sensitivity to affine transformations. It can be reduced by adding heuristics based on feature recognition, such as aligning along the interocular line of a face, or by using affine coding, or both. Affine coding places a known pattern on the image, so that the affine transformations applied to it can be reconstructed and inverted before decoding. [Ben96]

References:

[Ben96] W. Bender, D. Gruhl, N. Morimoto, A. Lu, "Techniques for Data Hiding", IBM Systems Journal, Vol. 35, 1996.
[GB98] D. Gruhl, W. Bender, 1998.
[AP97] R. Anderson, F. Petitcolas, 1997.


Matteo Fortini