# Adaptive Partial Image Secret Sharing


## Abstract


## 1. Introduction

## 2. Preliminaries

#### 2.1. Salience Detection

- Cluster the image into ${K}_{1}$ clusters (e.g., ${K}_{1}=6$).
- For each cluster, compute the contrast cue and the spatial cue, and combine the two saliency cues by multiplication. The contrast cue captures the uniqueness of visual features, and the contrast operator simulates the receptive fields of the human visual system; the spatial cue is included because of the “central bias rule” in single-image saliency detection, i.e., regions near the image center draw more attention than other regions.
- For each pixel, obtain the final saliency map by summing the joint saliency over all clusters.
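The steps above can be sketched as follows. This is a simplified, hard-assignment variant of cluster-based saliency; the function name `cluster_saliency` and parameters `k1`, `iters`, and the Gaussian spatial fall-off are illustrative assumptions, not notation from the original method:

```python
import numpy as np

def cluster_saliency(img, k1=6, iters=20, seed=0):
    """Sketch of cluster-based saliency: contrast cue x spatial cue.

    img: H x W x 3 float array in [0, 1].
    """
    h, w, _ = img.shape
    feats = img.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k1, replace=False)]
    for _ in range(iters):                       # plain k-means in color space
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k1):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(axis=0)

    # contrast cue: size-weighted distance of each cluster center to the others
    sizes = np.bincount(labels, minlength=k1) / len(feats)
    contrast = np.array([
        sum(sizes[j] * np.linalg.norm(centers[c] - centers[j])
            for j in range(k1) if j != c)
        for c in range(k1)
    ])

    # spatial cue: Gaussian fall-off from the image center ("central bias rule")
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = ((ys - h / 2) ** 2 + (xs - w / 2) ** 2).reshape(-1)
    sigma2 = (min(h, w) / 2) ** 2
    spatial = np.array([
        np.exp(-dist2[labels == c] / sigma2).mean() if np.any(labels == c) else 0.0
        for c in range(k1)
    ])

    joint = contrast * spatial                   # combine the two cues
    sal = joint[labels].reshape(h, w)            # per-pixel saliency map
    return sal / (sal.max() + 1e-12)             # normalize to [0, 1]
```

Thresholding the returned map (e.g., with Otsu's method) then yields the binary target part used later in the scheme.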

#### 2.2. Image Inpainting

- Select a target part $\mathsf{\Omega}$ to be inpainted, and let $\mathsf{\Phi}=S-\mathsf{\Omega}$, where S denotes the whole image.
- Determine the size of the template window, denoted by ${\mathsf{\Psi}}_{p}$, from the image texture features, where any $p\in \partial \mathsf{\Omega}$ denotes the center of the template window. The window should be larger than the largest texture element.
- Calculate patch priorities using Equation (1), i.e., the product of the confidence term and the data term:

$$W\left(p\right)=C\left(p\right)D\left(p\right) \tag{1}$$

$$D\left(p\right)=\frac{\left|\nabla S_{p}^{\perp}\cdot n_{p}\right|}{\alpha} \tag{2}$$

$$C\left(p\right)=\frac{\sum_{q\in {\mathsf{\Psi}}_{p}\cap \overline{\mathsf{\Omega}}}C\left(q\right)}{\left|{\mathsf{\Psi}}_{p}\right|} \tag{3}$$

  The data term reflects the agreement between the isophote direction and the normal direction of the fill front ($\alpha$ is a normalization factor), while the confidence term measures the amount of reliable information contained in the template window. In other words, the smaller the difference between the normal direction and the isophote direction, and the more reliable information the template window contains, the higher the priority of the patch.
- Find $\widehat{p}$ according to Equation (4), and then find the best-matching block ${\mathsf{\Psi}}_{\widehat{q}}\subset \mathsf{\Phi}$ in the source region according to Equation (5), where the sum of squared differences (SSD) serves as the matching criterion. Finally, replace the patch of the current window with this best-matching block.

$$\widehat{p}=\underset{p\in \partial \mathsf{\Omega}}{\arg\max}\,W\left(p\right) \tag{4}$$

$$\widehat{q}=\underset{q\in \mathsf{\Phi}}{\arg\min}\,d\left({\mathsf{\Psi}}_{\widehat{p}},{\mathsf{\Psi}}_{q}\right) \tag{5}$$
- For any $q\in {\mathsf{\Psi}}_{\widehat{p}}\cap \mathsf{\Omega}$, after each filling step, update the confidence term: $C\left(q\right)=C\left(\widehat{p}\right)$.
- Repeat the above steps 3–5 until the image is inpainted completely.
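As a rough illustration of the priority computation in steps 3–4, the sketch below evaluates $C(p)$ for each pixel on the fill front and returns the highest-priority pixel. The data term is stubbed to a constant for brevity (a full implementation would project the isophote onto the front normal), and the function name and interface are assumptions:

```python
import numpy as np

def boundary_priorities(confidence, mask, patch=9):
    """Find the highest-priority front pixel, W(p) = C(p) * D(p).

    mask: True inside the hole Omega.
    confidence: current C values (1 outside the hole, 0 inside at the start).
    """
    h, w = mask.shape
    r = patch // 2
    # fill front: hole pixels with at least one known 4-neighbour
    front = mask & ~(
        np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
        & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    best, best_w = None, -1.0
    for y, x in zip(*np.nonzero(front)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        # confidence term: reliable (already-known) information in the window
        c = confidence[y0:y1, x0:x1][~mask[y0:y1, x0:x1]].sum() / (patch * patch)
        d = 1.0                        # placeholder data term D(p)
        if c * d > best_w:
            best, best_w = (y, x), c * d
    return best, best_w
```

After filling the chosen patch from its best SSD match, the confidence inside the filled region would be set to the returned weight, and the loop repeats until `mask` is empty.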

#### 2.3. Linear Congruence-Based ISS

## 3. The Introduced APISS Scheme

- In Step 1, the salient part is adaptively detected and removed using the salience detection method and Otsu’s threshold operation, so the introduced scheme achieves automatic processing. The salient target part may include a single object or multiple objects.
- In Step 3, each share has its own candidate filling order, i.e., $\widehat{{p}_{i}}$. To inpaint synchronously, the position with the highest priority among the n candidates is adopted as the filling order for all n shares.
**Algorithm 1:** The introduced APISS scheme for the $(k,n)$ threshold.

**Input:** The threshold parameters $(k,n)$ and a color secret image S of size $H\times W$.

**Output:** n color shares $S{C}_{1},S{C}_{2},\cdots ,S{C}_{n}$.

**Step 1:** Apply the salience detection method and Otsu’s threshold operation to S to automatically obtain the target part $\mathsf{\Omega}$. Remove $\mathsf{\Omega}$ (marked with green color) from S to obtain ${C}_{i}$, for $i=1,2,\cdots ,n$, where $S{C}_{i}={C}_{i}$ denotes the input un-inpainted cover image.

**Step 2:** Use the method in Section 2.2 to determine the size of the template window, denoted by ${\mathsf{\Psi}}_{{p}^{*}}$.

**Step 3:** For each share, find $\widehat{{p}_{i}}$ with Equation (4). Find ${i}^{*}=\underset{i\in [1,n]}{\arg\max}\,{W}_{i}\left({\widehat{p}}_{i}\right)$, and let $\widehat{{p}_{i}}=\widehat{{p}_{{i}^{*}}}$ for $i=1,2,\cdots ,n$.

**Step 4:** For each cover image, using $\widehat{{p}_{i}}$ and Equation (5), search for the best-matching block ${\mathsf{\Psi}}_{\widehat{{q}_{i}}}$ and replace the patch of the current window with it, for $i=1,2,\cdots ,n$.

**Step 5:** For each position $(h,w)\in \left\{(h,w)\right|{H}_{1}\le h\le {H}_{2},{W}_{1}\le w\le {W}_{2}\}$, where $({H}_{1},{W}_{1})$ and $({H}_{2},{W}_{2})$ denote the coordinates of the current template window, repeat Step 6.

**Step 6:** With $S{C}_{1}(h,w),S{C}_{2}(h,w),\cdots ,S{C}_{n}(h,w)$ as input, use LC-based ISS for the $(k,n)$ threshold to encrypt $S(h,w)$ and output updated $S{C}_{1}(h,w),S{C}_{2}(h,w),\cdots ,S{C}_{n}(h,w)$, where the share values are modified as little as possible to satisfy the requirement of LC-based ISS.

**Step 7:** After each filling step, update the confidence terms: $C\left({q}_{i}\right)=C\left(\widehat{{p}_{i}}\right)$ for any ${q}_{i}\in {\mathsf{\Psi}}_{\widehat{{p}_{i}}}\cap \mathsf{\Omega}$, $i=1,2,\cdots ,n$.

**Step 8:** Repeat Steps 3–7 until each cover image is completely inpainted.

**Step 9:** Output the n shares $S{C}_{1},S{C}_{2},\cdots ,S{C}_{n}$.

- In Step 6, the values of $S{C}_{1}(h,w),S{C}_{2}(h,w),\cdots ,S{C}_{n}(h,w)$ are updated while $S(h,w)$ is encrypted by LC-based ISS.
- After the patch of the current window in each share is replaced by its best-matching block in Step 4, the corresponding secret block is encrypted by LC-based ISS into n updated blocks with close values, which replace the patches of the current window. These modified patches then serve as the basis for the subsequent inpainting steps. Although the sharing process introduces slight modifications into the shares, the already-inpainted block with this slight noise becomes the input of the next inpainting round, from which the next filling order and best-matching block are selected. In this way, meaningful shares are obtained in a visually plausible manner.
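The “least modification” idea in Step 6 can be illustrated with a much-simplified $(n,n)$ stand-in. This is emphatically not the actual LC-based ISS, which supports a general $(k,n)$ threshold with progressive recovery; here the inpainted candidate values are merely nudged so that their sum is congruent to the secret pixel modulo $m$, spreading the correction over the shares:

```python
import numpy as np

def share_pixel_nn(secret, candidates, m=256):
    """Simplified (n, n) stand-in for the sharing of one pixel in Step 6.

    candidates: the n inpainted candidate values for this position; they
    are adjusted so that their sum mod m equals the secret pixel.
    """
    cand = np.asarray(candidates, dtype=int)
    residual = (int(secret) - cand.sum()) % m
    # spread the correction as evenly as possible over the n shares
    n = len(cand)
    step, extra = divmod(residual, n)
    shares = (cand + step) % m
    shares[:extra] = (shares[:extra] + 1) % m
    return shares

def recover_pixel_nn(shares, m=256):
    """Lossless recovery from all n shares."""
    return int(np.sum(shares) % m)
```

In this toy variant the per-pixel change can still be noticeable; the real LC-based scheme constrains the share values further so that they stay close to the inpainted candidates.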

## 4. Experimental Results and Analyses

#### 4.1. Image Illustration

- The target part is automatically and adaptively detected, and then is successfully inpainted into the visually plausible shares.
- Each share looks reasonable to the human eye, and thus is meaningful.
- The secret cannot be decrypted when any $t<k$ shares are collected; when any $t\ge k$ shares are collected, the secret image is progressively decrypted; when all n shares are collected, the secret image is losslessly decrypted.
- An APISS for the $(k,n)$ threshold is achieved by the introduced scheme.

#### 4.2. Image Quality

- $l\left(x,y\right)=\frac{2{\mu}_{x}{\mu}_{y}+{C}_{1}}{{\mu}_{x}^{2}+{\mu}_{y}^{2}+{C}_{1}}$
- $c\left(x,y\right)=\frac{2{\sigma}_{x}{\sigma}_{y}+{C}_{2}}{{\sigma}_{x}^{2}+{\sigma}_{y}^{2}+{C}_{2}}$
- $s\left(x,y\right)=\frac{{\sigma}_{xy}+{C}_{3}}{{\sigma}_{x}{\sigma}_{y}+{C}_{3}}$
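A minimal sketch of how the three components combine into SSIM. Wang et al. average these terms over a sliding Gaussian window; this version computes global statistics once over the whole image, with the conventional choices $C_1=(K_1 L)^2$, $C_2=(K_2 L)^2$, and $C_3=C_2/2$:

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM: product of luminance, contrast, structure terms."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2                          # conventional choice
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)   # luminance
    c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)   # contrast
    s = (sxy + C3) / (sx * sy + C3)                      # structure
    return l * c * s
```

Identical images score 1; the windowed version used for Table 1 additionally averages local scores, which penalizes localized distortions more fairly.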

#### 4.3. Comparisons with Related Schemes

- Yan et al.’s scheme requires the target part to be selected manually, which is labor-intensive, especially for targets with irregular shapes, and is therefore unsuitable for batch processing. In contrast, our scheme automatically and adaptively detects and removes the secret target part, making it suitable for processing large-scale image sets.
- The target part selected by our scheme is less precise than that of Yan et al.’s scheme, because they select the target part manually. This weakness could be mitigated by combining salience detection with object segmentation.

#### 4.4. Extensions and Discussions

- The important information of the input image can be selected by other techniques according to practical requirements, such as edge detection and object segmentation.
- Salience detection on multiple secret images with close content can be utilized to improve the salience detection accuracy.
- Some other inpainting methods, such as the PDE-based method, can also be applied to the introduced scheme.
- We can adopt different ISS schemes, different filling order selection methods, or different $(k,n)$ threshold extension methods to achieve different features.
- Our method can be applied to grayscale images. If a binary image inpainting algorithm is employed, it may be applied to binary images as well.
- More images could be used to test the scheme. For images with multiple objects or more complex backgrounds, the advantage of adaptive PISS and the effectiveness of the detection depend on the accuracy of the adopted saliency detection algorithm.

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Yan, X.; Lu, Y.; Liu, L.; Wan, S.; Ding, W.; Liu, H. Exploiting the Homomorphic Property of Visual Cryptography. *Int. J. Digit. Crime Forensics* **2017**, *9*, 45–56.
2. Belazi, A.; El-Latif, A.A.A. A simple yet efficient S-box method based on chaotic sine map. *Opt. Int. J. Light Electron Opt.* **2017**, *130*, 1438–1444.
3. Cheng, Y.; Fu, Z.; Yu, B. Improved Visual Secret Sharing Scheme for QR Code Applications. *IEEE Trans. Inf. Forensics Secur.* **2018**, *13*, 2393–2403.
4. Wang, G.; Liu, F.; Yan, W.Q. Basic Visual Cryptography Using Braille. *Int. J. Digit. Crime Forensics* **2016**, *8*, 85–93.
5. Naor, M.; Shamir, A. Visual Cryptography. In *Advances in Cryptology—EUROCRYPT’94*; Lecture Notes in Computer Science; Springer: Perugia, Italy, 1995; pp. 1–12.
6. Shamir, A. How to share a secret. *Commun. ACM* **1979**, *22*, 612–613.
7. Yan, X.; Liu, L.; Lu, Y.; Gong, Q. Security analysis and classification of image secret sharing. *J. Inf. Secur. Appl.* **2019**, *47*, 208–216.
8. Yan, X.; Li, J.; Lu, Y.; Liu, L.; Yang, G.; Chen, H. Relations between Secret Sharing and Secret Image Sharing. In *Security with Intelligent Computing and Big-data Services*; Yang, C.N., Peng, S.L., Jain, L.C., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 79–93.
9. Ding, W.; Liu, K.; Yan, X.; Wang, H.; Liu, L.; Gong, Q. An Image Secret Sharing Method Based on Matrix Theory. *Symmetry* **2018**, *10*, 530.
10. Zhou, Z.; Arce, G.R.; Di Crescenzo, G. Halftone visual cryptography. *IEEE Trans. Image Process.* **2006**, *15*, 2441–2453.
11. Wang, Z.; Arce, G.R.; Di Crescenzo, G. Halftone visual cryptography via error diffusion. *IEEE Trans. Inf. Forensics Secur.* **2009**, *4*, 383–396.
12. Liu, F.; Wu, C. Embedded extended visual cryptography schemes. *IEEE Trans. Inf. Forensics Secur.* **2011**, *6*, 307–322.
13. Weir, J.; Yan, W. A comprehensive study of visual cryptography. In *Transactions on DHMS V*; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6010, pp. 70–105.
14. Yan, X.; Lu, Y.; Liu, L. General Meaningful Shadow Construction in Secret Image Sharing. *IEEE Access* **2018**, *6*, 45246–45255.
15. Guo, T.; Jiao, J.; Liu, F.; Wang, W. On the Pixel Expansion of Visual Cryptography Scheme. *Int. J. Digit. Crime Forensics* **2017**, *9*, 38–44.
16. Yan, X.; Liu, X.; Yang, C.N. An enhanced threshold visual secret sharing based on random grids. *J. Real-Time Image Process.* **2018**, *14*, 61–73.
17. Yan, X.; Wang, S.; Niu, X.; Yang, C.N. Halftone visual cryptography with minimum auxiliary black pixels and uniform image quality. *Digit. Signal Process.* **2015**, *38*, 53–65.
18. Thien, C.C.; Lin, J.C. Secret image sharing. *Comput. Graph.* **2002**, *26*, 765–770.
19. Yang, C.N.; Ciou, C.B. Image secret sharing method with two-decoding-options: Lossless recovery and previewing capability. *Image Vis. Comput.* **2010**, *28*, 1600–1610.
20. Bao, L.; Yi, S.; Zhou, Y. Combination of Sharing Matrix and Image Encryption for Lossless (k,n)-Secret Image Sharing. *IEEE Trans. Image Process.* **2017**, *26*, 5618–5631.
21. Liu, Y.; Yang, C.; Wang, Y.; Zhu, L.; Ji, W. Cheating identifiable secret sharing scheme using symmetric bivariate polynomial. *Inf. Sci.* **2018**, *453*, 21–29.
22. Liu, L.; Lu, Y.; Yan, X.; Wang, H. Greyscale-images-oriented progressive secret sharing based on the linear congruence equation. *Multimed. Tools Appl.* **2017**, *77*, 20569–20596.
23. Yan, X.; Lu, Y.; Liu, L.; Wang, S. Partial secret image sharing for (k,n) threshold based on image inpainting. *J. Vis. Commun. Image Represent.* **2018**, *50*, 135–144.
24. Fu, H.; Cao, X.; Tu, Z. Cluster-Based Co-Saliency Detection. *IEEE Trans. Image Process.* **2013**, *22*, 3766–3778.
25. Otsu, N. A threshold selection method from gray-level histograms. *Automatica* **1975**, *11*, 23–27.
26. Criminisi, A.; Perez, P.; Toyama, K. Region filling and object removal by exemplar-based image inpainting. *IEEE Trans. Image Process.* **2004**, *13*, 1200–1212.
27. Shen, W.; Song, X.; Niu, X. Hiding Traces of Image Inpainting. *Res. J. Appl. Sci. Eng. Technol.* **2012**, *4*, 4962–4968.
28. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. *IEEE Trans. Image Process.* **2004**, *13*, 600–612.

**Figure 2.** Experimental result of the image saliency detection. (**a**) The secret image S; (**b**) image saliency; (**c**) automatically selected target part by Otsu’s threshold operation.

**Figure 3.** An example of the inpainted image obtained using the approach of Criminisi et al. (**a**) The secret image S; (**b**) the same input cover image, denoted by C, through selecting and removing the secret target part with green color from S; (**c**) the general notations; (**d**) directly inpainted result.

**Figure 5.** Experimental results when directly applying linear congruence (LC)-based image secret sharing (ISS) for the $(3,3)$ threshold. (**a**) The secret image S; (**b**) the same input cover image, denoted by C, through selecting and removing the secret target part with green color from S; (**c**–**e**) three shares $S{C}_{1},S{C}_{2}$, and $S{C}_{3}$; (**f**,**g**) decrypted results by any two or more shares.

**Figure 6.** Experimental result for the introduced scheme for the threshold $(3,4)$. (**a**) The secret image S; (**b**) the same input cover image through automatically selecting and removing the secret target part with green color from S using salience detection method on S and Otsu’s threshold operation; (**c**–**f**) four shares $S{C}_{1},S{C}_{2},S{C}_{3}$, and $S{C}_{4}$; (**g**–**i**) decrypted results by any two or more shares.

**Figure 7.** Experimental result for the introduced scheme for the threshold $(3,3)$. (**a**) The secret image S; (**b**) the same input cover image through automatically selecting and removing the secret target part with green color from S using salience detection method on S and Otsu’s threshold operation; (**c**–**e**) three shares $S{C}_{1},S{C}_{2}$, and $S{C}_{3}$; (**f**,**g**) decrypted results by any two or more shares.

**Figure 8.** Experimental result for the introduced scheme for the threshold $(3,4)$. (**a**) The secret image S; (**b**) the same input cover image through automatically selecting and removing the secret target part with green color from S using salience detection method on S and Otsu’s threshold operation; (**c**–**f**) four shares $S{C}_{1},S{C}_{2},S{C}_{3}$, and $S{C}_{4}$; (**g**–**i**) decrypted results by any two or more shares.

**Figure 9.** Experimental results of the scheme of Yan et al. [23] for the $(k,n)$ threshold, where $k=3$, $n=4$. (**a**) The secret image S; (**b**) the same input cover image through manually selecting and removing the secret target part with green color from S; (**c**–**f**) four shares $S{C}_{1},S{C}_{2},S{C}_{3}$, and $S{C}_{4}$; (**g**–**i**) decrypted results by any two or more shares.

**Table 1.**Average peak signal-to-noise-ratio (PSNR) and structural similarity index measure (SSIM) between $S{C}_{i}$ and $D{C}_{i}$ on the target area.

| Threshold (k,n) | PSNR | SSIM |
|---|---|---|
| (2,2) | 20.7750 | 0.8396 |
| (2,3) | 20.9817 | 0.8510 |
| (3,3) | 22.3658 | 0.8939 |
| (2,4) | 20.8117 | 0.8376 |
| (3,4) | 22.5924 | 0.8958 |
| (4,4) | 24.1274 | 0.9153 |
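The PSNR values in Table 1 follow the standard definition for 8-bit images; a minimal sketch:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10 * np.log10(peak ** 2 / mse)
```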

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yan, X.; Sun, L.; Lu, Y.; Yang, G.
Adaptive Partial Image Secret Sharing. *Symmetry* **2020**, *12*, 703.
https://doi.org/10.3390/sym12050703
