Chinese Physics Letters, 2020, Vol. 37, No. 4, Article code 044201 Scanning-Position Error-Correction Algorithm in Dual-Wavelength Ptychographic Microscopy * Rui Ma (马锐)1,2, Shu-Yuan Zhang (张书源)1, Tian-Hao Ruan (阮天昊)1, Ye Tao (陶冶)1, Hua-Ying Wang (王华英)3, Yi-Shi Shi (史祎诗)1,2** Affiliations 1School of Optoelectronics, University of Chinese Academy of Sciences, Beijing 100049 2Center for Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049 3College of Mathematics and Physics, Hebei University of Engineering, Handan 056038 Received 17 November 2019, online 24 March 2020 *Supported by the National Natural Science Foundation of China under Grant No. 61575197, the Fusion Foundation of Research and Education of CAS, the Youth Innovation Promotion Association of CAS (2017489), the University of Chinese Academy of Sciences, and the Natural Science Foundation of Hebei Province of China (F2018402285).
**Corresponding author. Email: optsys@gmail.com
Citation Text: Ma R, Zhang S Y, Ruan T H, Tao Y and Wang H Y et al 2020 Chin. Phys. Lett. 37 044201    Abstract We propose a new algorithm for correcting scanning-position errors in ptychographic microscopy. Since the scanning positions are varied mechanically by moving the illuminating probes laterally, the scanning errors accumulate over multiple positions, greatly reducing the reconstruction quality of a sample. To correct the scanning errors, we apply correlation analysis to the diffractive data, combined with the additional constraint of dual wavelengths. This significantly improves the quality of ptychographic microscopy. Optical experiments verify the proposed algorithm on two samples: a resolution target and a fibroblast. DOI:10.1088/0256-307X/37/4/044201 PACS:42.30.Kq, 42.30.Sy, 42.30.Va © 2020 Chinese Physics Society Article Text Ptychography[1] developed from lensless coherent diffractive imaging (CDI) technology by recording multiple diffractive patterns of an object with overlapping illuminated regions.[2] Ptychography is widely used in the visible band,[3] in x-ray and other microscopic imaging fields,[4] and a series of related iterative recovery algorithms has been developed.
Several methods have been developed to correct the translation errors caused by inaccuracy of the translation stage, including annealing methods,[5] the cross-correlation technique,[6] and the conjugate gradient algorithm.[7] Meanwhile, other methods introduce multiple wavelengths into optical metrology and phase recovery.[8–11] In our previous work, we proposed the theory of generalized ptychography,[12] and we also studied the key parameters of an illuminating beam, multi-wavelength experiments and a parallel algorithm for optical ptychography.[13–15] Furthermore, we applied ptychography to optical image encryption, 3D information hiding and optical watermarking.[16] In ptychographic microscopy, because the object is usually magnified dozens of times, the field of view is more limited by the size of the charge coupled device (CCD) than in ptychographic imaging. The microscopic quality is greatly reduced by ptychographic scanning errors, which accumulate over multiple illuminating positions. Since ptychographic scanning is often realized by lateral movement of the illuminating probes, these mechanical errors are nearly inevitable. To solve this problem, we propose an error-correction algorithm in the present study. It effectively decreases the errors and improves the performance of ptychographic microscopy by analyzing the correlation of the redundant data, combined with the extra constraint of dual wavelengths. Our algorithm is verified by optical experiments on samples including a resolution target and a fibroblast.
Fig. 1. Setup of dual-wavelength ptychographic microscopy.
In dual-wavelength ptychographic microscopy, shown in Fig. 1, we capture two diffractive patterns with different wavelengths at each scanning position for a complex-amplitude sample. A partial amplitude distribution of the sample can be reconstructed from the corresponding diffractive pattern by using the Gerchberg–Saxton (GS) iterative algorithm. If the sample is treated as a pure amplitude object, it can even be retrieved from a single diffractive pattern after only several iterations of the GS algorithm. Although the retrieved amplitude is not of high definition, it can be used to calculate the overlapping extent of the adjacent diffractions generated by ptychographic scanning. Based on the overlapping extent, we can further estimate the scanning errors. In the optical experiments of dual-wavelength ptychographic microscopy, we collect a series of diffractive patterns in sequence according to the scanning path of $X \times Y$ ($X$ and $Y$ are the numbers of rows and columns, respectively). The intensity of the diffractive pattern acquired with wavelength $\lambda$ is recorded as $I_{(x,y),\lambda} \left(u \right)$ ($x=1,2,3,\ldots, X$, $y=1,2,3,\ldots, Y$), where $x$ and $y$ are the row and column coordinates of the corresponding diffractive pattern. The scanning length of each moving step is $D$ and the pixel size of the CCD is $m$, both in units of µm. The theoretical pixel shift of the sample for each movement is $r_{0} =D/m$. In the dual-wavelength experiments, a baffle is used to block the blue light so as to obtain the diffractive pattern of the green light, and vice versa, until all of the diffractive patterns are collected. The proposed algorithm, shown in Fig. 2, is described in the following.
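The rough single-pattern amplitude retrieval described above can be sketched in Python. This is a minimal illustration rather than the authors' code: a plain FFT stands in for the actual Fresnel propagation of the setup, and the function name `gs_amplitude` is ours.

```python
import numpy as np

def gs_amplitude(intensity, iters=3):
    """Rough amplitude estimate from a single diffraction intensity via a few
    GS iterations, assuming a pure-amplitude sample. An FFT replaces the
    Fresnel transform of the actual setup for simplicity."""
    meas = np.sqrt(intensity)                    # measured diffraction amplitude
    field = np.fft.ifft2(meas)                   # back-propagate with zero initial phase
    for _ in range(iters):
        field = np.abs(field)                    # pure-amplitude constraint at the sample
        far = np.fft.fft2(field)
        far = meas * np.exp(1j * np.angle(far))  # enforce measured amplitude at detector
        field = np.fft.ifft2(far)
    return np.abs(field)
```

About three iterations suffice here because, as the text notes, the estimate only needs to support the correlation analysis of overlaps, not serve as a final image.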
Fig. 2. Flow chart of error-correction algorithm in dual-wavelength ptychographic microscopy.
Step 1: Calculate the amplitude distribution from a single diffractive pattern alone. The amplitude functions of two adjacent positions can be expressed as $$ {\it\Psi}_{(x-1,y),\lambda_{1} } (r)={\big| }\wp \langle {\sqrt {I_{(x-1,y),\lambda_{1} } (u)} } \rangle {\big| },~~ \tag {1} $$ $$ {\it\Psi}_{(x,y),\lambda_{1} } (r)={\big|}\wp \langle {\sqrt {I_{(x,y),\lambda_{1} } (u)} } \rangle {\big|},~~ \tag {2} $$ where $r$ is the coordinate vector at the sample plane, and $\wp \langle \rangle$ represents the function obtained by the GS algorithm after about three iterations, assuming the sample to be a pure amplitude distribution. ${\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)$ stands for the amplitude distribution corresponding to the scanning position $(x, y)$. Step 2: Take the overlapped parts of ${\it\Psi}_{(x-1,y),\lambda_{1} } \left(r \right)$ and ${\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)$ and calculate their correlation coefficient. We need to make sure that the overlapped parts cover the same region of ${\it\Psi}_{(x-1,y),\lambda_{1} } \left(r \right)$ and ${\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)$. Set a lateral deviation for the overlapped part and vary it point by point until all of the correlation coefficients to be compared have been calculated.
By continuously moving the partial region of ${\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)$ with the deviation $\Delta r$ and calculating its correlation coefficient with the corresponding part of ${\it\Psi}_{(x-1,y),\lambda_{1} } \left(r \right)$, we obtain the correlation-coefficient matrix of two adjacent positions as follows: $$\begin{alignat}{1} &r({\it\Psi}_{(x-1,y),\lambda_{1} } (r),{\it\Psi}_{(x,y),\lambda_{1} } (r+\Delta r))\\ =&\frac{{\rm cov}({\it\Psi}_{(x-1,y),\lambda_{1} } '(r),{\it\Psi}_{(x,y),\lambda_{1} } '(r+\Delta r))}{\sqrt {{\rm Var}({\it\Psi}_{(x-1,y),\lambda_{1} } '(r)){\rm Var}({\it\Psi}_{(x,y),\lambda_{1} } '(r+\Delta r))} },~~ \tag {3} \end{alignat} $$ where ${\it\Psi}_{(x-1,y), \lambda_{1}} '(r)$ and ${\it\Psi}_{(x,y),\lambda_{1}} '(r+\Delta r)$ represent the overlapped regions taken from ${\it\Psi}_{(x-1,y),\lambda_{1} } \left(r \right)$ and ${\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)$, respectively. Step 3: Calculate the scanning errors of the lateral movements. Because the vertical scanning errors have been calculated in steps 1 and 2, after calculating the scanning errors between the lateral positions of the first row, we can find the optimized coordinates of all scanning positions relative to the first position.
$$ {\it\Psi}_{(x,y-1),\lambda_{1} } \left(r \right)={\big| }\wp \left\langle {\sqrt {I_{(x,y-1),\lambda_{1} } \left(u \right)} } \right\rangle {\big| },~~ \tag {4} $$ $$ {\it\Psi}_{(x,y),\lambda_{1} } \left(r \right)={\big| }\wp \left\langle {\sqrt {I_{(x,y),\lambda_{1} } \left(u \right)} } \right\rangle {\big| },~~ \tag {5} $$ $$\begin{alignat}{1} &r({\it\Psi}_{(x,y-1),\lambda_{1} } (r),{\it\Psi}_{(x,y),\lambda_{1} } (r+\Delta r))\\ =&\frac{{\rm cov}({\it\Psi}_{(x,y-1),\lambda_{1} } '(r),{\it\Psi}_{(x,y),\lambda_{1} } '(r+\Delta r))}{\sqrt {{\rm Var}({\it\Psi}_{(x,y-1),\lambda_{1} } '(r)){\rm Var}({\it\Psi}_{(x,y),\lambda_{1} } '(r+\Delta r))} }.~~ \tag {6} \end{alignat} $$ Step 4: Repeat steps 1 to 3 for the diffractive patterns acquired at wavelength $\lambda_{2}$. Step 5: Determine the optimized coordinates of each scanning position of the object uniquely. To effectively extract the positional deviation of the correlation peak from the correlation-coefficient matrices of steps 2 and 3, we perform an integer optimization on the correlation-coefficient matrices acquired at the two wavelengths. This also enhances the robustness of the algorithm.
$$ R_{(x,y)} (a,b)=0,~~{\rm for}~x =1,~ y=1,~~ \tag {7} $$ $$\begin{alignat}{1} \!\!\!\!\!\!&R_{(x,y)} (a,b)\\ \!\!\!\!\!\!=\,&{\rm find}\{ \max [ \alpha \cdot r({\it\Psi}_{(x-1,y),\lambda_{1} } (r),{\it\Psi}_{(x,y),\lambda_{1} } (r+\Delta r))\\ \!\!\!\!\!\!&+(1-\alpha)r({\it\Psi}_{(x-1,y),\lambda_{2} } (r),{\it\Psi}_{(x,y),\lambda_{2} } (r+\Delta r)) ] \},~~ \tag {8} \end{alignat} $$ $$\begin{alignat}{1} \!\!\!\!\!\!&R_{(x,y)} (a,b)\\ \!\!\!\!\!\!=&{\rm find}\{ \max [ \alpha \cdot r({\it\Psi}_{(x,y-1),\lambda_{1} } (r),{\it\Psi}_{(x,y),\lambda_{1} } (r+\Delta r)) \\ \!\!\!\!\!\!&+(1-\alpha)r({\it\Psi}_{(x,y-1),\lambda_{2} } (r),{\it\Psi}_{(x,y),\lambda_{2} } (r+\Delta r)) ] \},~~ \tag {9} \end{alignat} $$ where $\alpha \in \left({0,1} \right)$, and ${\rm find}\!\left\{ \right\}$ returns the coordinates of the position at which the maximum is found. $R_{(x,y)} (a,b)$ is the position error of the scanning position $(x, y)$, where $a$ and $b$ represent the horizontal and longitudinal errors of the translation stage, respectively. Step 6: Calculate the cumulative errors of each scanning position relative to the first scanning position as follows: $$ R_{(x,y)} (a,b)=0,~~{\rm for}~x =1,~ y=1,~~ \tag {10} $$ $$ R_{(x,y)} (a,b)=R_{(x-1,y)} (a,b)+R_{(x,y)} (a,b),~~ \tag {11} $$ $$ R_{(x,y)} (a,b)=R_{(x,y-1)} (a,b)+R_{(x,y)} (a,b).~~ \tag {12} $$ Step 7: Calculate the position-error matrix and introduce it into the ePIE (extended ptychographic iterative engine) for reconstructing the sample. In each iteration, the ePIE algorithm retrieves the sample function using the diffraction data of only one wavelength; the updated function is then introduced into the diffraction data of the other wavelength for further updating.
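Steps 2–5 above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the amplitude maps from step 1 are assumed to be same-sized arrays, the names `overlap`, `estimate_deviation` and `search` are ours, and the nominal shift plays the role of $r_0 = D/m$.

```python
import numpy as np

def corr_coeff(a, b):
    """Pearson correlation coefficient of two equally sized patches, Eq. (3)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def overlap(a, b, sy, sx):
    """Overlapped regions of frame a and frame b when b is shifted by (sy, sx)."""
    H, W = a.shape
    h, w = H - abs(sy), W - abs(sx)
    pa = a[max(sy, 0):max(sy, 0) + h, max(sx, 0):max(sx, 0) + w]
    pb = b[max(-sy, 0):max(-sy, 0) + h, max(-sx, 0):max(-sx, 0) + w]
    return pa, pb

def estimate_deviation(prev1, curr1, prev2, curr2, nominal, alpha=0.5, search=3):
    """Scan integer deviations around the nominal shift and return the one that
    maximizes the dual-wavelength weighted correlation, Eqs. (8) and (9)."""
    best, best_dev = -np.inf, (0, 0)
    for da in range(-search, search + 1):
        for db in range(-search, search + 1):
            sy, sx = nominal[0] + da, nominal[1] + db
            pa1, pb1 = overlap(prev1, curr1, sy, sx)
            pa2, pb2 = overlap(prev2, curr2, sy, sx)
            score = alpha * corr_coeff(pa1, pb1) + (1 - alpha) * corr_coeff(pa2, pb2)
            if score > best:
                best, best_dev = score, (da, db)
    return best_dev
```

The cumulative errors of step 6 then follow by summing the per-step deviations along the scanning path, i.e. adding each position's deviation to that of its predecessor in the row or column.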
The steps for the $k$th iteration of the ePIE algorithm are as follows: (1) Start with a guessed function of the sample $O_{(x,y),k} \left({r +R_{(x,y)} {\rm (a,b)}} \right)$ and a guessed function of the illuminating probe $P_{\lambda_{1}, k} \left(r \right)$. The function of the exit wave can be written as $$\begin{alignat}{1} \!\!\!\!\!\!\!\!\!\!\psi_{(x,y),k} \left(r \right)=P_{\lambda_{1},k} \left(r \right)O_{(x,y),k} \left({r +R_{(x,y)} {\rm (a,b)}} \right).~~ \tag {13} \end{alignat} $$ (2) The exit wave propagates to the detector. The field at the detector plane is $$ {\it\Phi}_{(x,y), k} \left(u \right)=F\left\langle {\psi_{(x,y),k} \left(r \right)} \right\rangle ,~~ \tag {14} $$ where $u$ denotes the coordinate vector at the detector plane, and $F\left\langle \right\rangle$ represents the Fresnel transform. (3) Replace the amplitude of the guessed diffractive pattern with the known amplitude of the measured intensity, $$ {\it\Phi}_{(x,y),k} '(u)=\sqrt {I_{(x,y),\lambda_{1} } (u)} \frac{{\it\Phi}_{(x,y), k} (u)}{| {{\it\Phi}_{(x,y),k} (u)} |}.~~ \tag {15} $$ (4) Propagate the field ${\it\Phi}_{(x,y),k} '(u)$ back to the sample plane,
$$ \psi_{(x,y),k} '(r)=F^{-1}\langle {{\it\Phi}_{(x,y),k} '(u)} \rangle .~~ \tag {16} $$ (5) Update the object function using the following equation: $$\begin{alignat}{1} \!\!\!\!\!\!O_{(x,y),k} &({r+R_{(x,y)} {\rm (a,b)}})=O_{(x,y),k} ({r+R_{(x,y)} {\rm (a,b)}}) \\ \!\!\!\!\!\!&+\frac{P_{\lambda_{1},k}^{\ast } (r)}{| {P_{\lambda_{1},k} (r)} |_{\max }^{2} }(\psi_{(x,y),k} '(r)-\psi_{(x,y),k} (r)).~~ \tag {17} \end{alignat} $$ (6) Update the probe function with the newly updated function of the sample, $$\begin{align} P_{\lambda_{1},k+1} (r)=\,&P_{\lambda_{1},k} (r)+\frac{O_{k}^{\ast } (r+R_{(x,y)} (a,b))}{| {O_{k} (r+R_{(x,y)} (a,b))} |_{\max }^{2} }\\ &\cdot(\psi_{(x,y),k} '(r)-\psi_{(x,y),k} (r)).~~ \tag {18} \end{align} $$ (7) The sample function updated with wavelength $\lambda_{1}$ is regarded as the initial guess function for the update with wavelength $\lambda_{2}$, and the exit wave can be written as $$ \psi_{(x,y),k} (r)=P_{\lambda_{2},k} (r)O_{(x,y),k} ({r +R_{(x,y)} {\rm (a,b)}}).~~ \tag {19} $$ (8) The exit wave propagates to the detector. The field at the detector plane is $$ {\it\Phi}_{(x,y), k} (u)=F\langle {\psi_{(x,y),k} (r)} \rangle .~~ \tag {20} $$ (9) Replace the amplitude of the guessed diffractive pattern with the known amplitude of the measured intensity, $$ {\it\Phi}_{(x,y),k} '(u)=\sqrt {I_{(x,y),\lambda_{2} } (u)} \frac{{\it\Phi}_{(x,y), k} (u)}{| {{\it\Phi}_{(x,y),k} (u)} |}.~~ \tag {21} $$ (10) Propagate the field ${\it\Phi}_{(x,y),k} '(u)$ back to the sample plane, $$ \psi_{(x,y),k} '(r)=F^{-1}\langle {{\it\Phi}_{(x,y),k} '(u)} \rangle.~~ \tag {22} $$ (11) Update the sample function using the following equation: $$\begin{alignat}{1} &O_{(x,y),k+1}({r+R_{(x,y)} {\rm (a,b)}})\\ =&O_{(x,y),k} ({r+R_{(x,y)} {\rm (a,b)}}) \\ &+\frac{P_{\lambda_{2},k}^{\ast } (r)}{| {P_{\lambda_{2},k} (r)} |_{\max }^{2} }(\psi_{(x,y),k} '(r)-\psi_{(x,y),k} (r)).~~ \tag {23} \end{alignat} $$ (12) Update the probe function using the newly updated function of the sample,
$$\begin{align} P_{\lambda_{2},k+1} (r)=\,&P_{\lambda_{2},k} (r)+\frac{O_{k}^{\ast } (r+R_{(x,y)} (a,b))}{| {O_{k} (r+R_{(x,y)} (a,b))} |_{\max }^{2} }\\ &\cdot(\psi_{(x,y),k} '(r)-\psi_{(x,y),k} (r)).~~ \tag {24} \end{align} $$ Experiment 1: resolution target.—The setup of our optical experiments is shown in Fig. 1. We first use the USAF1951 resolution target as the sample to test our method. The laser beam passes through a beam splitter before being coupled into the fiber coupler, which is fitted with a filter and a lens. The output light is then delivered to the pinhole through a single-mode fiber, which is installed very close to the pinhole. As the core size of the fiber is 4 µm with NA = 0.12, the fiber generates a spherical wave projected onto the resolution target. The target is mounted on an $x/y$ translation stage, and its diffractive pattern arrives at the CCD. To utilize the dual wavelengths, we block one laser beam with a baffle and collect the diffractive pattern of the remaining beam, then swap the baffle to the other laser and record the second pattern. After collecting the diffractive patterns of the two beams, we move the sample with the translation stage to another position and repeat the collection, until we have the patterns of all scanning positions. In our experiments, the distance between the pinhole and the target is 3 mm, and the CCD is placed 25 mm downstream from the sample plane. The wavelengths of the two lasers are 532.8 nm and 450 nm, respectively. The diameter of the pinhole is 500 µm. A two-dimensional translation stage (Suruga Seiki, one-way positioning accuracy 5 µm) and an 8-bit CCD (IMPEX igv-b4020, maximum resolution $4032 \times 2688$, pixel size 9 µm) are employed. The diffractive patterns are recorded with $1800 \times 1800$ pixels, and the scanning path is $4 \times 4$.
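For reference, a single-wavelength pass of the ePIE update described above (steps (1)–(6); steps (7)–(12) repeat it with the second wavelength) can be sketched in Python. This is a schematic reimplementation under simplifying assumptions: a plain FFT replaces the Fresnel transform $F\langle\,\rangle$, the corrected position $R_{(x,y)}(a,b)$ enters only as an integer offset of the probe window, and the name `epie_update` is ours.

```python
import numpy as np

def epie_update(obj, probe, meas_amp, pos):
    """One ePIE object/probe update at a single corrected scanning position,
    following Eqs. (13)-(18). obj is modified in place.
    obj: complex sample estimate; probe: complex probe (window-sized);
    meas_amp: measured diffraction amplitude sqrt(I); pos: corrected
    top-left pixel coordinate of the probe window (nominal shift + error)."""
    sy, sx = pos
    h, w = probe.shape
    view = obj[sy:sy + h, sx:sx + w]                 # object region under the probe
    exit_wave = probe * view                         # Eq. (13)
    far = np.fft.fft2(exit_wave)                     # Eq. (14), FFT as propagator
    far = meas_amp * np.exp(1j * np.angle(far))      # Eq. (15), amplitude replacement
    new_exit = np.fft.ifft2(far)                     # Eq. (16)
    diff = new_exit - exit_wave
    view += np.conj(probe) / (np.abs(probe) ** 2).max() * diff   # Eq. (17)
    # Eq. (18): following the text, the freshly updated object view is used here
    probe = probe + np.conj(view) / (np.abs(view) ** 2).max() * diff
    return obj, probe
```

When the stored amplitude already matches the current estimate, `diff` vanishes and both functions are left unchanged, which is how convergence shows up in this update rule.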
Fig. 3. Experimental results of the resolution target. (a)–(d) The retrieved amplitudes of the target; (e)–(h) the retrieved phase distributions. Here (a), (b), (e), (f) show the reconstructions with single-wavelength illumination; (c), (d), (g), (h) show the results with dual-wavelength illumination; (a), (e), (c), (g) show the recovered results without correction; (b), (f), (d), (h) show the results with correction. The area enlarged by the red outline in (a)–(h) is group 7 of USAF1951. (i) The result acquired at 100 times magnification with an optical microscope; the width of each black bar in the red outline is 7.81 µm. (j) The mean-error curve versus the number of iterations. Magenta, green, red and black correspond to (a)–(d), respectively. The embedded numbers in (j) are the error values at the 60th iteration.
Fig. 4. Experimental results of fibroblast reconstruction. (a)–(d) The amplitude of the retrieved sample; (e)–(h) the phase of the reconstructed sample. Here (a), (b), (e), (f) are the retrieved results with single-wavelength illumination; (c), (d), (g), (h) are the reconstructions with dual-wavelength illumination; (a), (e), (c), (g) are the recovered results without correction; (b), (f), (d), (h) are the reconstructions with error correction. (i) Result at 100 times magnification using an optical microscope. (j) The error-reduction curve. Magenta, green, red and black correspond to (a)–(d), respectively. The numbers embedded in (j) are the error values at the 60th iteration.
The experimental results are shown in Fig. 3. To test our algorithm, we retrieve the sample in four different cases: (1) single-wavelength illumination without error correction, (2) single-wavelength illumination with error correction, (3) dual-wavelength illumination without error correction, (4) dual-wavelength illumination with error correction. Figures 3(a), 3(b), 3(e), 3(f) and 3(c), 3(d), 3(g), 3(h) are the reconstructions with single-wavelength and dual-wavelength illumination, respectively. The quality of the amplitudes and phases retrieved with dual wavelengths is better than with a single wavelength, and more sample details are recovered with dual wavelengths. Figures 3(a), 3(e), 3(c), 3(g) and 3(b), 3(f), 3(d), 3(h) are reconstructions without and with error correction, respectively. The amplitude distribution without correction is quite blurred, and the phase is almost invisible. With correction, the amplitude resolves group 7 of USAF1951, and the phase distribution is much clearer. In Figs. 3(a)–3(h), the area enlarged by the red outline is group 7 of the resolution target. Figure 3(i) is obtained at 100 times magnification with an optical microscope; the width of each black bar in the red outline is 7.81 µm. In addition, Fig. 3(j) gives the variation of the mean error with iterations in the four cases. The experiments show that the contrast and clarity of the image obtained with our error-correction algorithm are greatly improved; in other words, our algorithm significantly enhances the quality of the reconstructions. Experiment 2: fibroblast.—We also use dyed cells as the sample. The optical path and parameter settings are the same as in experiment 1, and the experimental equipment is also the same except for the CCD. In this case, we use an ordinary monochrome industrial camera (DAHENG MER-2000, 8-bit, maximum resolution $2500 \times 2500$ pixels, pixel size 2.4 µm).
The size of the collected diffractive patterns is $1536 \times 1536$ pixels, and the scanning path is $5 \times 5$. In experiment 1, the sample is a standard resolution plate with obvious stripe texture and edge information: a simple black-and-white gray-scale distribution with strong contrast and clear boundaries. To test the performance of our algorithm further, we use standard fibroblasts, whose distribution is more complicated, as the sample. Through the experiments with position-error correction, we demonstrate that even an ordinary industrial camera can obtain satisfactory results in dual-wavelength ptychographic microscopy. The experimental results for the fibroblast are shown in Fig. 4. The amplitude and phase in Figs. 4(a) and 4(e) are almost invisible, and the outline and details are nearly lost. Figures 4(b) and 4(f) are the reconstructions using single-wavelength illumination with error correction; part of the outline information of the fibroblast is visible with much less noise, but the clarity of the amplitude and the details of the phase are still not ideal. Figures 4(c) and 4(g) are the results of dual-wavelength illumination without correction. The quality of the amplitude obtained with dual wavelengths without correction is similar to that with a single wavelength with correction, while the phase reconstructed with dual wavelengths without correction is much better than the latter. This suggests that dual-wavelength illumination is more tolerant to position errors than single-wavelength illumination, especially for the phase reconstruction. Figures 4(d) and 4(h) are the results of dual wavelengths with correction, in which the retrieved amplitude is clearer and the field of view is larger. From the retrieved phase distributions of the four cases, we can see that the dual-wavelength scheme with position-error calibration is particularly important for phase reconstruction.
This also greatly improves the phase convergence in the iterative process. The reconstruction shown in Fig. 4(d) is nearly the same as Fig. 4(i), containing sufficient details of the sample. The reconstruction quality with position correction is much higher than without correction, and the quality with dual wavelengths is also better than with a single wavelength. Further, Fig. 4(j) gives the variation of the mean error with iterations in the four cases mentioned previously. The scanning-position errors may be regarded as a kind of undetermined initial parameter of the ptychographic microscopy system. Our error-correction algorithm provides more accurate position coordinates before the usual ptychographic reconstruction; thus, the experimental results with calibration are better than those without correction even at small iteration numbers, as shown in Fig. 4(j). As a result, our position-correction algorithm greatly improves the reconstruction quality in dual-wavelength ptychographic microscopy. In conclusion, we have proposed an error-correction algorithm for scanning positions in dual-wavelength ptychographic microscopy. With this algorithm, we can accurately determine the scanning deviations of the sample and calibrate the position errors of the translation stage. In addition, the algorithm is verified by the microscopic imaging results of two different samples.
References
[1] Hoppe W and Strube G 1969 Acta Crystallogr. Sect. A 25 502
[2] Faulkner H M L and Rodenburg J M 2004 Phys. Rev. Lett. 93 023903
[3] Rodenburg J M, Hurst A C and Cullis A G 2007 Ultramicroscopy 107 227
[4] Rodenburg J M 2008 Adv. Imaging Electron Phys. 150 87
[5] Maiden A M, Humphry M J and Sarahan M C 2012 Ultramicroscopy 120 64
[6] Zhang F, Peterson I and Vila-Comamala J 2013 Opt. Express 21 13592
[7] Tripathi A, McNulty I and Shpyrko O G 2014 Opt. Express 22 1452
[8] Claus D, Robinson D J and Chetwynd D G 2013 J. Opt. 15 035702
[9] Batey D J, Claus D and Rodenburg J M 2014 Ultramicroscopy 138 13
[10] Bao P, Zhang F and Pedrini G 2008 Opt. Lett. 33 309
[11] Bao P, Situ G and Pedrini G 2012 Appl. Opt. 51 5486
[12] Shi Y S, Wang Y L and Zhang S G 2013 Chin. Phys. Lett. 30 054203
[13] Wang D, Ma Y J and Liu Q 2015 Acta Phys. Sin. 64 084203 (in Chinese)
[14] Xiao J, Li D Y, Wang Y L and Shi Y S 2016 Acta Phys. Sin. 65 154203 (in Chinese)
[15] Shi Y S, Wang Y L and Li T 2013 Chin. Phys. Lett. 30 074203
[16] Shi Y S, Li T, Wang Y L, Gao Q K, Zhang S G and Li H F 2013 Opt. Lett. 38 1425