We propose an algorithm, referred to as ℓ1-K-SVD, to learn data-adaptive dictionaries in the presence of non-Gaussian noise. The fundamental idea behind the algorithm is to replace the usual ℓ2-norm-based data-fidelity metric with the ℓ1-norm, and to minimize it using iteratively reweighted least-squares (IRLS).
In the dictionary update stage of ℓ1-K-SVD, we adopt a simultaneous updating strategy similar to that of K-SVD, which is found to result in faster convergence.
We elucidate how the proposed idea can be extended to minimize the ℓp data error, where 0 < p < 1, in scenarios where one has to deal with sparse/impulsive noise contamination.
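As a minimal sketch of the IRLS strategy described above (the function name, the damping constant `eps`, and the ℓ2 initialization are illustrative assumptions, not the paper's exact implementation), the ℓp data-fidelity term can be minimized by repeatedly solving a weighted least-squares problem whose weights are derived from the current residual; p = 1 recovers the ℓ1 fit used in ℓ1-K-SVD:

```python
import numpy as np

def irls_lp(D, y, p=1.0, n_iter=50, eps=1e-6):
    """Minimize the l_p data error ||y - D x||_p^p (0 < p <= 2) via IRLS.

    Each iteration solves a weighted least-squares problem with weights
    w_i = max(|r_i|, eps)^(p - 2) recomputed from the residual r; the
    floor eps keeps the weights bounded as residuals approach zero.
    """
    x = np.linalg.lstsq(D, y, rcond=None)[0]         # l2 initialization
    for _ in range(n_iter):
        r = y - D @ x
        w = np.maximum(np.abs(r), eps) ** (p - 2.0)  # IRLS weights
        Dw = D * w[:, None]                          # W D (row scaling)
        x = np.linalg.solve(D.T @ Dw, Dw.T @ y)      # weighted normal equations
    return x
```

Because the weights down-weight equations with large residuals, a few grossly corrupted (impulsive) entries of y have little influence on the fit, which is the property the ℓ1/ℓp data term exploits.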
We demonstrate experimentally that the ℓ1-K-SVD algorithm results in faster convergence and more accurate atom detection than the state-of-the-art algorithms. It is also shown that ℓ1-K-SVD is better suited than the competing algorithms when the training dataset contains fewer examples.
As an application, we deploy the algorithm for image denoising. It is found that ℓ1-K-SVD results in peak signal-to-noise ratio (PSNR) values that are on par with the K-SVD algorithm, but the improvement in structural similarity index (SSIM) over K-SVD is approximately 0.08–0.10, indicating its efficacy in preserving the structural content of images.