Lensed cameras have dominated photography since its invention because a lens is the most efficient way to form a focused image. However, producing bright, aberration-free images requires a complex lens assembly, and refractive lenses need a certain distance to focus, which limits how compact a camera can be. Meanwhile, demand keeps growing for smaller, lighter, and less costly cameras that perform well and can be used anywhere.
Recently, the lensless camera has attracted growing interest. Thanks to advances in computing technology, parts of the optical system can be replaced with computation, and image reconstruction algorithms make an ultra-thin, lightweight, and inexpensive lensless camera possible. Because that reconstruction technology is still immature, however, lensless cameras have suffered from low image quality and long computation times.
Using a novel image reconstruction technique, researchers were able to cut computation time without sacrificing picture quality. Because they have no lenses, lensless cameras "could be ultra-miniature, permitting creative uses that are beyond our imagination," according to one member of the research team.
A lensless camera typically has just two optical components: a thin mask and an image sensor. Because both can be made with well-established semiconductor manufacturing methods, it may eventually be possible to fabricate the mask and sensor as a single integrated unit. Light from the scene is encoded by the mask, casting patterns on the sensor. Although these patterns are unintelligible to the human eye, they can be deciphered with knowledge of the optical system, and a mathematical algorithm then reconstructs the picture.
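The mask's encoding can be sketched as a convolution: each point in the scene casts a shifted copy of the mask's shadow pattern onto the sensor. A minimal NumPy simulation (the mask pattern, sizes, and point-source position here are made up for illustration, not taken from the paper):

```python
import numpy as np

def lensless_forward(scene, psf):
    """Simulate a mask-based lensless measurement: each scene point casts
    a shifted copy of the mask's point pattern (the PSF) on the sensor,
    so the sensor reading is a 2D (circular) convolution of scene and PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

rng = np.random.default_rng(0)
psf = (rng.random((32, 32)) > 0.5).astype(float)  # toy binary-amplitude mask
scene = np.zeros((32, 32))
scene[10, 12] = 1.0                               # a single point source

measurement = lensless_forward(scene, psf)
# The single point illuminates roughly half of all sensor pixels: scene
# information is multiplexed across the whole sensor, not focused to one spot.
```

The measurement is simply the mask pattern shifted to the point's position, which is why it looks like noise to the eye yet is fully decodable.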
In short, a lensless camera encodes the scene with a thin mask and then reconstructs the picture computationally, so improving image reconstruction is the central concern in lensless imaging. In traditional model-based reconstruction approaches, errors in modeling the optical system lead to inaccurate reconstructions. A deep neural network (DNN) driven purely by data removes this dependence on the model. In practice, though, model-based techniques have still outperformed purely DNN-based reconstruction algorithms, because the multiplexing nature of lensless optics means the optically encoded patterns can only be deciphered from global image features.
All existing DNN reconstruction approaches use fully convolutional networks (FCNs), which are inefficient at global feature reasoning. This study is the first to propose a non-convolutional architecture whose stronger global feature reasoning enhances the reconstruction. Optical experiments comparing the proposed architecture against model-based and FCN-based approaches demonstrate its superiority.
Still, the decoding step, based on image reconstruction, remains a challenge. Conventional decoding algorithms for lensless optics mimic the physical image-formation process and reconstruct the picture by solving a "convex" optimization problem. Because these model-based approaches reproduce images from a mathematical model of the physical system, the reconstruction is vulnerable to poor approximations in that model. In a mask-based lensless device, an ideal point light source casts a characteristic pattern on the sensor; this pattern, the point spread function (PSF), is determined by the physical system.
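Under a convolutional forward model, a model-based decoder inverts the PSF. The closed-form Tikhonov-regularized sketch below stands in for the iterative convex solvers with image priors that practical pipelines actually run; the PSF, scene, and regularization weight are illustrative assumptions:

```python
import numpy as np

def lensless_forward(scene, psf):
    """Toy forward model: sensor reading = circular 2D convolution of scene and PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def tikhonov_decode(measurement, psf, reg=1e-2):
    """Model-based decoding: regularized inverse filter in the Fourier domain.
    Closed-form solution of min_x ||psf * x - measurement||^2 + reg * ||x||^2."""
    H = np.fft.fft2(psf)
    B = np.fft.fft2(measurement)
    X = np.conj(H) * B / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
psf = rng.random((32, 32))
scene = np.zeros((32, 32))
scene[10, 12] = 1.0

recon = tikhonov_decode(lensless_forward(scene, psf), psf)
# The point source reappears at (10, 12). Decoding with a mismatched PSF
# would degrade it, which is the model-error sensitivity described above.
```

Swapping the closed-form filter for an iterative solver is what makes real model-based decoding slow, as the next paragraph notes.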
Moreover, solving the optimization problem iteratively makes the computation very time-consuming. Deep learning can sidestep these difficulties of model-based decoding by learning the model and decoding the picture in a single, non-iterative pass. Existing deep learning techniques for lensless photography, however, rely on convolutional neural networks (CNNs) and cannot produce high-quality images.
CNNs are inefficient here because they analyze images through "local" relationships among neighboring pixels. Lensless optics, by contrast, rely on "multiplexing": local information from the scene is spread into overlapping "global" information read by all pixels of the image sensor.
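The locality of CNNs can be made concrete: a stack of k 3×3 convolution layers sees only a (2k+1)-pixel-wide window of the input, tiny compared with the sensor-wide spread of a multiplexed pattern. A back-of-the-envelope sketch (standard receptive-field arithmetic, not a figure from the study):

```python
def conv_receptive_field(num_layers, kernel=3):
    """Width (in pixels) of the input window that can influence one output
    pixel after stacking `num_layers` convolutions of size `kernel`."""
    return num_layers * (kernel - 1) + 1

# Even ten stacked 3x3 convolution layers reason over a 21-pixel-wide window,
# while a multiplexed lensless pattern spans the entire sensor.
print(conv_receptive_field(10))  # 21
```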
To exploit this multiplexing characteristic, the Tokyo Tech research team developed an innovative machine learning algorithm for image reconstruction. The algorithm is based on the Vision Transformer (ViT), a cutting-edge machine learning architecture that excels at global feature reasoning and can efficiently learn image features in a hierarchical fashion. As a result, the proposed technique handles the multiplexing property successfully while avoiding the disadvantages of classic CNN-based deep learning, yielding enhanced picture reconstruction.
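The global reasoning that makes ViT a good fit can be seen in its core operation, self-attention, where every image patch attends to every other patch within a single layer. A minimal single-head sketch in NumPy (the shapes and random weights are illustrative, not the paper's trained network):

```python
import numpy as np

def self_attention(tokens, rng):
    """Single-head self-attention over patch embeddings: the attention matrix
    couples every patch with every other patch, so information mixes globally
    in one layer, unlike a local convolution."""
    d = tokens.shape[1]
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = (q @ k.T) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over rows
    return weights @ v, weights

rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 8))   # 16 toy patch embeddings of size 8
out, attn = self_attention(patches, rng)
# Every row of `attn` assigns nonzero weight to all 16 patches: global mixing,
# which matches the global spread of multiplexed lensless measurements.
```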
Because the machine-learned model reconstructs the image directly, with no iterative processing, the proposed technique is far quicker than traditional model-based methods. And because the physical model is learned from data, the impact of model approximation errors is considerably reduced. Whereas existing machine-learning decoding approaches rely on CNNs and capture only local relationships, the proposed ViT-based technique uses global image characteristics and can analyze encoded patterns cast across a vast area of the image sensor.
With the ViT architecture, the proposed approach acquires high-quality pictures while overcoming the limitations of previous methods, namely iterative model-based reconstruction and CNN-based machine learning. In optical experiments conducted by the researchers, a lensless camera using the proposed reconstruction approach produced high-quality, visually appealing images, and the post-processing computation was fast enough to permit real-time capture.