Researchers from the University of California, Berkeley have developed a lensless camera that produces 3D images from a single 2D image without scanning. In place of a lens, the image sensor is topped with a diffuser (a bumpy piece of plastic), hence the name DiffuserCam. The hardware may sound simple, but the software behind it is highly sophisticated.
According to the researchers, the camera is initially intended to monitor microscopic neural activity in living mice without the need for a microscope. In the future, the technology could also be applied to other tasks that require capturing 3D information.
“The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution,” said research team leader Laura Waller of the University of California, Berkeley. “We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.”
The researchers recently demonstrated the DiffuserCam’s ability by reconstructing the leaves of a small plant in 3D from a single 1.3-megapixel image, without having to scan the plant through 360°.
“Our new camera is a great example of what can be accomplished with computational imaging—an approach that examines how hardware and software can be used together to design imaging systems,” said Waller. “We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.”
The DiffuserCam is also inexpensive to build, since it requires only an image sensor and a diffuser. Any type of image sensor will do, and the software may soon be made available to everyone, giving the public the freedom to use the technology.
The DiffuserCam can capture objects ranging from the microscopic all the way up to the size of a human. Resolution decreases the farther the subject is from the image sensor, but it remains high enough to distinguish which objects are closest to and farthest from the sensor.
This new type of camera is related in concept to light field cameras. A light field camera records how much light strikes each pixel on the image sensor as well as the angle from which that light arrives. To do this, it needs an array of tiny lenses placed in front of the image sensor to capture the directions of the incoming light. The software can then reconstruct objects in three dimensions and let the user refocus the image after capture.
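The after-capture refocusing that light field cameras allow can be sketched numerically with shift-and-sum refocusing. The snippet below is a toy illustration, not the software described in the article: it assumes a synthetic 4D light field stored as an array of sub-aperture views, and the `refocus` function and its `shift` parameter are names chosen here for illustration.

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-sum refocusing of a toy 4D light field.

    light_field: array of shape (U, V, Y, X), one sub-aperture image
                 per angular sample (u, v).
    shift: pixels of translation per unit of angular offset; it selects
           which depth plane ends up in focus.
    """
    U, V, Y, X = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its offset from the
            # central view, then average; points at the chosen depth align.
            dy = int(round((u - cu) * shift))
            dx = int(round((v - cv) * shift))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With a point source whose position drifts by one pixel per angular step, refocusing with the matching shift realigns all views onto the point, while a mismatched shift leaves it blurred across the views.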
One major drawback of light field cameras is that the microlens arrays they use are expensive to produce, since they must be customized for a particular camera. Image quality also suffers because some spatial information is sacrificed in favor of collecting directional information.
“I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern such as a bumpy piece of plastic?”
It took the researchers extensive experimentation with various types of diffusers, and a great deal of time developing the complex software algorithms, before they found that Waller’s idea for a simple light field camera was actually feasible.
Phys.org reported that, using the random bumps in privacy glass stickers, Scotch tape, or plastic conference badge holders, the researchers improved on traditional light field camera capabilities, using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.
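The compressed-sensing idea can be illustrated with a toy example. The sketch below is generic and is not the team’s actual reconstruction code: it assumes a random sensing matrix standing in for the diffuser’s random pattern, and recovers a sparse signal from fewer measurements than unknowns using the iterative soft-thresholding algorithm (ISTA), a standard solver for this kind of problem.

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Iterative soft-thresholding: minimizes 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)                  # gradient of the data-fit term
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4                           # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random "diffuser-like" sensing matrix
y = A @ x_true                                 # only 40 measurements of 100 unknowns
x_hat = ista(A, y)
print(np.max(np.abs(x_hat - x_true)))          # reconstruction error is small
```

The point of the sketch is that, because the signal is sparse and the sensing matrix is random, far fewer measurements than unknowns suffice; this is the same principle that lets a random diffuser pattern recover resolution a microlens array would sacrifice.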