

Lens-Free Flat Cameras Make Use of Pinhole Technology (npr.org) 65

RhubarbPye writes: As reported on NPR, "Engineers in Texas are building a camera that can make a sharp image with no lens at all." By incorporating millions of individual pinholes with photoreceptors and postprocessing software, this camera system has been reduced to minimal thickness. Cameras in the wallpaper? A new phase of wearable cameras? What other applications for this technology could be developed?
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Looks like someone decided to put their money into a hole.

    • by shanen ( 462549 )

      I was looking for the "Nothing to see here" joke.

      Who is responsible for this hideous omission?

      On a more serious note, the headline was rather sensationalistic. My first thought was "pinhole", but perhaps I know too much about the history of optics and photography?

  • Dynamic range? (Score:5, Insightful)

    by RightwingNutjob ( 1302813 ) on Wednesday February 17, 2016 @01:49AM (#51525557)
    TL;DR. Most uncooled camera chips give you maybe 10 or 11 bits of dynamic range, and light is subject to Poisson noise, meaning the brighter a pixel, the noisier it is in absolute (not relative) terms. If you have to solve a giant matrix inversion to do the job of a collimating lens, you're composing each pixel as a sum of many others instead of just itself, some of them far brighter than the reconstructed image, so your reconstructed pixel is always noisier. Cool idea, and it certainly has its applications, but the best images will always come from big fat optics.
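The Poisson-noise point is easy to check numerically. A quick sketch (NumPy; the photon counts are purely illustrative, not from TFA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson statistics: std dev = sqrt(mean), so the absolute noise grows
# with brightness even though the relative noise (std/mean) shrinks.
for mean_photons in (100, 10_000):
    counts = rng.poisson(mean_photons, size=100_000)
    print(f"mean={mean_photons:6d}  abs noise={counts.std():8.1f}  "
          f"rel noise={counts.std() / counts.mean():.4f}")
```

The brighter pixel is about ten times noisier in absolute terms, which is what matters when its noise leaks into a dim reconstructed pixel.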
    • Re:Dynamic range? (Score:5, Insightful)

      by Arkh89 ( 2870391 ) on Wednesday February 17, 2016 @02:15AM (#51525631)

      No. First, on these sensors you are mostly limited by thermal noise, which is miles above the photon noise for this application. Second, you are assuming that a pixel receives the same flux (power per unit area) as in a traditional camera. It doesn't: each pixel collects flux from a much larger angular portion of the scene, because there is no optical focusing.

    • Re:Dynamic range? (Score:5, Informative)

      by Ungrounded Lightning ( 62228 ) on Wednesday February 17, 2016 @02:19AM (#51525649) Journal

      If you have to solve a big giant matrix inversion to do the job of a collimating lens, you're composing each pixel as a sum of many others instead of just itself, some of them being way brighter than the reconstructed image, meaning your reconstructed pixel is always noisier.

      Not really.

      When you average a large number of samples, the noise tends to partially cancel while the signal keeps adding up: independent noise grows only as the square root of the number of samples, while the signal grows linearly, so the signal-to-noise ratio improves as roughly the square root of N. Even if you end up adding in some bright signals with extra noise, that's still swamped by signal when you have enough samples.
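The square-root argument can be sketched numerically (NumPy; the signal and noise levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

signal = 5.0        # true pixel value (arbitrary units)
noise_sigma = 2.0   # per-sample noise std dev
n_samples = 100

# Average N independent noisy samples of the same signal: the mean's noise
# shrinks as sigma/sqrt(N), so the SNR grows by a factor of ~sqrt(N).
samples = signal + rng.normal(0, noise_sigma, size=(10_000, n_samples))
single_snr = signal / noise_sigma
averaged_snr = signal / samples.mean(axis=1).std()
print(f"single-sample SNR ~ {single_snr:.1f}, "
      f"averaged SNR ~ {averaged_snr:.1f}")
```

With 100 samples the SNR improves by about a factor of ten, per the sqrt(N) rule.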

      • Re: (Score:3, Informative)

        by Anonymous Coward

        No, the original poster was more correct. They're not averaging together a bunch of pixels, but applying an inverse matrix, which weights pixels differently and quite frequently assigns very high weights to noisier signals. This can amplify noise. There is a lot of work on different ways of clipping or modifying such matrix equations to make them slightly less accurate in an ideal world, but much less noisy in the real world.

        Also, no averaging of noise will happen if you try to produce images with similar pixel count to the number of detectors.
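The trade-off described above, a plain inverse that amplifies noise versus a deliberately "less accurate" modified inverse, can be sketched with a toy ill-conditioned system (NumPy; Tikhonov/ridge regularization stands in for the clipping schemes mentioned, and all sizes and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multiplexed measurement y = A @ x + noise, with an ill-conditioned A
# (singular values spanning 1 down to 1e-3, like a badly mixed system).
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(np.logspace(0, -3, n)) @ V.T

x = rng.random(n)                    # "scene"
y = A @ x + rng.normal(0, 0.01, n)   # noisy detector reading

# Plain inversion: small singular values turn into huge weights on noise.
x_inv = np.linalg.solve(A, y)

# Tikhonov (ridge) regularization: a bit of bias, far less noise.
lam = 1e-3
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("plain inversion error:", np.linalg.norm(x_inv - x))
print("regularized error:    ", np.linalg.norm(x_reg - x))
```

The regularized solve is noticeably closer to the true scene, at the cost of being the "wrong" answer in a noise-free world.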

        • Also, no averaging of noise will happen if you try to produce images with similar pixel count to the number of detectors.

          I agree completely there. (I'd say there is averaging, but you're averaging in as much extra noise from other pixels as you're averaging out from multiple samples of the target pixel - and even if the noise were merely proportional to the pixel brightness, rather than disproportionate as pixels get brighter, the bright ones would noise up the dim ones.)

          They're not averaging together a bunch of pixels, but applying an inverse matrix

      • No. As someone with patents on multiplex imaging, I can tell you that inverse problems lead to greater noise or equivalently reduced dynamic range. You can see this in their photos.

  • by erice ( 13380 ) on Wednesday February 17, 2016 @01:52AM (#51525569) Homepage

    Nothing in the article talks about what the resulting aperture is. To get a reasonable exposure time, you need to capture adequate light. Cameras in cell phones already suffer because their lenses are too small to capture enough light. Is this scheme worse because it lets less light through or better because a larger "lens" is practical?

    • Is this scheme worse because it lets less light through or better because a larger "lens" is practical?

      Currently, it's worse (TFA mentions quality similar to first-generation webcams).
      But the technology is genuinely scalable: TFA muses about large-surface FlatCams.
      (They mention walls of it, or boxes/cylinders in the middle of which you put an object, etc.)

      So the whole back cover of a smartphone could be a giant pinhole array.
      Such a large surface, even if covered with only pinholes (and even if some of the holes get obscured by fingers holding the phone), would gather much more light and information.

  • What would be interesting is if such a sensor array could efficiently and wirelessly tether itself to the image processing engine. The potential surveillance applications are mind boggling.
  • Generally a pinhole camera is very light insensitive due to the small amount of light let in by the pinhole, but multiply the number of pinholes to amplify the image and use a super sensitive sensor gathering from each pinhole, and it seems like it would yield some amazing results.
  • The Irony (Score:2, Funny)

    by djinn6 ( 1868030 )
    I see all of the photos in the article were taken with a conventional camera, complete with lens blur.
  • by fragMasterFlash ( 989911 ) on Wednesday February 17, 2016 @03:02AM (#51525787)
    So make a headband 360 degree camera to capture video you can view with your VR headset. Also, make police officers wear these instead of the silly badgecams they currently use.
  • ...will be your screen. In a video call you will be able to look the other party in the eye, and it will not appear as though you're reading something else.
  • by Lluc ( 703772 ) on Wednesday February 17, 2016 @03:44AM (#51525873)
    The way the NPR article describes this, it is no different from Uniformly Redundant Arrays, i.e. Coded Aperture Imaging: see https://en.wikipedia.org/wiki/... [wikipedia.org] If you look at the 1998 paper, "Uniformly Redundant Arrays" by Busboom et al, the first sentence describes work from the 1960s:

    Coded aperture imaging (CAI) (Mertz and Young, 1961; Dicke, 1968) has matured as a standard imaging technique in X–ray and Gamma-ray astronomy. It is capable of combining high angular resolution with good photon collection efficiency by using a mask consisting of transparent and opaque elements placed in front of a position sensitive detector (Figure 1).

    So is the only innovation here using more pinholes, more pixels, and more processing than were around in the 1990s?
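For reference, the basic coded-aperture idea can be sketched in one dimension (NumPy; a random open/closed mask and FFT deconvolution stand in for the URA masks and correlation decoding used in the astronomy literature):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D toy of coded aperture imaging: the detector records the scene
# circularly convolved with the mask's open/closed pattern; the image is
# recovered by deconvolution. Real CAI systems use URA/MURA masks whose
# correlation properties make the decoding step especially clean.
n = 97
scene = np.zeros(n)
scene[[10, 40, 41, 70]] = [1.0, 0.5, 0.5, 2.0]   # a few point sources
mask = rng.integers(0, 2, n).astype(float)        # random open/closed mask

detector = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

recovered = np.real(np.fft.ifft(np.fft.fft(detector) / np.fft.fft(mask)))
print("max reconstruction error:", np.abs(recovered - scene).max())
```

Noise-free, the recovery is essentially exact; the whole dynamic-range debate above is about what happens once real sensor noise enters that division step.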

    • Given that the paper is titled FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation [arxiv.org], I think they know about the previous work. Specifically,

      FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system.
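A toy version of that separable-mask idea (NumPy; the matrix names and sizes are made up for illustration and are not the paper's calibration): with a separable mask the sensor reading factors as Y = PhiL @ X @ PhiR.T, so reconstruction needs two small pseudo-inverses instead of one enormous matrix inverse over all pixels.

```python
import numpy as np

rng = np.random.default_rng(4)

scene = rng.random((16, 16))                        # toy scene X
PhiL = rng.integers(0, 2, (32, 16)).astype(float)   # hypothetical mask factor
PhiR = rng.integers(0, 2, (32, 16)).astype(float)   # hypothetical mask factor

# Separable measurement model: each factor acts on one image axis.
Y = PhiL @ scene @ PhiR.T

# Reconstruction via two small pseudo-inverses (noise-free, so it's exact).
X_hat = np.linalg.pinv(PhiL) @ Y @ np.linalg.pinv(PhiR.T)
print("reconstruction error:", np.linalg.norm(X_hat - scene))
```

Note how PhiL and PhiR are 32x16 rather than 256x256: separability is what keeps the computation tractable at sensor resolution.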

    • It's the flatness and the lateral extensibility (wallpaper-sized) that are new for coded apertures.

  • I assume that you still have to make the surface with all the pinholes in it a little convex, if only to avoid capturing the same square centimetre in front of the lens a thousand times.

    • I don't think so. Each pinhole can produce an image covering a wide angle, which depends only on the size of the sensor and its distance from the pinhole. If the sensor is very close to the pinhole(s), then the image will cover a wide angle (up to 180 degrees).
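The geometry is simple to check (Python; sensor size and distances are illustrative):

```python
import math

# Half-angle field of view of a single pinhole: set only by sensor width
# and mask-to-sensor distance. As the mask gets closer, FOV -> 180 degrees.
sensor_width_mm = 5.0
for distance_mm in (5.0, 0.5, 0.05):
    half_angle = math.degrees(math.atan((sensor_width_mm / 2) / distance_mm))
    print(f"distance {distance_mm:5.2f} mm -> full FOV ~ {2 * half_angle:5.1f} deg")
```

At the sub-millimetre mask-sensor gaps FlatCam aims for, each pinhole already sees nearly the whole hemisphere, so no curvature is needed.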


  • Interesting work with a lot of unobvious possibilities. "Lensless" is a little misleading. Pinholes are just the center circle of a zone plate. Zone plates are lenses that work by diffraction instead of refraction. They look like a bulls-eye (see http://www.eastjesus.net/tech/... [eastjesus.net] for a quick and simple primer). The diameter of the hole determines the focal length, hence too big or too small leads to fuzzier images. They have a couple of big drawbacks: the focal length is a function of wavelength, hence objects in sharp focus at one color are blurred at another.
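The wavelength dependence follows from the zone-plate relation f = r1^2 / lambda, where r1 is the radius of the central zone (the "pinhole"). A quick check with illustrative numbers:

```python
# Zone-plate focal length varies with wavelength: f = r1**2 / wavelength.
# r1 here is an assumed 0.1 mm central zone, not a figure from TFA.
r1 = 0.1e-3  # metres
for name, wavelength in (("blue", 450e-9), ("green", 550e-9), ("red", 650e-9)):
    f = r1**2 / wavelength
    print(f"{name:5s} ({wavelength * 1e9:.0f} nm): f = {f * 100:.2f} cm")
```

Across the visible spectrum the focal length shifts by over 40%, which is why zone-plate images suffer severe chromatic blur.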
