
GPN19 – Plenopticam – Open-Source Light Field Photography Software

June 2, 2019



[Host introduction in German, presenting the speaker and his talk on Plenopticam, an open-source light field photography software.]

Thank you, and hello everyone. My name is Christopher, and I want to introduce you to the field of plenoptic cameras. I brought one device with me: this is a plenoptic camera, and as you can see, it doesn't look any different from a conventional camera. However, there are micro lenses just in front of the image sensor, which conventional cameras do not have, and with this something really fancy is possible, which is called refocusing. I want to give you an idea of what refocusing means. As you can see in this image, there are many focal planes that we can bring into focus, so each object can be brought into focus from only a single capture. You only have to take one shot and can then sweep through the scenery with regard to the focus. When I first heard of this, I was wondering how it is possible, and today I want to give you an idea of how we can achieve it.

As I mentioned earlier, we only need one image, and this is what such an image looks like: we have thousands of micro images that are projected by the micro lenses. As you can see in the magnified portion in the middle, these are very small. This can be thought of as the way a fly perceives its environment. We need to do some image processing in order to achieve this refocusing capability, but before that we first need to dive into the optics to understand how this is possible. Don't worry, I won't go into too much detail here. By the way, does anyone recognize this image, which is blurred here? Yeah, you got it; even though it's blurred, you guessed it. How many of you know Doom, by the way? That tells a lot about the average age in this room, but I leave it to you to decide what that age might be.

To start off, we have an optical bench here, which is the red dashed line. On this optical bench we have a conventional image sensor that you can find in any camera out there, and in front of that we have these so-called micro lenses, which are the unconventional part. I just depicted six of them here, but actually you have thousands of them. Also simplified here is the objective lens, which you can find in any camera as well. With this, the optical setup is complete, and we can start looking at rays. I have depicted one yellow ray here because it is quite a distinctive ray: it is called a chief ray, and a chief ray has the property that it travels through the optical center of a lens. This chief ray travels through two optical centers: through the optical center of a micro lens and then again through the optical center of the main lens. If we connect these two positions and extend that ray, we end up on the image sensor, where this intensity gets captured; this, by the way, is the center of the micro image. If we continue to trace these rays, we can see that we end up with a beam consisting of all these rays. One ray corresponds to one pixel, but we don't have only one pixel per micro image, we have plenty of them. I simplified this by saying we have one or several pixels on the left side, highlighted in blue here, that also form one beam, and on the other side as well. With this we have three pixels per micro image, and in doing it that way we have fully described the plenoptic model.
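To make this model concrete, a raw plenoptic capture can be rearranged into a four-dimensional array indexed by micro image and by pixel position within each micro image. The following is a minimal sketch in Python, assuming ideally aligned square micro images of 3 by 3 pixels; the function name and layout are illustrative and are not Plenopticam's actual API:

```python
import numpy as np

def raw_to_lightfield(raw: np.ndarray, m: int = 3) -> np.ndarray:
    """Rearrange a raw plenoptic image into a 4D light field L[j, i, v, u].

    (j, i) indexes the micro image, (v, u) the pixel within it. This
    assumes ideally aligned, unrotated micro images of m x m pixels;
    a real camera needs the white-image calibration mentioned later
    in the talk to locate the micro image centers first.
    """
    J, I = raw.shape[0] // m, raw.shape[1] // m
    raw = raw[:J * m, :I * m]                      # crop to whole micro images
    return raw.reshape(J, m, I, m).swapaxes(1, 2)  # axes become (j, i, v, u)
```

Each beam in the figure then corresponds to one fixed (v, u) slice taken across all micro images.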
Now we can start to look at refocusing. How is it possible to refocus? To remind you what refocusing means, I brought that Doom guy into focus here. Let's start with the model; first we have to make a few definitions. As you can see with the red line here, we have an object plane, and this is represented in the right image by the black and white test chart, which in this case is in focus. If we pick one object point on this particular plane and trace the rays on the image side, we see that they focus on the image plane here, on this micro lens. But they do not end up there; they travel through the lens and end up on the image sensor, in the micro image. So what we have to do now is reconstruct the intensity that existed at this image plane position. How do we do that? We simply integrate, in other words sum up, the pixel values that belong to this micro image: we take all the pixels within that micro image and add them up, and in doing so we reconstruct the intensity that existed on that micro lens. But now we only have one spatial point of the refocused image, and we need to do the same for all the adjacent points as well. In doing so, we reconstruct the entire image with the focus on the background.

As I claimed earlier, you can also focus on foreground objects, which are these figures here. Now I move the object plane to the front, and again I pick an object point and trace the rays on the image side. I see that each ray now travels through a different micro lens, so the pixels I have to collect and add up are distributed over many micro images. I have to identify them and then add them up to reconstruct an image that is refocused to the foreground. I can do that for all the adjacent positions, and finally we have reconstructed an image with the focus on the foreground. This is basically how refocusing works.
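In terms of the 4D array sketched above, refocusing amounts to shifting each (v, u) slice in proportion to its offset from the micro image center and summing the slices: a shift of zero sums the pixels within each micro image (the background case), while other shifts collect pixels spread over neighboring micro images (the foreground case). A minimal sketch, again with illustrative names rather than Plenopticam's interface:

```python
import numpy as np

def refocus(lf: np.ndarray, a: int = 0) -> np.ndarray:
    """Shift-and-sum refocusing on a 4D light field L[j, i, v, u].

    With a = 0 the pixels of each micro image are summed in place,
    reconstructing the intensity on each micro lens (the plane that
    was optically in focus). Other integer values of a shift each
    (v, u) slice against the others, so pixels spread over neighboring
    micro images are collected and the synthetic focal plane moves.
    """
    J, I, m, _ = lf.shape
    c = m // 2                       # central pixel of a micro image
    out = np.zeros((J, I), dtype=np.float64)
    for v in range(m):
        for u in range(m):
            # np.roll wraps at the borders; a full implementation
            # would crop or pad the image edges instead
            out += np.roll(lf[:, :, v, u],
                           shift=(a * (v - c), a * (u - c)), axis=(0, 1))
    return out / (m * m)             # normalize the integration
```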
When I developed that model, explained it to myself, and implemented it, I was wondering: is it possible to predict the distance to a refocused object? It turned out that this is feasible. Again we have that model, and in order to achieve this I regard each ray as a linear function, put these linear functions into an equation system, and solve that system to get the intersection position. Once I have that, I can estimate the distance to the object as a metric value. What you obviously need to know in advance are the focal length of your main lens, the focal length of your micro lenses, and the pixel pitch of your image sensor; but if all of this is known, you can estimate the distance of a refocused object.
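The intersection of two such rays, each modeled as a linear function y(z) = m·z + c, can be found by solving a small linear system; in practice the slopes and intercepts would follow from the focal lengths and pixel pitch just mentioned. A minimal sketch of the idea, with illustrative names:

```python
import numpy as np

def ray_intersection(m1: float, c1: float, m2: float, c2: float):
    """Intersect two rays, each modeled as a linear function y(z) = m*z + c.

    Rewriting m1*z + c1 = m2*z + c2 as a 2x2 system
        [[m1, -1],   [[z],    [[-c1],
         [m2, -1]] @  [y]]  =  [-c2]]
    and solving it gives the intersection point (z, y), where z is the
    distance of the refocused object plane along the optical axis.
    Parallel rays (m1 == m2) make the matrix singular: the plane lies
    at infinity.
    """
    A = np.array([[m1, -1.0],
                  [m2, -1.0]])
    b = np.array([-c1, -c2])
    z, y = np.linalg.solve(A, b)
    return z, y
```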
Another capability of this camera is to change the perspective view, and I want to give you an idea of what that means. Again we have our model here, and if I highlight the pixels that share the same relative position in each micro image, collect them, and rearrange them, I have generated a view from a different perspective. The perspective position is where these blue rays focus, so it resides on the main lens aperture plane, and you can move along that plane. As you can see when I move back and forth, now the yellow rays are highlighted, the objects in the front appear to move, which is a typical stereoscopic setup; you can imitate it by alternately closing and opening your eyes, and you will see the same phenomenon. I can also pick another position, so you can vary this viewpoint along your main lens aperture plane. Algorithmically, this can be thought of as illustrated here: on the left we have the micro image representation, with micro images of 3 by 3 pixels, and I highlighted the blue pixels because they correspond to these blue rays. If you rearrange them into a new image array, as depicted on the right, you obtain this perspective image view. So this is how it works in principle, on a very abstract level.
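In the 4D representation from earlier, this rearrangement is simply a slice: fixing the relative pixel position (v, u) and keeping all micro image indices yields one viewpoint. A minimal sketch under the same assumptions as before:

```python
def viewpoint(lf, v: int, u: int):
    """Extract the perspective view at relative micro image position (v, u).

    lf is the 4D light field L[j, i, v, u]; each micro image contributes
    exactly one pixel, so the view has the spatial resolution of the
    micro lens grid (J x I).
    """
    return lf[:, :, v, u]
```

Stepping v or u through 0, 1, 2 then moves the virtual viewpoint across the main lens aperture plane, producing exactly the parallax effect shown in the demonstration.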
If you want to dive into this and gain more in-depth knowledge, I recommend reading some of these scientific publications. And now, finally, I would like to come to the most important part of this talk, which is the software I have written that implements the algorithms I just described. You can find it on GitHub via the link down here, and this is what the user interface looks like; I will give a brief demonstration now. If you do not call a plenoptic camera your own, you can also obtain light field data online; it is free to use, so you can download the software, download the image data, and play around with it. Why am I telling you all this? I would like you to join me on my road developing this and to collaborate on improving the software. To give you an idea, it is written purely in Python, and since it takes some time to compute these images, one step could be to convert parts of it to C to make it perform faster.

This is what the interface looks like: quite lean, not too many buttons. I want to use some of the images we captured back then at university, which is this image; you can see the micro images again. This is what the raw image looked like. That is a camera we built ourselves back then, by the way, because Lytro cameras were not available, and we also needed to know the focal lengths and so on, which are not given with a commercial camera. This is why we had to use our own components: to know what we are using and to know the parameters, in order to predict distances and so on. Now I have pointed the software to the light field image, and what is also necessary is this white image calibration file that I just opened. It is needed to calibrate the camera: we have to find the centers of each micro image, which is why it has to be provided as well. There are some settings here that I don't want to go into in detail; there is documentation, which I took some effort to write, so if you're interested you can read through it. While this is processing, I think there is some time for you to ask questions, so feel free to come up with whatever is on your mind.

How difficult is it to retrofit a normal DSLR with a micro lens array, and why is this not commercially available? If I got you right, you are asking how difficult it is to manufacture such a camera. Well, micro lens arrays are available for about $1,000 if you just get one or two, and I think you don't want to get more than that, so it is quite expensive. The manufacturing process depends on your skills, but it is not too difficult. I have seen a colleague give a workshop where he actually did this in front of an audience: he took a quite conventional camera, attached the micro lens array, then attached the objective lens, and then you're done; the rest is handled by the software. The good part of this software is that it is not limited to the camera that was initially commercially available, so you can build your own and still use it, because it is not restricted to any specific type of plenoptic camera. Did that answer your question satisfyingly?

If we think one step ahead, could this technology be applied to video as well? A very interesting question; we asked ourselves the same thing back then. It would be very convenient, because in the usual movie setup you have one camera and one person, called the focus puller, who pulls the focus of the scene while standing next to the camera operator. Usually this person has to do that once or twice per shot; with this technology you could do it in a single shot and postpone the whole process to the post-processing stage, so it could be done later, depending on the creative freedom of the people sitting behind it. There was a company looking into this and trying to introduce such a camera to the cinematography market, but they had to close down before they made it. Still, it is a very interesting application. As for other applications, I think microscopy could benefit from this, because the viewpoints I showed you earlier are very close to each other, very narrow, meaning you only get a noticeable effect from very close objects. The objects you capture have to be very small, so microscopy might be one field, or endoscopy, or other medical instruments.

If we're talking about microscopes, the usual problem there is having enough light, and my question is whether you also reduce the amount of light that is available, because you only need certain pixels for a particular focus and discard the rest, so you would need much more light. What do you say? In fact, the amount of light that is captured is the same; however, there is another trade-off. Since we collect only one pixel out of each micro image for a viewpoint, we reduce the overall image sensor resolution, so the trade-off is on the resolution side rather than on the amount of light, I would say. That is the trade-off we are making with this type of camera. In terms of light, these images can be quite noisy, since the image point that exists on a micro lens gets spread over many pixels, and by this you introduce noise. But since the information is replicated, the integration process also cancels out part of the noise quite easily. So I would rather think that the loss of image resolution is the much harder trade-off for photographers or for any medical application.

I have a Python question, since I see it is taking quite a while: what technologies do you use? Do you use Cython, NumPy, PyPy? Let me just show you; I have listed the dependencies here. I don't use Cython right now, but if there is anyone who is an expert in that and has used it before, feel free to take part and introduce it. Currently I try to keep it very lean and only use five libraries: NumPy is one of them, SciPy, a TIFF library, and some demosaicing libraries, in order to process the Bayer pattern image of the Lytro camera. And yes, this is where the image processing takes place.

All right then, if there are no more questions... You have one more question? Am I done with the demonstration, you ask? Oh, true, I completely forgot about that. Since I had already shown you some of the images, I thought it wouldn't be big magic now, but okay, let's have a look at them. The refocusing process is done; I was thinking the images might still be exporting, but they are all there. These are the images as they come out, and as you can see, this is a different image from the one we saw in the introduction. Refocusing is possible, and what also comes out of the software is a bunch of viewpoint images. As mentioned earlier, these are a bit smaller, since I have done some tweaking to extend the spatial resolution of the refocused images. As you can see, the objects in the front are moving; however, if we move to the very end of the aperture plane of the main lens, we see a vignetting effect. This is a typical vignetting behavior that you also face with conventional cameras, so it has to be treated in the future, and if you are an expert in image processing, you are free to join me and eliminate this artifact. Thank you very much.

Okay, thank you. As you just showed us, this technique allows you to compute different viewpoints from one shot, so it should allow you to compute 3D models of the scenery? That's absolutely true: you can compute so-called depth maps, and this is also one of the future tasks still on my list. If you have experience in doing that, I'm happy to have you on my side; it would be great to put that into the software as well.

Okay, so thank you very much for your interesting talk, and I think you will still be happy for others to talk to you afterwards, so feel free. Yes, thank you for having me; come to me afterwards and let's have a chat. Have a good day.
