Show HN: Torch Lens Maker – Differentiable Geometric Optics in PyTorch (victorpoughon.github.io)
etik 49 days ago [-]
Great work! Here's some prior art in the (torch) space: https://github.com/vccimaging/DiffOptics

A few notes: though paraxial approximations are "dumb", they are very useful tools for lens designers and for understanding/constraining the design space - calculating the F/#, aperture stop, and principal planes is critical in some approaches. This pushes what autodiff tools are capable of, because you need to get Hessians of your surfaces. There's also a rich history of objective function definition and quadrature integration techniques that you could work to implement, and you may want to let users specify explicit parametric constraints.
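For example, getting the Hessian of a surface sag function out of torch is straightforward (a toy conic surface here, just to illustrate the mechanism, not anything from the library):

    import torch
    from torch.autograd.functional import hessian

    def sag(xy, c=0.05, k=-0.5):
        # Sag z(x, y) of a conic surface with curvature c and conic constant k
        r2 = xy[0]**2 + xy[1]**2
        return c * r2 / (1 + torch.sqrt(1 - (1 + k) * c**2 * r2))

    H = hessian(sag, torch.tensor([0.1, 0.2]))  # 2x2 Hessian at (x, y) = (0.1, 0.2)
    print(H)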

fouronnes3 49 days ago [-]
Yes, that DiffOptics paper was one of my main inspirations for this project. It's a very cool paper.

> There's also a rich history in objective function definition and quadrature integration techniques thereof which you can work to implement, and you may like to have users be able to specify explicit parametric constraints.

Yes, this is definitely the direction I want to take the project in. If you have any reference material to share I'd be interested!

etik 49 days ago [-]
Gaussian quadrature integration for RMS spot size or wavefront error:

> Forbes, G. W. (1989). Optical system assessment for design: numerical ray tracing in the Gaussian pupil. Journal of the Optical Society of America A, 6(8), 1123. https://doi.org/10.1364/josaa.6.001123
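The flavor of it, in a toy sketch (this is not Forbes' actual pupil sampling scheme, just the general idea): trace a few rays at Gauss-Legendre nodes across the pupil and combine them with the quadrature weights, rather than averaging a dense uniform grid.

    import numpy as np

    def rms_over_pupil(delta_r, n=6):
        # rms^2 = integral over the unit pupil of delta_r(rho)^2 * 2*rho d(rho),
        # assuming rotational symmetry, evaluated with n Gauss-Legendre rays.
        x, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1, 1]
        rho = 0.5 * (x + 1.0)                        # map nodes to [0, 1]
        integrand = delta_r(rho)**2 * 2.0 * rho
        return np.sqrt(np.sum(w * integrand) * 0.5)  # 0.5 = Jacobian of the map

    # Toy transverse ray error: pure third-order spherical aberration ~ rho^3
    print(rms_over_pupil(lambda rho: 0.01 * rho**3))  # exact answer is 0.005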

In general, you'll want to look at MTF calculation (look at Zemax's manual for explanation/how-to). There is also a technique to target optimization at particular spatial frequencies:

> K. E. Moore, E. Elliott, et al., "Digital Contrast Optimization - A faster and better method for optimizing system MTF," in Optical Design and Fabrication 2017 (Freeform, IODC, OFT), OSA Technical Digest (online) (Optical Society of America, 2017), paper IW1A.3

barrenko 39 days ago [-]
A big question here: do you think it would be possible to self-study optics, and what would it take?
cbarrick 49 days ago [-]
Neat!

I've been working off and on on a similar hobby project, working through the book _Computational Fourier Optics: A MATLAB Tutorial_, and implementing it in Jax.

My main interest is adaptive optics, but I'm only a hobbyist (limited physics background) and honestly haven't had much time to put into it.

fouronnes3 49 days ago [-]
Would love to chat with you about your project! I'm very interested in jax also. You can find my email on my website if you wanna get in touch :)
barrenko 49 days ago [-]
If you could be bothered to write a blog post on it, I'd be interested in reading it.
skwb 48 days ago [-]
I'm an avid (hobbyist) photographer and I've noticed a TON of genuinely good 3rd party lenses (primarily Sigma and Tamron) and even 'fine' lenses at rock-bottom prices (Viltrox, 7Artisans, TTArtisans, etc.) for like $250. The conventional wisdom I've heard is that computer-aided design has totally revolutionized this field.

I can only hope that projects like these help build better lenses for the future.

mhalle 49 days ago [-]
It's really awesome that you've taken a widely available tool like PyTorch and used it out of domain to provide a library like this, especially one focused on exact solutions and not approximations.

Any plans to include diffractive optics as well? (A totally self-serving question, given that refractive optics is much more common.) In a past life I taught holography and wrote interactive programs to visualize the image forming properties of holograms.

aaclark 49 days ago [-]
This is very cool and crosses paths with a few projects I've been working on recently:
- implementing a ReLU network in Blender, mostly for visualization
- applying the Riemann-Schwarz mapping theorem to discrete radiance fields
- solving a spherical-elliptical optics dilemma in perspective projection
Your project dovetails spectacularly with this, yet you've tackled the core chain of geometry problems "in the opposite direction". It seems I'll have to pick a different thesis topic, but I'd love to pick your brain about it.
fouronnes3 49 days ago [-]
Feel free to contact me! Love to chat :) My contact info is on my website.
Scipio_Afri 49 days ago [-]
Very cool. This is a somewhat naive question considering I actually have an EE background, and I think I know the answer, but given the shared EM theory, do you see any parallels that would make this kind of thinking tangentially applicable to radio frequency system design?
fouronnes3 49 days ago [-]
I know absolutely nothing about radio so I can't really answer, sorry! But there's really something to be said about using PyTorch (or any other ML framework, for that matter) as a general purpose optimizer. The modeling capabilities of torch.nn are quite extraordinary, and the fully dynamic nature of the PyTorch graph (something that wasn't really possible with earlier frameworks like TensorFlow) is something that hasn't been talked about enough, in my opinion. It's differentiable programming, basically: you can write any "normal" Python function and get an *exact* derivative of it. There are some caveats, but it's very, very powerful.
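For example (nothing to do with optics, just to show the dynamic-graph point):

    import torch

    def f(x):
        # Any "normal" Python function built from torch ops, branches included
        return torch.sin(x) * x**2 if x > 0 else -x

    x = torch.tensor(1.3, requires_grad=True)
    f(x).backward()
    print(x.grad)  # exact derivative at 1.3: 2*x*sin(x) + x**2*cos(x)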
MITSardine 48 days ago [-]
Could you clarify this a little?

My layman perception of NNs is they are a formalism that defines a parametric function family (for instance the family of affine functions has m + m×n parameters for n input and m output space dimensions) using some base primitives and composition rules. By tacking on a cost function, an optimizer, and a bunch of (input, output) pairs (training set) to this, one obtains optimal parameters such that the cost function is minimized over the training set (in some norm, I imagine). The NN can then be used to map never seen before inputs to outputs in a manner, hopefully, that leads to a small value of the loss function (i.e. adequately).

Even if this is wrong to some extent, could you confirm that the optimizer is but one component of a larger system, and in fact one that exists independently of NNs as well (such as stochastic gradient descent)? In that case, what is the role of NNs in what you mentioned? Would it not be simpler to yank the optimizer out and apply it to your application directly? It seems to me that recasting a given problem as a NN just to make use of a Python library's included optimizer is a sort of "XY problem", if one could just write their cost function and pass it to the optimizer directly (which presumably is no less open source than the library that includes it).
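Concretely, I'm imagining something like this (my own hypothetical sketch, nothing from your library): torch.optim seems happy to work on bare tensors with requires_grad=True, no NN anywhere.

    import torch

    # Fit y = a*x^2 + b to noisy data using torch.optim directly on bare
    # tensors -- no nn.Module involved, just a cost function and parameters.
    x = torch.linspace(-1, 1, 50)
    y = 3.0 * x**2 + 0.5 + 0.01 * torch.randn(50)

    a = torch.tensor(1.0, requires_grad=True)
    b = torch.tensor(0.0, requires_grad=True)
    opt = torch.optim.Adam([a, b], lr=0.05)

    for _ in range(500):
        opt.zero_grad()
        loss = ((a * x**2 + b - y)**2).mean()
        loss.backward()
        opt.step()

    print(a.item(), b.item())  # ~3.0, ~0.5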

I may be misinterpreting, because this is not my field, however interpolation or projection are things very familiar to me, so I may have a bias to interpret things to resemble this. In that case, I'd welcome corrections.

MITSardine 48 days ago [-]
Looks very interesting. What would you say are the main technical challenges in the way of your development?

By the way, the roadmap link on the GitHub repo page is broken (though I did find it on the project's website).

I saw you're interested in handling Bezier splines. This might be premature generalization, but would you want to support whole BREPs? Is that something that is used in applications? I imagine lenses are generally simple geometries, so perhaps not.

fouronnes3 48 days ago [-]
Thanks, I've fixed the links.

Honestly, the main challenge at this point feels like finding the time / resources to just work on everything I've got on the roadmap. I think I've got enough ideas for at least 1y of full time work, perhaps more.

Bezier splines are interesting. I actually had a working implementation a few weeks ago, so I know it will work. It's just that I refactored the internals of how surfaces work and haven't ported the old bezier spline code yet, but it shouldn't be too difficult.

I haven't given too much thought about volume representation so far. Currently it's just a list of unconnected surfaces which seems sufficient for now.

MITSardine 48 days ago [-]
Oh, BREPs don't necessarily mean closed surfaces. They're just trimmed rational Bézier splines and other primitives stitched together with some topology. I'm mainly wondering if you might not find the surface-related functions you need in a CAD kernel like OpenCascade, which might support the surfaces you're interested in out of the box.
fouronnes3 47 days ago [-]
I see. Yeah, I could look at that, but it would need to be re-implemented in PyTorch to provide the backwards pass anyway. The implementation might be a useful resource, though.
bee_rider 49 days ago [-]
I will ask a dumb question as someone who knows nothing about this stuff (since you already have good questions by smart people):

How close is something like this to being competitive with ray-tracing (as featured in video game engines, or as featured in something like Blender)? I guess, since it is using Torch it should be… surprisingly performant, right? You get some hardware acceleration at least.

fouronnes3 49 days ago [-]
Both this project (and optical design in general) and rendering engines (like video games or any 3D rendering) implement ray tracing, and so are related. But the application is different and therefore they are not really competing. The underlying math is similar, but implementations will be quite different.

Ray tracing for rendering typically needs to figure out which surface a ray is hitting as part of collision detection. This is typically done with something called Bounding Volume Hierarchies. Optical design (at least in sequential mode) sidesteps that issue completely, because the order of surface collisions is known in advance.

Another big difference is that ray tracing for optical design needs to be differentiable. This is why I made this project in PyTorch, so that the entire collision detection code and physics implementation (refraction, reflection) can be differentiated with respect to parameters that describe the shape of surfaces. Then you can gradient descent the entire optical system to find optimal parameters.
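As a toy illustration of both points (a 2D sketch with flat interfaces and exact Snell's law, not the library's actual surface code):

    import torch

    # Sequential tracing in a nutshell: no BVH or intersection search, just a
    # known, ordered list of surfaces that each ray visits in turn.
    def trace(y, u, surfaces):
        z, n = 0.0, 1.0                                # axial position, current index
        for z_next, n_next in surfaces:                # order known in advance
            y = y + (z_next - z) * torch.tan(u)        # propagate to the surface
            u = torch.asin(n / n_next * torch.sin(u))  # Snell's law at a flat interface
            z, n = z_next, n_next
        return y, u

    y0 = torch.linspace(-1.0, 1.0, 5)                  # ray heights at z = 0
    u0 = torch.full((5,), 0.1)                         # all rays tilted by 0.1 rad
    n_glass = torch.tensor(1.5, requires_grad=True)    # parameter to differentiate
    y1, u1 = trace(y0, u0, [(10.0, n_glass), (15.0, 1.0)])
    y1.sum().backward()
    print(n_glass.grad)  # exact gradient of the traced ray heights w.r.t. the index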

Finally, rendering raytracing typically implements a lot of realism features like diffuse or partial reflection, which actually makes the code more complex in some ways. But optical design cares more about things like precise modeling of dispersion, which is not a huge focus for rendering. And there can be real-time performance constraints if you're doing a video game, too; here the implementation really doesn't care about any real-time stuff.

qoez 49 days ago [-]
As an expert in this: what's your opinion on using optics like this as actual neural networks? Any big drawbacks or big real benefits?
num3ric 48 days ago [-]
Potential similarities with Mitsuba's inverse rendering functionality? https://mitsuba.readthedocs.io/en/stable/src/inverse_renderi...
pixelpoet 49 days ago [-]
Surprised no one has mentioned Mitsuba renderer, in particular the caustic design demo: https://www.youtube.com/watch?v=eTHL3W2NUn0&list=PLI9y-85z_P...
makizar 49 days ago [-]
Could you ELI5 what the applications would be? Could a render engine be built on top of this and hooked up to a DCC like Blender? Or is this a way to do computational photography, say to correct the depth of field of an image or "denoise" it?
fouronnes3 49 days ago [-]
The main application is designing optical systems. Say you want to build a camera lens. Modern camera lenses are made of multiple individual lenses, sometimes up to 12 or more pieces stacked together. Everything from the shape of the lens surfaces to the exact materials and gaps between the pieces has to be precisely calculated so that light ends up where you want it to!
isgb 49 days ago [-]
Is there any way to simulate (maybe even interactively) things like focus and zoom? It would be cool to have some way to shift lenses (or lens groups) along the optical axis and visualize how light rays get projected onto the image plane.
fouronnes3 49 days ago [-]
That would be cool indeed! Not really a focus of this project - and kinda complex because it's all in python. Only the rendering widget is in JS, but it's only passively displaying the input data it gets as JSON.

Check out this project[1] which kinda does that, although it's 2D only as far as I know. But it's fully interactive, which is super neat.

[1] https://phydemo.app/ray-optics/

viraptor 49 days ago [-]
What would this be used for in practice? I understand what it does, but I have little experience in the area and thought we already knew what shapes we need for almost all applications. Who would go as far as a complete shape design?
RobotToaster 49 days ago [-]
This looks really great.

Do you have any plans to add stock lens catalogue matching? To make it easier for hobbyists to manufacture lens assemblies.

fouronnes3 49 days ago [-]
Yeah, that could definitely be a thing, good idea :) If you have any links to good catalog databases, please share!
RobotToaster 49 days ago [-]
The best option I'm aware of is what's used by a similar (but less advanced) Python library called rayopt; it imports from the free version of Zemax. https://github.com/quartiq/rayopt
turnsout 48 days ago [-]
Really cool! I haven’t peered into the internals yet, so forgive the ignorant question: are the calculations spectral?
fouronnes3 48 days ago [-]
It's all geometric. But rays have wavelength data and material models represent index of refraction as a function of wavelength, so dispersion is fully modeled.
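For example, a Cauchy-style dispersion model n(λ) is just another differentiable function (toy coefficients roughly matching a BK7-like crown glass; not the library's actual material classes):

    import torch

    def cauchy_index(wavelength_um, A=1.5046, B=0.00420):
        # Cauchy's equation n(lambda) = A + B / lambda^2, wavelength in micrometers
        return A + B / wavelength_um**2

    wl = torch.tensor([0.4861, 0.5893, 0.6563])  # F, D, C spectral lines (um)
    print(cauchy_index(wl))  # index decreases with wavelength: normal dispersion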
Evidlo 48 days ago [-]
Very cool. Do you think supporting diffractive optics is possible, or is it too much of a deviation?
TeeMassive 49 days ago [-]
I wonder if anyone has tried making lenses out of its outputs using transparent 3D printer resin?
meatmanek 48 days ago [-]
Unless you can get your layer height down to small fractions of a wavelength of light (so like 50 nm or less), I suspect the stairstep shape of your 3D prints will throw off all the optical properties. The tangent angle of the surface is more important (sensitive) than the overall shape of the lens.

You could get rid of the layer lines with post-processing like grinding/polishing, but grinding/polishing is how lenses are traditionally made anyway. Maybe you get to skip the first few steps (rough shaping of the glass blank), but you're still probably left with an optically inferior product.

TeeMassive 45 days ago [-]
https://formlabs.com/blog/creating-camera-lenses-with-stereo...

It seems that dipping the lens in resin is the best solution.

gtsnexp 49 days ago [-]
How far are we from completely replacing Zemax?
fouronnes3 49 days ago [-]
I've never used Zemax myself but I'd love to just keep working on Torch Lens Maker until we get there!
stormfather 48 days ago [-]
Oh fuck me. Can you use this to make an analog transformer out of tiny lenses that's pre-trained? Like, take a digital PyTorch transformer model and spec out the lenses to recreate the computations? And build that? If so, you will cover yourself in glory. Intelligence too cheap to meter.
guy234 49 days ago [-]
This raises the question of using optics for ML?
GistNoesis 49 days ago [-]
Will there be some diffraction optics in the future? Can we just add some phase somewhere, or will it need a complete rewrite? I'd like to experiment with photon sieves and holograms.
mentalgear 49 days ago [-]
Very innovative application of NN architecture in a different (physics/optics) domain!

> The key idea is that there is a strong analogy to be made between layers of a neural network, and optical elements in a so-called sequential optical system. If we have a compound optical system made of a series of lenses, mirrors, etc., we can treat each optical element as the layer of a neural network. The data flowing through this network are not images, sounds, or text, but rays of light. Each layer affects light rays depending on its internal parameters (surface shape, refractive material...) and following the very much non‑linear Snell's law. Inference, or the forward model, is the optical simulation where given some input light, we compute the system's output light. Training, or optimization, is finding the best shapes for lenses to focus light where we want it.
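A toy rendering of that analogy (my own sketch of the idea in paraxial 2D, not the project's actual API):

    import torch
    from torch import nn

    class ThinLens(nn.Module):
        # One "layer": a paraxial thin lens with a learnable focal length.
        def __init__(self, f):
            super().__init__()
            self.f = nn.Parameter(torch.tensor(f))

        def forward(self, rays):
            h, u = rays                  # ray heights and angles
            return h, u - h / self.f

    class Gap(nn.Module):
        # Free-space propagation over a fixed distance d.
        def __init__(self, d):
            super().__init__()
            self.d = d

        def forward(self, rays):
            h, u = rays
            return h + self.d * u, u

    system = nn.Sequential(ThinLens(50.0), Gap(20.0), ThinLens(80.0), Gap(100.0))
    rays = (torch.linspace(-1, 1, 11), torch.zeros(11))
    h_out, _ = system(rays)          # forward model = optical simulation
    loss = (h_out**2).mean()         # training = focusing light where we want it
    loss.backward()

Swap the (height, angle) pairs for full 3D rays and the paraxial formulas for real surface intersection and Snell's law, and you get the shape of the approach the quote describes.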
