Last modified: 2025-04-10 17:14:59
Plan is to continue on the Peptide DSL. I think I want to be able to handle vec3s. One issue is how to work out whether a variable is a float or a vec3. Maybe I assume float for everything and you have to tell the parser which ones are vec3s? Or have you pre-declare all variables?
I think I'll let you pass in a map of name-to-type, and undeclared variables are assumed float.
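Something like this, hypothetically (the function name and signature here are made up):

    // Hypothetical API: the second argument maps variable names to types;
    // anything not listed defaults to 'float'.
    const expr = parseExpression("length(v - c) - r", { v: 'vec3', c: 'vec3' });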
Cool, vec3s working now.
One issue: I can't easily detect how many arguments a function expects, so min(1,2,3) doesn't do what you expect.
Oh, neat, you can check func.length to see how many args func() expects. Good enough.
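For the record, this is standard JavaScript, not anything Peptide-specific:

    const min = (a, b) => Math.min(a, b);
    console.log(min.length);  // 2: the number of declared parameters
    // Caveat: rest parameters and anything after a default value don't count,
    // so ((a, b = 0) => a + b).length is 1.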
So next thing is making a way to add custom SDFs to the tree.
Oh, one thing is that at runtime p isn't a variable. p is something that is passed in at compile time, so the parser wants to accept that as well.
Whoa, check it out:
Working!
But it refers to p lexically, which means DomainDeform, Translate, etc. won't work, because it's always looking at the top-level p.
Great, now I can pass in named nodes at parse time and they can be brought into the resulting tree.
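So the parse call now takes something like this (again, the exact signature is hypothetical):

    // A named node is spliced into the parsed tree at compile time, so 'p'
    // resolves to whatever node the surrounding tree provides.
    const sdf = parseExpression("length(p) - 1.0", {}, { p: currentPointNode });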
So maybe this is actually good enough for now?
This is apparently a "Lidinoid", according to https://nodtem66.github.io/Scaffolder/tutorial_2/ :
sin(2*p.x)*cos(p.y)*sin(p.z) + sin(2*p.y)*cos(p.z)*sin(p.x) + sin(2*p.z)*cos(p.x)*sin(p.y) + cos(2*p.x)*cos(2*p.y) + cos(2*p.y)*cos(2*p.z) + cos(2*p.z)*cos(2*p.x)
But doesn't seem to be working.
Wikipedia:
Mine:
The Wikipedia description is slightly different https://en.wikipedia.org/wiki/Lidinoid
Ohhhh, I still didn't do operator precedence. So, yeah, of course it doesn't work. Let's do that next.
Amazing, Claude did it on the first try with "Can we make Expression handle parens and operator precedence?".
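For reference, the standard trick here is precedence climbing; a minimal sketch of the shape of it (this is not Claude's actual code, and the token-stream API is assumed):

    // Precedence climbing: parse an atom, then greedily consume operators
    // whose precedence is at least minPrec. Assumes a token stream with
    // peek()/next(); numbers, names and parenthesised groups are atoms.
    const PREC = { '+': 1, '-': 1, '*': 2, '/': 2 };

    function parseExpr(tokens, minPrec = 0) {
      let lhs = parseAtom(tokens);
      while (PREC[tokens.peek()] >= minPrec) {        // undefined compares false
        const op = tokens.next();
        const rhs = parseExpr(tokens, PREC[op] + 1);  // +1 => left-associative
        lhs = { op, lhs, rhs };
      }
      return lhs;
    }

    function parseAtom(tokens) {
      if (tokens.peek() === '(') {
        tokens.next();                    // consume '('
        const e = parseExpr(tokens, 0);   // precedence resets inside parens
        tokens.next();                    // consume ')'
        return e;
      }
      return { value: tokens.next() };    // number or variable name
    }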
Better now:
So I reckon that's "good enough" for now. Other features you might want include:
Instead of having "broken" distance functions divide their result by their local Lipschitz factor, I should propagate the factors up the tree, and divide by the combined factor only at ray-marching time (so within the tree everything can behave as if lipschitz=1). That way shell thicknesses etc. won't get distorted.
The point of the Lipschitz factor is that if the field says the surface is some distance away, then the surface better not be any closer than that distance divided by the Lipschitz factor. So what does that say about how we combine them?
I think we ideally want every function's field to have a mean gradient of 1, and the Lipschitz factor just needs to make sure that sphere tracing won't jump through the surface. So should combinators always take the max of the child Lipschitz factors? Or combinators look at the lower-bound distance implied by each of the children's distances and Lipschitz factors, work out the overall lower-bound distance, and then give a Lipschitz factor that matches the lower-bound distance for the distance that we're actually returning?
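Concretely, the conservative version of that rule would look something like this (a sketch; the node API here is assumed):

    // A combinator is at most as steep as its steepest child, so taking the
    // max of the child Lipschitz factors is always safe (if conservative).
    class IntersectionNode {
      lipschitz() { return Math.max(this.a.lipschitz(), this.b.lipschitz()); }
    }
    // A plain modifier just forwards its child's factor...
    class ShellNode {
      lipschitz() { return this.child.lipschitz(); }
    }
    // ...and the division happens once, in the sphere-tracing loop:
    //   t += distance(p) / root.lipschitz();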
It is annoying that I have to manually make every Modifier node propagate its child's Lipschitz factor.
Fortunately there is a good way to tell if you are under-estimating the Lipschitz factor: set some extreme values and drag the "step" slider down and see if the shape changes. If it does, then the Lipschitz factor is wrong.
And, sweet, I can now make a pretty extreme Twist with no glitches:
But we do now have the problem that we run into the step limit in the shader quite quickly.
The "Bitter Lesson" (http://www.incompleteideas.net/IncIdeas/BitterLesson.html) tells us that we should use solutions that take advantage of computation.
One possibility is that we can use our knowledge of the child's bounding box to define a "bounding cylinder" for the twist, and for things that are far from the bounding cylinder, we just return the distance to the cylinder with Lipschitz = 1. But this is leveraging "human domain knowledge" rather than computation.
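A sketch of what that fallback could look like (all of these names are assumed):

    // Far from the twist's bounding cylinder, just return the distance to the
    // cylinder itself, which is a well-behaved (Lipschitz 1) field; only pay
    // the twist's Lipschitz penalty when we're close.
    function twistedDistance(p) {
      const dCyl = boundingCylinderDistance(p);  // assumed helper
      if (dCyl > 0) return dCyl;
      return childDistance(twist(p)) / twistLipschitz;
    }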
Maybe ignore for now, see how it performs on the GPU at home.
So now DomainDeform, DistanceDeform, and Twist all have Lipschitz constant calculation. All of the modifiers need to at least pass on the Lipschitz factor from their child.
The distance field of a (cross section of a) cube looks like this:
Rounded on the outside. (Correct).
But the distance field of a cube made out of the intersections of 6 half-planes looks like this:
And that means operations like an outside Shell end up with sharp corners instead of rounded corners, from input objects that look identical, which is quite unintuitive.
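For reference, the two fields being compared, in the usual formulations (sketch):

    // Exact box SDF: isolines round off outside the corners.
    function boxDist(p, b) {
      const q = [Math.abs(p[0]) - b[0], Math.abs(p[1]) - b[1], Math.abs(p[2]) - b[2]];
      const outside = Math.hypot(Math.max(q[0], 0), Math.max(q[1], 0), Math.max(q[2], 0));
      const inside = Math.min(Math.max(q[0], q[1], q[2]), 0);
      return outside + inside;
    }

    // Intersection of 6 half-spaces: same zero level set, but the isolines
    // stay sharp at the corners, so the outside field is only a lower bound.
    function planesDist(p, b) {
      return Math.max(Math.abs(p[0]) - b[0], Math.abs(p[1]) - b[1], Math.abs(p[2]) - b[2]);
    }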
Is there anything we can do about that?
Given f(p) and g(p), the intersection is not simply the larger of the 2 distances, but instead the distance to the nearest point p1 at which both f(p1) and g(p1) are <= 0.
Find the shortest vector d such that f(p+d) <= 0 and g(p+d) <= 0. The intersection of f(p) and g(p) is |d|, but signed.
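Restating that as a definition:

    d_\cap(p) = s(p)\,\min\bigl\{\,\lVert d\rVert : f(p+d) \le 0 \text{ and } g(p+d) \le 0\,\bigr\},
    \qquad s(p) = \begin{cases} -1 & \text{if } f(p) \le 0 \text{ and } g(p) \le 0 \\ +1 & \text{otherwise} \end{cases}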
I wonder if we can compute a first approximation of d based on max(f(p), g(p)), and then repeat that a couple of times with Newton's method or whatever? But how would we know what direction to step in?
Idea: do maybe 2 or 3 steps to try to find the surface, then measure the distance of the resulting point from the point we started at, and give that distance as the distance.
Reminds me of FastInvSqrt.
From some sketching on paper it looks like, for intersections, you want to follow the gradient of the one that says you're furthest away. And that looks correct for both the inside and the outside.
Annoyingly, with Peptide I'm going to have to evaluate both possibilities anyway! I may need to address this at some point.
I think with 2 iterations we get to the exact corner or edge of an intersection of half-spaces, so 2 iterations is probably enough to be getting started with. Either the first iteration jumps you to the exact surface and the second iteration goes nowhere, or the first iteration takes you to the plane of one of the surfaces, and the second iteration takes you to an edge.
For more complicated shapes you might want more iterations. Also you might want to configure the "renormalisation level" or whatever in the combinator node? Unsure. Maybe I prefer to "leverage computation" and not make it configurable.
So yeah, let's just try it out with 2 iterations on IntersectionNode and see if it works...
So:
    p0 = p;                                  // remember the starting point
    d1 = child1(p);  g1 = child1.derivative(p);
    d2 = child2(p);  g2 = child2.derivative(p);
    sign = (d1 < 0 && d2 < 0) ? -1 : 1;
    // first step: follow the gradient of whichever child says we're furthest away
    p = (d1 > d2) ? p - d1*g1 : p - d2*g2;
    d1 = child1(p);  g1 = child1.derivative(p);
    d2 = child2(p);  g2 = child2.derivative(p);
    // second step, same rule
    p = (d1 > d2) ? p - d1*g1 : p - d2*g2;
    return sign * length(p - p0);
Either I did something wrong or this is generating absolutely enormous expression trees. Instead of just calling each child once, I now call each child twice and call its derivative twice, so maybe 4x as much work? Doesn't seem crazy. The derivative can be 3x as expensive as the main function, because it has to compute 3 vectors. So it could be 8x as much? But it's taking hundreds of times longer to compile the shader, if it is even going to compile the shader. And this is for a scene that is just Intersection(Box, Box).
If I just do one iteration then it is more performant, but obviously that doesn't actually help at all.
What if I follow both gradients at the same time? So take p = p - (d1*g1 + d2*g2)? And then take the length of that? No, I think that always gives you the distance to the corner, instead of the distance to the nearest point on the surface. Simplifying it to dist = |d1*g1 + d2*g2| * sign gives us this:
So, yeah, distance to corner instead of distance to intersection of surfaces. But good that it shows that the idea could be sort of in the right direction.
So why is this code with derivatives so slow? Can I get it to dump the expression tree at each step?
One thing is we seem to be evaluating makePeptide() of the Intersection 3 times? It's because it was doing it once pointlessly, once to get the SDF for cursor-coordinate raymarching and SSA for the shader, and once again to get the normal for the shader. I've made it only do it once now.
One annoyance is that it looks like console.log isn't displayed until you return control to the browser, which means if your code is hanging you don't get any debug output.
Also, how do you do smooth min with this method? Address that later.
I think it is giving up while trying to compute the overall first derivative. Is this Firefox running out of memory? Or running out of patience? The page somehow becomes interactive again without logging the code that comes immediately after computing the first derivative.
I made it not try to compute the overall first derivative, and it is working! Gives a good distance field, great success.
I am curious if making it do a third iteration of gradient-following would cause the same issue. Yes, seems to.
So the issue is that a derivative of a tree that contains a derivative of a tree that contains a derivative is very expensive to calculate.
If instead of taking the length of p - p0 at the last step, I instead take |max(d1,d2)| + |p - p0|, then I do get an improvement on the initial max(d1,d2), even with only one iteration.
But it's no longer a lower-bound on distance! It now underestimates the distance.
Instead of adding |max(d1,d2)| and |p - p0|, I should take the "worst-case Euclidean distance", which means assuming they are perpendicular, and taking the square root of the sum of the squares.
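In the pseudocode style from above, the return becomes (sketch):

    r = max(d1, d2);                // remaining distance to the furthest surface
    t = length(p - p0);             // distance already travelled
    return sign * sqrt(r*r + t*t);  // worst case: the two are perpendicular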
And actually, for the specific (but very common) case of intersections of perpendicular half-spaces, this works perfectly! And is it now performant enough to put the normals back on?
Yes!
Great success.
So what do I need to do to actually use this?
OK, looking at the intersection of a sphere and a box, it does give a discontinuous field, which means an Offset of the surface looks like this:
So we need to not just give a lower bound on distance - we need to make it smooth.
My intuition is that a 2nd iteration would almost completely solve it. So let's try turning normals back off and see.
Actually doesn't fix it. The issue is still there. Maybe slightly less bad, but not a lot less bad.
What about getting derivatives with finite differences? Looks like performance may be even worse this way.
What if I normalise the gradient vectors? Still get a discontinuous field.
The issue I'm having is that I choose which gradient to follow based on which surface is furthest, which means moving a tiny distance in space can result in picking the other surface, which then follows a gradient in a different direction and gets a different answer. Obviously the solution is to do more iterations, but for now that doesn't look possible.
What if I follow both gradients at once to get the distance to the corner - and then pick the smallest out of the distance to the corner, and the distance to the furthest of the two surfaces? No, the problem is the distance to the furthest of the surfaces is already an underestimate. So I want max of distance to the corner and distance to furthest surface when both distances are positive, otherwise I want just distance to furthest surface?
Nope.
We always want just max(a,b) when one is positive and one is negative.
We basically have 2 vector fields. One vector field per child object, each telling us the distance and direction to the nearest point on the surface. And we need to somehow combine these to find out the distance to the nearest point that is inside both objects.
There is also the possibility that the objects don't overlap, so the intersection is empty. What distance do you give then? I basically don't care as long as it's positive.
Looking in 2d, there is a region in the corner where the point lies "between" the normals of the 2 surfaces at the corner. Inside this region you want the distance to the corner. Outside of that region you just want max().
I think if you do enough iterations of "follow the largest gradient", you ought to eventually find the closest point on the intersected object.
Looking here, we have 6 regions (A-F). In every region except E, max() gives the right answer; in E it gives the wrong answer. So we need to detect when we're in region E, and only in that region return the distance to the corner instead of max(a,b).
The dashed lines indicate surface normals of the 2 child objects, taken from the corner of the intersection. Isolines in C,D are parallel to the rectangle. Isolines in A,F are parallel to the circle. Only in E do the outer isolines need to bend to stay centered on the corner.
In F the distance to the rectangle is smaller than the distance to the corner; in this region we want the distance to the circle. In the mirror-image region the distance to the circle is smaller than the distance to the corner; in that region we want the distance to the rectangle.
In E the corner is further away than both of the child surfaces are individually.
Ah, the problem is we can't work out where the corner is, if the surfaces aren't perpendicular and flat. Because adding the two vector fields doesn't take us to the corner.
What if we always take the first surface distance on the first iteration and the second one on the second iteration?
Still discontinuous, just along a different line.
OK, so a few questions on this:
Working on the "better-intersections" branch at the moment.
I don't think it handles concave surfaces:
Starting from the "blob" point, it jumps to one surface, then the other, then the other, and eventually gets stuck at a local minimum where the 2 surfaces are close together but not overlapping, and never converges towards the overlapping region.
After several steps of this it will be at one surface or the other, and max() of the 2 possible points will be larger than max() of the 2 distances from the starting blob. So in that respect it is an improvement over straightforward max(), other than that it may be discontinuous.
For overlapping convex objects I think it always converges. In fact, for overlapping convex objects, does it always converge in 2 steps? I think after 2 steps it always gets you a point in the intersection, but not necessarily the closest point to the point you started from. Example:
It has found a point that is inside both objects (good), but it is not the closest point to where it started from. If it then does a couple more steps of alternating which surface's gradient it follows, it will converge. Or have I confused myself by drawing a bad diagram? Does it actually work? Unsure.
No, it doesn't converge to the correct point in 2 steps, easier to see with rectangles:
But, yes, if you do several more iterations (and follow the gradient even if distance is negative), then you do converge towards the closest point, at least in this case.
And as for when it's continuous... I think if you always take the first object first and the second object second, then you get a continuous output? It's only if you choose the order based on which one is largest that it has a discontinuity.
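A sketch of that fixed-order variant, in the same pseudocode style as the earlier block:

    p0 = p;
    sign = (child1(p) < 0 && child2(p) < 0) ? -1 : 1;
    for (i = 0; i < N; i++) {
      // Always project onto child1 first, then child2. Because the order
      // never depends on which distance is larger, the result should vary
      // continuously with p (per the conjecture above).
      p = p - child1(p) * child1.derivative(p);
      p = p - child2(p) * child2.derivative(p);
    }
    return sign * length(p - p0);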