
2025-04-09

Last modified: 2025-04-09 22:50:22


Targeted blends

I made the context menu for a blend only show "Disable" and "Delete this".

One thing I noticed is that disabling a blend hides the entire object, not just the blend. But more generally, if you "disable" a modifier node, it should become a pass-through to its child node. Good, done.

Now I'll look at the uniform naming thing.

I think the problem is that makeNormalised() gives a different unique id to every combinator.

Would it make more sense for the "normalised" form of the tree to match the TreeRewriter's intermediate form?

Lol, quick hack is to derive the "normalised" combinator ids from the non-normalised ids, and that solves the immediate problem.

But actually, TreeRewriter writes blend ids onto TreeNodes that need to apply blends. And any TreeNode that doesn't get given a blend id probably wants to just have a straightforward min/max with no blend?

Setting the unique id to -1 in fromTreeNode() works. Later on we could consider optimising out the chmin/smin/chmax/smax calls for combinators that don't have a blend attached.

Actually... why does unique id -1 work? Won't that make uniform names with minus signs in them?

Oh... TreeNode's min/max were using P.const instead of this.uniform. With that fixed: yes, it makes uniform names with minus signs.

Setting it to 0 instead works. nextId starts out at 1 so it won't cause collisions, and now I can change all the parameters, including blend parameters, without recompiling the shader. Great success. It does seem like TreeRewriter might be very slow though.
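
To make the failure concrete, here's the shape of the naming problem (the naming scheme and function below are illustrative, not the actual Isoform code):

    // Sketch only: derive a blend-parameter uniform name from a combinator's
    // unique id. With id = -1 this yields "blend_-1_radius", which is not a
    // legal GLSL identifier; with id = 0 it is fine, and since nextId starts
    // at 1 there is no collision with real combinators.
    function blendUniformName(uniqueId, param) {
        return `blend_${uniqueId}_${param}`;
    }

    console.log(blendUniformName(-1, 'radius')); // "blend_-1_radius" - invalid in GLSL
    console.log(blendUniformName(0, 'radius'));  // "blend_0_radius"  - valid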

Doing blends of blends is not trivial. We'd probably want to first solve all the blends of primitives, and then treat those ones as "new primitives" and do another pass to solve blends of blends. And blends of blends of blends needs a third pass, etc.; basically you have a DAG of blends and need to solve the leafest nodes first. Forget about it for now.

Hmm, I added logging for the time taken by TreeRewriter and it is really small, like 5 ms. So what is time-consuming? Nothing else that logs its time is taking very much of it.

OK, making Blends show up as children of primitives is working OK. I made a separate array blends that holds "children" that don't need to point back to the parent. And the blend shows up as a "child" of both of its argument nodes. One drawback is that if you click on a blend in the tree, it always gets selected in its first position, even if you clicked on the second copy. We might need TreeView's conceptual tree to have 2 different "blend nodes" even though there is only one underlying BlendNode? Or we might not care. Or we might just highlight both copies when selected; that sounds best actually.

But the more pressing problem is that the blends are no longer being applied. I guess collectBlends() isn't finding them even though I thought I made it do that.

Oh! It's because I'm forgetting to markDirty(). Again. Fixed.

For some reason blend surfaces are correctly coloured blue on hover, and clicking to select works, but they don't get coloured green when selected. The node gets deselected as soon as the mouse hovers off it?

Because ui.nodeExistsInTree() doesn't know about TreeNode.blends. Make it use TreeNode.findNodeById instead.
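
Roughly this shape of fix, as a sketch (the real signatures and id field are probably slightly different):

    // Sketch: delegate existence checks to the tree's own lookup, which
    // (unlike a plain parent/child walk) also knows about TreeNode.blends.
    function nodeExistsInTree(rootNode, node) {
        return rootNode.findNodeById(node.uniqueId) != null;
    }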

And now a more meaty problem: with the default object, Intersection(Union(Sphere, Box), Gyroid), if I add a blend between Box and Gyroid, then it applies between Sphere and Gyroid as well. So let's go back to unit testing, add a test case for that, and fix it.

Weirdly it actually passes in the unit test. So how is that different?

Oh, the document doesn't get marked dirty when you edit a blend. markDirty() on a blend should markDirty() on its 2 arguments, since it doesn't have a parent of its own.
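
i.e. something like this as a sketch (the field names for the two argument nodes are my guesses):

    class BlendNode {
        constructor(nodeA, nodeB) {
            this.nodeA = nodeA;
            this.nodeB = nodeB;
        }

        // A blend has no parent of its own, so dirtying it has to propagate
        // to both argument nodes, which do sit in the document tree.
        markDirty() {
            this.nodeA.markDirty();
            this.nodeB.markDirty();
        }
    }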

In the actual app if I make Intersection(Union(Box,Sphere), Gyroid) and try to put a blend on (Sphere,Gyroid) then it applies to (Box,Gyroid) as well, and if I try to put one on (Box,Gyroid) then it doesn't show up at all. But in the unit test it looks like both cases work correctly.

And if I dump processedDocument to the console, it looks right to me?

It is a Union(Intersection(Box,Gyroid), Intersection(Sphere,Gyroid)), and the Intersection(Box,Gyroid) has the blend parameters set. But it is rendering as if it doesn't. Maybe I need to dump the shader source?

Well in both cases it looks to be using the uniforms. We must be setting the uniforms at least some of the time else there would be no blends at all. Are we setting all of the uniforms?

I made it log the values from processedDocument.uniforms() and indeed it is getting 0 in the case that isn't applying any blend, even though the uniqueId is correct. That's weird, because I think the only place TreeRewriter sets the uniqueId is also where it sets the blend parameters.

OK, I discovered that it is also setting the blend radius on the uniform for the Intersection with id 0, which is why it is incorrectly applying the blend to the other object in the first case.

Oh! Maybe the problem is that it remembers names for uniforms based on the id etc. at the time this.uniform() is called. But since the uniform names are deterministic, let's just calculate them when they're actually used instead.

Hmm, no, that's not going to work. this.uniform() is when they're actually used.

But I am seeing an Intersection that has propertyUniforms containing 2 different uniqueIds. So definitely something is going weird there.

Setting propertyUniforms = {} at the same place we assign the blendRadius and uniqueId solves the problem. The issue is that serialize() isn't cloning propertyUniforms, it's keeping a reference to it.
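
The bug in miniature (names approximate):

    // Buggy shape: a shallow copy shares the same propertyUniforms object as
    // the original node, so writing blend uniforms onto one node silently
    // writes them onto the other as well.
    function serializeBuggy(node) {
        return { ...node };
    }

    // Fixed shape: give the copy its own propertyUniforms object (or just
    // reset it where the blendRadius and uniqueId get assigned).
    function serializeFixed(node) {
        return { ...node, propertyUniforms: { ...node.propertyUniforms } };
    }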

And this seems to be working now! Even without TreeRewriter touching propertyUniforms. Good.

I think I want to somehow support both the workflow where you apply targeted blends to individual surfaces, and the one where you set blends manually on combinators. They are both powerful in different ways.

I think I'll park blends for now, and come back to them when I'm dissatisfied with them.

2d texture images

So the plan here is to load in (without loss of generality) a black and white image, compute a 2d distance field of it (scaled up if you want), and then send the distance field to the GPU as a 2d texture, with options for tiling infinitely, mirror-tiling, etc.

And then we can extrude in Z, and use "DistanceDeformInside" to apply a texture to the surface.

First example: checkerplate.

Second example: logo.

And once this is working, we can use it to make text.

So I guess the first step is to duplicate texture3d support but for texture2d.

And then, I guess, first a way to load an image and treat it directly as an SDF, and after that, processing a black and white image into an SDF.

Well I have this loaded from an image:

However it is really slow? I don't see why it should be slow.

And when I add it as the second child of a DistanceDeformInside, it is giving warnings about not having a second child?

I think DistanceDeformInside gets broken by the tree rewriter. It needs to keep the reference to the 2nd child unchanged. What if the second child has blends applied to it? Oh dear.

OK, hack DistanceDeformInside to copy its second child to some other field, and restore it as needed.
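
The hack, roughly (the method and field names here are mine):

    // Sketch: before the tree rewriter runs, stash the second child on a
    // field the rewriter won't touch, and restore it afterwards so the
    // original reference survives intact.
    class DistanceDeformInsideSketch {
        stashSecondChild() {
            this.stashedChild = this.children[1];
        }
        restoreSecondChild() {
            if (this.stashedChild) this.children[1] = this.stashedChild;
        }
    }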

Whoa, working a bit:

Need to make it calculate a distance field for the image, instead of loading the image directly.
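
The simplest correct thing is a brute-force signed distance per pixel; a sketch (not necessarily the code I'll keep, and it's quadratic per pixel, which foreshadows the speed problem below):

    // Sketch: signed distance field from a binary image (non-zero = inside).
    // For every pixel, find the nearest pixel of the opposite value; the
    // distance is negative inside the shape and positive outside. Brute
    // force, so very slow on big images.
    function computeSdf(pixels, width, height) {
        const sdf = new Float32Array(width * height);
        for (let y = 0; y < height; y++) {
            for (let x = 0; x < width; x++) {
                const inside = pixels[y * width + x] !== 0;
                let best = Infinity;
                for (let v = 0; v < height; v++) {
                    for (let u = 0; u < width; u++) {
                        if ((pixels[v * width + u] !== 0) !== inside) {
                            const d = (u - x) * (u - x) + (v - y) * (v - y);
                            if (d < best) best = d;
                        }
                    }
                }
                sdf[y * width + x] = (inside ? -1 : 1) * Math.sqrt(best);
            }
        }
        return sdf;
    }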

One issue is that it looks like I'm only managing to get pixel values in the range 0 to 1, instead of arbitrary floats. Is this my limitation or something to do with GPU textures? It definitely let me use arbitrary floats for 3d textures.

It does work for arbitrary floats, great success.
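
For the record, the bit that matters for getting arbitrary floats out of a 2d texture is uploading it as a single-channel float texture rather than a normalized 8-bit one. A sketch in WebGL2 terms, assuming a context gl, the Float32Array sdf from the distance-field step, and width/height in scope:

    // Sketch: upload a Float32Array as a single-channel float texture.
    // R32F keeps real float values instead of clamping to 0..1.
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, width, height, 0, gl.RED, gl.FLOAT, sdf);
    // Float textures are only linearly filterable with this extension;
    // otherwise fall back to NEAREST.
    const filter = gl.getExtension('OES_texture_float_linear') ? gl.LINEAR : gl.NEAREST;
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, filter);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, filter);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);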

The next thing is upsampling the image to get better quality distances.

Here it is with a "checkerplate" texture on the gyroid:

Computing SDFs over large images is really slow. Could maybe do that bit in the background as a web worker and let you work on something else while it is thinking? And same for mesh import/export.
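
The worker version would be roughly this shape (a sketch: the file name, applySdfTexture, and the in-scope pixels/width/height are all assumptions, and computeSdf is the function from the earlier sketch):

    // Worker side, e.g. a hypothetical sdf-worker.js:
    //
    //     self.onmessage = (e) => {
    //         const { pixels, width, height } = e.data;
    //         const sdf = computeSdf(pixels, width, height);
    //         self.postMessage(sdf, [sdf.buffer]);   // transfer, don't copy
    //     };

    // Main-thread side: start the job and carry on with UI work meanwhile.
    const worker = new Worker('sdf-worker.js');
    worker.onmessage = (e) => applySdfTexture(e.data);
    worker.postMessage({ pixels, width, height }, [pixels.buffer]);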

jes cylinder:

maxDisplacement is surprisingly annoying, because it has to work for both positive and negative displacements. I guess do the thing I'm doing now and wrap it in clamp(x, -maxDisplacement, maxDisplacement). So, not that annoying actually.

OK I have a proper 2d SDF now. And maxDisplacement.

I've added options for tiling/mirroring in both X and Y. We could also do with "mirror tiling" probably, ignore for now.
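
The wrap modes are just coordinate remapping before the texture lookup; a sketch of plain tiling and of the "mirror tiling" I'm deferring (function names are mine):

    // Sketch: wrap a texture coordinate for infinite tiling vs mirror-tiling.
    function tile(u) {
        // plain repeat: keep the fractional part, e.g. 2.3 -> 0.3
        return u - Math.floor(u);
    }

    function mirrorTile(u) {
        // mirrored repeat: 0..1 forwards, 1..2 backwards, then repeat,
        // e.g. 1.25 -> 0.75, 2.25 -> 0.25
        const t = Math.abs(u) % 2;
        return t <= 1 ? t : 2 - t;
    }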

Good, I think this is "basically working" now.

Peptide DSL

Let's see if we can make a quick DSL for Peptide. I'll want a recursive descent parser that turns a string into a Peptide expression.

It will want:

But to start with maybe I just want to be able to take simple infix expressions and turn them into Peptide.

And this is another great opportunity to do some Test-Driven Development.

The most basic possible Peptide expression is just a number, so let's start there.
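
Something like this as the very first test (a sketch: the test framework, the parseExpression entry point, and comparing structurally against P.const are all assumptions about an API that doesn't exist yet):

    // Sketch of the first test: a bare number literal should parse to the
    // same thing as building the constant directly with Peptide.
    test('parses a number literal', () => {
        const expr = parseExpression('42');
        expect(expr).toEqual(P.const(42));
    });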

OK so I now have infix operators "working" (but not applying precedence), and numbers, and variable names.

Next up is maybe function call parsing.

I've done it by treating any identifier followed by a (possibly empty) argument list in parentheses as a function call, and then if P.whatever(...args) doesn't throw an error then you're allowed to use it. Maybe I would prefer not to allow arbitrary poking into Peptide though? Or maybe I would? At least it conveniently allows use of all functions without any special handling.
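
As a sketch of the shape of it (heavily simplified; it assumes the Peptide builder object P is in scope, that P.add/sub/mul/div exist, and that vars maps variable names to already-built Peptide expressions):

    // Numbers, variable names, function calls resolved by reflection into P,
    // and left-associative infix operators with no precedence yet.
    function parseExpression(src, vars = {}) {
        const tokens = src.match(/\d+(\.\d+)?|[A-Za-z_]\w*|[-+*/(),]/g) || [];
        let pos = 0;
        const peek = () => tokens[pos];
        const next = () => {
            if (pos >= tokens.length) throw new Error('unexpected end of input');
            return tokens[pos++];
        };

        function parsePrimary() {
            const tok = next();
            if (/^\d/.test(tok)) return P.const(parseFloat(tok));
            if (/^[A-Za-z_]/.test(tok)) {
                if (peek() !== '(') return vars[tok];   // plain variable name
                next();                                  // consume '('
                const args = [];
                while (peek() !== ')') {
                    args.push(parseInfix());
                    if (peek() === ',') next();
                }
                next();                                  // consume ')'
                return P[tok](...args);                  // let Peptide reject bad calls
            }
            if (tok === '(') {
                const e = parseInfix();
                next();                                  // consume ')'
                return e;
            }
            throw new Error(`unexpected token: ${tok}`);
        }

        function parseInfix() {
            let left = parsePrimary();
            while (peek() && '+-*/'.includes(peek())) {
                const name = { '+': 'add', '-': 'sub', '*': 'mul', '/': 'div' }[next()];
                left = P[name](left, parsePrimary());
            }
            return left;
        }

        return parseInfix();
    }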

One thing I am kind of stuck on is how to know if a variable name is a float, vec3, or struct.

Once I can pass in vec3s and access their elements, I can start adding some UI to let you type expressions into Isoform.

Todo
