Just thinking some more about how we'd make these.
I'm thinking we surface the "top" of part 1 and machine the features and profile of the top side, then flip the stock over and do the same for part 2, making sure the second surfacing pass brings the overall piece to 15mm thick.
But the tricky part is aligning part 1 and part 2 in the Y axis, because once you cut the profile for part 1, the Y reference surface is lost. And if you don't take great care to align the two sides, the piece won't sit nicely flat later on when we come to separate the two halves.
So to make these parts, I think:
And the stand needs, I think, 30mm-long chunks to make 2 parts, so in total we use 180mm per 2 parts of both types; from 1 metre of stock we get 5 chunks, or 10 parts.
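As a quick sanity check on that yield (a throwaway sketch using only the numbers above):

```python
# Back-of-envelope yield check (units: mm).
stock_length    = 1000   # 1 metre of bar
length_per_pair = 180    # total material used per 2 finished parts
parts_per_chunk = 2

chunks   = stock_length // length_per_pair          # 1000 // 180 = 5
parts    = chunks * parts_per_chunk                 # 5 * 2 = 10
leftover = stock_length - chunks * length_per_pair  # 100mm of offcut

print(f"{chunks} chunks -> {parts} parts, {leftover}mm left over")
```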
Gcode needed:
So this evening, if there's time, I'll try to start on the gcode, and tomorrow I plan to start making these.
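Not the real programs yet, but as a scratchpad for the gcode, here's the kind of thing I mean for the surfacing: a minimal serpentine facing-pass generator. Every number in it (stepover, feed, depth, stock size) is a placeholder, not the actual job.

```python
# Minimal serpentine facing pass generator. All numbers are placeholder
# assumptions, not the real job.
def facing_pass(width, depth, z, stepover, feed, safe_z=5.0):
    """Return G-code lines to face a width x depth (mm) area at height z."""
    lines = [f"G0 Z{safe_z}", "G0 X0 Y0", f"G1 Z{z} F{feed / 2:.0f}"]
    y = 0.0
    moving_right = True
    while y <= depth:
        x = width if moving_right else 0.0
        lines.append(f"G1 X{x:.3f} F{feed:.0f}")  # cut across the stock
        moving_right = not moving_right
        y += stepover
        if y <= depth:
            lines.append(f"G1 Y{y:.3f}")          # step over for the next pass
    lines.append(f"G0 Z{safe_z}")                 # retract when done
    return lines

# e.g. face a 150 x 40mm top at Z-0.5 with a 4mm stepover at 600mm/min
print("\n".join(facing_pass(width=150, depth=40, z=-0.5, stepover=4, feed=600)))
```

Running it just prints the G-code, so it's easy to eyeball before anything touches metal.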
OK, I've worked out what to do about alignment.
In the first program, "top-side-1", we square off the left and right ends of the material so that the reference surfaces are consistent.
We have the material sitting on tall parallels and machine the front and back edges down almost to the vice jaws, so that they're consistent too.
And then pick up the new Y zero by finding the centre of the other half's counterweight.
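The pickup itself is just a midpoint calculation: touch off both faces of the counterweight and split the difference. A trivial sketch with made-up probe readings:

```python
# Pick up the new Y zero from the centre of the other half's counterweight.
# Touch off the near and far faces and note machine Y at each touch.
# Readings below are made up; with the same edge-finder tip on opposite
# faces, the tip-radius offsets cancel when you take the midpoint.
y_near = -42.120   # machine Y touching the near face
y_far  = -18.120   # machine Y touching the far face

y_centre = (y_near + y_far) / 2   # -30.120: set this as the new Y zero
print(f"set Y zero at machine Y = {y_centre:.3f}")
```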
Following on from yesterday.
They talk about storing the KV cache on the GPU, and they frame it in terms of "requests", and Cursor is fast even on machines without CUDA. Does that mean that while you're using Cursor there's a GPU in the cloud basically dedicated to you? Or at least some fraction of a GPU, if one GPU has enough memory to store N users' KV caches? Is this really costing them less than $20/mo per user to run, or is it just burning VC money?
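For a rough feel of the memory involved, here's a back-of-envelope KV cache size for a hypothetical 70B-class model with grouped-query attention; all the dimensions are my assumptions, not anything they stated:

```python
# Back-of-envelope KV cache size per request. Model dimensions are
# assumed (roughly 70B-class with grouped-query attention), not anything
# they actually stated.
n_layers   = 80
n_kv_heads = 8
head_dim   = 128
seq_len    = 32_768   # tokens of cached context
bytes_each = 2        # fp16

# K and V each store n_layers * n_kv_heads * head_dim values per token.
cache_bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_each
print(f"{cache_bytes / 2**30:.1f} GiB per 32k-token cache")  # ~10.0 GiB
```

Under those assumptions that's about 10 GiB per long-context cache, so an 80GB GPU would only hold around 8 of them at a time, which is why the per-user economics seem worth wondering about.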
As good as they are at writing code, they're surprisingly poor at finding bugs in code if you just ask them to find bugs.
They're talking about different programming contexts as "calibration". Like the difference between how acceptable bugs are in a quick experiment versus in actual production code in Postgres or Linux. That's something I've been thinking about a lot lately as well. They're pointing out that (good) human programmers tend to "know" the context and are calibrated accordingly, but the LLMs don't have that holistic knowledge of what they're doing.
They use AWS for hosting and think it's really good.