Path: Add a new waterline algorithm named Grid Dropcutter #6677
Conversation
Since we're in feature-freeze right now, I've added the 0.21 label. I would suggest making the PR a draft as well.
@belov-oleg Thanks for the PR. JFTR, it's helpful to break up the commit history instead of it all being squashed into one commit, so it's easier for reviewers to inspect the code and understand what's been changed/touched/added.
I would normally agree with @luzpaz, but this is introducing a new feature, so it's fine to squash the changes to one commit. One bigger change I would like to discuss is moving the main algorithm to a Path generator. I'm trying to move the main gcode generation logic into pure functions in modules in the Generators directory so they can be more thoroughly unit tested. This also results in the user-facing operations becoming simpler and more maintainable. Since we're in feature-freeze, you have some time to look at this if you're willing.
I haven't done a thorough code review yet. A lot of your math is way over my head so I'll only be able to review generally. Here's some comments in that direction:
Please format all Python files using Black version 22.1 or later.
Avoid one- or two-letter variable names. One-letter variables are fine for axes (X, Y, Z) and for local loop counters (i, j). Otherwise, please use names descriptive enough that the reader doesn't have to hunt for the definition.
Don't use print statements for debugging. Instead use PathLog.debug().
Most Path python files have a standard boilerplate debugging block at the top of the file. Include this to make it easy to turn debugging on and off.
fp = open("/proc/cpuinfo", "r")
info = fp.read().split("\n")
fp.close()
Rather than explicitly closing the file, it's almost always better to do this with a context manager. https://book.pythontips.com/en/latest/context_managers.html
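A sketch of that suggestion applied to the quoted snippet (the `/proc/cpuinfo` path is taken from the original code and is Linux-specific):

```python
# The context manager closes the file automatically,
# even if read() raises an exception part-way through.
with open("/proc/cpuinfo", "r") as fp:
    info = fp.read().split("\n")
# Here fp.closed is True without an explicit fp.close()
```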
if hasattr(model, 'Shape'):
    shape = model.Shape
else:
    shape = model
This pattern can be simplified with the getattr function which returns a default.
shape = getattr(model, 'Shape', model)
def applyTool(self, tool):
    if type(tool) == int or type(tool) == float:
This pattern is more pythonic like this:
if type(tool) in [int, float]:
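A possibly even more idiomatic alternative is `isinstance`, which accepts a tuple of types and also matches subclasses. A minimal sketch with a hypothetical helper name (not the PR's actual function):

```python
def normalize_tool(tool):
    # isinstance(x, (int, float)) replaces type(x) in [int, float]
    # and additionally accepts subclasses of int/float.
    if isinstance(tool, (int, float)):
        return float(tool)
    raise TypeError(f"expected a number, got {type(tool).__name__}")
```

Note that `isinstance(True, int)` is also true, so add an explicit `bool` check if that matters.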
"App::PropertyBool", | ||
"FinishingProfile", | ||
"Clearing Options", | ||
QtCore.QT_TRANSLATE_NOOP( |
QtCore. not needed here. QT_TRANSLATE_NOOP is imported at the top.
# Begin GCode for operation with basic information
# ... and move cutter to clearance height and startpoint
if obj.Comment != "":
    self.commandlist.append(Path.Command("N ({})".format(str(obj.Comment)), {}))
Why the 'N'? Line numbers are added by the post-processor. No need in the Command.
Our minimum python version is now high enough that f-strings are acceptable.
This could be rewritten as
self.commandlist.append(Path.Command(f"({obj.Comment})", {}))
faces = list()  # list of Part.Face
voids = list()
Declare empty lists with a bare pair of square brackets. It's approximately 3X faster.
faces = []
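The speed difference can be checked with `timeit`; the exact ratio is machine-dependent, so this sketch prints the timings rather than asserting a factor:

```python
import timeit

# The [] literal compiles to a single bytecode instruction, while
# list() performs a global name lookup plus a function call.
t_literal = timeit.timeit("x = []", number=1_000_000)
t_call = timeit.timeit("x = list()", number=1_000_000)
print(f"[] literal: {t_literal:.3f}s, list() call: {t_call:.3f}s")
```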
# Extract waterline
# contourMap.dump("drill_05_%i_%.1f_%i.txt" % (mdlIdx, layDep, len(traces)))
nothing = True
while True:
Seems like the only way out of the loop is the break statement.
Couldn't you do
while len(nodes) > 0:
...
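The suggested pattern, as a minimal self-contained sketch (the `nodes`/`traces` names are hypothetical stand-ins for the PR's data structures):

```python
nodes = [(0, 0), (1, 0), (1, 1)]
traces = []
# An explicit emptiness condition replaces `while True:` plus break;
# `while nodes:` is equivalent to `while len(nodes) > 0:`.
while nodes:
    node = nodes.pop()  # consume one node per iteration
    traces.append(node)
```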
import time

class TestPathLevelMap():
    def sphere(self, x0, y0, z0, r, n):
Python unit tests always run in lexicographical order within the test class.
We usually name our tests something like test001() and then put a good description of the test in the next comment line. This way we have control over the order they execute. This isn't strictly necessary but if the tests build on each other, it's good to bail out early rather than continuing.
I don't understand how to include these tests in the main workflow. Some of them are significantly time-consuming. How much CPU time can I use for tests? Also, I have no idea how to run a test for the whole algorithm. Firstly, there are many possible combinations of parameters, and the second question is how to check the correctness of the created path. Can you give me some ideas?
A lot of your tests seem to be stress tests. I would not add those to the main test suite. It's good to have them but we don't want to be that hard on the CI pipeline for every single PR.
Testing the whole algorithm is what I would call an 'integration test'. That's also very difficult to write, especially for user-facing functionality where, like you said, there are lots of combinations.
What we are striving for right now are unit tests. These are tests that validate a single function and especially validate assumptions made by that function. For example, if your function is expecting a list, will it return a ValueError if it gets a float, or will it break later in the function with a 'TypeError not iterable' message?
Best Practices: https://www.testim.io/blog/unit-testing-best-practices/
I'm speaking in generalities here, not giving you advice on how to proceed. Writing good tests is an acquired skill and one that I'm still acquiring. So I don't feel like an expert. I won't let that stop me. :-)
If you don't have a pure function for the main algorithm, or such a function would take too many arguments, then consider factoring out some of the complexity to simpler functions and testing those.
You can look at the generators I've already written and the corresponding unit tests. I'm really happy with these. Working this way changed how I write the code. It forced me to make the code simpler and more modular, because it was easier to write the tests that way. The side effect is that the code is easier to read and much more stable. Also, I went faster because the effect was cumulative. I'd get one feature and test working, then start working on another feature. If my changes broke the first feature, I knew it immediately.
Regarding the correctness of a generated path. Start small. Work up.
I would do a test case that uses a dirt simple model. Maybe a sphere. Does one waterline loop. Then compare the commands in the resulting path to your expected result. You can get the string of a Path object with
obj.Path.toGCode()
I suggested a compressed commit because there were a lot of wrong decisions in the development process, later completely reworked. The full development history is saved in another fork branch.
I agree. In the screenshot, I selected the STOCK for the boundary. So I would expect the tool to be free to move outside the perimeter of the model all the way to the edge of the stock. Is that not what this property controls?
@belov-oleg are you going to make any changes on this PR? If you want this draft PR to be considered for a version 1.0 release, it should be marked ready for review by Monday, 2024-06-03 at the latest. (further details in the 1.0 feature freeze forum announcement).
Add: A new waterline algorithm named Grid Dropcutter
The idea of the algorithm was inspired by OCL Dropcutter, but the code was totally redesigned; the OCL library is no longer required. It was found that the previous algorithm is very slow for models with several thousand faces, mainly because of the ineffective area.makeSections routine, which was actually used only for estimating bounding boxes. In the new version, bounding boxes are calculated directly from the tessellation of the faces.
New features:
Processing can be restricted to the contour specified by the selected faces.
The BoundaryAdjustment and BoundaryEnforcement properties are supported.
Any tool, including ball end tools, can be used. Paths for ball end tools are calculated with performance comparable to end mills. The old tool format is not supported.
In multi-layer mode, the whole required area on each level is processed with the specified StepOver value.
Finishing can be selected. In this case, finishing passes are executed after the full draft processing, in the upwards direction, which can minimize the effect of shavings if they are not removed fast enough. The distance between finishing layers is specified by the FinishDepth parameter.
In transitions between layers, the appropriate ramps are inserted. The transitions between paths are optimized and are executed without step-overs where possible.
Selection of the area.
If no faces are specified (or all specified faces are declared as Void or Excluded using the AvoidLastXFaces parameter), all models included in the job are processed. The processing area is calculated as the contour of the model part located above the FinalDepth level, expanded by the tool radius. BoundaryAdjustment can be used to expand (negative value) or contract this area.
The area is also limited by the Stock or by the bounding box of the Model, as selected by the BoundBox parameter.
Each model not located entirely below the FinalDepth level is profiled separately. If any other model is located in this area, it is taken into account but not profiled; the minimal gap between such a model and the mill will be not less than ExtraOffset.
Any number of faces can be specified. The tail of this list can be treated as "Void", that is, marked for exclusion using the AvoidLastXFaces parameter.
If at least one included face exists, the processing is done inside the projection of this face onto the XY plane, even if the face is not visible from the positive Z direction. You can use this feature to specify processing areas of one model by selecting faces of an auxiliary model located under the processed one.
If several faces are selected, their common projection onto the XY plane is processed. One can choose how to process these features: collectively or individually. Different models are always processed individually. The area can be contracted or expanded with positive or negative values of BoundaryAdjustment respectively. The area determines where the centre of the tool can be located; therefore, if you want to process a face with some small adjoining features, it is sufficient to select the central face and set an appropriate negative value for BoundaryAdjustment.
There are several possible combinations of BoundaryAdjustment and BoundaryEnforcement.
1. BoundaryEnforcement=False. Excluded faces are removed before the boundary adjustment. BoundaryAdjustment is calculated using a fast, inexact algorithm.
2. BoundaryEnforcement=True, BoundaryAdjustment > radius of the mill. Excluded faces are removed before the boundary adjustment. BoundaryAdjustment is calculated using the exact algorithm. Note that the gap between the processed model and the other models is calculated approximately.
3. BoundaryEnforcement=True, BoundaryAdjustment < radius of the mill but not negative. Excluded faces are removed before the boundary adjustment. BoundaryAdjustment is calculated using the exact algorithm and is equal to the radius of the mill. The gap between the processed model and the other models is calculated approximately.
4. BoundaryEnforcement=True, BoundaryAdjustment negative. On the one hand we want to expand the area, but on the other hand boundary enforcement is requested. In this case the boundary is first shifted away using the inexact algorithm, and then the boundary of the excluded faces, shifted away by the radius of the mill with the exact algorithm, is subtracted from the area.
The principle and the structure of the algorithm.
For each model group a level map is created. The level map is a structure holding a rectangular array of elevations in the Z direction, plus several fields describing the position and resolution of the map; the array is currently implemented as a numpy array of floats. Each cell of the array contains the exact maximum elevation of the surface over the square given by the XY coordinates min_x + i * sample_interval .. min_x + (i + 1) * sample_interval, min_y + j * sample_interval .. min_y + (j + 1) * sample_interval, where i and j are the indices of the cell. Initially, a surface which covers the whole set of models is created. To calculate this surface, each facet is tessellated, and for each triangle every intersection of the projected edges with the grid, and every cell lying wholly inside the triangle, is evaluated.
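A minimal sketch of the level-map idea (all names are hypothetical; the real code intersects edge projections with the grid, while this version only uses each triangle's XY bounding box, which is conservative but illustrates the data layout):

```python
import numpy as np

def rasterize_triangles(triangles, min_x, min_y, sample_interval, shape):
    """Build a level map: for each triangle, raise every cell whose square
    overlaps the triangle's XY bounding box to the triangle's maximum z."""
    level = np.full(shape, -np.inf)
    for tri in triangles:                      # tri: 3x3 array of xyz vertices
        xs, ys, zs = tri[:, 0], tri[:, 1], tri[:, 2]
        i0 = max(int((xs.min() - min_x) // sample_interval), 0)
        i1 = min(int((xs.max() - min_x) // sample_interval) + 1, shape[0])
        j0 = max(int((ys.min() - min_y) // sample_interval), 0)
        j1 = min(int((ys.max() - min_y) // sample_interval) + 1, shape[1])
        # Cell-by-cell maximum keeps the highest surface seen so far.
        level[i0:i1, j0:j1] = np.maximum(level[i0:i1, j0:j1], zs.max())
    return level
```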
Then this surface is transformed in place into the minimal surface on which the tip of the tool can be located without cutting extra material from the desired part. To create this surface, the profile of the tool is first gathered from the tool's 3D model using the same procedure. Then a list named "job" is created; its items describe the shifts in the i, j and z directions between the resulting map and the original map when the cell-by-cell maximum between them is computed. Roughly speaking, this operation should be performed for each cell of the tool footprint, but several optimization tricks significantly reduce the work. First, partial maximums of cells can be precomputed for horizontal areas of the tool profile. Then the original map is analyzed row by row, and for each row the maximal and minimal derivatives along both coordinates are stored. Using these derivatives, it is possible to quickly estimate which parts of the tool profile cannot be in contact with the surface anywhere in this row and exclude them from the calculation. The calculations are also organized to keep the data in the L1 and L2 caches of the processor as effectively as possible.
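The shift-and-maximize step described above can be sketched as a grey-scale dilation over a list of (di, dj, dz) offsets; function names and the sign convention for dz are assumptions here, not the PR's actual code:

```python
import numpy as np

def drop_tool_surface(level, tool_offsets):
    """Sketch of the shift-and-maximize step: tool_offsets is a list of
    (di, dj, dz) shifts describing the tool footprint relative to its tip.
    Each output cell becomes the maximum of the shifted input cells."""
    n, m = level.shape
    out = np.full_like(level, -np.inf)
    for di, dj, dz in tool_offsets:
        # Source region of `level` shifted by (di, dj), clipped to bounds;
        # dst is the matching view of `out` so out[i,j] sees level[i+di,j+dj].
        src = level[max(di, 0):n + min(di, 0), max(dj, 0):m + min(dj, 0)]
        dst = out[max(-di, 0):n + min(-di, 0), max(-dj, 0):m + min(-dj, 0)]
        np.maximum(dst, src + dz, out=dst)
    return out
```

With a flat cross-shaped footprint (dz = 0 everywhere), a single peak in the input spreads to its four edge-neighbours, which is exactly the "tool tip must stay clear of the material" effect.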
To calculate the path, this surface is crossed by a plane at the appropriate level and a contour map is created. Each cell of the contour map contains one of four values: 0 - air, 1 - grey zone, 2 - expanded material and 4 - material. Initially this map contains only air and material; then the boundary of the material is marked as a grey zone. The centre of the tool can pass anywhere inside this zone. The algorithm selects points where the cells of the grey zone are connected by corners, but not by edges; these points form the path. A special processing of 90-degree turns and a back-tracing of narrow gaps are also performed. For multi-layer processing, the boundary of the material is expanded outwards by the specified distance and a new grey zone is selected.
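A hedged numpy sketch of the classification step described above (using the cell values from the text, but without the corner-connectivity path extraction):

```python
import numpy as np

def contour_map(level, z):
    """Classify cells at slicing height z: 0 = air, 4 = material, then
    material cells with at least one air edge-neighbour become 1 = grey
    zone, where the tool centre may travel."""
    cmap = np.where(level > z, 4, 0)
    material = cmap == 4
    # Pad with air so cells on the map edge also count as boundary cells.
    padded = np.pad(material, 1, constant_values=False)
    has_air_neighbour = ~(padded[:-2, 1:-1] & padded[2:, 1:-1]
                          & padded[1:-1, :-2] & padded[1:-1, 2:])
    cmap[material & has_air_neighbour] = 1
    return cmap
```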
To optimize the tool movement, adjacent paths are joined together with a straight line where this is possible without touching the material, that is, only over cells marked as 0 or 2 on the contour map. After this optimization, the appropriate ramps are inserted.
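The join check can be sketched as sampling the straight segment between two grid points and rejecting it if any visited cell is grey (1) or material (4); function and variable names are illustrative only:

```python
def can_join(cmap, p0, p1):
    """Return True if the straight segment from p0 to p1 crosses only
    air (0) or expanded-material (2) cells of the contour map."""
    (i0, j0), (i1, j1) = p0, p1
    steps = max(abs(i1 - i0), abs(j1 - j0), 1)
    for s in range(steps + 1):
        # Sample the segment at grid resolution, rounding to cell indices.
        i = round(i0 + (i1 - i0) * s / steps)
        j = round(j0 + (j1 - j0) * s / steps)
        if cmap[i][j] not in (0, 2):
            return False
    return True
```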
The finishing is performed similarly, but the calculated paths can also be slightly shifted down for better processing of shelves and gently sloping surfaces. The distance between paths is calculated in accordance with the tool profile and the sample interval.