Firmware uncertainties

OK.

“…micro-controller should be adjusting acceleration…”

As long as the moves aren’t so small that the buffer can’t hold enough segments to actually predict the accelerations, it should still be a lot less work than the trig needed to turn an arc into line segments. These are 8-bit processors, after all.
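To put a rough number on that trig cost, here is a hypothetical sketch (names are illustrative, not Marlin’s actual code) of what interpreting a G2/G3 involves: one sin/cos pair per generated segment, evaluated in software floating point on an 8-bit AVR.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of arc interpretation: chop the arc into straight
// segments and pay one cos and one sin (software floating point on AVR)
// per segment endpoint. Not Marlin's real plan_arc().
int arc_to_segments(double cx, double cy, double radius,
                    double start_angle, double sweep_rad, double mm_per_segment) {
    double arc_len = std::fabs(sweep_rad) * radius;
    int n = (int)std::ceil(arc_len / mm_per_segment);
    if (n < 1) n = 1;
    for (int i = 1; i <= n; ++i) {
        double a = start_angle + sweep_rad * i / n;
        double x = cx + radius * std::cos(a);  // one cos...
        double y = cy + radius * std::sin(a);  // ...and one sin per segment
        (void)x; (void)y;  // endpoint would be queued as a line move here
    }
    return n;
}
```

A 90° arc of 10 mm radius at 0.2 mm per segment already means 79 sin/cos pairs before the planner even sees the moves.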

So along those lines I went back to no-arc gcode; still had an issue. Slowed down the accelerations; still had an issue. I did not try slowing down the actual gcode speed, though.

To me it is odd: this is using fewer resources than a printer and moving far slower (even at my test 40 mm/s), so we should not be having an actual issue. I think I am back to thinking EXTRUDERS=0 is the problem.

I guess that is my next test, before arcs.

It may not be EXTRUDERS=0 so much as how it treats “travel” moves. Since most of the testing is about extrusion moves, there may be some other constraints from extrusion moves (like linear advance stuff) that work fine if you are extruding, but have odd effects when just making XYZ moves.

Thinking out loud too… I think gcode is at the wrong level of abstraction, at least the movement commands. It’s low level because it’s very dependent on the particular machine / setup you’re using. It doesn’t make sense to share gcode like you would share STL files for example. But it’s also too high level because you don’t have full control over the acceleration profiles etc.

I think it would make more sense if gcode would allow you to specify acceleration profiles for every move. No computations on embedded processors needed and everything is “precompiled”. You’d have full control over what happens at which speed. Kind of like the Klipper approach, except that it would all be “precompiled”.

Tracking it down is driving me nuts. As soon as I noticed the Z axis being wonky I knew something deeper was going on.

I need to look into that. I am pretty happy with the old accel and jerk we had plus S-curve. Last night I think I really got X and Y set well; they handled everything as I expected and wanted. Keeping in mind, we only need acceleration limits so we don’t skip steps by asking too much of physics. For CAM you actually do not want excessive accelerations, as it changes the tool loading (unless you are using a router PID, then you have more room for accel).
So setting super slow accels means you could run into work hardening on metals and make corners a failure point.

This is an interesting idea, but I’m not about to replace all that stuff for free :).

The accelerations are easy enough to do if you break them up by axis. The trick is that if you have a small buffer, and 50 moves that are 0.01mm long, you might be moving too fast, and if the next move (that you can’t see) is a right turn, you may exceed your accelerations just getting down near zero at the corner. You need to see far enough ahead to be able to compute the max speed now to stop before that corner.

It would be waaay easier to just do that with the entire path ahead of time, working backwards from each bottleneck to assign a max speed. Instead, not only do you have this problem of not being able to see the future, you have to do the math each time there is a new point, because when you see it is straight, you can increase the max speeds of all the points in the queue.
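The whole-path version really is just one reverse pass. A sketch under the usual constant-acceleration assumption (v_entry² ≤ v_next² + 2·a·d), not Marlin’s actual planner:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Offline version of "work backwards from each bottleneck": with the whole
// path in memory, one reverse pass caps each segment's entry speed so a
// deceleration at rate `accel` can still meet the speed required at the
// next junction. A sketch, not Marlin's planner.
struct Segment {
    double length_mm;  // segment length
    double max_speed;  // limit from feedrate / corner geometry (the "bottleneck")
};

std::vector<double> reverse_pass(const std::vector<Segment>& path, double accel) {
    std::vector<double> entry(path.size() + 1, 0.0);  // entry[i]: speed entering segment i
    // entry[path.size()] stays 0.0: the path ends in a full stop.
    for (int i = (int)path.size() - 1; i >= 0; --i) {
        double reachable = std::sqrt(entry[i + 1] * entry[i + 1]
                                     + 2.0 * accel * path[i].length_mm);
        entry[i] = std::min(path[i].max_speed, reachable);
    }
    return entry;
}
```

With 50 moves of 0.01 mm ahead of a dead-stop corner, the entry speeds fall off as √(2·a·distance-to-corner), which is exactly the information a small lookahead buffer can’t see.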

Increasing the speed and memory capability of the processor will make this a lot easier, but it is definitely possible to do with this processor, at least with some easy assumptions.

We can set the buffer parameters in the advanced config. Would/could there be a benefit to making it larger? I see the downside as a pause taking longer to happen, but I would rather have better movements all the time and sacrifice a pause that is rarely used.
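For reference, I believe these are the stock Marlin 2.x option names in Configuration_adv.h (worth verifying against your tree, the comments are my paraphrase):

```cpp
// Configuration_adv.h -- stock Marlin 2.x names, verify against your copy
#define BLOCK_BUFFER_SIZE 16  // planner blocks; more lookahead, more SRAM (power of 2)
#define BUFSIZE 4             // serial command buffer slots
#define SLOWDOWN              // throttle feedrate when the planner queue runs low
```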

I don’t think it exists. It’s a kind of wishful thinking. It would take a lot of work to make this happen. Not only because firmware needs to support it, but you also need to generate those profiles.

CAM knows much more about what it’s trying to do than gcode does. So ideally, CAM would generate the profiles. That would be better than just having a kind of gcode postprocessor that adds the accel profiles. The postprocessor would be way easier to create, but would be a kind of poor man’s version.

Never mind. I will keep going and make some more tests.

Jamie I had to change -Recip to RECIPROCAL to get it to compile.

With your changes it is pretty damn good on the first try. I am going to try and tweak my settings a bit more.

With 160 for your setting, the large arcs were still slow, especially the outer triangle logo arcs.

Thanks!

I am not sure. At a setting of 160, large arcs are noticeably slow; at 50 it basically squares off the small arcs. 100 seems okay, but large arcs are still slow and the corners are still chunky. Using my test file.

What board are you running this on? I am on a rambo.

Watch the outside of my triangle logo vs the inside.

I am seriously at a loss. Watching your video Jamie the best I can figure is I am fighting multiple issues.

From the video I can explain the weird issues: the multiple-speed thing, the Z axis changing speeds, the bottom of the little letters, and the large arcs being chunky. When the buffer drops to half full, SLOWDOWN is implemented. So it moves normally, depletes the buffer to 50%, SLOWDOWN, then resumes normal speed, then SLOWDOWN again. That explains why the Z is not always affected: it has to catch a slowdown. In your video the buffer hits 50% on almost every arc, if not more than once.
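A rough model of that behavior (my reading of what SLOWDOWN does, not the firmware’s exact formula):

```cpp
#include <cassert>

// Rough model of SLOWDOWN as described above (illustrative, not Marlin's
// exact math): once the planner queue drops below half full, each new block
// is stretched out so the serial/SD reader has time to refill the queue.
// The visible result is the normal / slow / normal surging on arcs.
double slowdown_factor(int blocks_queued, int block_buffer_size) {
    int half = block_buffer_size / 2;
    if (blocks_queued >= half) return 1.0;  // buffer healthy: full speed
    // Illustrative scaling: the emptier the queue, the harder the throttle.
    return (double)(blocks_queued + 1) / (double)(half + 1);
}
```

So a move only gets throttled if it happens to be queued while the buffer is below half, which is why the Z speed change comes and goes.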

So now I have no idea what to do, I will play with slowdown off I guess.

Ok I will try again tonight with your branch to make sure we’re looking at the same thing. Are you changing the acceleration values? I was running everything with the factory default (M502) settings. My Z axis is heavy and everything shakes like crazy so on the high speed runs I can’t see whether there are polygons or not; all I see are sine waves. I’m on MKS Gen L, equivalent to ramps.

I forgot you are running that monster!

I just updated the 415 branch with one that is okay. Now I am rerunning tests with the buffer in mind: first is slowdown off, after that I will try to increase the buffer.

Alright, I gotta take a break from this before I go insane. Wrap-up so far: arcs are slightly functional but clearly not complete. I just moved both files to 20 mm/s, and with arcs off the gcode still works perfectly and is very clearly moving and accelerating correctly. Small arcs go slower, larger arcs move faster. Flawless movements and plot now.

Using arcs in the gcode file I can get perfect corners in the plot, but it clearly moves wrong: small arcs seem to ignore accelerations altogether, and large arcs fail to move smoothly. Lots of jerks and stalls.

Something is not adding up here. I regenerated the toolpaths with and without arcs at 20 mm/s and I’m seeing the opposite behavior: without arcs it chokes on lots of small movements (I assume it would be better running from SD instead of over serial). With arcs everything looks fine to me. I’m happy with the arcs behavior.

This is using the code from today, I am calling “415b” which has ARC_SEGMENTS_PER_SEC 160 and MM_PER_ARC_SEGMENT .2. I haven’t changed junction deviation or acceleration so they are at
#define JUNCTION_DEVIATION_MM 0.045 // (mm) Distance from real junction edge
#define DEFAULT_MAX_ACCELERATION { 230, 230, 80, 230 }
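For what it’s worth, this is how I read those two arc settings interacting; worth double-checking against plan_arc() in the branch under test:

```cpp
#include <cassert>
#include <cmath>

// My reading of the two arc settings (verify against plan_arc() in the
// actual source): segment length scales with feedrate so the planner sees
// roughly ARC_SEGMENTS_PER_SEC segments per second, but the length is
// capped at MM_PER_ARC_SEGMENT.
double arc_segment_length_mm(double feedrate_mm_s,
                             double segments_per_sec,    // e.g. 160
                             double mm_per_arc_segment)  // e.g. 0.2
{
    double len = feedrate_mm_s / segments_per_sec;
    return (len > mm_per_arc_segment) ? mm_per_arc_segment : len;
}
```

If that’s right, 20 mm/s at 160 gives 0.125 mm segments, i.e. 160 new planner blocks per second, which is plenty to drain a 16-block buffer whenever the serial link can’t keep up.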

I first ran the no-arcs version, then with arcs:
Arc Torture test_20_noarcs.gcode (116.3 KB)
Arc Torture test_20_arcs.gcode (18.0 KB)

Maybe this can narrow it down:

  1. Do you see behavior in my video (second half) that is bad? Perhaps there is a problem I am not observing?
  2. If not, do you get the same behavior with my gcode (with arcs)? If so, then maybe there is something in your CAM, or if not, then some difference in the firmware? I tried to match your setup as closely as possible.
  3. Do you have stale EEPROM settings? Perhaps M502 for factory reset and see if it helps. Probably good to do M503 first to look for discrepancies to explain the behavior.

This is what I’ve got as of right now:

Send: M503
Recv: echo:  G21    ; Units in mm (mm)
Recv: echo:  M149 C ; Units in Celsius
Recv: 
Recv: echo:Steps per unit:
Recv: echo: M92 X200.00 Y200.00 Z800.00 E200.00
Recv: echo:Maximum feedrates (units/s):
Recv: echo:  M203 X50.00 Y50.00 Z15.00 E25.00
Recv: echo:Maximum Acceleration (units/s2):
Recv: echo:  M201 X230.00 Y230.00 Z80.00 E230.00
Recv: echo:Acceleration (units/s2): P<print_accel> R<retract_accel> T<travel_accel>
Recv: echo:  M204 P230.00 R3000.00 T230.00
Recv: echo:Advanced: B<min_segment_time_us> S<min_feedrate> T<min_travel_feedrate> J<junc_dev>
Recv: echo:  M205 B20000.00 S0.00 T0.00 J0.05
Recv: echo:Home offset:
Recv: echo:  M206 X0.00 Y0.00 Z0.00
Recv: echo:Z-Probe Offset (mm):
Recv: echo:  M851 X10.00 Y10.00 Z0.00
Recv: ok

Okay, well, mostly consistent with my results. My no-arc gcode is much smoother, but similar (from SD). It slows more at smaller corners, just not as drastically as yours, and goes faster at larger-diameter arcs.

So with the arc gcode, the two most obvious examples are the concentric circles: see how the first two (smallest) are full speed, then it slows down for the larger ones. Then in my triangle logo, notice how the large outer perimeter with the giant arcs is super slow and the inner part goes near full speed. To me that is basically backwards. If you pay real close attention to the small letters, at the end of each arc it fully jerks, as if acceleration goes away too soon. So instead of slowing down it just doesn’t do the accel, maybe?

I think we can deal with smaller arcs being slower (using non-arc gcode), but that jerk and missing accel (with arc gcode) will snap bits. Both are bad, but no-arcs seems to be the lesser of two evils to me.

My non arcs is way more fluid for some reason.

With an accel of 230 that puts us at roughly 1 mm of accel distance each way to get to and from 20 mm/s. Maybe I am looking at this wrong. Maybe those are just too high when things work right?
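For the record, the arithmetic behind that “roughly 1 mm”:

```cpp
#include <cassert>
#include <cmath>

// Distance to accelerate from rest to speed v at constant acceleration a:
// d = v^2 / (2a). With v = 20 mm/s and a = 230 mm/s^2 that is ~0.87 mm
// each way, so "about 1 mm" is right.
double accel_distance_mm(double v_mm_s, double a_mm_s2) {
    return v_mm_s * v_mm_s / (2.0 * a_mm_s2);
}
```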

The SD card could make that difference, if the non-arc file is starving the buffer with tons of tiny segments.

Not to be argumentative, but adding a small arc instead of a hard corner should let it take that turn much faster. Maybe it is actually using the correct acceleration, and it just doesn’t look like it? Also, it is speed that breaks bits, not acceleration, right? The best example is the N. I’m trying to judge from Jamie’s table wobble if it is worse on the round ends than the sharp inside corners. I honestly can’t tell.
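Back-of-envelope on the cornering claim, assuming the limit through a curve is lateral acceleration:

```cpp
#include <cassert>
#include <cmath>

// Following an arc of radius r at speed v needs lateral acceleration
// v^2 / r, so an accel limit a permits v = sqrt(a * r) through the curve,
// versus slowing to (near) zero at a sharp corner. Even a 1 mm corner
// radius at a = 230 mm/s^2 allows ~15 mm/s, most of a 20 mm/s feedrate.
double max_corner_speed_mm_s(double accel_mm_s2, double radius_mm) {
    return std::sqrt(accel_mm_s2 * radius_mm);
}
```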

I also can’t tell if the larger arcs are going slower from just looking. Maybe I should figure out how to go frame by frame and determine the actual speed.