MarlinBuild again

Yeah, me too, done for the night. Thanks for all the work, seems like progress was made!

There seem to be random failures. I think I may be pushing it too hard; too many jobs at once might be breaking it. I may need to revert the board-based workflows and just do one big workflow.

With the 60+ jobs it has, and it being so easy to add more, I wonder what the limits are going to be. I am also saving a ton of artifacts. There must be some space limitation.

I almost hate to bring it up, but I would strongly consider separating the driver configurations out of the boards that have physically separate drivers (e.g., RAMPS). I realize that in that specific case you aren’t building for full V1 support (since Ryan isn’t selling them anymore), but it just seems… cleaner to have the configurations logically separated when the boards are separate from the drivers. For the Rambos and other boards with integrated drivers, I’m all for rolling the driver configuration into the board configuration.

And by driver configuration, I’m talking about things like steps per unit, digipot settings, etc. Things that are actually tied to the drivers (microstepping limits, or availability of digipots), and not the boards themselves. I suppose I could spend some more time reacquainting myself with C/C++ headers and pragmas, and put in some conditional logic to handle those specific edge cases…
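One way that conditional logic could look is a small shared driver header, where a board file declares which driver is installed and the header derives the driver-specific settings. This is only a sketch with made-up macro names (`STEPPER_DRIVER`, `AXIS_STEPS_PER_UNIT`), not Marlin's actual option names:

```cpp
// drivers.h (illustrative sketch, not Marlin's real macros):
// a board config defines which driver is installed, and this header
// derives the driver-specific settings from that choice.
#define DRIVER_A4988   1
#define DRIVER_DRV8825 2

#ifndef STEPPER_DRIVER
  #define STEPPER_DRIVER DRIVER_DRV8825  // default to the original RAMPS config
#endif

#if STEPPER_DRIVER == DRIVER_A4988
  // A4988 tops out at 1/16 microstepping: half the DRV8825 steps/mm
  #define AXIS_STEPS_PER_UNIT { 100, 100, 400, 100 }
#elif STEPPER_DRIVER == DRIVER_DRV8825
  // DRV8825 can do 1/32 microstepping, so twice the steps/mm
  #define AXIS_STEPS_PER_UNIT { 200, 200, 800, 200 }
#else
  #error "Unknown STEPPER_DRIVER"
#endif
```

A board with integrated drivers (Rambo, etc.) would simply hard-code its values and skip this header entirely.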

As an example, I’ve created a file for the A4988 configuration, and aside from setting all the driver types to A4988, I have the steps per unit set to { 100, 100, 400, 100 }, since the A4988s can only do half the micro-steps the DRV8825s can.
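For reference, the core of such an A4988 file is just a handful of overrides; something along these lines, assuming Marlin 2.x `Configuration.h` option names, with the values taken from the discussion above:

```cpp
// A4988 driver configuration (sketch of a Configuration.h fragment).
// The A4988 maxes out at 1/16 microstepping, half of the DRV8825's 1/32,
// so steps-per-unit are halved relative to the original DRV8825 config.
#define X_DRIVER_TYPE  A4988
#define Y_DRIVER_TYPE  A4988
#define Z_DRIVER_TYPE  A4988
#define E0_DRIVER_TYPE A4988

#define DEFAULT_AXIS_STEPS_PER_UNIT { 100, 100, 400, 100 }  // { X, Y, Z, E0 }
```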

It remains to be seen how many configurations we can adequately cover this way, and not just in terms of the number of jobs we build: each combination also adds a potential corner case, and it makes it even harder to find the right one.

I agree drivers are a possible option, and they are already separate. But I only have one RAMPS driver option now, and it is based on the original config with DRV8825s.

This is a big experiment right now. It may go nowhere.

What I like is your idea of a JS front-end, where someone could pick and choose their build, and get a built binary out the other end. Maybe only have a smaller core set of configurations that auto-build with new releases, but allow folks to customize their own build. Maybe track what builds are getting requested, and add anything that suddenly seems to explode, but for the most part, just pump and dump. Hell, set it up as a Patreon benefit… :smiley:

We’ll start by choosing one branch; that cuts everything in half.

And to be fair, I hammered that thing last night: 4-5 different sets of tests running at once, plus you running yours at the same time. That is a pretty tough use case.

It has had some errors this morning too, though, with just one or two commits at a time. The steps just fail, no reason given. It makes me think I am hitting a limit on processor power or disk space or something.


Maybe the API requests? https://docs.github.com/en/actions/getting-started-with-github-actions/about-github-actions#usage-limits

I got excited seeing you can set up your own “runner.” Thought it might be fun… well, it says not to do it on a public repo; too dangerous.

So it seems like, once the edits and testing are done, the limits they impose are plenty? I’m not seeing any mention of a way to actually monitor your usage.

Nope, I bet it is Action minutes. You get 2,000 per month, and that goes quick; many of those jobs were 8-10 minutes each, so with 60+ jobs a single full run could burn 500+ minutes. Each extra minute is $0.016.

So that means fewer options, less often, or a private runner.

I thought that was only for private repos. I thought public repos had no costs.

Hmmm, right again. It does say free for public, but there must be a limit.

Do you think we can stay under this limit (whatever it is), or do we need to do something different? Like somehow make most of it private so we can run a private runner, or is this limit just too constricting?

I was about ready to flash an image and test out some new boards.

I seemed to hit the limit often after I split them into board-specific workflows. So it is worth trying to flip them back into one giant workflow again.

Unfortunately, the only time I care about the speed of the builds is while I am working on it, and that’s also when I am slamming it with new commits.

I’m sure we can keep it under control. But it may mean removing some of the builds or testing less frequently (like, on a PR, but not a push, or something).

Did you look at the Archim-on-Arduino problem? I am wondering if I should run it up to Marlin to see if they want to know about it.

Actually, as it is, the scheduled ones have all run just fine for the last several days. That doesn’t include the arduino/bugfix combination for the Archim boards, but I think it might mean the random failures are not an immediate concern.

What do you want to try flashing? Have you looked at the artifact .zip file and built from that for any of the boards? I am also curious to know whether you’ve compared the configuration files at all. I tried to make them as close as possible, but since they aren’t from exactly the same Marlin commit, there are subtle differences.

No not yet. Quarterly taxes just snuck up on me and backhanded me pretty good. Today might not be the best time for me to test things…tax day makes me a little cranky.

Yup, had a look this morning and they seem to be doing fine.

Had big plans for SKR testing today.

I did look at them pretty closely, and I think they are actually pretty solid. The diff file seems to be off (at least when I looked a few days back; I have not checked recently): it doesn’t show the changes on some of them. Off the top of my head, the baudrate is changed but not shown. It seems like the text file might be off by a line or two.


Yesterday, I went ahead and added 2.0.6.

I also changed all of them to run one job at a time. They take a while now, but I haven’t seen any spurious errors in the 3 builds since then. Maybe that will make this more stable.

The thing we’re really missing is a way to go from the artifacts to the release, or to some other online folder with the zip files. There is an API to get the artifacts from the Actions runs, but I have no idea how to select the most recent one on main. Maybe I should look into that.

Another option would be to have another branch, stable. When we are satisfied with a build, we can PR into stable, and when we merge, it will run the workflows (again) and, only if the branch is stable, make a release. There’s more I have to figure out, but if we keep all 30 builds, there are going to be so many artifacts. I know I don’t want to go through and download them all, and I don’t trust myself to do it right.


Boy, how time flies, especially when you’re busy moving in the middle of a pandemic :S
So yeah, CNC stuff had to take a back-seat.

I did some release testing a while back in my repo. The principal workflow the new GitHub functionality enables should be the following.

  1. Nightlies just get uploaded as build artifacts. These are automatically cleaned up by GitHub, so no fuss there.
  2. Releases are tagged commits in the repo. One pushes a tag, the automatic builds build everything, and the artifacts get published as part of a GitHub release.

I can’t promise that I have time this weekend to pick up playing with this again, but maybe.

Oh, and one more thing I realized: it’s probably futile trying to get Marlin to keep their awk scripts working the way we want. There was another change that broke the functionality we need, and they fail silently again. We should just fork them.


So it would work something like the way Betaflight builds its releases. Am I understanding correctly?


My idea is very similar, yes. However in addition to hex files there will be zips with preconfigured sources.