mslicer

created June 13th, 2024 • 10m reading time • 90 views
Screenshot of main UI showing a 3D shark model and its sliced result.

I recently got access to an ELEGOO Saturn 3 Ultra resin printer, but I was disappointed with the selection of slicers. From my research, it looked like the main options were Chitubox, VoxelDance Tango, and Lychee, none of which are open source, and all of which had other issues.

For some reason, models sliced with Chitubox were not printing (I was probably just doing something wrong, but moving on). I actually really liked VoxelDance Tango, but they charge $16/month and limit the number of slicing operations per month, so I didn’t make it past the free trial. Online I saw a lot of praise for Lychee Slicer, but when I first ran it to test it out, I swear it took like ten minutes to start. I also had to make an account to use it, there were ads in it, and the slicing was surprisingly slow.

Resin printers build up models from the bottom by exposing UV-curing resin with a specific pattern, then moving the build plate up for the next layer. I figured that since MSLA slicers really just need to output an image for each layer, it shouldn’t be too hard to make my own.

Project Status

Note that this project is still early in development and currently lacks many of the advanced features of the previously mentioned slicers.

You can find development builds to play with on GitHub here; just open the latest release and download the correct binary for your operating system.


How Slicing Works

The two main steps in getting a sliced layer are finding the points of intersection between a mesh and a plane, then converting those points into a polygon and filling it. Implementing these two systems has probably been the most difficult and fiddly part of this project; I ran into a lot of bugs that created some interesting looking results (see Slicer Fails).

Mesh Intersections

To find the intersection between the plane and the mesh, the basic idea is to check each leg of each triangle to see if it crosses the plane; if it does, figure out how far along the segment the intersection is and return that as a point. Ignoring edge cases, there will always be zero or two intersections, which means that every successful triangle intersection returns a line segment. The actual implementation also uses an acceleration structure to avoid looping through every triangle.

A triangle being sliced three ways
Simplified Implementation
// Point is the position of the plane and normal is the direction /
// rotation of the plane.
let point = self.inv_transform(&Vector3::new(0.0, 0.0, height));

// Instead of transforming every vertex to handle transformations of the
// mesh, we just transform the plane in the opposite ways.
let up = Vector3::z_axis().to_homogeneous();
let normal = (self.inv_transformation_matrix * up).xyz();

let mut out = Vec::new();
for (v0, v1, v2) in faces {
    // By subtracting the position of the plane and dotting the result
    // with the normal, we get a value that is positive if the point is
    // above the plane and negative if it is below. By checking if any of
    // the line segments of the triangle have one point above the plane
    // and one below, we find the line segments that intersect the plane.
    let (a, b, c) = (
        (v0 - point).dot(&normal),
        (v1 - point).dot(&normal),
        (v2 - point).dot(&normal),
    );
    let (a_pos, b_pos, c_pos) = (a > 0.0, b > 0.0, c > 0.0);

    // Called once for each triangle edge that crosses the plane. t is
    // how far along the edge the intersection is, and intersection is
    // the point where the edge crosses the plane.
    let mut push_intersection = |a: f32, b: f32, v0: Pos, v1: Pos| {
        let (v0, v1) = (self.transform(&v0), self.transform(&v1));
        let t = a / (a - b);
        let intersection = v0 + t * (v1 - v0);
        out.push(intersection);
    };

    (a_pos ^ b_pos).then(|| push_intersection(a, b, v0, v1));
    (b_pos ^ c_pos).then(|| push_intersection(b, c, v1, v2));
    (c_pos ^ a_pos).then(|| push_intersection(c, a, v2, v0));
}

By splitting the model into some number of vertical segments, then taking note of every triangle that passes through each segment, the slicer only needs to run the intersection test on the triangles in the current segment. This massively speeds up the slicing operation, especially with large triangle counts.
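A minimal sketch of that bucketing, using the same nalgebra types as the snippet above (the function name and signature are my own, not mslicer’s):

use nalgebra::Vector3;

// Each bucket stores the indices of triangles whose z extent overlaps
// that slab, so slicing at a given height only tests those triangles.
fn build_buckets(
    triangles: &[[Vector3<f32>; 3]],
    min_z: f32,
    max_z: f32,
    count: usize,
) -> Vec<Vec<usize>> {
    let mut buckets = vec![Vec::new(); count];
    let slab = (max_z - min_z) / count as f32;
    for (i, tri) in triangles.iter().enumerate() {
        let lo = tri.iter().map(|v| v.z).fold(f32::INFINITY, f32::min);
        let hi = tri.iter().map(|v| v.z).fold(f32::NEG_INFINITY, f32::max);
        let first = (((lo - min_z) / slab) as usize).min(count - 1);
        let last = (((hi - min_z) / slab) as usize).min(count - 1);
        for bucket in &mut buckets[first..=last] {
            bucket.push(i);
        }
    }
    buckets
}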

I originally tried using a bounding volume hierarchy (BVH), which is usually used to accelerate ray-triangle intersections in ray tracing, but it actually slowed things down more than intersecting with every triangle. I suspect this is because planes extend infinitely in two directions, so many more bounding volumes had to be considered.

Polygon Filling

I then use the scan-line polygon fill algorithm to fill the resulting shape. This algorithm runs for each row of the output image and finds the points where that row crosses the line segments produced by intersecting the mesh. By looping through the intersections in a row, you can build a list of contiguous runs of same-colored pixels, which are then directly encoded into the output format, for example .goo. This can also be sped up with multithreading: each layer can be sliced independently and the results collected together at the end.

Unfortunately, there are some issues with this algorithm, the biggest being multiple intersecting models, or even a single self-intersecting model. In these cases, the expected behavior is for the internal intersections to be ignored. To do that, I needed a way to tell which side of the mesh is the outside; this information is stored in the normal direction of each face. By modifying the mesh-plane intersection to also return the normal of each face, these unwanted intersections can be ignored when filling in a row (a sketch of one way to do this follows the implementation below).

Image showing the process of ignoring unwanted intersections
Simplified Implementation
// Index of the end of the last encoded run, carried across rows.
let mut last = 0u64;
for y in 0..slice_config.platform_resolution.y {
    let yf = y as f32;
    let mut intersections = segments
        .iter()
        .map(|x| (x.0[0], x.0[1], x.1))
        // Filtering to only consider segments with one point
        // above the current row and one point below.
        .filter(|&(a, b, _)| ((a.y > yf) ^ (b.y > yf)))
        .map(|(a, b, facing)| {
            // Get the x position of the line segment at this y
            let t = (yf - a.y) / (b.y - a.y);
            (a.x + t * (b.x - a.x), facing)
        })
        .collect::<Vec<_>>();

    // Sort all these intersections for run-length encoding
    intersections.sort_by_key(|&(x, _)| OrderedFloat(x));

    // SNIP (ignore unwanted intersections) //

    // Convert the intersections into runs of white pixels to be
    // encoded into the layer
    for span in intersections.chunks_exact(2) {
        let y_offset = (slice_config.platform_resolution.x * y) as u64;

        let a = span[0].0.round() as u64;
        let b = span[1].0.round() as u64;

        let start = a + y_offset;
        let end = b + y_offset;
        let length = b - a;

        if start > last {
            encoder.add_run(start - last, 0);
        }

        encoder.add_run(length, 255);
        last = end;
    }
}
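As for the snipped step, one way to ignore the unwanted intersections is a winding count over the sorted crossings. This is only a sketch of the idea; it assumes facing is a bool that is true when the face normal points along +x, which may not match mslicer’s actual representation:

let mut depth = 0i32;
intersections.retain(|&(_, facing)| {
    let prev = depth;
    // Entering the solid when the normal faces back along the scan (-x),
    // exiting when it faces forward (+x).
    depth += if facing { -1 } else { 1 };
    // Keep only transitions between outside (depth 0) and inside.
    (prev == 0) != (depth == 0)
});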

Exporting to .goo

To be loaded by a printer, the sliced layers need to be encoded in a known file format; my printer supports .ctb (Chitubox’s format) and .goo (ELEGOO’s custom format). Looking back, it probably would have been a more strategic move to implement the Chitubox format, as it is supported by more printers, but I chose to implement the .goo format. There is an official format spec, but not everything is covered, so some reverse engineering was still required. I made a custom ImHex pattern file to show the different fields and make sure I understood the format. (link)

Screenshot of .goo file open in imhex (hex editor)

After making my first attempt at an implementation, I took a known-good .goo file, decoded it, and re-encoded it to check if it would print. It did not. After much struggle, it turned out I had mistakenly put the wrong display resolution in the slice config, which caused the printer to fail loading the file with no error message.

Layer Encoding

Even though the layers are really just grayscale images, if they were not compressed in some way, each layer would be at least 58.98 MB. A sliced file with 300 layers would be 17.69 GB and a file with 5,200 layers would be 306.7 GB. My point is that this data needs to be compressed somehow.

In the case of the .goo format, this is done by going from the top left of the image to the bottom right, row by row, and storing the number of pixels in each continuous run of a single color. This is called run-length encoding (RLE). To further reduce the space needed to store a layer, smaller number representations are used where possible, from one to four bytes.
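As an illustration of the variable-width idea only (the real .goo bit packing differs; see the official spec mentioned above), a header byte can store the run color plus a tag for how many count bytes follow:

// Illustrative only: one header byte carries the run color and a 2-bit
// tag for how many count bytes follow (1-4), then the count in big-endian.
fn add_run(out: &mut Vec<u8>, length: u32, white: bool) {
    let bytes = length.to_be_bytes();
    let needed = (4 - (length.leading_zeros() / 8) as usize).max(1);
    let header = ((white as u8) << 7) | (needed as u8 - 1);
    out.push(header);
    out.extend_from_slice(&bytes[4 - needed..]);
}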

The First Print

Since the beginning of this project I had been using the Utah Teapot as a test model, so that’s what I decided to print first. After fixing the previously mentioned display resolution bug, the sliced file was successfully loaded and printing started! Unfortunately, it’s never that simple, and this is what came out of the printer a few hours later…

Small squished teapot with large block attached

The teapot looks squished because of an unrelated bug with layer encoding where runs could not be any longer than 256 pixels. But this result really confused me for a while: when I decoded the sliced .goo file with my own library and other tools, everything looked completely fine. After much thinking, I figured out that this happened because the last run of each layer ended with the last white pixel, which could leave a large area of undefined pixels at the end. In all my testing I had initialized the buffer that layers were decoded into with 0s, but apparently the printer does not, so it was just printing undefined memory.

After figuring out what caused the issue it was super easy to fix, but getting all the little bits of cured resin out of the vat was not fun.

// The fix: explicitly encode a black run out to the end of the layer
if last_index < pixel_count {
    encoder.add_run(pixel_count - last_index, 0);
}

Starting Prints Remotely

The Saturn 3 Ultra has support for connecting to a Wi-Fi network and having print jobs started and monitored remotely, but the only software I know of that makes use of this ability is Chitubox. After some research, I found a protocol writeup and implementation by Vladimir Vukicevic, which gave me a huge head start.

The protocol has three functions, each of which builds on a different network protocol:

  • Printer Discovery (UDP)
  • Printer Monitoring and Controlling (MQTT)
  • File Uploading (HTTP)

Printer Discovery

To find all the printers on the network, a UDP broadcast packet is used; this kind of packet can be received by multiple peers on a network. By sending a UDP packet to (for example) 192.168.1.255 with a body of M99999, every printer on the network will connect back to the peer that sent the message with a JSON response containing the printer name, display resolution, system capabilities, and some other attributes.

The next step is to get the printer to connect to an MQTT broker, which must be hosted on the same device that sent the UDP packet. This is done by sending the printer another UDP packet with a body of M66666 {mqtt_port}, where {mqtt_port} is replaced with the port of the MQTT broker.
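A rough sketch of the discovery step with std’s UdpSocket (the broadcast address, destination port, and timeout here are assumptions; check the protocol writeup for the real values):

use std::net::UdpSocket;
use std::time::Duration;

fn discover_printers() -> std::io::Result<Vec<String>> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.set_broadcast(true)?;
    socket.set_read_timeout(Some(Duration::from_secs(2)))?;
    // Hypothetical broadcast address and port for a typical home network.
    socket.send_to(b"M99999", "192.168.1.255:3000")?;

    // Collect JSON responses until the read times out.
    let mut responses = Vec::new();
    let mut buf = [0u8; 4096];
    while let Ok((len, _addr)) = socket.recv_from(&mut buf) {
        responses.push(String::from_utf8_lossy(&buf[..len]).into_owned());
    }
    Ok(responses)
}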

Printer Monitoring and Controlling

You would think that there would be an existing Rust library for running an embedded MQTT broker, but I couldn’t find what I was looking for and ended up spending a day implementing it myself. Anyway, once the printer connects to the MQTT broker, it subscribes to /sdcp/request/{mainboard_id} and starts publishing status updates to /sdcp/status/{mainboard_id} every five seconds.

Commands are sent to the printer by publishing messages to the printer’s request topic. The most important commands are UploadFile and StartPrinting. I’ll cover file uploads more in the next section, but once a file is uploaded, publishing a JSON message with the filename and first layer will start the print.
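In mslicer the embedded broker injects these messages itself, but the shape of a command is easiest to show with an ordinary MQTT client, here the rumqttc crate. The JSON fields below are hypothetical placeholders, not the real SDCP schema:

use rumqttc::{Client, MqttOptions, QoS};

fn start_print(mainboard_id: &str, filename: &str) {
    let options = MqttOptions::new("mslicer-demo", "localhost", 1883);
    let (client, mut connection) = Client::new(options, 10);

    // Publish a start-print command to the printer's request topic.
    // The payload fields here are made up for illustration.
    let topic = format!("/sdcp/request/{mainboard_id}");
    let payload = format!(r#"{{"Cmd":"StartPrinting","Filename":"{filename}","StartLayer":0}}"#);
    client.publish(topic, QoS::AtLeastOnce, false, payload).unwrap();

    // Drive the event loop enough to actually flush the publish.
    for _ in connection.iter().take(2) {}
}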

File Uploading

The UploadFile packet has a url field that defines the port and path of the file to download, again from the same device that sent the UDP packets. This means that we also need an HTTP file server; luckily, I did not need to write this one myself, as there are already a lot of small web server frameworks to choose from.
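For illustration, a minimal server with the tiny_http crate (just one of many options, and not necessarily what mslicer uses) is enough for the printer to download the sliced file:

use tiny_http::{Response, Server};

fn serve_file(path: &str) -> ! {
    // The url field in the UploadFile command would point at this host
    // and port; the port here is an arbitrary choice.
    let server = Server::http("0.0.0.0:8080").unwrap();
    loop {
        let request = server.recv().unwrap();
        let file = std::fs::File::open(path).unwrap();
        let _ = request.respond(Response::from_file(file));
    }
}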

The Interface

Although the actual slicing was the most finicky part to get working, the user interface takes up the majority of the project’s codebase. I use a combination of egui (an immediate-mode UI library) for rendering the UI elements and wgpu (a cross-platform graphics API) for rendering the viewport and slice preview.

I don’t really have much to say about the egui side; it being immediate mode does make some layouts more difficult to build, because the size of an element or the other elements may not be known when laying them out, so it took some time to learn how to really work well with egui. Anyway, there are two main render pipelines used for rendering the viewport: the solid line pipeline and the model pipeline.

Solid Lines

The solid line pipeline is for drawing the grid and the bounds of the print area. It’s also used for debugging support generation and rendering normals, but that’s not super important. When initializing wgpu, I request the POLYGON_MODE_LINE feature, which allows rendering just the edges of polygons as lines. At startup, or whenever the build volume or grid spacing is changed, the build plate mesh generation function is called. It outputs triangles with two points overlapping, effectively just making a line; there is also an additional color field in the vertex layout that allows changing the color of any line.
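For example, a single line can be emitted as one degenerate triangle (LineVertex here is a stand-in for mslicer’s actual vertex type):

#[repr(C)]
#[derive(Clone, Copy)]
struct LineVertex {
    position: [f32; 3],
    color: [f32; 4],
}

// With POLYGON_MODE_LINE, rasterizing this zero-area triangle draws just
// the edge between start and end.
fn line(start: [f32; 3], end: [f32; 3], color: [f32; 4]) -> [LineVertex; 3] {
    [
        LineVertex { position: start, color },
        LineVertex { position: end, color },
        LineVertex { position: end, color },
    ]
}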

Models

The model rendering is a little more complicated. By default, the GPU will interpolate between positions and normals as the fragment shader is run across different points of a triangle, which makes even low-poly meshes look smooth. Although this is usually the wanted behavior, the actual slicing uses the provided meshes without any smoothing, so a smoothed preview would be a less accurate representation of the mesh. I’m not sure if this is the best way, but to fix this I create three new vertices for every face, each with the same normal, so there is no interpolation.
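A sketch of that duplication, reusing nalgebra as in the slicing code (the Vertex type is illustrative):

use nalgebra::Vector3;

struct Vertex {
    position: [f32; 3],
    normal: [f32; 3],
}

// Every face gets three fresh vertices sharing the face normal, so the
// fragment shader has nothing to interpolate between.
fn flat_vertices(v0: Vector3<f32>, v1: Vector3<f32>, v2: Vector3<f32>) -> [Vertex; 3] {
    let normal = (v1 - v0).cross(&(v2 - v0)).normalize();
    [v0, v1, v2].map(|p| Vertex {
        position: p.into(),
        normal: normal.into(),
    })
}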

To light the model, I used the Phong reflection model. This model defines the brightness of a point on the model as the sum of ambient, diffuse, and specular reflections.
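For a single light, the standard formulation is

$$I = k_a i_a + k_d \,(\hat{L} \cdot \hat{N})\, i_d + k_s \,(\hat{R} \cdot \hat{V})^{\alpha}\, i_s$$

where $\hat{N}$ is the surface normal, $\hat{L}$ points toward the light, $\hat{R}$ is $\hat{L}$ reflected about $\hat{N}$, $\hat{V}$ points toward the viewer, the $k$ and $i$ terms are the material and light constants for each component, and $\alpha$ controls the tightness of the specular highlight.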

Illustration of the components of the Phong reflection model (Ambient, Diffuse and Specular reflection).

Brad Smith, CC BY-SA 3.0, via Wikimedia Commons

Demo Video

Here is a demo video showing mslicer being used to slice and print Treefrog by Morena Protti. The video is also hosted on YouTube (here) in case the one below doesn’t play.

The Future of This Project

I have already put about 120 hours into this project over the past few months, but I am nowhere near done. Most of the basic slicer features are implemented, but I still need to work on things like support structure generation and island detection. I’m honestly really proud of how far this project has come already, and I will definitely continue to work on it in the future.