Last year, macOS 15 shipped with a new dynamic wallpaper / screensaver called ‘Macintosh’, which shows zoomed-in video of the graphics of early Macintosh computers. I thought it looked really cool and even set my desktop wallpaper to a screenshot of the screensaver (from Basic Apple Guy), since I couldn’t use the real thing as a desktop Linux user. The video above is a short screen recording of the final result.
Since then I’d considered recreating the screensaver so I could get the full experience, but I reasoned that the effort wasn’t worth it for a screensaver; people don’t even use screensavers anymore. Then summer came and I had a lot more free time to work on random projects, so here we are.
How it Works
Writing the basic rendering code only took about two hours, since it’s a fairly simple screensaver and because I made some simplifications, like removing the drop shadow from below each pixel. I found that trying to render the shadow on the GPU used more compute power than I wanted to allocate to a screensaver. But anyway, using my WGPU abstraction library, tufa, I was able to get the rendering working in about 150 lines of code.
At startup it copies a screenshot of the old Macintosh operating system into a buffer (one bit per pixel) and uses it to texture a quad. I then scaled the quad and viewed it through an orthographic camera with a camera direction that (somewhat) matched the official screensaver.
Transformation Matrix Construction
let depth = 100.0;
let projection = if aspect < 1.0 {
    Matrix4::new_orthographic(-aspect, aspect, -1.0, 1.0, -depth, depth)
} else {
    Matrix4::new_orthographic(-1.0, 1.0, -aspect.recip(), aspect.recip(), -depth, depth)
};

let scale = Matrix4::new_scaling(self.scale);
let view = Matrix4::look_at_rh(
    &self.camera_pos.into(),
    &(self.camera_pos + self.camera_dir).into(),
    &Vector3::z_axis(),
);

return projection * view * scale;
The Shader
Since all the textures are 1-bit, I decided to pack them into u32s, just because why not. Passing the quad’s rounded UV coordinates, multiplied by the image resolution, into the following function looks up whether a given pixel is set.
fn pixel(pos: vec2u) -> bool {
    let idx = pos.y * ctx.image_size.x + pos.x;
    return (image[idx / 32] & (1u << (idx % 32))) != 0;
}
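On the CPU side, the matching bit-packing might look something like the sketch below. This is my own illustration, not the project’s actual code (the function name `pack_1bit` is made up); it just fills a `Vec<u32>` using the same `idx / 32` and `idx % 32` layout the shader reads.

```rust
/// Pack a 1-bit image (given here as one bool per pixel) into u32 words,
/// using the same indexing scheme the shader's `pixel` function expects.
fn pack_1bit(pixels: &[bool], width: usize, height: usize) -> Vec<u32> {
    // Round the word count up so the last partial word is included
    let mut words = vec![0u32; (width * height + 31) / 32];
    for y in 0..height {
        for x in 0..width {
            let idx = y * width + x;
            if pixels[idx] {
                // Set bit (idx % 32) of word (idx / 32)
                words[idx / 32] |= 1 << (idx % 32);
            }
        }
    }
    words
}
```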
But getting the little gap between pixels (with anti-aliasing) was a bit more complicated. To do this, I calculated the Chebyshev distance from the current fragment to the center of the nearest image pixel and used a cutoff value to make each pixel fade out around its edge.
fn frag(in: VertexOutput) -> @location(0) vec4<f32> {
    // Get the image coordinate of the current fragment
    let pos = in.uv * vec2f(ctx.image_size) - vec2(0.5);
    // Round to the center of the nearest image pixel
    let rounded = round(pos);
    // Get the chebyshev distance between the current fragment and the
    // center of the closest image pixel
    let dist = chebyshev_distance(pos - rounded);
    let edge = dist - 0.42;
    // Lookup the value of the closest image pixel
    let pixel = pixel(vec2u(rounded));
    // Calculate the opacity of the current fragment as the edge fade
    // subtracted from the image pixel value (zero or one). The
    // multiplication by 20 is to make the fade smaller (yet another
    // arbitrary constant).
    let opacity = 1.0 - f32(pixel) - saturate(edge * 20.0);
    // Return the current foreground color with an opacity of value
    // (clamped from 0 to 1)
    return vec4(ctx.color, 1.0) * saturate(opacity);
}
I just picked an arbitrary cutoff value (the 0.42 above) that looked right at the scale I was targeting. In the video below you can see how the output changes as the cutoff animates through a range of values. To get the fade in / fade out effect, I just animate the cutoff from or to zero along some axis of the quad.
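The per-fragment math above can be ported to the CPU to see how the cutoff shapes the gap. This is my own sketch (the `coverage` name and the simplification to a single "set" pixel are mine), not the real shader:

```rust
/// Coverage of a fragment at image-space position `pos`, assuming the
/// nearest pixel is set. `cutoff` plays the role of the 0.42 constant:
/// larger values shrink the gap between pixels.
fn coverage(pos: (f32, f32), cutoff: f32) -> f32 {
    // Center of the nearest image pixel
    let rounded = (pos.0.round(), pos.1.round());
    // Chebyshev distance: max of the per-axis distances, so the
    // iso-distance contours are squares (matching square pixels)
    let dist = (pos.0 - rounded.0).abs().max((pos.1 - rounded.1).abs());
    // Fade to zero just past the cutoff; *20 narrows the fade band
    let edge = (dist - cutoff) * 20.0;
    1.0 - edge.clamp(0.0, 1.0)
}
```

A fragment at a pixel center gets full coverage, while one halfway between two pixel centers (Chebyshev distance 0.5) is fully transparent, which is exactly what produces the gap.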
Making the Assets
The most time-consuming part of this project was getting / making all the graphics, then key-framing the animations for each scene. Most of the graphics were just screenshots I took from the emulators at https://infinitemac.org, but some were more complicated, like the one below with all the icons. Searching around the internet, I found some collections of vintage Macintosh iconography, but a lot of it was weirdly upscaled, so I had to remake it pixel by pixel, which took a while.
Each scene is defined with a source image, total duration, and a bunch of properties (which can each be animated).
pub struct Properties {
    pub camera_pos: Vector3<f32>,
    pub camera_dir: Vector3<f32>,
    pub scale: f32,
    pub frame: usize,
    pub progress: f32,
    pub progress_angle: f32,
}
Here is an example scene config. Some properties are set outside the keyframes since they remain constant during the entire animation, like the camera direction and scale. Then I picked camera start and end points, and finally added progress values and angles, which define how the pixel cutoff is animated to get the fade in / out effect.
[[scenes.scene]]
image = "images/image-12.png"
duration = 11.0
scale = 2.5
camera_dir = [ 0.40, 1.26, 1.06 ]
keyframes = [
    { t = 0.0, camera_pos = [ -1.75, -3.25, -1.3 ], progress = 0.0 },
    { t = 5.0, progress_angle = -1.57, progress = 30.0 },
    { t = 5.0, progress_angle = 1.57, progress = 13.7 },
    { t = 10.0, camera_pos = [ -1.75, -1.75, -1.3 ], progress = -16.3 }
]
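For a single scalar property, keyframe playback could be sketched like this. This is a hypothetical illustration of linear interpolation between `(t, value)` pairs, not the project's actual animation code; note how two keyframes at the same `t` (as in the config above) produce a jump:

```rust
/// Linearly interpolate a property through its keyframes at time `t`.
/// Each keyframe is a (time, value) pair, sorted by time.
fn sample(keyframes: &[(f32, f32)], t: f32) -> f32 {
    let mut prev = keyframes[0];
    for &next in &keyframes[1..] {
        if t < next.0 {
            let span = next.0 - prev.0;
            if span <= 0.0 {
                return next.1; // coincident keyframes: jump
            }
            let f = (t - prev.0) / span;
            return prev.1 + (next.1 - prev.1) * f;
        }
        prev = next;
    }
    prev.1 // past the last keyframe: hold the final value
}
```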
The Colormap
In order to match the aesthetic of the original screensaver, I needed to somehow extract its colormap. From watching a screen recording of it, I could see that the background was a gradient between the top and the bottom, and the foreground looked consistent across the frame. All these colors can probably be derived from one colormap, but I don’t know what color space they are being interpolated in, so I just made individual maps for the background top, the background bottom, and the foreground.
To do this, I ended up downloading a video of the screensaver from YouTube and writing a janky Python script to extract the most common colors of the background at the top and bottom of the screen, as well as the foreground color from the middle of the screen. Because my script used arbitrary cutoff values and the foreground pixels weren’t always visible, the resulting colormaps needed to be cleaned up a lot. Below is the program output for the foreground color; the black blocks are where there were no foreground pixels visible (during a transition), and the wrong shade of green sneaks in a little due to my incorrect cutoff values.
Since the colormap repeats at a fixed interval, I was able to line up the results from 10 different video segments. By copying pixels around, I filled in all the undefined or incorrect pixels, blurred the result, then scaled it down to a width of 128px. I repeated this for the two background colormaps, then finally combined them all into a single image.
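Since the strip covers one repeat of the colormap, sampling it at runtime reduces to wrapping time into one period and indexing into the 128px row. A minimal sketch, assuming a hypothetical period length (the real period and names are not from the project):

```rust
/// Map a time in seconds to a column of the 128px colormap strip.
/// `period` is how long one full repeat of the gradient takes.
fn colormap_index(time: f32, period: f32, width: usize) -> usize {
    // Wrap time into [0, period), handling negative times too
    let phase = ((time % period) + period) % period / period;
    // Scale to a column index, clamping the phase == 1.0 edge case
    ((phase * width as f32) as usize).min(width - 1)
}
```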
Screen Locker on Wayland
I now had a working windowed application to render the screensaver, but (since I don’t actually need a screensaver) I decided to use it as my lock screen instead. Since I use the sway window manager, I found swaylock-plugin (forked from swaylock), a screen locker for sway with support for animated backgrounds through the wlr-layer-shell-unstable-v1 protocol.
I was originally using my tufa library, which massively simplifies rendering for simple things like this, but it only supports rendering to windows. I decided to rewrite the renderer with wgpu directly, which wasn’t difficult but still took a little while. Now I could use this common backend with both the windowed and layer-shell frontends.
Using Smithay’s Client Toolkit, a toolkit for writing Wayland clients in Rust, I was able to throw together a simple Wayland client that creates a surface for each display and renders to it.
Conclusion
This was a fun project, and it came together much faster than I expected. The biggest issue with it is the lack of a drop shadow under each image pixel. I tried to add it, but doing it entirely in the pixel shader used more GPU compute than I wanted to give to a completely unnecessary feature on a laptop. Perhaps I could get better performance by using separate passes for the blur; maybe I’ll look into that some other time.
If you want to use or play with this yourself, or even set up the lock screen, see the configuration and usage sections in the repository readme.