August 2025 project update

Closing out August 2025, we are proud to announce our third release of the year. This release follows an intense development period in which the team reevaluated its priorities and timelines and refocused efforts on “delivering core Linux distribution tooling that will simplify our ability to scale out over time”.

We have documented some of our progress in our last two blog posts and spent the last two months further progressing towards these goals. We have implemented a basic version of virtual packages (Package Sets), continued our hardware (and VM) enablement efforts and have selectively been growing our repository where we feel it’s beneficial to our users.

Whilst not an exhaustive list, some of the top-line repository updates include:

  • GNOME 48.4
  • Plasma 6.4.4
  • Sway 1.11
  • Cosmic Alpha 7
  • Linux 6.15.11
  • Mesa 25.2.1
  • LLVM 20.1.8
  • uutils-coreutils 0.1.0
  • sudo-rs 0.2.8
  • ffmpeg 7.1.1
  • fastfetch 2.51.1 (adds AerynOS logo)
  • Waydroid: Add at 1.5.4
  • openvpn: Add at 2.6.14
  • protontricks: Add at 1.13.0
  • winetricks: Add at 20250102

In addition, we fixed a subtle issue with our PATH configuration that mostly affected our console logins. With this fix, we have made our login experience fully stateless. We have also enabled sulogin for a single-user root shell to diagnose and repair boot failures.

AerynOS is transitioning to a package set model for core packages installed on a user’s system.

Package sets are a collection of packages that are related or used together for a specific purpose. In AerynOS, they are used for consolidating our base system packages and for each of our offered Desktop Environments / Window Managers.

Each Desktop Environment offered by AerynOS has an associated package set (usually “recommended”). Depending on the environment, we may optionally offer a “minimal” and/or “full” variant with fewer or more packages to better suit our users’ requirements.

The package set model we have implemented is a stepping stone technology, not the final solution we are looking to implement. It introduces the basic premise of virtual sets of packages and is a precursor to our “system-model” work that will allow for exact reproduction of a user’s installed system.
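To make the idea concrete, here is a minimal Python sketch of how a virtual package set could expand into concrete packages layered over a base set. All set names and package lists below are invented for illustration; they are not the actual moss data model.

```python
# Hypothetical illustration of virtual package sets layered over a base set.
# These set names and package lists are invented for the example; they are
# not the actual AerynOS definitions.
PACKAGE_SETS = {
    "base": {"linux-kernel", "systemd", "moss"},
    "gnome-recommended": {"gnome-shell", "gnome-console", "nautilus"},
    "sway-minimal": {"sway", "waybar", "foot"},
}

def resolve(*set_names: str) -> set[str]:
    """Expand the named package sets into one concrete package selection."""
    packages: set[str] = set()
    for name in set_names:
        packages |= PACKAGE_SETS[name]
    return packages

# A desktop install is simply the base set plus a DE/WM set.
print(sorted(resolve("base", "sway-minimal")))
```

Because sets compose by union, any DE/WM set can be layered over the same base, which is also why a console-only base install can later have a desktop added on top.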

To fully integrate the new package set model within the AerynOS project, we have adapted lichen to install Desktop Environments based on their associated package sets. The TUI (Text User Interface) prompts guide the user to select a DE and, depending on the DE, the variant we have curated for the install (“recommended” for the full DEs, “minimal” for Sway), with moss determining which dependencies are required for a successful install based on that package set.

Lichen is a network installer, meaning that it downloads the latest set of packages from the AerynOS repository during installation. Tweaks to our package sets therefore don’t necessarily require a new ISO. Whilst the “live environment” may not be fully up to date, the user will get a fully up-to-date installation without requiring a post-install update step.

In its current state, lichen requires that a user pre-format their disk before attempting to install AerynOS. Being a network installer, lichen also needs an active internet connection to complete an install. Previously, if either of these prerequisites was not met, the installer would output a very unhelpful Rust error. We have added appropriate prompts to check for an active internet connection and to remind users that they need to pre-format their disk.

Whilst we could fix the pre-format requirement, we have made a conscious decision to keep this “anti-usability” feature as a barrier to entry for beginner Linux users. This may seem counterintuitive; however, whilst we are in an alpha state, we need to be careful not to position ourselves as a “beginner Linux distro” that could attract many support requests. We need to focus our time on developing our tooling and infrastructure.

Do note that in time, we will fix this issue and become more beginner friendly!

Virtual Machine usage and hardware enablement

Significant work has taken place to enable virtual machine support, both with AerynOS as the host and as the guest.

For host support, we have packaged virt-manager into our repository. The team has utilised VMs over the last month for testing package set configurations and other potentially breaking changes. This has sped up our development pace, as VMs are disposable by nature.

For guest support, we have enabled a significant amount of hardware in our kernel and specifically enabled Hyper-V (based on user requests). We are also actively seeking user feedback for other VM environments. If you try AerynOS in other VM solutions and have any problems, you can report those issues here.

By better supporting both VM host and guest scenarios, we hope to unblock potential contributors from exploring our distribution and tooling.

Please note that we still classify AerynOS as an alpha-status project. We do not recommend that anyone install it on hardware required for “production environments”! Our key goal is to enable the hardware and software that developers and contributors may need to transition to and/or explore AerynOS.

Reilly Brogan has added scx-scheds to our repository and set scx_flash as our default scheduler. The scx_flash scheduler focuses on ensuring fairness among tasks and performance predictability. More details about it can be found on the sched-ext website.

For our use case, this is helpful as it allows an AerynOS system to still be responsive whilst heavy tasks such as building packages are happening in the background.

Whilst this has been implemented in our ISO (i.e. for new installs), it will not retroactively apply to existing installs. If you want to transition to it, you will need to install scx-scheds.

There has been considerable effort around our DE provision this year, some of which is yet to materialise. We are happy to report that we are now offering KDE Plasma in our repository and it will be installable from our ISOs going forwards.

In addition, we have created a console only installation option for more advanced users and for our own testing purposes.

It’s fair to say that KDE Plasma has been one of the biggest requests we have received and we are happy to report that it is now available in our repository. Reilly Brogan has done a fantastic job packaging up the latest 6.4.4 version into our repository with both sddm and plasma-login-manager offered as login managers.

A running bug tracker can be found here to report any issues. Please do test it out and help us find and resolve any undocumented bugs that remain.

We are far enough in our bring-up and testing process for KDE Plasma that we are comfortable offering it as an installation option in our ISOs going forwards.

Within the AerynOS repository, Cosmic DE sits at the Alpha 7 release tag. Given the significant pace of Cosmic development, we are looking to move to a more frequent update cycle, tracking System76’s repo-release repository to incorporate bug fixes and new feature releases. Whilst still in flux, we are looking at a bi-weekly update frequency to balance maintainer burden with keeping the DE up to date. The first of these updates, bringing Cosmic DE packages to their most recent versions, will land in the AerynOS repository in the coming days.

Following recent engagement efforts, we have been seeing new users and contributors checking out AerynOS and specifically our Cosmic spin. Through this additional testing, we are seeing more active engagement on keeping Cosmic updated and bugs squashed. Given its alpha status, we don’t expect a fully bug free user experience so please bear this in mind if choosing Cosmic. We are classifying our Cosmic spin as a “Technical Preview” given that both Cosmic and AerynOS are currently in alpha status.

Moving forward, we are looking to package up more of the available Cosmic applets and generally polish the Cosmic experience on AerynOS. If you have an interest in packaging and specifically in the Cosmic Desktop, feel free to get in touch as we could always use more support in testing and improving upon our DE experience.

We have had Sway within our repository since early last year but did not really highlight its inclusion. Sway has been updated to v1.11 and we have also included Waybar and a few other packages to make ricing Sway a nicer experience on AerynOS.

We debated including Sway as an installable option from our ISO, however we have made the decision to defer this to a future release. We have created an initial “minimal” package set for Sway which includes the bare minimum to get started on ricing it. However, it has not yet been validated to a level where we are comfortable shipping it to users, even in our alpha state.

It remains in our repository and will continue being worked on as we progress into the second half of the year. In time, we would also like to develop a couple of pre-configured Sway configs as additional package sets so users not already familiar with Sway can jump right in without having as much background experience.

In addition to the other environments, we have created a very minimal package set that will boot into the Linux console without any Desktop Environment. Users can use this console-only option as a starting point to configure a system install exactly to their requirements with only the packages they wish to have included or as the basis for a new DE/WM to be included within AerynOS.

Given the way we have layered DE/WM package sets over our base package sets, a user is able to install any of the other DE/WM options on top of this console only solution. This has been very helpful for the AerynOS team in testing our different offerings.

Other than updating to the latest 48.4 version, there is not much else to say on GNOME. To its credit, it has been working smoothly on AerynOS, so no major work has been required.

It remains our default option for our ISO live environment and should only require updating to new versions as they release. If you do happen to discover an “undocumented feature”, please feel free to report it here.

Joey Riches has delivered a new command, “moss state diff”, which allows users to check the differences between two states. This is very useful when you want to revert to an older state.

Each state is identified by a number, and previously there was no way to see what a state contained. With this new command, you can inspect a state and confirm that it is the one you are looking for.

The command requires that you provide two state numbers; it then returns the differences in package versions, plus any packages added or removed, between the two states you have specified.
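Conceptually, the diff between two states boils down to comparing their package maps. The following Python sketch illustrates the idea; the state contents and the output format are invented for the example and do not reflect moss internals.

```python
# Hypothetical illustration of diffing two system states.
# The state contents below are invented for the example.
def state_diff(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Compare two {package: version} maps and describe the differences."""
    lines = []
    for pkg in sorted(old.keys() | new.keys()):
        before, after = old.get(pkg), new.get(pkg)
        if before is None:
            lines.append(f"+ {pkg} {after}")        # package added
        elif after is None:
            lines.append(f"- {pkg} {before}")       # package removed
        elif before != after:
            lines.append(f"~ {pkg} {before} -> {after}")  # version changed
    return lines

state_41 = {"mesa": "25.2.0", "sway": "1.10"}
state_42 = {"mesa": "25.2.1", "sway": "1.10", "waybar": "0.11.0"}
print("\n".join(state_diff(state_41, state_42)))
```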

We have also landed another new moss command, “moss search-file”. This works similarly to “moss search”, but at the file level: it lets users ask moss which package any given installed file under /usr belongs to.

Joey Riches has picked up and continued his earlier packagekit integration work for moss, integrating it into our various DE software centres (GNOME Software, KDE Discover and Cosmic Store).

Until now, users could only install AerynOS .stone packages through the terminal. This integration is a significant usability upgrade, though we still recommend that our users be familiar with how to interact with moss via the command line.

Alongside packagekit, we now also have appstream metadata hosted on our dotdev site. Work is ongoing with both packagekit and appstream, but the groundwork is complete. We can build from this point towards fully developed software centre integration.

Once this lands in our repository, we will take another step towards making AerynOS a more user friendly distribution.

Outside of the code development, there is a renewed focus on our documentation site. This is a continuing and incremental exercise with improvements coming across the board.

Over the last few months, we have improved the FAQ page, added more information on how to update packages on an AerynOS system and added additional information around the Desktop Environments we offer. We have also added specific background detail about how AerynOS is different to other distributions on our Philosophy page.

We will continue fleshing out our documentation in the coming months, with a specific focus on how to contribute, both to the project itself and how to create packages and submit them for inclusion in our repository.

Some of the feedback we have received is that documentation is fragmented and/or not yet created. This is a frustration we can remove through our documentation efforts and we have new contributors helping out in this aspect too.

We are always looking for more support so if you have any interest in getting involved with our documentation efforts, please feel free to reach out on Matrix and specifically engage with NomadicCore.

Our focus for the second half of the year remains similar to what we have detailed in our previous two blog posts.

We are working towards versioned repositories which will allow the team to deliver new features to our os-tooling (moss and boulder) in a seamless fashion. Versioned repositories are a prerequisite and gateway to future features that we will deliver in AerynOS hence its prioritisation.

For the os-tooling, we are adding structured logging for better insight and reporting, improving error handling to ensure more helpful message output, and looking towards adding JSON output for all of this for nicer parsing of “structured output” across process barriers. We also continue to add low-hanging-fruit features as we review the code.

The link to our latest ISO can be found on our download page.

Development update: os-tools

In our recent mid-year blog post, we mentioned that it would be the first in a short series of posts providing updates on the various work streams we have been actively progressing during the last few months. Whilst that post focused primarily on our infrastructure, in this one, we will shift our focus towards the work we have been doing around our os-tools.

To recap, our os-tools consist of Moss and Boulder. Both were originally written in DLang; initial Rust ports were built during the latter half of 2023. Though we made the odd improvement here and there during 2024, in Q2 2025 we set out to review the code, develop an improvement plan and then put that plan into action.

The TL;DR is:

  • Moss: Existing PRs reviewed, refined and merged (including a PR enabling faster package installation via parallel blitting). The code received various bug fixes and refactors for correctness and maintainability.
  • Boulder: Similarly, existing PRs reviewed, refined and merged including a few smaller features and packaging macros being added. Since Boulder uses the moss crate, it now builds packages slightly quicker due to faster buildroot creation. The odd bugfix was also made.
  • User Experience: Improved error reporting features were added to both Moss and Boulder to improve the troubleshooting experience for users and packagers. More to be done on this front.

We will be continuing our os-tools work over the coming months with a specific focus on tidying up the code, improving documentation, and ensuring better error handling and status reporting throughout the codebase. This will come in particularly handy when we do the feature work to make us able to produce JSON output as the final part of the alpha2 os-tools milestone.

Overall, this work will help users when they come across unexpected errors and also be beneficial when onboarding new developers.

The JSON output feature, in contrast, is largely targeted at convenient machine parsing of structured output for automation and integration purposes, which we expect will come in handy for future development work currently in the planning stages.

Before going any further, you may have noticed my name as one of the authors of our mid-year update blog post. If you hang around our Matrix rooms, you will likely already know me but I thought it prudent to provide a formal introduction.

I first became aware of SerpentOS about three years ago but only joined the Matrix chat rooms in September 2023. I’m not a developer and don’t have any coding experience; however, I am interested in open source projects and Linux distributions that can help me get the most out of my hardware. I liked what I saw with SerpentOS and, over the course of 2024, started getting involved, trying to help out where I could.

Earlier this year, I ended up formally joining the team, around the time of the AerynOS rebrand, taking more of a support/communications role and providing feedback from an “average user” perspective of what I think might be important.

My focus will mainly be around working on our documentation, writing blog posts and engaging on our various social media platforms and Matrix rooms. I’m looking forward to getting stuck in and helping support a Linux distribution I want to use on my various devices.

Since their port to Rust, our os-tools have been working well enough. However, we are self-aware enough to know that our initial porting efforts left room for improvement, both in code quality and performance.

The following subsections outline some of the os-tools work we have been doing throughout Q2.

Both tarkah and new contributor Jonas Platte have been working on refactoring our existing codebase. To increase the available insight and diagnostic information, we have decided to standardise on the tracing crate, given that important parts of our code base are asynchronous, for which tracing is particularly well suited.

For error handling, Jonas suggested that we move away from thiserror towards snafu. Whilst thiserror suited our requirements during the initial porting work, snafu offers some nice quality-of-life features and forces us to be more explicit about handling different types of errors, which we hope will yield better long-term maintainability. Moving over to snafu requires a little more upfront work to get high-quality error output, but we believe the reward will be worth it once the transition is complete across our code base.

Along with this refactor work, tarkah, Jonas and ermo are also improving the documentation within the codebase itself. With the infrastructure code having been ported to Rust, there is now greater scope to reuse and consolidate code between the various tooling crates.

One aspect of managing our tooling is ensuring that our codebase remains up to date. Part of this effort is also to ensure that we are updating our code’s dependencies to their own respective latest versions to benefit from bug fixes and performance improvements. Whilst this is an on-going task, some of our dependencies had been allowed to get a little stale. Through multiple commits, Jonas has systematically been updating the dependencies in our os-tools repo.

Part of this upgrade work also involved being able to lock dependencies for Rust packages as a way to ensure robustness of the Moss and Boulder builds we use in production.

Long-time contributor Joey Riches identified a parallelization improvement in Moss’s blitting process which was merged after several months of local testing.

In our testing, the code showed significant speedups across all three of our supported file systems (XFS, ext4 and F2FS). The previous single-threaded blitting made using ext4 and F2FS particularly slow, to the point that we did not recommend users use either filesystem as the basis of an AerynOS install.

However, blitting speeds with the new parallel approach — particularly with a “cold” kernel VFS cache — have significantly improved. Whilst ext4 and F2FS are still not as performant as XFS for our use case, they are at least more serviceable as the basis of an AerynOS install than they used to be. By way of an example, I saw a ~2x blitting speed improvement on my Gen4 NVMe SSD using XFS with the new parallel blitting code.

It’s worth restating that, to our knowledge, the moss approach to atomic updates is the only one of its kind (at least in the Linux space) where users do not have to rely on containerization or A/B system swaps to receive package updates. Setting download speed aside, Moss is capable of atomically installing or updating hundreds of packages in a matter of seconds to tens of seconds on SSDs, and the installed or upgraded applications are ready to use the next time they are opened. No reboots and no messing with container permissions necessary.

Given that boulder also needs to blit files when it creates buildroots, the change has also had a positive impact on package build times. This will be more evident on larger package builds and will have a cumulative impact the more package work you do.
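As an illustration of the idea, the sketch below fans hardlink creation out across a thread pool instead of linking files one at a time. This is a simplified Python model of parallel blitting under the assumption that each file is already present in a content cache; it is not the actual moss implementation, which is written in Rust.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def blit(cache_dir: str, target_dir: str, files: list[str], workers: int = 8) -> None:
    """Materialise a system root by hardlinking cached files in parallel."""
    os.makedirs(target_dir, exist_ok=True)

    def link_one(name: str) -> None:
        os.link(os.path.join(cache_dir, name), os.path.join(target_dir, name))

    # Each hardlink is an independent metadata operation, so the work
    # parallelises cleanly across worker threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(link_one, files))
```

Because hardlinks only touch filesystem metadata, the wall-clock cost is dominated by per-operation latency, which is exactly what spreading the work over threads helps with.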

Moss: Sync available before installed packages

As we were testing the upgrade path from our old packages.aerynos.com/volatile repository to our new, CDN-backed cdn.aerynos.dev/unstable repository, we ran into some unexpected small niggles related to how packages are resolved.

While tarkah’s fix to Moss was relatively small in terms of code, it served to ensure that updates would install properly on the first go when syncing to the new repo.

Consequently, we synced this bug-fix to the Moss version in the old repository to ensure that users will be able to seamlessly upgrade to the new rolling unstable repository.

As we were preparing the process for syncing the packages in our volatile build-server repository to our new downstream rolling unstable repository on our public-facing server, we ran into an issue with the existing moss index code path.

Before this issue was fixed, the stone.index file would be unconditionally written next to the actual package .stone files. This was useful when indexing local repos, but not as useful when indexing actual stone pool/ directories and sub-directories.

In the end, this was another small feature with somewhat large consequences, in that it enabled us to perform manual indexing in a way identical to how our infrastructure organizes things when indexing.

This in turn made it possible to ensure that the new rolling unstable repo presents the same URI “pattern” as our volatile build repo:

❯ moss lr
- oldrepo = https://packages.aerynos.com/volatile/x86_64/stone.index [0]
- unstable = https://cdn.aerynos.dev/unstable/x86_64/stone.index [5]
- volatile = https://build.aerynos.dev/volatile/x86_64/stone.index [10]
- local = file:///home/ermo/.cache/local_repo/x86_64/stone.index [100]

Boulder: Fix up phase timing in end-of-build report

When Boulder successfully completes a package build, it emits a report detailing how long each phase of the build process took.

ermo noticed that the output was wrong when the time exceeded an hour. For example, 136m would be formatted as the rather silly-looking 1h76m instead of 2h16m. He fixed this in the following commit.
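The corrected behaviour boils down to splitting the minute count with integer division and remainder. A small Python sketch of the logic (the actual fix lives in Boulder’s Rust code):

```python
def format_minutes(total_minutes: int) -> str:
    """Render an elapsed-minute count, e.g. 136 -> '2h16m', 45 -> '45m'."""
    if total_minutes < 60:
        return f"{total_minutes}m"
    # divmod splits 136 into 2 whole hours and 16 remaining minutes.
    hours, minutes = divmod(total_minutes, 60)
    return f"{hours}h{minutes}m"

print(format_minutes(136))  # -> 2h16m, not the buggy 1h76m
```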

Boulder: Improve cache hit rates when updating packages

Boulder is designed to cache files and hash them as part of its build process. By hashing files, it uniquely identifies each file and stores this for later utilization. When a package is updated from one version to the next, where the package has files that have not changed, we have the opportunity to reuse the cache from a prior build (as long as the cache has not been purged for space saving).

Given the way boulder previously cached files (in directories named after the package source file names), there was a high likelihood that new caches would have to be built for every update, because source file names contain version numbers, which by design always change.

Reilly implemented a change to boulder so that our ccache entries persist across version updates, improving the hit rate and therefore the performance of boulder package builds.
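The gist of the change can be illustrated with a toy cache keyed by a stable identifier instead of the versioned source file name. Everything below (class name, paths, package names) is invented for illustration and is not boulder’s actual cache layout.

```python
class BuildCache:
    """Toy model of per-key build cache directories."""
    def __init__(self) -> None:
        self._dirs: dict[str, str] = {}

    def dir_for(self, key: str) -> tuple[str, bool]:
        """Return (cache dir, hit?) for a key, creating the dir on a miss."""
        hit = key in self._dirs
        if not hit:
            self._dirs[key] = f"/var/cache/boulder/{key}"  # illustrative path
        return self._dirs[key], hit

cache = BuildCache()
# Old scheme: the versioned source name changes every update -> always a miss.
_, first = cache.dir_for("fastfetch-2.51.1.tar.gz")
_, second = cache.dir_for("fastfetch-2.52.0.tar.gz")   # still a miss
# New scheme: a stable key persists, so the next build is a cache hit.
_, a = cache.dir_for("fastfetch")
_, b = cache.dir_for("fastfetch")                      # hit
```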

Boulder: Tweak how we use sh-compatible shells

For user-facing recipes, our recipe snippets are now always interpreted by bash.

However, testing with hyperfine has previously shown that dash is ~20% faster to start up during ./configure runs when compared to bash.

To reap the benefits of faster dash process startup times, ermo and Reilly implemented a change to our GNU autotools macros to use dash as the default shell.

Having said that, there are certain packages that just expect and/or are hardcoded to require bash. So to cater to that use case, we have also added autotools macros that packagers can use to make Boulder execute GNU autotools with bash on a per-package basis.

This gives our tooling the “dash as /bin/sh” ./configure speed improvements by default, yet allows packagers to still successfully invoke ./configure et al. with bash, where doing so is necessary for the build to complete.

On reviewing Boulder’s build macros, we found some low hanging fruit improvements to make to our cmake, ninja, and meson macros, which we landed back in April. Implementing and improving the various build macros available in AerynOS makes it easier and more convenient for packagers to package up applications with Boulder; either for their own personal use or for submission into the official repositories.

At Reilly’s initiative, we moved our decompression solution away from GNU tar to bsdtar-static. This change reduces the likelihood of compatibility issues and ensures that Boulder package creation doesn’t rely on dynamically linked system libraries, making it a more reliable solution.

With this move, we have also added the ability to decompress tgz-based source packages as part of the boulder build process.

Some of the work we have done has been aimed more at how we use our os-tools rather than the os-tools themselves.

Update build triple and fix up ARM AArch targets

With our recent transition from SerpentOS to AerynOS, we needed to update our build triple accordingly. This step has been completed in the background whilst allowing for seamless updates from older SerpentOS systems onto AerynOS based systems. This is part of a wider rebranding effort that is still on-going through our documentation site, repo READMEs and anywhere else we have an official presence.

In this same area, whilst AerynOS currently only supports x86_64 based devices, there is a desire to be able to target other system types longer term. One of our contributors has been experimenting with RISC-V so we have added preliminary support for this to aid their testing. Don’t expect to see AerynOS on RISC-V any time soon, but it’s great to see our distro becoming a sandbox for fun and experimenting on alternative systems.

During the port of boulder from DLang over to Rust, there was a change in how we expressed the ISA for packages built for the x86 emul32 architecture target. The old DLang version of Boulder expressed it as x86 where the Rust elf crate expresses it as 386 (EM_386 for those of you familiar with ELF parsing internals).

Reilly took point in implementing a series of changes that retained full compatibility between packages originally built on the old DLang infrastructure, and newer packages built on the Rust infrastructure.

The end goal was to flush out the packages containing references to the x86 Emul32 ISA through the recent rebuild of our whole recipes repository. This was accomplished by first ensuring that all packages exposed both x86 and 386 provider patterns, and then subsequently dropping the code that wrote the x86 provider patterns during the second full repo rebuild, ensuring that packages only contained the 386 provider patterns.

In the end, this worked out nicely for us.

We reviewed the open issues in our various GitHub repositories and the plethora of ideas we have for our tooling, and developed a high level set of milestones for our os-tools. For this milestone (os-tools alpha2), we want to focus on:

  • Adding structured logging for better insight and reporting
  • Improving error handling
  • Ensuring we deliver more helpful message output
  • Adding JSON output for the above, for nicer parsing of structured output across process barriers
  • Adding low hanging fruit features and fixing misfeatures as we review the code

As our readers may have surmised, we have slowly accumulated bits of technical debt here and there over the course of development. The os-tools alpha2 milestone is our chance to address that and make our code crisp, clean and ready for already-planned future development work, while we stabilize our new infra code in parallel to this effort.

A special thank you goes to Jonas for the work he has already done in terms of this sort of refactor work.

As already covered in this blog, we have been reviewing all open issues and PRs in our os-tools repository on GitHub for prioritization and to set internal milestones.

We are open to and actively looking for contributors who might be interested in looking through our code and providing feedback.

If you would like to try your hand at contributing, look out for issues marked “good first issue” and get in touch on Matrix.

We hope to see you there!

Mid-Year Update

As we hit the middle of the year, it’s time for another update for those of you following along with AerynOS’s development.

Over the last few months, things may have seemed unusually quiet, however rest assured that there has been A LOT going on in the background. As such, we are preparing a short series of blog posts to go over the relevant topics in the coming weeks.

For this blog post, we are going to cover our infrastructure port, along with the process of rebuilding our entire package repository.

The TL;DR is that:

  • All core AerynOS tooling is now written in Rust
  • Every recipe in the repository has been rebuilt (twice!) with many packages then having been updated to newer versions after the rebuilds were completed
  • A CDN has been implemented for faster package installation and ISO downloads

When delivering a Linux distribution, its infrastructure and associated processes effectively act as the “spine” of the project. But spine surgery can be a delicate affair, particularly when it comes to rehabilitation after successful surgery.

For us, this cycle has been particularly demanding, as we have completed an MVP (Minimum Viable Product) port of our infrastructure tooling code to Rust, meaning that all core AerynOS tooling has now fully transitioned away from DLang.

We have covered the reasons for this transition previously, and it’s fair to say that we are already feeling the benefits of easy and native reuse of code in our tooling repositories and welcoming more Rust contributors into our community.

Earlier this year, our existing DLang build infrastructure started showing signs of instability and required more and more manual intervention to successfully land packages.

Given our prior decision to transition our tooling over to Rust, we had already stopped further development of the DLang based infrastructure. Hence, we decided to accelerate our transition timeline for the infrastructure re-write to Rust with tarkah and ermo leading the development activity, which began at the end of March.

Towards the end of May, we put the first infrastructure prototype to the test, and then iteratively fixed bugs and built out missing functionality to the point of being able to put our MVP into production on our build infrastructure.

This MVP will serve as the development base of the code that will be used for all future package builds.

What do we mean by infra?

Our infra comprises the Summit, Avalanche and Vessel service components:

  • Summit: package build controller, orchestrator and dashboard; monitors the recipes tree and automatically builds new, incoming recipes once they show up.
  • Avalanche: build agent middleware; takes build orders from Summit, builds them with Boulder on a remote system, streams build logs back to Summit in real time, and reports the build result to Summit at the end of the build.
  • Vessel: package repository manager; Summit tells Vessel which packages and other build artefacts to expect from a build task that Avalanche has completed, Avalanche then pushes those packages and artefacts to Vessel, which saves them in the appropriate place and re-indexes the repository so users can install/update them.

We have some cool features planned in AerynOS that we envision will make package maintenance a lot easier to manage through smart use of automation.

Until these features are implemented, however, maintaining the AerynOS repository will remain somewhat human resource intensive. This is the main reason why the repository is consciously being kept “small” for now, with us deliberately focusing on having packages that will help developers and contributors improve AerynOS, while still delivering a nice Daily Driver experience.

Until the new features are implemented, this will necessarily be a balancing act between maintaining the package repository so it doesn’t go stale vs. having the development time to implement the new features.

Aside from porting the infrastructure code to Rust, proper testing was required to yield confidence that packages were both successfully built on the new infrastructure and that they worked as expected.

The end goal was to prove that we were able to rebuild the full AerynOS recipes repository (currently at ~950 recipes) from start to finish without infra-related build errors on the new infra.

To enable the rebuild, ermo set up a distributed build cluster of four builders of varying hardware specifications. A separate branch of the ‘recipes’ repository was created and used both to test the Rust infrastructure and to land packages for internal testing without seeding them to user installs.

In addition, compared to the old infra, adding new Avalanche build agents to the build cluster is now much simpler, making it easy to scale out as required.

To summarise the infrastructure Rust re-write and testing effort, we have:

  • Completed more than 3k recipe builds
  • Deployed the new Rust infrastructure on the AerynOS builders and continue to use it on ermo’s build cluster
  • Validated that the new infrastructure code is more stable and performant at runtime than the previous DLang version

The full rebuild of the recipes repository has also served to ensure ABI sanity for dependencies. Additionally, we can now say that, at this point in time, the whole AerynOS repository is known to build and work with all the latest toolchains.

A special thanks goes to Reilly Brogan, who worked diligently with ermo to not only drive the rebuild process, but also to ensure that some longstanding repository issues were corrected as part of the rebuild process.

During this process, we have delivered updates to our os-tools (Boulder and Moss), toolchains and build systems. A selection of the updates and additions include (but is certainly not limited to):

  • Linux 6.14.11 (6.15.x on the way)
  • LLVM 20.1.7
  • GCC 15.1.1
  • Rust 1.88.0
  • Golang 1.24.4
  • Mesa 25.1.4
  • GNOME 48.2
  • COSMIC 1.0.0_alpha7
  • Sway 1.11
  • Firefox 140.0.1
  • Thunderbird 139.0.2
  • uutils-coreutils 0.1.0
  • Node.js 22.14.0
  • Wine 10.8
  • Distrobox added at 1.8.1.2
  • exfatprogs added at 1.2.9
  • fzf added at 0.62.0
  • kitty added at 0.41.1
  • Waybar added at 0.12.0

As mentioned earlier, the testing work was conducted on a separate branch of the recipes repository. Consequently, those of you on the old packages.aerynos.com/volatile/ repository have not received any updates over the last 10-12 weeks.

This was a conscious decision to ensure that the mostly untested packages built during the infrastructure testing process did not reach end users immediately. Even though AerynOS is in Alpha and under continuous development, we still do our best not to break user systems if we can avoid it!

Now that we have a level of testing in place, with this blog post, we are announcing a new rolling unstable package repository for users. The old volatile package repository has received one final update to Moss that fixes an important bug when transitioning to the new unstable repository.

To ease the transition to the new repository for existing users, we are working on a script that can automatically modify the active repository on the system.

Once this script has been sufficiently productized, the next time existing users update their systems, they will notice that every single package will show an update available.

The exact number will vary from system to system depending on how many other packages are installed from the repository, but for context, on a base AerynOS GNOME install, this is around 500 packages.

In the meantime, we have created a manual guide on how to transition existing installs to the new repository in our GitHub Discussions forum here. The process is fairly simple, but if you do have any issues transitioning manually, do get in touch via a comment under the GitHub Discussions post or via Matrix.

Content Delivery Network for Packages and ISOs

A common bit of feedback we have been receiving relates to the download speed of our repository, namely that it is not fast enough to be acceptable, especially for users outside of Europe. This became more evident for those using the rebuilt repository on ermo’s rebuild testing server, which felt noticeably faster, particularly for people in Europe.

To remedy this, we have implemented CDN caching for our new cdn.aerynos.dev hosted assets. This means there will be synced copies of our ISOs and package repository on CDN servers around the world, which should help improve download speeds.

In particular, the new rolling unstable package repository mentioned above will be served via this CDN for the benefit of our users.

Please let us know how you get on with AerynOS ISO and package downloads in the coming weeks, as we would love to validate the improvement outside of our own internal testing.

So far, we have only outlined what we have already accomplished since late March.

The next part of this blog post is going to be a brief outline of where we are going from here in terms of infrastructure and repository development.

With the transition to the new infrastructure and the new unstable repository, we have been freed up to begin planning the steps necessary to deliver versioned repositories and versioned Moss format upgrades.

These topics have been mentioned in a previous blog post.

Versioned repositories will enable us to deploy new Boulder and Moss features in a seamless fashion. This will let us introduce breaking code and on-disk format changes that would otherwise cause installed systems to require manual intervention in order to continue receiving updates.

Once versioned repositories are in place, the goal is that users will be able to simply update and sync their system as normal via the sudo moss sync -u command.

With this:

  • Users will be upgraded to the new versions of Moss that use a new repository format, without having to pay special attention.
  • It will enable AerynOS to iteratively expand the capability of Moss and Boulder on existing systems without breaking user systems in the process.

We consider versioned repositories a prerequisite for what we call “try-builds” and, eventually, multi-arch support.

  • Automated try-builds denotes the process whereby the infrastructure discovers an update to the upstream source repository of a package, attempts to auto-update the recipe and then attempts to build the updated package recipe in question.
  • We think this will be a useful tool for contributors as it will automate some of the packaging tedium related to simple package version updates. It will also help enable automated regression testing and build flag optimisation in a future workstream.
  • Included under the multi-arch umbrella is our ability to target ARM, RISC-V, and different x86 architecture levels such as x86-64-v3 or v4.

Over the previous three-month period, we have built a brand-new Rust version of the infrastructure tooling that is robust enough to run in production on AerynOS servers, delivering packages to our contributors and users. This new version has proven to be more stable and performant than the old DLang version we were previously using.

From a day-to-day perspective, unlocking the infrastructure means that we can get back to reviewing and landing recipe PRs from our package maintainers and accepting new contributors into the AerynOS ecosystem. For those wishing to contribute to AerynOS, please make sure that you have manually switched over to our new repositories before making submissions, to ensure you are using all the latest tooling.

Alternatively, you can wait until the automatic transition script is functional and have it make the change for you.

If you want to engage with the team, feel free to drop by our GitHub Discussions, raise issues across our various repositories or if you’re interested in contributing, feel free to raise PRs where you think our code can be improved or where you want to submit recipes for our repo.

We also have our Matrix space, which you can access via this link:

  • The Development room in particular is a great place for discussions around our code.
  • The General room is a great place to drop by and get to know the team.
  • The Packaging room is where you want to be if you’re interested in building packages for yourself and/or submitting them to the repository.

Concurrently with our work on the infrastructure rewrite and repository rebuild, several additional workstreams have been running in the background.

The team has been refactoring our existing Rust code, mainly focused on our os-tools (Moss and Boulder) and we are working on several additional improvements that we want to get over the finish line before our next ISO release.

We will be sharing details of this work in upcoming blog posts over the next few weeks.

🏗️ AerynOS: The OS As Infrastructure

Taking a break from our usual release-oriented updates, I thought it was high time to dive into what AerynOS actually is and what sets it apart from other distros. Fair warning: this is a meaty, in-depth post, and it still only scratches the surface. If you’re interested in the future of Linux distributions, read on. It may also help to visit our work-in-progress documentation site for more information.

Firstly, AerynOS isn’t Yet Another Linux Distribution. It’s a platform, a foundation, and a set of tools engineered in accordance with a vision and design that just so happens to produce a Linux distribution. Were we to go back in time and build a design brief for AerynOS, the initial question might be:

What if the operating system itself behaved like modern infrastructure?

AerynOS is the answer to that question. What if, instead of doing things the way they’ve always been done, we started from the ground up and produced a well designed system, rather than the traditional model of in-place mutation internal to a distribution?

Leveraging years of experience across the industry, including some notable pioneering projects such as Solus, Clear Linux, and others, we set out to build things the hard way, in order to make them easy.. eventually.

At the very core is the decision not to be another GNU/Linux distribution. We default to the LLVM toolchain, using libc++ and compiler-rt by default. This isn’t just a case of “we like LLVM”, but rather a strategic decision to leverage its superior diagnostics and to ensure the correctness and portability of packages. Whilst we did trial musl in our very early stages a few years ago, we default to glibc for compatibility reasons and its superior performance characteristics. The performance advantage of glibc over musl is well documented, particularly for compute-intensive workloads and applications requiring optimal threading performance. We’re here to build a working, usable system for a multitude of use cases and verticals, and glibc provides the best balance of compatibility and performance.

Note, we do still package gcc and it’s trivial to enforce its use in a package recipe by setting the toolchain field in stone.yaml to gnu.
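For illustration, such an override in a recipe might look like this hypothetical fragment; only the toolchain field and its gnu value come from the text above, and the remaining fields are invented for context:

```yaml
# Hypothetical stone.yaml fragment - illustrative only.
name: example
version: 1.0.0
release: 1
toolchain: gnu   # opt this recipe out of the default LLVM toolchain
```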

Lastly, it’s worth noting we build all packages for x86_64-v2 as our minimum baseline, and perform targeted optimisation in our package recipes using the extensive tuning options configurable in boulder.

Our packages are forbidden from containing any files outside of /usr. To enable this, packages and/or configurations are altered in AerynOS to ensure they can operate in the absence of user-provided configuration. This forces us to ensure sane defaults are baked in at all levels, and eliminates the dreaded 3-way merge conflicts on package updates. There are no conflicts, because everything in /etc and /var is yours. Likewise, /usr belongs exclusively to the system.
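As a toy illustration of the policy (not actual moss code), a packaging lint for the /usr-only rule can be as simple as:

```python
def violates_stateless(paths):
    """Return any paths that escape /usr - forbidden for AerynOS packages."""
    allowed = lambda p: p == "/usr" or p.startswith("/usr/")
    return [p for p in paths if not allowed(p)]
```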

This approach was coined and developed during Clear Linux and Solus days, and we’re refining it further in AerynOS. Other projects have adopted similar approaches, termed “hermetic /usr”.

You might be wondering how files end up in /etc automatically. To this end, we support two forms of package triggers:

Transaction triggers are run at the end of a transaction in an ephemeral container (a Linux namespace) and may affect the contents of the transaction-specific /usr tree. This is useful for interdependent packages that need to dynamically produce plugin registries, for example.

System triggers do not run in an isolated container; instead, they run in the context of the host system after the transaction has been successfully built and applied. It is these (minimally used) triggers that invoke systemd-tmpfiles, systemd-sysusers, etc. Even for these cases we take special care to ensure that our default configs are sane and that a rebuild is always possible.

Another important aspect is the handling of local system accounts. Traditionally these are snippets/shell scripts that invoke useradd, groupadd, etc. In AerynOS we default to systemd userdb for any users/groups that do not explicitly need to be groups that users would join. Thus, using drop-ins in /usr/lib/userdb, most accounts are defined and made available via NSS. We already use this for gdm, polkit, colord, and many others.

For the other cases, we utilise systemd-sysusers, using snippets in /usr/lib/sysusers.d to create the necessary system accounts. In time, when GNOME and other desktops (as well as shadow et al.) are better adapted to userdb and systemd-homed, we won’t need to rely so much on sysusers. The goal, of course, is the elimination of /etc/passwd and /etc/group entirely, facilitating quicker recovery, provisioning, etc. For now, some packages will ship sysusers-based groups, e.g. the docker group.
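For reference, a sysusers.d snippet follows the declarative format documented in sysusers.d(5); a group-only drop-in for the docker example might look like the following (the exact snippets AerynOS ships are not shown in this post, and the file name is illustrative):

```text
# /usr/lib/sysusers.d/docker.conf
# Type  Name    ID
g       docker  -
```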

Every moss transaction is atomic. Very high level: we produce a new usr tree (rootfs essentially due to mandated usr-merge) very quickly using hardlinks from a deduplicated cache. Once successfully built and primed, we atomically swap the new tree into place. Effectively, the staged transaction is swapped with the real /usr using renameat2 with RENAME_EXCHANGE. It either works or it doesn’t. Nothing in between.
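The swap primitive itself can be demonstrated outside of moss. Python has no wrapper for renameat2, so this sketch calls the raw syscall directly; the syscall number 316 is specific to Linux x86-64, and this is a demo, not moss’s actual implementation:

```python
import ctypes
import os

SYS_renameat2 = 316      # x86-64 only
AT_FDCWD = -100          # "relative to cwd" sentinel from fcntl.h
RENAME_EXCHANGE = 1 << 1

_libc = ctypes.CDLL(None, use_errno=True)

def exchange(a: str, b: str) -> None:
    """Atomically swap two paths: afterwards each name refers to the
    other's content. Either both change or neither does."""
    ret = _libc.syscall(SYS_renameat2,
                        AT_FDCWD, a.encode(),
                        AT_FDCWD, b.encode(),
                        RENAME_EXCHANGE)
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```

Swapping a fully staged tree with the live /usr in one such call is what makes the “it either works or it doesn’t” guarantee possible.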

Leaning heavily on our blsforme and disks-rs projects, we’ve taken a very different approach to boot management. As part of applying a transaction, we produce a Boot Loader Specification (BLS) entry for the new transaction. The tooling will discover the EFI System Partition (ESP), and mount it if necessary. It then synchronises the bootloader itself (systemd-boot), the relevant BLS entries, and the kernel/initramfs. Additionally, it will automatically garbage collect older entries, currently retaining the last 5 transactions.

It’s not just a case of copy/paste and hope for the best. We dynamically produce the root parameters for the kernel command line by directly reading the superblocks (natively, in Rust) of the devices all the way up the chain for the root filesystem. It means there’s no configuration file anywhere in AerynOS containing your root= parameter, and we’re even able to read LUKS2 encrypted devices to add the rd.luks.uuid parameter.

If that wasn’t cool enough, we also encode the moss transaction ID into the kernel command line. This is picked up during early boot in our initramfs, before /sysroot is pivoted to. Long story short, it means that every kernel is correctly synchronised with the right rootfs, and that rollback is cheap, easy, and accessible directly from the boot menu.
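The consuming side in early boot boils down to parsing key=value options out of the kernel command line; a sketch follows. The actual parameter name moss uses is not given in this post, so moss.tx below is purely hypothetical:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key -> value (bare flags map to "")."""
    opts = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        opts[key] = value
    return opts

# "moss.tx" is a hypothetical parameter name for illustration only.
tx_id = parse_cmdline("root=UUID=1234 rw moss.tx=42").get("moss.tx")
```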

In addition, we also automatically support XBOOTLDR partitions. In the absence of the LoaderDevicePartUUID EFI variable, we’ll scan the GPT table itself relative to the rootfs to find the ESP, and we’ll always scan GPT relative to the ESP to find the XBOOTLDR partition.

Long story short? No /etc/default/grub. In fact, if you wipe your ESP, moss can rebuild it from scratch.

At the heart of moss lies our .stone format. At a basic level, it’s our binary package format. Using a version-agnostic header, we’ve ensured that we design for the future. We also version every payload within the stone, allowing us to refine and evolve the format over time. Currently we support 4 payloads in a single stone:

  • 📄 Content payload: A sequential blob of deduplicated data
  • 🔍 Index payload: Contains offsets for the content payload, keyed by the XXH128 hash of the content
  • 🗂️ Layout payload: Describes the intended filesystem layout when the stone is applied
  • 📋 Metadata payload: Sequence of strongly typed, tagged metadata entries such as name, providers, etc.

Right now we default to XXH128 for hashing, but Blake3 is on the horizon (and used in blsforme). We compress all payloads using Zstd, offering great decompression performance whilst still providing a decent compression ratio.
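To make the “version-agnostic header” idea concrete, here is a hypothetical sketch; the real .stone layout differs, and the magic value and field widths below are invented. The principle is a tiny fixed prelude whose shape never changes, so any reader, old or new, can identify the file and dispatch on the version it finds:

```python
import struct

# Invented layout for illustration: 4-byte magic + big-endian u32 version.
PRELUDE = struct.Struct(">4sI")

def read_prelude(blob: bytes) -> int:
    """Identify the archive and return its format version for dispatch."""
    magic, version = PRELUDE.unpack_from(blob, 0)
    if magic != b"STNE":  # hypothetical magic value
        raise ValueError("not a stone archive")
    return version
```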

The process for “installing” a .stone is quite different to other systems.

  • 📥 The stone is fetched according to the repository data
  • 🧠 Index payload is unpacked in memory
  • 🗜️ Content payload is decompressed in a single run to a temporary location
  • 🔗 Using the index payload, the content payload is spliced into the content addressable storage (CAS)
  • 📂 The layout payload is loaded and merged into the LayoutDB, keyed by the unique package ID (the SHA256 of the entire .stone, as recorded in the repository index)
  • 📊 Using the same key, the metadata payload is loaded into the “InstallDB”.

Notice that at no point are we actually “installing” anything. We cache into the CAS, and store metadata and layout details. This is used when producing a transaction. Also note there are various safeguards and integrity checks in place - for example, every payload header contains a “CRC” (actually XXH64), verified on read.
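The cache-then-link flow can be sketched in a few lines. This is an illustration only: sha256 stands in for the XXH128 keys moss actually uses, and plain os.link stands in for linkat:

```python
import hashlib
import os

def cas_store(cache_dir: str, data: bytes) -> str:
    """Splice a blob into the cache, keyed by its content hash.
    Storing identical content twice dedupes for free."""
    path = os.path.join(cache_dir, hashlib.sha256(data).hexdigest())
    if not os.path.exists(path):
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
        os.rename(tmp, path)  # publish into the cache atomically
    return path

def cas_materialise(blob_path: str, dest: str) -> None:
    """Place a file in the staged tree as a hardlink - no data is copied."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    os.link(blob_path, dest)
```

Because every materialised file is a hardlink into the cache, producing a whole staged rootfs is metadata work rather than data copying.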

In AerynOS, a transaction is an entirely self-contained rootfs produced by moss. Right now we have to emulate imperative package management, by producing a new dependency graph seeded from the previous system state as recorded in the StateDB. Alterations are made and validated, and then the new DAG selects the packages to be included in the transaction.

At a high level, the transaction is produced by:

  • 📊 Producing a valid dependency graph, seeded from the previous system state
  • 🧮 Determining cache availability for each package
  • 📥 Fetching and caching every missing .stone needed to produce the transaction
  • 📂 Loading all layouts for the stones by ID
  • 🌐 Using our multi-layered vfs approach, building an arena graph of the target filesystem, detecting conflicts ahead of time, whilst also precomputing reparented nodes (i.e. symlink redirection) and producing an optimal iteration order for transaction application
  • 🚶 Walking the vfs iterator to produce the new rootfs in a staging location, using linkat, mkdirat, etc. to optimally produce the new filesystem, linked from the deduplicated cache
  • 📦 Binding an ephemeral container to the new rootfs, and running transaction triggers
  • 💾 Recording the transaction in the StateDB.
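The first step - seeding a graph from the previous state and walking it - is conceptually a closure computation. A toy sketch follows; moss’s real resolver works over providers and a plugin cascade, not a plain dict:

```python
from collections import deque

def resolve(seed, deps):
    """Breadth-first closure over a toy dependency mapping
    (package -> list of dependencies), seeded from a prior state."""
    selected, queue = set(), deque(seed)
    while queue:
        pkg = queue.popleft()
        if pkg in selected:
            continue
        selected.add(pkg)
        queue.extend(deps.get(pkg, ()))
    return selected
```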

As mentioned above, once a successful transaction is produced, it is atomically swapped into place. If the transaction fails, no ill effects are observed on the system, and your work day continues as normal. In the event of a successful transaction, the system is updated and ready to go, along with system trigger execution and boot management updates.

All of this is just the start. It’s taken a few years, and perhaps now it’s clear why. We’ve not even covered our package build tool, boulder, or indeed our build infrastructure! With that said, let’s skip forward a short while and see what’s coming down the pipe.

To reiterate, we emulate imperative package management. And to be honest, it’s totally pointless. It actually introduces more bugs than it solves. Given that we produce a new rootfs for every transaction, we could just as easily produce an entirely new graph each time too.

That’s exactly what we’re planning to do. In a similar vein to Gentoo or Nix, the intent is a global file that explicitly states the desired state of the system, which the tooling will simply fulfil based on the moss plugin cascade. Whilst we have no intention of conflating state and configuration, we will in time extend this system model to support variants of packages in order to provide a kind of “slots” system, and to implement a richer, saner alternative to the infamous update-alternatives.

Obvious candidates include mutually exclusive packages like coreutils and uutils-coreutils (a gnu variant, perhaps?) or coinstallables such as clang, ensuring a default vendor version whilst allowing overrides at the local install or indeed package recipe level.

Often AerynOS is described as an immutable OS, and that’s not strictly true. Granted, every transaction results in a new /usr tree, so local changes won’t persist and recovery is immediately available. However, we’re not immutable in the sense of read-only.

Having produced a composition-first, developer- and user-friendly atomic update implementation, we want to take this further in order to also support immutability without compromise: no unnecessary reboots. To do so, we’ll implement something similar to composefs, without the drawbacks. Using the same transaction-driven approach we have now, we’ll simply produce an erofs metadata image dynamically instead of the exploded filesystem tree. Utilising the trusted.overlay.redirect xattr and an overlayfs mount, we’ll expose our own CAS through the layouts in erofs, and also support fsverity hashes. Icing on the cake? We’ll leverage mount stacks to retain our composition-first, no-unnecessary-reboot ethos.

We’ve touched on this a few times in our blog posts. Essentially the binary repository will be produced as a result of available build artefacts correlating to the state of our git repo manifest files (boulder proofs of build). The main repository index will link releases to moss/stone format versions, such that moss would only update to the latest release that it supports. This allows us to stagger breaking changes in a way that nothing breaks at all.

  • 🚀 Actively shipping GNOME ISOs
  • 🎮 Quite usable for gaming with NVIDIA drivers, Steam, Flatpak, etc
  • 👥 Real users are already praising stability and innovation
  • 🛠️ Focused heavily on disks-rs and lichen-installer to leverage disk strategy files (automatic provisioning described in KDL)

This isn’t just another distro. It’s us redefining how Linux is distributed. We’ve achieved a tremendous amount, having successfully integrated all of the above into a cohesive, singular whole. The amusing part, of course, is that the resulting system is “boring” - it just works.

Now, we are alpha. We’re not done, we’re not without issues. That said, we’re building the future and with your support, we’ll get there even faster. This post has only scratched the surface of AerynOS, but if you like what we’re doing, please do get involved or support us.

Or see other ways in which you can support the project financially.

🚀 Hello AerynOS

Hello from AerynOS! ✨ As you may recall from February, we set about rebranding Serpent OS into what you now see: AerynOS. It’s not been without challenges, and where possible we’ve ensured continuity. Additionally, we silently dropped a few ISOs, and can now take our first formal steps under our new identity. TL;DR: the transition has been entirely fluid and you only need to keep updating; no manual intervention is necessary! 🎉

Progress has been steady but quiet lately, as due to unfortunate circumstances I’ve been working in bursts from my wife’s hospital bedside.

OK, let’s get right to it. We’ve released AerynOS 2025.03, with the following shinies:

  • 🖥️ GNOME 48.0
  • 🐧 Linux 6.13.8
  • 🦊 Firefox 136.0.2
  • 🎮 Mesa 25.0.2
  • 🚀 Vulkan SDK 1.4.309.0
  • 🛠️ LLVM 19.1.7

We’ve also decoupled the internal tooling version from the ISO versions to make them a little bit more.. well.. readable, for humans.

Grab it now from the download page.

Installer preview

I want to thank everyone for their support of the project since we got more transparent around goals. It’s been a huge help and has facilitated massive progress on the project! 🙏 AerynOS no longer feels like a hacked together PoC, but a very solid daily runner. Yes, it’s certainly alpha, and the installer leaves a lot to be desired. For daily use and updates? It’s really quite something.

Please note that I never realised the Ko-fi goals don’t automatically reset, so the currently listed goal has been running for way more than a month.. xD However, over a period of time, we did manage to achieve it! 🎉 I’ll be resetting it shortly on a fixed date for each month ahead.

Rebranding can be more challenging than one expects. The easy stuff is out of the way, HTTP redirects and DNS trickery to ensure old URLs continue to work without manual intervention. In the case of the repos, we’ve updated the scheme:

packages.serpentos.com -> packages.aerynos.dev

dev.serpentos.com -> packages.aerynos.com

docs.serpentos.com -> aerynos.dev

Note that the former scheme made no sense at all, whereas now we take advantage of the .dev domain for unstable work. A future release of moss will automatically handle the transition on installs for the sake of consistency.

Previously we had moss generate the /usr/lib/os-release file on demand using compiled in defaults, which was somewhat inflexible. Now, we ship a JSON file (/usr/lib/os-info.json) containing a description of the OS, the composition of technologies, and the capabilities. While os-release and lsb_release exist, they provide very primitive identification and metadata. os-info is designed to provide compatibility with those formats while being far more expressive. Importantly, it also contains a mechanism for identifying the former identities of an operating system:

...
"former_identities": [
    {
        "id": "serpentos",
        "name": "Serpent OS",
        "start_date": "2020-06-15T00:00:00Z",
        "end_date": "2025-03-17T00:00:00Z",
        "end_version": "0.24.6",
        "announcement": "https://aerynos.com/blog/2025/02/14/evolve-this-os/"
    }
]
...

moss now utilises os-info to generate the os-release file, as well as to provide identification and history for our blsforme crate to manage each entry on the boot partition. This has allowed us to sync the branding on a per-transaction basis, while still “owning” the legacy branded transactions on the ESP too (i.e. /EFI/serpentos) and ensure they are correctly garbage collected. Interestingly, this means that for a short period of time the boot menu will still show some of the older Serpent OS entries, allowing you to roll back to them.
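Consuming the former_identities block is straightforward; here is a sketch that uses only the fields visible in the snippet above, e.g. to know which legacy /EFI/&lt;id&gt; namespaces the tooling still owns:

```python
import json

# Only fields shown in the blog's os-info snippet are assumed here.
OS_INFO = json.loads("""
{
  "former_identities": [
    {
      "id": "serpentos",
      "name": "Serpent OS",
      "end_version": "0.24.6"
    }
  ]
}
""")

def former_ids(info: dict) -> list:
    """Legacy identities, e.g. for garbage-collecting /EFI/<id> boot entries."""
    return [identity["id"] for identity in info.get("former_identities", [])]
```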

Please visit the os-info repository for more details, the schema, etc. We’re very keen to open collaboration and adoption of this project, envisaging use cases in installers, welcome apps, and indeed even container introspection tooling.

We’ve officially begun the revamp of lichen-installer using a proper split between the privileged backend and the frontend. In order to permit a number of frontends and use cases in future, we’re using tonic gRPC messages to communicate, currently just over a UNIX domain socket. In future, we’ll have use cases for TCP, e.g. with a WASM frontend.

More importantly than that, we’ve now implemented the foundational aspects of disks-rs. Long story short: we’re using provisioning strategies defined in custom .kdl files that are dynamically tested/validated against an input storage pool to produce a working set of changes - for example, creating a new partition table and adding partitions to it, subject to constraint logic.

It is early days, but it’s quite awesome that we’re able to automatically partition disks based on these strategy files! ✨ These will form the backbone of lichen, allowing us to offer rich automatic storage strategies as well as manual partitioning options. Notable is the ability to set the volid or uuid of a specific filesystem, and partuuid, etc. This in essence allows shallow reproduction of an installation/disk topology using a configuration file.

This transition wouldn’t have been possible without our amazing community. Both long-time contributors and newcomers have stepped up to the plate to make AerynOS a reality, pushing us across the finish line during this critical transition.

Special thanks to Cameron for his instrumental work in porting our websites to a more maintainable, Astro-based infrastructure. This migration has not only improved our development workflow but also enhanced the user experience across all our web properties.

Whether you’ve contributed code, reported bugs, tested releases, spread the word, or supported us financially - thank you. AerynOS is truly a community effort, and we’re excited to continue this journey together.

As always, you can join our community on Matrix or contribute on GitHub.

For existing Serpent OS users, migrating to AerynOS is straightforward:

  1. Simply run the following command in your terminal:

    sudo moss sync -u
  2. Some branding changes may require a second moss operation to fully take effect (as they’ll be using the new moss binary). After the initial sync, run one of these commands to complete the transition:

    sudo moss sync
    # or
    sudo moss install some-package

That’s it! Your system will seamlessly transition to AerynOS while maintaining all your existing configurations and installed packages.

We’ll continue on the path we set at the start of the year. By and large we’ll improve the core tooling to continue delivering a better experience for users, developers, gamers, etc.

By the way, we do have nvidia-open-gpu-kernel-modules / nvidia-graphics-driver for linux-desktop users wanting to game on their shiny AerynOS installs. Our Steam package works great, and any feedback on improving it is appreciated (e.g. udev/controller support; a 6.14 kernel is planned).

  • Shifting more towards the installer images, using Slint for the installer frontend, and dropping unnecessary packages from the installer image. Live ISOs will of course be available, but our primary target is installer images.
  • Integration of upstreams-rs and ABI tracking into the tooling in order to vastly lighten the load for maintainers.
  • Accelerated delivery of milestone ISOs.