Welcome to my blog!


Change location of intermediate objects in .NET

:: dotnet, programming, software engineering

By: Maciej Barć

.NET creates so-called intermediate objects while building .NET projects; those are located in the “bin” and “obj” directories. The default is not very satisfying, primarily because if a program from a different machine or a container modifies those objects, any cached file system paths encoded in them will be broken. Splitting them between “bin” and “obj” directories is also probably mostly legacy behavior.

I prefer for them to stay in one place - ".cache" - because that’s what they are: cache. With the following configuration, objects will be stored inside the ".cache" directory. Furthermore, the objects produced by the native machine go into the “native” subdirectory and the ones produced by container software into the “container” subdirectory.

  <CachePath Condition="'$(DOTNET_RUNNING_IN_CONTAINER)' == 'true'">.\.cache\container</CachePath>
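The line above only covers the container case. A fuller sketch of the idea, assuming a `Directory.Build.props` file at the repository root (the `CachePath` property name is the author's own; `BaseIntermediateOutputPath` and `BaseOutputPath` are standard MSBuild properties, and `DOTNET_RUNNING_IN_CONTAINER` is set by the official .NET container images), could look like this:

```xml
<Project>
  <PropertyGroup>
    <!-- Default: objects built on the native machine. -->
    <CachePath>.\.cache\native</CachePath>
    <!-- Override when building inside a container. -->
    <CachePath Condition="'$(DOTNET_RUNNING_IN_CONTAINER)' == 'true'">.\.cache\container</CachePath>
    <!-- Redirect the usual "obj" and "bin" outputs under the cache directory. -->
    <BaseIntermediateOutputPath>$(CachePath)\obj\</BaseIntermediateOutputPath>
    <BaseOutputPath>$(CachePath)\bin\</BaseOutputPath>
  </PropertyGroup>
</Project>
```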

If anybody wants to go hardcore and cache the intermediate objects based on the RID or architecture triplet, that can also be done, for example by adding environment variables to the path.
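A sketch of the RID variant, assuming the same custom `CachePath` property (`NETCoreSdkRuntimeIdentifier` is a standard property the .NET SDK sets to the SDK's runtime identifier):

```xml
<PropertyGroup>
  <!-- Key the cache by the SDK's RID, e.g. ".cache/linux-x64". -->
  <CachePath>.\.cache\$(NETCoreSdkRuntimeIdentifier)</CachePath>
</PropertyGroup>
```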

Safer Nix installation

:: linux, nix, packaging, sandbox, shell, system, test, testing, tutorial

By: Maciej Barć

Nix is useful for quickly testing out software and providing a strict environment that can be shared between people.

Today I’m trying out Nix again, this time I want to do it my way.

Installation process

Nix store

I know Nix needs its “Nix store” installed on / (the system root), in the /nix directory.

Create it manually to prevent the installation script from calling sudo. First I switch to the root account, and then I run:

mkdir -p -m 0755 /nix
chown -R xy:xy /nix

Running the install script

Download the Nix install script and examine the contents.

curl -L https://nixos.org/nix/install > nix_install.sh

Then, run it with --no-daemon to prevent it from installing Nix as a system service.

sh ./nix_install.sh --no-daemon
performing a single-user installation of Nix...
copying Nix to /nix/store...
installing 'nix-2.20.1'
building '/nix/store/1ahlg3bviy174d6ig1gn393c23sqlki6-user-environment.drv'...
unpacking channels...
modifying /home/xy/.bash_profile...
modifying /home/xy/.zshenv...
placing /home/xy/.config/fish/conf.d/nix.fish...

Installation finished!  To ensure that the necessary environment
variables are set, either log in again, or type

. /home/xy/.nix-profile/etc/profile.d/nix.fish

in your shell.


modifying /home/xy/.bash_profile...
modifying /home/xy/.zshenv...
placing /home/xy/.config/fish/conf.d/nix.fish...

That’s very rude!

Stopping Nix from making a mess

I need to prevent Nix from messing with my environment when I do not want it to. During installation Nix puts some code into the Bash, ZSH and Fish initialization files to ease its use. I do not want that, since I do not want Nix to meddle with my environment without me knowing it.

I keep my .bash_profile and .zshenv in a stow-managed git repo, so I can just cd into my repo and do git reset --hard, but you will have to revert those files to their old forms manually.

Playing with Nix

We do not have nix in PATH, but we can still launch it. Nix executables are located inside ~/.nix-profile/bin/.

By invoking nix-shell one can create an ephemeral environment containing only the packages specified after the -p flag. I always add -p nix to also have the Nix tools available inside the spawned environment.

I will test out chibi (a small Scheme interpreter) + rlwrap (REPL support for software lacking it) inside a Nix ephemeral environment:

~/.nix-profile/bin/nix-shell -p nix chibi rlwrap

Inside the spawned shell:

rlwrap chibi-scheme

In the chibi REPL, let’s see the contents of the PATH environment variable:

(get-environment-variable "PATH")

And exit the Scheme REPL:

(exit)
After the playtime, run garbage collection:

~/.nix-profile/bin/nix-collect-garbage
.NET in Gentoo in 2023

:: dotnet, gentoo, packaging, portage, powershell

By: Maciej Barć

.NET ecosystem in Gentoo in the year 2023

The Gentoo Dotnet project introduced better support for building .NET-based software using the nuget, dotnet-pkg-base and dotnet-pkg eclasses. This opened new opportunities for bringing packages that depend on the .NET ecosystem into the official Gentoo ebuild repository and for helping developers who use dotnet-sdk on Gentoo.

New software requiring .NET is constantly being added to the main Gentoo tree, among others that is:

  • PowerShell for Linux,
  • Denaro — finance application,
  • Ryujinx — Nintendo Switch emulator,
  • OpenRA — RTS engine for Command & Conquer, Red Alert and Dune2k,
  • Pinta — graphics program,
  • PabloDraw — ANSI, ASCII and RIPscrip art editor,
  • Dafny — verification-aware programming language,
  • many packages aimed straight at developing .NET projects.

The Dotnet project is also looking for new maintainers and users who are willing to help out here and there. The current state of .NET in Gentoo is very good, but we can still do a lot better.

Special thanks to people who helped out

Portage Continuous Delivery

:: gentoo, linux, sysadmin, system

By: Maciej Barć

Portage as a CD system

This is a very simple way to use any system with Portage installed as a Continuous Delivery server.

I think for a testing environment this is a valid solution to consider.

Create a repository of software used in your organization

Those articles from the Gentoo Wiki describe how to create a custom ebuild repository (overlay) pretty well:

Set up your repo with eselect-repository

Install the my-org repository:

eselect repository add my-org git https://git.my-org.local/portage/my-org.git

Sync my-org:

emerge --sync my-org

Install live packages of your software

First, enable live packages (keywordless) for your my-org repo:

echo '*/*::my-org' >> /etc/portage/package.accept_keywords/0000_repo_my-org.conf

Install some packages from my-org:

emerge -av "=mycategory/mysoftware-9999"

Install smart-live-rebuild

smart-live-rebuild can automatically update live software packages that use git as their source URL.

Set up cron to run smart-live-rebuild

Refresh your my-org repository every hour:

0 */1 * * * emerge --sync my-org

Refresh the main Gentoo tree every 6 hours:

0 */6 * * * emerge --sync gentoo

Run smart-live-rebuild every 3 hours:

0 */3 * * * smart-live-rebuild

Restarting services after update

All-in-one script

You can either restart all services after a successful update:

File: /opt/update.sh

#!/bin/sh

set -e

smart-live-rebuild

systemctl restart my-service-1.service
systemctl restart my-service-2.service

Then call that script from cron instead of calling smart-live-rebuild directly:

0 */3 * * * /opt/update.sh

Via ebuilds pkg_ functions

File: my-service-1.ebuild

pkg_postinst() {
    systemctl restart my-service-1.service
}
More about pkg_postinst:

Example Gentoo overlays

Firefox is still the best browser. Deal with it Google!

:: browser, firefox, linux

By: Maciej Barć

Firefox was the first open source browser to live through the browser wars, overcoming Microsoft’s Internet Explorer, and it continues to deliver competition-smashing technology to this day.

Chromium code

The only advantages Chromium has are that it was adopted by Electron and that it spread, partially because of a more liberal license and partially because of Google’s own efforts.

Google will never be able to cope with Chromium’s code base, the worst imaginable.

Chromium is near-impossible to compile

On a 4-core/8-thread Ryzen CPU, Chromium compiles in ~12 hours and requires at least 20 GB of disk space to build. At the same time, Firefox compiles in ~1.5 hours and requires ~8 GB of disk space.

Programming language adoption

Additionally, the Firefox team was able to rewrite a very large portion of the Firefox codebase in Rust, which improved the browser’s safety. There have been attempts to add Rust to Chromium, but they are all in more of an add-on-like fashion.

Porting to UNIXes

Because Chromium is extremely large, it is very hard to port and maintain on Linux and BSD based systems. There have been numerous bugs in Chromium’s UI on Linux that caused crashes when pressing random controls. I believe Google has no Linux testers besides the “free software freeloaders” (wink, wink, IBM :P).


These days Google has to resort to dirty tactics, where certain Google-owned websites either refuse to work on Firefox or show a fake performance hit that is entirely caused by malicious JavaScript code.

Several popular FOSS-related sources have covered this news recently, check them out on the WWW.

Common Project Layout, version 0

:: programming, software engineering

By: Maciej Barć

This is a tongue-in-cheek “draft” for Common Project Layout, version 0. It will probably never become any sort of adopted standard but I think it is good to share some ideas that I had while working on this.


Common Project Layout (CPL) is a set of good practices for structuring medium-to-large monorepo-like software repositories.


CPL helps with code organization. It can be a good “framework” (in a very loose meaning of this word) to modularize product components.

It can make large repositories easier to work with and understand.

Upfront limitations

CPL is strictly designed for “hosting” software source; where all the non-code assets live is left up to the engineers to decide.

For example, branding assets could be put into a Branding top-level directory, but on the other hand, are we sure they will stay the same across major versions?

Since we can agree to consider documentation “producers” (not the produced artifacts) to be code, we could also acknowledge that some assets could have their own versioned subproject.



CPL requires that the software is versioned inside directories whose names include the version. The recommended pattern is to name directories vMAJOR, where MAJOR is either the current tagged major version or the one that will be tagged if no tags exist yet. It is also recommended to group the vMAJOR directories under one common directory, for example Source.


The vMAJOR directory could theoretically contain all the source code mixed together, but the code should be grouped and organized by purpose.

A subproject is defined as a directory inside a versioned (vMAJOR) directory. “Versioned subproject” and “subproject” are synonymous in CPL.

To mark the purpose of a subproject, whether it is to be used as a helper or as a “container” for source that is actually exposed (or for binaries created from it), it should be adequately named.

For helpers the name does not matter, but a source subproject should be prefixed with the project name.

For example we could have this layout:

└── v1/
    ├── Makefile
    ├── VERSION
    ├── admin/
    ├── make/
    ├── my-project-app/
    └── my-project-lib/

In the above example my-project-app and my-project-lib are the source subprojects, and admin and make are subprojects that are there only to help in building, managing and deploying the actual source subprojects.

In the end it is up to the engineer to choose whether something is considered a source subproject. For example: if we have a helper subproject whose only job is to hold Docker / Podman files for creating a development container, what should we name it? As of now I have named them PROJECT-dev-container.


Make and admin

I think it is a good practice for each vMAJOR to have a Makefile, or an equivalent in another build system, that calls scripts inside the vMAJOR/admin directory, each of which takes care of some small, specific task.

For example, the vMAJOR/Makefile recipe for build can call admin/build_my_project_app.py and admin/build_my_project_lib.py. Each of those scripts would call the “real” build system specific to the subproject it acts upon.
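The dispatch scheme above could be sketched as a Makefile like this (all file names follow the example layout; the deploy script is hypothetical):

```make
# v1/Makefile - thin dispatch layer over the admin/ scripts.
# Each admin/ script invokes the real build system of its subproject.

build:
	./admin/build_my_project_app.py
	./admin/build_my_project_lib.py

deploy: build
	./admin/deploy.py
```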


It is nice to have a VERSION file in the vMAJOR directory. It can be reused by build tools and also shows the last version worked on inside vMAJOR; the latest git tag may sit on a different major version or may simply not exist yet.


See these repositories for reference implementations of the CPL layout: