Welcome to my blog!
Feeding the penguin some RAM.
Performing global updates.
Calculating dependencies... done!
How it works
- A FUSE OneDrive filesystem is mounted on the machine Nextcloud runs on,
- you configure a filesystem-local “External storage” and point it at the mountpoint of a cloud drive (in this case OneDrive),
- users connected via the Nextcloud Client will have an option to sync any chosen files from the External storage as if they were Nextcloud-owned files.
Benefits
There are some benefits to connecting OneDrive to Nextcloud:
- faster cached sync than with a normal straight-to-OneDrive connection - files are pushed to Nextcloud and then uploaded to OneDrive from there, which (from the client’s perspective) takes less time than uploading straight to OneDrive, because Microsoft rate-limits OneDrive uploads (especially in the case of larger files),
- Nextcloud clients do not have to configure any OneDrive connection,
- you will be able to have two-way sync of your OneDrive files (two-way sync on the RClone side is currently experimental; this setup uses Nextcloud’s two-way sync mechanism instead).
Set up RClone
First, you will have to set up RClone. Connect to your cloud of choice and then copy the config to a location readable by the service that mounts the given cloud drive. For OneDrive I use /usr/local/share/rclone/config/rclone.conf, which is accessible only to the apache user.
The config will look something like this:
|
[OneDrive]
type = onedrive
region = global
token = *REDACTED*
drive_id = *REDACTED*
drive_type = personal
|
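If you have not created the remote yet, the config can be generated interactively and then verified before copying it into place; a minimal sketch (the remote name must match the one used later):
|
rclone config                # interactive wizard - create a remote named "OneDrive"
rclone config show OneDrive  # print the generated config section
rclone lsd OneDrive:         # quick check that the remote is reachable
|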
Mounting OneDrive
I created this helper script (in /usr/local/bin/rclone-mount.sh):
|
#!/bin/sh

# Mount an RClone remote via FUSE and log to a per-user, per-remote file.
# Usage: rclone-mount.sh <rclone-config> <local-user> <local-mountpoint> <remote-name> <remote-path>

set -e
set -u

trap "exit 128" INT

conf_path="${1}"
local_user="${2}"
local_path="${3}"
cloud_name="${4}"
cloud_path="${5}"

# Prepare the log file and hand it over to the mounting user.
log_dir="/var/log/rclone"
log_file="${log_dir}/rclone-mount-${local_user}-${cloud_name}.log"
mkdir -p "${log_dir}"
touch "${log_file}"
chmod a+r "${log_file}"
chown "${local_user}" "${log_file}"

# Mount in the foreground so the OpenRC supervisor can track the process.
exec rclone \
    --default-permissions \
    --allow-other \
    --verbose \
    --vfs-cache-mode full \
    --config "${conf_path}" \
    mount \
    "${cloud_name}:${cloud_path}" "${local_path}" \
    >> "${log_file}" 2>&1
|
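Before wiring the script into a service, you can test it by hand; a quick sketch using the paths from this post (the mountpoint has to exist first):
|
mkdir -p /mnt/OneDrive
/usr/local/bin/rclone-mount.sh \
    /usr/local/share/rclone/config/rclone.conf \
    apache /mnt/OneDrive OneDrive /
|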
Then, I use it in an OpenRC service like this (/etc/init.d/mount-OneDrive):
|
#!/sbin/openrc-run

conf_path="/usr/local/share/rclone/config/rclone.conf"
cloud_name="OneDrive"
cloud_path="/"
local_user="apache"
local_path="/mnt/${cloud_name}"

command="/usr/local/bin/rclone-mount.sh"
command_args="${conf_path} ${local_user} ${local_path} ${cloud_name} ${cloud_path}"
command_background="false"
command_user="${local_user}:$(id -g -n ${local_user})"
supervisor="supervise-daemon"

depend() {
    need net
}

start_pre() {
    ebegin "Unmounting leftovers from ${local_path} before service start"
    umount "${local_path}"
    eend 0
}

stop_post() {
    ebegin "Unmounting leftovers from ${local_path} after service stop"
    umount "${local_path}"
    eend 0
}
|
Enabling the RClone service
Set up directories and correct permissions:
|
mkdir -p /usr/local/share/rclone/config
chown -R apache:apache /usr/local/share/rclone/config
mkdir -p /var/log/rclone
chown -R apache:apache /var/log/rclone
|
Do not forget to make the mount service script executable:
|
chmod +x /etc/init.d/mount-OneDrive
|
Enable and start this service on OpenRC:
|
rc-update add mount-OneDrive default
rc-service mount-OneDrive start
|
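To verify that the mount came up, a quick sketch (the log file name follows the helper script’s naming pattern):
|
rc-service mount-OneDrive status
ls /mnt/OneDrive
tail -f /var/log/rclone/rclone-mount-apache-OneDrive.log
|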
Drive permissions
RClone mounts cloud drives by using FUSE. To have the RClone option --allow-other
available in order to allow root
to access the drive you will have to modify the FUSE config file (/etc/fuse.conf
) - add user_allow_other
.
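For example (assuming the option is not already present in /etc/fuse.conf):
|
echo "user_allow_other" >> /etc/fuse.conf
|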
Nextcloud configuration
Download and enable the “External storage” app. Then, in the “Administration” settings add an external storage:
- name: ExternalStorage_OneDrive
- type: Local
- authentication: None
- configuration: /mnt/OneDrive
- available for: YOUR USER
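Files that appear on the mount outside of Nextcloud are not always picked up immediately; a hedged sketch of triggering a rescan with the occ tool (the Nextcloud install path is an assumption here - adjust it to your setup):
|
sudo -u apache php /path/to/nextcloud/occ files:scan --all
|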
The official ::gentoo repository currently contains GHC only at version 9.2.8. To install a newer GHC one has to either download/build it themselves or use the ::haskell overlay (https://github.com/gentoo-haskell/gentoo-haskell).
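To check which GHC versions Portage can currently see (assuming eix is installed, as it is used later in this post), something like:
|
eix --exact dev-lang/ghc
|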
Enable the ::haskell overlay
Enable:
|
eselect repository enable haskell
|
Sync:
|
emerge --sync haskell
egencache --update --repo haskell --jobs 12 --load 6
eix-update
|
Unmask needed packages
Add to /etc/portage/package.unmask/0000_hs.conf:
|
<dev-lang/ghc-9.9
<dev-haskell/cabal-3.11
<dev-haskell/cabal-install-3.11
<dev-haskell/cabal-install-solver-3.11
<dev-haskell/cabal-syntax-3.11
<dev-haskell/text-2.2
<dev-haskell/parsec-3.1.18
|
Add to /etc/portage/package.accept_keywords/0000_hs.conf:
|
app-admin/haskell-updater
dev-haskell/*
dev-lang/ghc
|
Install
|
emerge --ask --verbose ">=dev-lang/ghc-9.8" ">=dev-haskell/cabal-install-3.10"
|
A build of GHC 9.8 takes around 2 hours on an 8-core laptop-grade CPU.
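After switching to the new GHC it is a good idea to rebuild the installed Haskell packages against it; on Gentoo that is what app-admin/haskell-updater (unmasked above) is for:
|
haskell-updater
|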
Bonus: masking packages from ::haskell
If you want to prevent a given package or version from the ::haskell overlay from being installed/updated, then you can add line(s) similar to the following to /etc/portage/package.mask/0000_hs.conf:
|
app-emacs/haskell-mode::haskell
|
SysStat is an amazing tool. In an age when Telegraf and Grafana are all the rage, everybody has forgotten about the good old sysstat.
Selected command examples
- iostat -d -p nvme0n1 3 - disk I/O for an NVMe drive (nvme0n1),
- sar -n DEV 3 - network throughput,
- sar -h -r 3 - memory usage,
- sar -P ALL 3 - CPU utilization,
- sar -q 3 - system load levels,
- sar -A 3 - all the metrics.
Gathered info
You can list the installed binaries with qlist app-admin/sysstat | grep /usr/bin/.
The app-admin/sysstat package contains the following binaries and their respective statistics:
- sar - general utilization statistics,
- cifsiostat - CIFS,
- iostat - device input/output,
- mpstat - processors,
- pidstat - Linux tasks,
- tapestat - tape (yes, real tape drives).
Installation
Gentoo
|
emerge --noreplace --verbose sys-process/cronie app-admin/sysstat
rc-service cronie start
rc-update add cronie default
|
Files
By default (on Gentoo):
- sa (the collector) saves statistics to /var/log/sa,
- /etc/sysstat is the configuration file,
- cron jobs are run via the *system* cronjob table.
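Because the collector keeps per-day files under /var/log/sa, historical data can be read back with sar’s -f option; for example (the saDD file is typically named after the day of the month):
|
sar -r -f /var/log/sa/sa15    # memory usage recorded on the 15th
|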
Tag your releases!
There were so many times when I wanted to use some package that had a version listed on Repology, but when I opened the project’s repo there were no tags.
Nowadays it is so easy to bump software versions according to semantic versioning and to tag them that it should go without saying that most OSS and proprietary projects could follow that workflow.
There are many projects, both language-specific and language-agnostic, to manage the release (and tagging) process. My favorite agnostic one is tbump and I highly recommend it.
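Even without any tooling, tagging a release is a one-liner, and with tbump the whole bump/commit/tag flow is a single command. A sketch (the version number is just an example):
|
# plain git
git tag --annotate v1.2.3 --message "Release 1.2.3"
git push --tags

# with tbump (after writing a tbump.toml for the project)
tbump 1.2.3
|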
|
(
repo="bazelbuild/bazel";
raw="https://raw.githubusercontent.com/${repo}";
url="${raw}/master/scripts/zsh_completion/_bazel";
site_functions="/usr/share/zsh/site-functions";
wget "${url}" -O ${site_functions}/_bazel
)
|
Afterwards, restart your ZSH shell session.
This also works if you have bazelisk installed, but bazel has to be symlinked to the bazelisk executable path.
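A sketch of that symlink, assuming bazelisk lives in /usr/local/bin (adjust the paths to where your bazelisk binary actually is):
|
ln -s /usr/local/bin/bazelisk /usr/local/bin/bazel
|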
I made a mistake when splitting my Portage make.conf file: having it as one file instead of a directory with many small files is a lot easier to maintain.
Portage allows users to split all of the files inside /etc/portage, such as make.conf, package.use, package.mask and others, into groups of files contained in directories of the same name. This is very helpful when using automation to add some wanted configuration, but in the case of make.conf it becomes a “form over function” issue.
I would also recommend keeping make.conf as simple as possible, without useless overrides and variable reassignments.
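If you already went the directory route and want to go back to a single file, a rough sketch (back up first; the fragment file names are whatever you created):
|
cd /etc/portage
# concatenate the fragments in lexical order into a single file
cat make.conf/* > make.conf.merged
# keep the old directory around, then swap in the single file
mv make.conf make.conf.d.backup
mv make.conf.merged make.conf
|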
Bonus: config
And of course, this is the current /etc/portage/make.conf on my main dev machine:
|
BINPKG_FORMAT="gpkg"
CCACHE_DIR="/var/cache/ccache"
EMERGE_WARNING_DELAY="0"
LC_MESSAGES="C"
PORTAGE_NICENESS="10"
PORTAGE_WORKDIR_MODE="0775"
PORTAGE_LOGDIR="${EPREFIX}/var/log/portage"
PORTAGE_ELOG_CLASSES="warn error log"
PORTAGE_ELOG_SYSTEM="save"
QUICKPKG_DEFAULT_OPTS="--include-config=y --umask=0003"
MAKEOPTS="--jobs=7 --load-average=6"
COMMON_FLAGS="
-march=znver1 -O2 -falign-functions=32
-fstack-clash-protection -fstack-protector-strong
-fdiagnostics-color=always -frecord-gcc-switches -pipe"
ADAFLAGS="${COMMON_FLAGS}"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
FCFLAGS="${COMMON_FLAGS}"
FFLAGS="${COMMON_FLAGS}"
CARGO_TERM_VERBOSE="false"
RUSTFLAGS="-C opt-level=3 -C debuginfo=0"
LDFLAGS="${LDFLAGS} -Wl,--defsym=__gentoo_check_ldflags__=0"
L10N="en de pl"
VIDEO_CARDS="amdgpu radeon radeonsi"
CPU_FLAGS_X86="
aes avx avx2 f16c fma3 mmx mmxext pclmul popcnt
sha sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3"
EMERGE_DEFAULT_OPTS="
--binpkg-changed-deps=y --binpkg-respect-use=y
--nospinner --keep-going=y
--jobs=3 --load-average=8"
GENTOO_MIRRORS="
https://mirror.leaseweb.com/gentoo/
https://gentoo.osuosl.org/
https://distfiles.gentoo.org/"
FEATURES="
userpriv usersandbox usersync
downgrade-backup unmerge-backup binpkg-multi-instance buildsyspkg
parallel-fetch parallel-install
ccache
-binpkg-logs -ebuild-locks"
USE="
custom-cflags custom-optimization firmware initramfs vaapi vulkan
-bindist -zeroconf"
|
The dilemma between #Gentoo and #NixOS is this:
The most important value of #Gentoo is configuration/customization, and reproducibility comes 2nd.
In the case of NixOS those values are reversed: the most important thing to NixOS is the ability to reproduce a given setup.
Both of those systems will suit users that value control over their systems very highly (unlike, say, Ubuntu, where the most important value is convenience), but the tie-breaker is which of those two values should come 1st.
Imported via Fedimpost from emacs.ch/@xgqt/112581104037953790