October 04, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

synths

Although I've never written about them, I've been interested in music synthesisers for ages. My colleagues know this. Whilst I've been off sick, they had a whip-round and bought me a voucher for Andertons, a UK-based music store, to cheer me up.

I'm absolutely floored by this generosity. And so, I'm now on a quest to buy a synthesiser! Although, strictly speaking, it won't be my first.

Alesis Micron on my desk, taunting me

I bought my first synth, an Alesis Micron, from a colleague at $oldjob, 16 years ago. For various reasons, I've struggled to engage with it, and it's mostly been gathering dust on my desk in all that time. (I might write more about the Micron in a later blog post.) "Bad Gear" sums it up better than I could.

So, I'm not truly buying my "first" synth, but for all intents and purposes the journey is much the same as if I were, and I thought it might be fun to write about it.

Goals

I want something which presents as many of its parameters as possible physically, as knobs, sliders, and the like. One reason I've failed to engage with the Micron (so far) is that it's at the other end of this spectrum, with hundreds of tunable parameters but only a small handful of knobs. To change parameters, you have to go diving into menus presented on a really old-fashioned, small LCD display. If you know what you are looking for, you can probably find it; but if you just want to experiment and play around, it's off-putting.

Secondly, I want something I can use away from a computer, as much as possible. Computers are my day-job, largely dominate my existing hobbies, and are unavoidable even in some of the others (like 3d printing). Most of the computers I interact with run Linux. And for all its strengths, audio management is not one of them. If I'm going to carve out some of my extremely limited leisure time to explore this stuff, I don't want to spend any of it (at least for now) fighting Pulseaudio/ALSA/Pipewire/JACK/OSS/whatever, or any of the other foibles that might crop up1.

Thirdly, I'd like something which, in its soul, is an instrument. You can get some amazing little synth boxes with a huge number of features in them. Something with a limited number of features but which really feels well put together would suit me better.

So… next time, I'll write about the 2-3 top candidates on my list. Can you guess what they might be?


  1. To give another example: the other day I sat down to try to use the Micron, which had its audio out wired into an external audio interface, in turn plugged into my laptop's Thunderbolt dock. For a while I couldn't figure out why I couldn't hear anything, until I realised the Thunderbolt dock was having "a moment" and not presenting its USB devices to the laptop. Hobby time window gone!

04 October, 2024 08:55PM

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes Freexian as our newest partner!

Freexian logo

We are excited to announce and welcome Freexian into Debian Partners.

Freexian specializes in Free Software with a particular focus on Debian GNU/Linux. Freexian can assist with consulting, training, technical support, packaging, or software development on projects involving the use or development of Free Software.

All of Freexian's employees and partners are well-known contributors in the Free Software community, a choice that is integral to Freexian's business model.

About the Debian Partners Program

The Debian Partners Program was created to recognize companies and organizations that help and provide continuous support to the project with services, finances, equipment, vendor support, and a slew of other technical and non-technical resources.

Partners provide critical assistance, help, and support which has advanced and continues to further our work in providing the 'Universal Operating System' to the world.

Thank you Freexian!

04 October, 2024 01:17AM by Donald Norwood

October 03, 2024

hackergotchi for Mike Gabriel

Mike Gabriel

Creating (a) new frontend(s) for Polis

After (quite) a summer break, here comes the 4th article of the 5-episode blog post series on Polis, written by Guido Berhörster, a member of staff at my company Fre(i)e Software GmbH.

Have fun with the read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series

  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis (this article)
  5. Current status and roadmap

4. Creating (a) new frontend(s) for Polis

Why a new frontend was needed...

Our initial experiences of working with Polis, the effort required to implement more invasive changes, and the desire to iterate on changes more rapidly ultimately led to the decision to create a new foundation for frontend development that would be independent of, but compatible with, the upstream project.

Our primary objective was thus not to develop another frontend but rather to make frontend development more flexible and to facilitate experimentation and rapid prototyping of different frontends by providing abstraction layers and building blocks.

This also implied developing a corresponding backend, since the Polis backend is tightly coupled to the frontend: it is neither intended to be used by third-party projects nor does it support cross-domain requests, due to the expectation of being embedded as an iframe on third-party websites.

The long-term plan for achieving our objectives is to provide three abstraction layers for building frontends:

  • a stable cross-domain HTTP API
  • a low-level JavaScript library for interacting with the HTTP API
  • a high-level library of WebComponents as a framework-neutral way of rapidly building frontends

The Particiapp Project

Under the umbrella of the Particiapp project we have so far developed two new components:

  • the Particiapi server which provides the HTTP API
  • the example frontend project which currently contains both the client library and an experimental example frontend built with it

Both the new frontend and backend are fully compatible with upstream Polis: they require an existing Polis installation and can be run alongside the upstream frontend. More specifically, the upstream administration frontend and common backend are still required to administrate conversations and send out notifications, and the statistics processing server is required for processing the voting results.

Particiapi server

For the backend the Python language and the Flask framework were chosen as a technological basis mainly due to developer mindshare, a large community and ecosystem and the smaller dependency chain and maintenance overhead compared to Node.js/npm. Instead of integrating specific identity providers we adopted the OpenID Connect standard as an abstraction layer for authentication which allows delegating authentication either to a self-hosted identity provider or a large number of existing external identity providers.

Particiapp Example Frontend

The experimental example frontend serves both as a test bed for the client library and as a tool for better understanding the needs of frontend designers. It also features a completely redesigned user interface and results visualization in line with our goals. Branded variants are currently used for evaluation and testing by the stakeholders.

In order to simplify evaluation, development, testing and deployment a Docker Compose configuration is made available which contains all necessary components for running Polis with our experimental example frontend. In addition, a development environment is provided which includes a preconfigured OpenID Connect identity provider (KeyCloak), SMTP-Server with web interface (MailDev), and a database frontend (PgAdmin). The new frontend can also be tested using our public demo server.
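
For a quick impression, bringing up such a stack typically takes only a couple of commands. The following sketch assumes the Compose file sits at the repository root; it is illustrative rather than taken from the project's documentation:

docker compose up -d      # start Polis, the Particiapi server and the example frontend
docker compose logs -f    # follow the logs while testing
docker compose down       # tear everything down again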

03 October, 2024 05:27AM by sunweaver

October 01, 2024

Ravi Dwivedi

State of the Map Conference in Kenya

Last month, I traveled to Kenya to attend a conference called State of the Map 2024 (“SotM” for short), which is an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.

University of Nairobi.

I have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled in the form immediately. I was elated when I was selected for the grant and couldn’t wait to attend. The grant had an upper limit of €1200 and covered food, accommodation, travel and miscellaneous expenses such as the visa fee.

Pre-travel tasks included obtaining Kenya’s eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate in the hotel.

Our group at the conference.

The venue, University of Nairobi Towers, was a tall building and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor of the building had a nice view of Nairobi’s skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.

View of Nairobi's skyline from the open area on the fifth floor.

A library in Mahatma Gandhi wing of the University of Nairobi.

The diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program.

I learned about the extent of OSM use through Pragya and Rabina’s talk; about the logistics of running the OSM Board in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini; about the OSM activities in Malawi from Priscilla Kapolo; and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal’s lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi/NCR mapping parties.

One of the conference halls where talks took place.

Outside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips, etc. I found out that the milk tea in Kenya (referred to as “white tea”) is usually not as strong as it is in India, so I switched to coffee (which is also called “white coffee” when taken with milk). The food wasn’t spicy, but I can’t complain :) Fruit juices served as a nice addition to lunch.

One of the lunch meals served during the conference.

At the end of the second day of the conference, there was a surprise in store for us — a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff’s Nursery Rhymes. That day, I was wearing an original Kenyan cricket jersey - one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn’t attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. After the party, Piyush went with us to our hotel and we played a few games of UNO.

Minibus which took us from the university to Bao Box restaurant.

This minibus in the picture gave a sense of a real matatu.

I am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Maasai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn’t keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa.

While most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don’t use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don’t control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company - this should apply to WhatsApp as well.

My suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants using XMPP already. Although it is a good idea to onboard people to free software like XMPP, we can also create a Matrix group, and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. While it’s not ideal - as Telegram’s servers are proprietary and centralized - it’s certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. On the other hand, self-hosted mailing lists for participants are also a good idea.

Finally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM and visit the beautiful country of Kenya. Stay tuned for the blog post on my Kenya trip.

Thanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.

01 October, 2024 02:05PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in September 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Pydantic

My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I’ve used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck.

Learning Rust is on my to-do list, but merely not knowing a language hasn’t stopped me before. So I learned how the Debian Rust team’s packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages I was eventually able to get current versions of both pydantic-core and pydantic into testing.

I’m looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie!

OpenSSH

I upgraded the Debian packaging to OpenSSH 9.9p1.

YubiHSM

I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions.

I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I’d previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests.

I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly.

Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed.

As usual, bookworm-backports is up to date with all these changes.

Python team

setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious as packaging helpers sometimes fell back to different ways of running test suites that didn’t quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid.
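
To illustrate the class of breakage, here is the shape of the change; the commands below are generic, not taken from any specific package:

# What many packaging helpers effectively fell back to running,
# removed in setuptools 72.0.0:
python3 setup.py test
# Typical replacements, depending on the package's test runner:
python3 -m pytest
python3 -m unittest discover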

As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits.

I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted.

buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick since they haven’t made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3.

Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway.

On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request.

I upgraded some embedded CSS files in nbconvert.

I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.

debmirror

The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).
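
For illustration, a debmirror invocation along these lines (host and paths are placeholders) will now also fetch Contents-* files for experimental when Contents downloads are requested:

debmirror /srv/mirror/debian \
    --host=deb.debian.org --root=debian --method=https \
    --dist=experimental --section=main --arch=amd64 \
    --getcontents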

01 October, 2024 01:19PM by Colin Watson

hackergotchi for Junichi Uekawa

Junichi Uekawa

Hello October.

Hello October. I've been trying to do the GPG signing from DebConf but my backlog of stuff is in my way.

01 October, 2024 01:03PM by Junichi Uekawa

hackergotchi for Guido Günther

Guido Günther

Free Software Activities September 2024

Another short status update of what happened on my side last month. Besides the usual amount of housekeeping, last month was a lot about getting old issues resolved by finishing some stale merge requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0 release.

phosh

  • Mark mobile-data quick setting as insensitive when modem is off (MR)
  • Document handler naming (MR)
  • Phosh 0.41.1 (MR)
  • Phosh 0.42~rc1 (MR)
  • Phosh 0.42.0 (MR)
  • Handle per app notification enable setting (MR) (a 3y old MR cleaned up and out of the way)
  • Use parent's icon if child doesn't have one (MR) (another 1y old MR moved out of draft status)
  • Fix Rust build and upcoming events .plugin file (MR)
  • Lint markdown (MR)
  • Sanitize versions as this otherwise breaks the libphosh-rs build (MR)
  • lockscreen: Swap deck and carousel to avoid triggering the plugins page when entering pin and let the lockscreen shrink to smaller sizes (MR) (two more year old usability issues out of the way)
  • Let bitfield values end up in the docs again (MR)
  • Don't focus incorrect app on launch (MR). This could happen with apps like calls that run a daemon (and needs more work for a clean solution).
  • Continue with wallpaper MR (MR) (still draft)
  • Brush up and land an old MR to avoid crashes on scale changes (MR). Another five month old MR out of the way.
  • API version the shared library (MR)
  • Ensure we send enough feedback when phone is blanked/locked (MR). This should be way easier now for apps as they don't need to do anything and we can avoid duplicate feedback sent from e.g. Chatty.
  • Fix possible use after free when activating notifications on the lock screen (MR)

phoc

  • Simplify layer-surface creation / destruction (MR)
  • Don't lose preedit when switching applications, opening menus, etc (MR). This fixes the case (e.g. with word completion in phosh-osk-stub enabled) where it looks to the user as if the last typed word would get lost when switching from a text editor to another app or when opening a menu
  • Ease focus debugging (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)
  • Mention examples in docs and check more things (MR)

phosh-mobile-settings

  • Release 0.42~rc1 (MR)
  • Release 0.42 (MR)
  • Update ci-fairy (MR)

libphosh-rs

  • Update Phosh-0.gir with above phosh fixes to unbreak the build (MR)
  • Rework to work with API versioned libphosh (MR)

phosh-osk-stub

  • Add paste button to ease pasting text (MR)
  • Add copy button (draft) (MR)
  • Fix word salad with presage completer when entering cursor navigation mode (and in some other cases) (MR 1). Presage has the best completion but was marked experimental due to that.
  • Submit preedit on changes to terminal and emoji layout (MR)
  • Enable hint based completion by default (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)

phosh-wallpapers

  • Add sound for cellbroadcast (MR)
  • Release 0.42.0 (MR)

meta-phosh

  • Weekly image builds of nightly packages are now built in CI and uploaded.
  • Handle Fixes: tag in git commit messages as well (MR)
  • Let release prep handle non-RC versions as well (MR)
  • Add common markdown linter job (MR)

Debian

  • Update wlr-randr (MR)
  • Upload libqmi development snapshot (MR) (helps eSIM and CellBroadcast)
  • Update phosh to not crash with GSD from GNOME 47 (MR)
  • Fix systemd unit path in calls (MR)
  • Package wikietractor (MR)

ModemManager

  • More work on Cell Broadcast so we can finally undraft (MR)

Calls

  • Check consistency when building releases (MR)
  • Object life cycle fixes (MR)
  • Use DBus activation (MR). This ensures it spawns quickly rather than phosh's splash screen timing out.

bluez

  • Add user unit for mpris proxy so it works out of the box (Patch) and one can skip e.g. songs in a car's media unit

gnome-text-editor

  • Wrap info-bar more (MR) to fit small screens
  • Forward metainfo/desktop file updates from Mobian (MR) (patch originally by Arnaud Ferraris)

feedbackd

  • Add udev rule to support haptic on OnePlus Fajita / Enchilada (non-mainline driver) (MR)
  • Support alert-slider on OnePlus 6/6T (MR). Based on a script by "isyourbrain foss".
  • Release 0.5.0 (MR)
  • Improve spec a bit regarding notification events (MR)

Chatty

  • Don't send feedback for notifications (MR). The notification daemon does this already.
  • Add event for cellbroadcast messages (MR)
  • Switch to DBus activation (MR). This ensures the compositor sees the activation token and will be useful for unified push.
  • Don't let scroll_down button take focus (MR). This prevents the OSK from folding when the text view is focused and one scrolls to the bottom.
  • Use revealer to show/hide scroll_down button (MR) - just to make the visual more appealing
  • Unbreak message display (MR)
  • Unbreak application icon (MR)
  • Drop special preedit handling (MR).

libcall-ui

  • Drop margin so we can fit on smaller screens (MR). This helps phosh on lower effective resolutions.
  • Backport margin patch (MR)

glib

  • Fix doc formatting for g_input_stream_read_all* (MR)

wlr-protocols

  • Add toplevel responsiveness state (MR) so phosh can inform about unresponsive apps

git-buildpackage

iio-sensor-proxy

  • Unbreak and modernize CI a bit (MR). A passing CI is so much more motivating for contributors and reviewers.

Fotema

  • Fix app-id and hence the icon shown in Phosh's overview (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

01 October, 2024 11:43AM

September 30, 2024

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (July and August 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carlos Henrique Lima Melara (charles)
  • Joenio Marques da Costa (joenio)
  • Blair Noctis (ncts)

The following contributors were added as Debian Maintainers in the last two months:

  • Taihsiang Ho

Congratulations!

30 September, 2024 02:30PM by Jean-Pierre Giraud

Russell Coker

September 29, 2024

hackergotchi for Vasudev Kamath

Vasudev Kamath

Signing the systemd-boot on Upgrade Using Dpkg Triggers

In my previous post on enabling SecureBoot, I mentioned that one pending improvement was signing the systemd-boot EFI binary with my keys on every upgrade. In this post, we'll explore the implementation of this process using dpkg triggers.

For an excellent introduction to dpkg triggers, refer to this archived blog post. The source code mentioned in that post can be downloaded from the alioth archive.

From /usr/share/doc/dpkg/spec/triggers.txt, triggers are described as follows:

A dpkg trigger is a facility that allows events caused by one package but of interest to another package to be recorded and aggregated, and processed later by the interested package. This feature simplifies various registration and system-update tasks and reduces duplication of processing.

To implement this, we create a custom package with a single script that signs the systemd-boot EFI binary using our key. The script is as simple as:

#!/bin/bash

set -e

echo "Signing the new systemd-bootx64.efi"
sbsign --key /etc/secureboot/db.key --cert /etc/secureboot/db.crt \
       /usr/lib/systemd/boot/efi/systemd-bootx64.efi

echo "Invoking bootctl install to copy stuff"
bootctl install

Invoking bootctl install is optional if we have enabled systemd-boot-update.service, which will update the signed bootloader on the next boot.
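
Assuming a standard systemd installation, enabling that service is a one-liner:

sudo systemctl enable systemd-boot-update.service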

We need to have a triggers file under the debian/ folder of the package, which declares its interest in modifications to the path /usr/lib/systemd/boot/efi/systemd-bootx64.efi. The trigger file looks like this:

# trigger 1 interest on systemd-bootx64.efi
interest-noawait /usr/lib/systemd/boot/efi/systemd-bootx64.efi

You can read about various directives and their meanings that can be used in the triggers file in the deb-triggers man page.
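
One detail worth spelling out: when the trigger fires, dpkg re-runs the interested package's postinst with triggered as its first argument, so the package must dispatch on that. Below is a minimal sketch of such a postinst; the script path is a hypothetical name for the signing script shown above:

#!/bin/sh
set -e

case "$1" in
    triggered)
        # "$2" holds the space-separated list of activated triggers;
        # here it will contain the watched EFI binary path.
        /usr/local/sbin/sign-systemd-boot
        ;;
esac

exit 0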

Once we build and install the package, this request is added to /var/lib/dpkg/triggers/File. See the screenshot below after installation of our package:

installed trigger

To test the functionality, I performed a re-installation of the systemd-boot-efi package, which provides the EFI binary for systemd-boot, using the following command:

sudo apt install --reinstall systemd-boot-efi

During installation, you can see the debug message being printed in the screenshot below:

systemd-boot-signer triggered

To test the systemd-boot-update.service, I commented out the bootctl install line from the above script, performed a reinstallation, and restarted the systemd-boot-update.service. Checking the log, I saw the following:

Sep 29 13:42:51 chamunda systemd[1]: Stopping systemd-boot-update.service - Automatic Boot Loader Update...
Sep 29 13:42:51 chamunda systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/systemd/systemd-bootx64.efi", same boot loader version in place already.
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/BOOT/BOOTX64.EFI", same boot loader version in place already.
Sep 29 13:42:51 chamunda bootctl[1801516]: Skipping "/efi/EFI/BOOT/BOOTX64.EFI", same boot loader version in place already.
Sep 29 13:42:51 chamunda systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
Sep 29 13:43:37 chamunda systemd[1]: systemd-boot-update.service: Deactivated successfully.
Sep 29 13:43:37 chamunda systemd[1]: Stopped systemd-boot-update.service - Automatic Boot Loader Update.
Sep 29 13:43:37 chamunda systemd[1]: Stopping systemd-boot-update.service - Automatic Boot Loader Update...

Indeed, the service attempted to copy the bootloader but did not do so because there was no actual update to the binary; it was just a reinstallation trigger.
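
For reference, the whole test sequence boils down to the following commands, journalctl being one standard way to read the unit's log:

sudo apt install --reinstall systemd-boot-efi
sudo systemctl restart systemd-boot-update.service
journalctl -u systemd-boot-update.service -n 20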

The complete code for this package can be found here.

With this post, the entire series on using UKI and SecureBoot with Debian comes to an end. Happy hacking!

29 September, 2024 07:38AM by copyninja

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RApiSerialize 0.1.4 on CRAN: Added C Namespace

A new minor release 0.1.4 of RApiSerialize arrived on CRAN today. The RApiSerialize package is used by both my RcppRedis as well as by Travers' excellent qs package. This release adds an optional C namespace, available when the API header file is included in a C source file. And as one often does, the release also brings a few small updates to different aspects of the packaging.

Changes in version 0.1.4 (2024-09-28)

  • Add C namespace in API header (Dirk in #9 closing #8)

  • Several packaging updates: switched to Authors@R, README.md badge updates, added .editorconfig and cleanup

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. at the GitHub repository.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 September, 2024 12:58AM

Reproducible Builds

Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler.

Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.



Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on?

Kees Cook: I’m a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel’s protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible; people who understand the ARM architecture, people who understand the memory management subsystem, people who understand how to make the kernel less buggy.


Vagrant: Could you describe the path that lead you to working on this sort of thing?

Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do, and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo’s security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian’s package building process at the time was sort of a chaotic free-for-all, as there wasn’t a centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top-down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work, as they were based on ebuild out of Gentoo, and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an “upstream first” requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget I tried to get folks paid (or hired) to work on these areas when it wasn’t already their job.


Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler?

Kees: I’ve found the term “undefined behavior” to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating “unexpected behavior” or “ambiguous language constructs”. At the end of the day ambiguity leads to bugs, and bugs lead to exploitable security flaws. I’ve been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identify new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints.

None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades, but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest hanging fruit. One clear example in recent years was the elimination of C’s “implicit fall-through” in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn’t present. But this is ambiguous: is the code meant to fall through, or did the author just forget a break statement? By defining the “[[fallthrough]]” statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we found that 1 in 10 added “[[fallthrough]]” statements were actually missing break statements. This was an extraordinarily common bug!

So getting rid of that ambiguity is where we have been. Another area I’ve been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can’t say “because we installed locks on the doors, 20% fewer break-ins have happened.” Much of our signal is always secondary or retrospective, which is frustrating: “This class of flaw was used X much over the last decade, and if we have eliminated that class of flaw and will never see it again, what is the impact?” Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.


Vagrant: So it is hard to identify how effective this is… how bad would it be if people just gave up?

Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else is doing for their respective software ecosystems, has shown that the price of functional exploits in the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution.

If those were cheap then obviously we are not doing something right, and it becomes clear that it’s trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can’t accept. The safety of my grandparents shouldn’t be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices otherwise why use them at all?


Vagrant: What has been your biggest success in recent years?

Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people’s work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board even though there were a lot of fears about forking the C language.

The worry was that developers would come to depend on zero-initialized stack variables, but this hasn’t been the case because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time but now you can count on the contents of your stack at run-time and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.


Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work?

Kees: One of the questions I frequently get is, “What version of the Linux kernel has feature $foo?” If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc.: all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, being able to trust that the binaries you are running are going to behave the same for the same build environment is critical for sane testing.


Vagrant: Have you used diffoscope?

Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source level changes to satisfy some new compiler requirement but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening, what is intended in the cases where the old ambiguity does actually match the new unambiguous description of what is intended. The binary shouldn’t change. We have used diffoscope to compare the before and after binaries to confirm that “yep, there is no change in binary”.


Vagrant: You cannot just use checksums for that?

Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there’s flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!


Vagrant: Where has reproducible builds affected you?

Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor and just being able to ask the question “is my build environment running the expected code?” and to be able to compare the output generated from one install that never had a vulnerable XZ and one that did have a vulnerable XZ and compare the results of what you get. That was important for kernel builds because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn’t finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.


Vagrant: What do you want to see for the near or distant future in security work?

Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding to maintain a reproducible kernel build CI. There have been places where there are certain architecture configurations or certain build configuration where we lose reproducibility and right now we have sort of a standard open source development feedback loop where those things get fixed but the time in between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.


Vagrant: Well, thanks for that! Any last closing thoughts?

Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.


Vagrant: Likewise for your work!




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing [email protected].

29 September, 2024 12:00AM

September 28, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Whisper (pipewire tool)

It's time to mint a new blog tag…

I want to write to pour praise on some software I recently discovered.

I'm not up to speed on PipeWire—the latest piece of Linux plumbing related to audio—nor how it relates to the other bits (PulseAudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think.

I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: A little tool called Whisper, which is designed to let you listen to a microphone through your speakers.

Whisper's UI. Screenshot from upstream.

Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error.
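
For comparison, PipeWire itself ships a small command-line tool in the same spirit: pw-loopback creates a loopback between a capture stream and a playback stream, and run with no arguments it links the default source to the default sink. A rough sketch (Whisper's friendly device picker is exactly what this lacks):

pw-loopback   # listen to the default microphone through the default speakers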

More stuff like this please!

28 September, 2024 03:22PM

Dave Hibberd

EuroBSDCon 2024 Report

This year I attended EuroBSDCon 2024 in Dublin. I always appreciate an excuse to head over to Ireland, and this seemed like a great chance to spend some time in Dublin and learn new things.

Due to constraints on my time I didn’t go to the 2 day devsummit that precedes the conference, only the main event itself.

The Event

EuroBSDCon was attended by about 200-250 people, the hardcore of the BSD community! Attendees came from all over, I met Canadians, USAians, Germanians, Belgians and Irelandians amongst other nationalities!

The event was at UCD Dublin, which is a gorgeous university campus about 10km south of Dublin proper in Stillorgan. The speaker hotel was a 20 minute walk (at my ~9min/km pace) from the venue, or a quick bus journey. It was a pleasant walk, through the leafy campus and then along some pretty broad pavements, albeit beside a dual carriageway. The cycle infrastructure was pretty excellent too, but I sadly was unable to lease a city bike and make my way around on 2 wheels - Dublinbikes don’t extend that far out of the city.

Lunch each day was Irish-themed food - Saturday was beef stew (a Frenchman asked me what it was called - his only equivalent words were “Beef Bourguignon”) and Sunday was Bangers & Mash! The kitchen struggled a bit - food was brought out in bowls in waves, which created an artificial scarcity and clearly left some anxious that they weren’t going to be fed!

Everyone I met was friendly from the day I arrived, and that set me very much at ease and made the event much more enjoyable - things are better shared with others. Big shout out to dch and Blake Willis for spending a lot of time talking to me over the weekend!

Talks I Attended

Keynote: Evidence based Policy formation in the EU

Tom Smyth

This talk given by Tom Smyth was an interesting look into his work with EU Policymakers in ensuring fair competition for his small, Irish ISP. It was an enlightening look into the workings of the EU and the various bodies that set, and manage policy. It truly is a complicated beast, but the feeling I left with was that there are people all through the organisation who are desperate to do the right thing for EU citizens at all costs.

Sadly none of it is directly applicable to me living in the UK, but I still get to have a say on policy and vote in polls as an Irish citizen abroad.

10(ish) years of FreeBSD/arm64

Andrew Turner

I have been a fan of ARM platforms for a long, long time. I had an early ARM Chromebook and have been equal measures excited and frustrated by the raspberry pi since first contact. I tend to find other ARM people at events and this was no exception!

It was an interesting view into one person’s dedication to making arm64 a platform for FreeBSD, starting out with no documentation or hardware and ending with a first-class platform. It’s interesting to see the roadmap and upcoming things too, which makes me hopeful for the future of arm64 in various OSes!

1-800-RC(8)-HELP: Dial Into FreeBSD Service Scripts Mastery!

Mateusz Piotrowski

rc scripts and startup applications scare me a bit. I’m better at systemd units than sysvinit scripts, but that isn’t really a transferable skill!

This was a deep dive into lots of the functionality that FreeBSD’s RC offers, and highlighted things that I only thought were limited to Linux’s systemd. I am much more aware of what it’s capable of now, but I’m still scared to take it on!

Afterwards I had a great chat in the hallway with Mateusz about our OS’s different approaches to this problem and was impressed with the pragmatic view he had on startup, systemd, rc and the future!

Package management without borders. Using Ravenports on multiple BSDs

Michael Reim

Ports on the BSDs interest me, but I hadn’t realised that outside of each major BSD’s collection there were other, cross-platform ports collections on offer. Ravenports is one of these under development, and it was good to understand the hows, whys and what’s-happenings of the system. Plus, with my hibbian obsession with building other people’s software as my own packages, it’s interesting to see how others are doing it!

Building a Modern Packet Radio Network using Open Software

me

I spoke for 45 minutes to share my passion and frustration for amateur radio, packet radio, the law, the technology and what we’re doing in the UK Packet network.

This was a lot of fun - it felt like I had a busy room, lots of people interested in the stupid stuff I do with technology and I had lots of conversations after the fact about radio, telecoms, networking and at one point was cornered by what I describe as the “Erlang Mafia” to talk about how they could help!

Hacking - 30 years ago

Walter Belgers

This unrecorded talk looked at the history of the Dutch hacker scene, and a young group of hackers explorations of the early internet before modern security was a thing.

It was exciting, enrapturing, well presented and a great story of a well spent youth in front of computers.

Social

By 1730 I was pretty drained so I took myself back to the hotel missing the last talks, had some down time, and got the DART train to the social event at Brewdog.

This involved about an hour’s walk and some train time, and that was a nice time to reset my head and just watch the world. The train I was on had a particularly interesting ‘feature’ where when the motors were not loaded (slowdown or coasting) the lights slowly flickered dim-bright-dim. I don’t know if this is across the fleet or just this one, but it was fun to pontificate as I looked out the window at South Dublin passing by.

The social was good - a few beer tokens (cider in my case, trying to avoid beer-driven hangovers still), some pleasant junk food and plenty of good company to talk to, lots of people wanting to talk about radio and packet to me!

Brewdog struggled a bit - both at the bar (a linear queue formed despite the staff preferring the crowd-around method of queueing) and with the buffet: food appeared in somewhat disjointed waves, meaning that people loitered around the food tables and cleared the plates of wings, sliders, fries, onion rings and mac & cheese as they appeared 4-5 plates at a time. Perhaps a few hundred hungry bodies was a bit too much for them to feed at once.

They had shuffleboard that was played all night by various groups!

I caught the last bus home, which was relatively painless!

Is our software sustainable?

Kent Inge Fagerland Simonsen

This was an interesting look into reducing the footprint of software to make it a net benefit. Lots of examples of how little changes can barrel up to big, gigawatt-hour changes when aggregated over the entire install base of Android or iOS!

A Packet’s Journey Through the OpenBSD Network Stack

Alexander Bluhm

This was an analysis of what happens at each stage of networking in OpenBSD and was pretty interesting to see. Lots of it was out of my depth, but it’s cool to get an explanation and appreciation for various elements of how software handles each packet that arrives, and the differences between the IPv4 and IPv6 stacks!

FreeBSD at 30 Years: Its Secrets to Success

Kirk McKusick

This was a great statistical breakdown of FreeBSD since inception, including top committers, why certain parts of the system and community work so well and what has given it staying power compared to some projects on the internet that peter out after just a few years! Kirk’s excitement and passion for the project really shone through, and I want to read his similarly titled article in the FreeBSD Journal now!

Building an open native FreeBSD CI system from scratch with lua, C, jails & zfs

Dave Cottlehuber

Dave spoke pretty excitedly about his work on a CI system using tools that FreeBSD ships with, and introduced me to the integration of C and Lua which I wasn’t fully aware of before. Or I was, and my brain forgot it!

With my interest in software build this year, it was quite a timely look at how others are thinking of doing things (I am doing similar stuff with zfs!). I look forward to playing with it when it finally is released to the Real World!

Building an Appliance

Allan Jude

This was an interesting look into the tools that FreeBSD provides which can be used to make immutable, appliance OSes without too much overhead. Fail safe upgrades and boots with ZFS, running approved code with secure boot, factory resetting and more were discussed!

I have had thoughts around this in the recent past, so it was good to have some ideas validated, some challenged and gave me food for thought.

Experience as a speaker

I really enjoyed being a speaker at the event! I’ve spoken at other things before, but this really was a cut above. The event having money to provide me a hotel was a really welcome surprise, and receiving a gorgeous scarf as a speaker gift was a lovely touch (it has already been worn with the change of temperature here in Scotland this week!).

I would definitely consider returning, either as an attendee or as a speaker. The community of attendees were pragmatic, interesting, engaging and welcoming, the organising committee were spot-on in their work making it happen and the whole event, while turning my brain to mush with all the information, was really enjoyable and I left energised and excited by things instead of ground down and tired.

28 September, 2024 12:24PM

September 27, 2024

Reproducible Builds (diffoscope)

diffoscope 278 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 278. This version includes the following changes:

[ Chris Lamb ]
* Temporarily remove procyon-decompiler from Build-Depends as it was removed
  from testing (#1057532). (Closes: #1082636)
* Add a helpful contextual message to the output if comparing Debian .orig
  tarballs within .dsc files without the ability to "fuzzy-match" away the
  leading directory. (Closes: reproducible-builds/diffoscope#386)
* Correctly invert "X% similar" value and do not emit "100% similar".
  (Closes: reproducible-builds/diffoscope#391)
* Update copyright years.
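
As a quick illustration of the .dsc-related change, comparing two Debian source packages directly will surface the new contextual message where the fuzzy-matching limitation applies (the file names below are placeholders):

diffoscope foo_1.0-1.dsc foo_1.0-2.dsc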

You can find out more by visiting the project homepage.

27 September, 2024 12:00AM

September 26, 2024

hackergotchi for Vasudev Kamath

Vasudev Kamath

Disabling Lockdown Mode with Secure Boot on Distro Kernel

In my previous post, I mentioned that Lockdown mode is activated when Secure Boot is enabled. One way to override this was to use a self-compiled upstream kernel. However, sometimes we might want to use the distribution kernel itself. This post explains how to disable lockdown mode while keeping Secure Boot enabled with a distribution kernel.

Understanding Secure Boot Detection

To begin, we need to understand how the kernel detects if Secure Boot is enabled. This is done by the efi_get_secureboot function, as shown in the image below:

Secure Boot status check

Disabling Kernel Lockdown

The kernel code uses the value of MokSBStateRT to identify the Secure Boot state, assuming that Secure Boot can only be enabled via shim. This assumption holds true when using the Microsoft certificate for signature validation (as Microsoft currently only signs shim). However, if we're using our own keys, we don't need shim and can sign the bootloader ourselves. In this case, the Secure Boot state of the system doesn't need to be tied to the MokSBStateRT variable.

To disable kernel lockdown, we need to set the UEFI runtime variable MokSBStateRT. This essentially tricks the kernel into thinking Secure Boot is disabled when it's actually enabled. This is achieved with a small UEFI driver that sets the variable during boot.

The code for this was written by an anonymous colleague who also assisted me with various configuration guidance for setting up UKI and Secure Boot on my system. The code is available here.
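To illustrate the idea, here is a minimal gnu-efi sketch of my own (not the linked code): it sets MokSBStateRT to 1 under shim's vendor GUID as a volatile variable. The volatility matters, since the kernel only honours the variable when it lacks the non-volatile attribute, which proves it was set during boot rather than from the running OS.

#include <efi.h>
#include <efilib.h>

/* Assumption: shim's vendor GUID (SHIM_LOCK_GUID),
 * 605dab50-e046-4300-abb6-3dd810dd8b23, which owns the MokSBState* variables. */
static EFI_GUID ShimLockGuid =
    { 0x605dab50, 0xe046, 0x4300,
      { 0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, 0x8b, 0x23 } };

EFI_STATUS EFIAPI
efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    UINT8 MokSBState = 1;  /* 1 == Secure Boot validation disabled */

    InitializeLib(ImageHandle, SystemTable);

    /* Deliberately volatile (no EFI_VARIABLE_NON_VOLATILE): the kernel
     * ignores MokSBStateRT if it could have been written from the OS. */
    return uefi_call_wrapper(RT->SetVariable, 5,
                             L"MokSBStateRT", &ShimLockGuid,
                             EFI_VARIABLE_BOOTSERVICE_ACCESS |
                             EFI_VARIABLE_RUNTIME_ACCESS,
                             sizeof(MokSBState), &MokSBState);
}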

Implementation

Detailed instructions for compiling and deploying the code are provided in the repository, so I won't repeat them here.

Results

I've tested this method with the default distribution kernel on my Debian unstable system, and it successfully disables lockdown while maintaining Secure Boot integrity. See the screenshot below for confirmation:

Distribution kernel lockdown disabled
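The same check can also be made from a shell; with lockdown disabled, the kernel reports the active mode in brackets:

cat /sys/kernel/security/lockdown
[none] integrity confidentiality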

26 September, 2024 04:43PM by copyninja

Melissa Wen

Reflections on 2024 Linux Display Next Hackfest

Hey everyone!

The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year’s event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year’s edition and this blog post summarizes the experience and its fruits.

One of the highlights of this year’s hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remotely). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here.

The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights

The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.

Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.

  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.

Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.

  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the run-time schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.

Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.

  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.

  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up

The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists.

With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR

Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the layers of the Linux Display stack.

Color/HDR (Kernel-Level)

Harry Wentland (AMD) led this session.

Here, kernel developers shared the color management pipelines of AMD, Intel, and NVidia. We had diagrams and explanations from hardware vendors' developers that discussed differences, constraints, and paths to fit them into the generic KMS color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware.

Upstream work related to this session:

Color/HDR (Compositor-Level)

Sebastian Wick (RedHat) led this session.

It started with Sebastian's presentation covering Wayland color protocols and compositor implementation, followed by an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, plus discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria.

Upstream work related to this session:

Color/HDR (Use Cases and Testing)

Christopher Cameron (Google) and Melissa Wen (Igalia) led this session.

In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections on real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and how to somehow "emulate" HDR.

We also discussed the usage of a kernel background color property.

Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency

Mario Limonciello (AMD) led this session.

Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power saving policy. This work has already been merged in the kernel, and the userspace support is under development.

Upstream work related to this session:

Strategy for video and gaming use-cases

Leo Li (AMD) led this session.

Miguel Casas (Google) started this session with a presentation of Overlays in Chrome/OS Video, explaining the main goal of power saving by switching off GPU for accelerated compositing and the challenges of different colorspace/HDR for video on Linux.

Then Leo Li presented different strategies for video and gaming, and we discussed userspace's need for more detailed feedback mechanisms to understand failures when offloading. Creating a debugFS interface also came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API

Xaver Hugl (KDE/BlueSystems) led this session.

Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads, and if a modeset takes too long, the thread will be killed, and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them.

Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties. For example, color management updates take much longer.

In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. The latter may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more… and beer

We also had sessions to discuss a new KMS API to mitigate headaches with VRR and Frame Limit, such as different brightness levels at different refresh rates, abrupt changes of refresh rate, low frame rate compensation (LFC), and precise timing in VRR.

On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people.

The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamically switching the display signal mux between discrete and integrated GPUs.

In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display "wish list", which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending, and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported.

We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, addressing issues such as flickering when color primaries don’t match, etc.

After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrela Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions

Participants provided valuable feedback on the hackfest, including suggestions for future improvements.

  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest

We can't help but thank the 40 participants, who engaged in person or virtually in relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda.

A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topics and guiding discussions. My acknowledgment to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face.

Finally, thanks to the virtual participants who couldn't make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable input even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo (Red Hat), Jiri Koten (RedHat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD).

We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

26 September, 2024 01:50PM

September 25, 2024

Russell Coker

The PiKVM

Hardware

I have just set up a PiKVM. Here's the Amazon link for the KVM hardware (case, Pi hat, etc.) and here's an Amazon link for a Pi4 to match.

The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It's really convenient being able to change the playback speed from low speeds (like 1/4 of the original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren't covered in the videos; the device I received had some improvements that made it easier to assemble which weren't in the video.

When you buy the device and Pi you also need to get an SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse (it MUST NOT be USB-C to USB-C!). When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn't work for reasons I don't understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it.

The system has a bright OLED display to show the IP address and some other information which is very handy.

The hardware is easy enough for a 12yo to set up. The construction of the parts is solid and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn't test. I definitely recommend this.

Software

This is the download link for the RaspberryPi images for the PiKVM [3]. The “v3” image matches the hardware from the Amazon link I provided.

The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it’s using and ssh to it to change the password (it is configured to allow ssh as root with password authentication).

If you get the kit to assemble it (as opposed to buying a completed unit already assembled) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can’t get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.

rw
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown
systemctl enable --now kvmd-fan
ro

The default webadmin username/password is admin/admin.

To change the passwords run the following commands:

rw
kvmd-htpasswd set admin
passwd root
ro

It is configured to have the root filesystem mounted read-only which is something I thought had gone out of fashion decades ago. I don’t think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.

By default it uses a self-signed SSL certificate, so with a Chrome-based browser you get an error when you connect where you have to select "advanced" and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get an SSL certificate for a name in an internal view of your DNS to make it work normally with SSL.
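If you go that route, the request would look something like this (the hostname is hypothetical; DNS-01 validation means publishing a TXT record in your public DNS, either manually or via one of the certbot DNS plugins):

certbot certonly --manual --preferred-challenges dns -d pikvm.example.org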

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely “lsusb” on the machine being managed only reports a single USB device entry for it which covers both keyboard and mouse.

Managing Computers

For a tower PC disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM, then it should all just work.

For a laptop connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation, then Linux will follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn’t appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can’t be simulated externally. So for using this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.

It is possible to configure the Linux kernel to mirror the display to external HDMI and an internal laptop screen. But this doesn't seem useful to me as the use cases for this device don't require it. If you are using it for a server that doesn't have iDRAC/ILO or other management hardware there will be no other "monitor" and all the output will go through the only connected HDMI device. My main use for it in the near future will be supporting remote laptops when Linux has a problem on boot, as an easier option than talking someone through Linux commands; for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors is used, but not so great if your requirement is "display a login screen on anything that's available". Is there an xdm-type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
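On Debian the copy would look something like this (assumption: the gdm3 user is Debian-gdm with home directory /var/lib/gdm3; verify with getent passwd if unsure):

sudo install -o Debian-gdm -g Debian-gdm -D ~/.config/monitors.xml \
    /var/lib/gdm3/.config/monitors.xml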

Conclusion

The PiKVM is a well engineered and designed product that does what’s expected at a low price. There are lots of minor issues with using it which aren’t the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.

The feature for connecting to an ATX PSU was unexpected and could be really handy for some people, it’s not something I have an immediate use for but is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package giving the user choices about how they use it, many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive.

The web UI wasn’t as user friendly as it might have been, but it’s a lot better than iDRAC so I don’t have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try and emulate the Fn options and keys for volume control on systems that support it.

25 September, 2024 11:01PM by etbe

September 24, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.4 on CRAN: Updated Again

A new release 0.0.4 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James, which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release updates the quick fix in release 0.0.3 from a good week ago. James took a good look and properly disambiguated the statement that led clang to complain, so we are back to compiling as C++17 under all compilers, which makes for a slightly wider reach.

The NEWS file for this release follows.

Changes in version 0.0.4 (2024-09-24)

  • The package now properly addresses a clang warning on empty variadic macro arguments and is back to C++17 (James in #10)

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 September, 2024 05:15PM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Note to Self: Enabling Secure Boot with UKI on Debian

Note

This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.

In this guide, we'll implement Secure Boot by taking full control of the device, removing preinstalled keys, and installing our own. For a comprehensive overview of the benefits and process, refer to this excellent post from rodsbooks.

Key Components

To implement Secure Boot, we need three essential keys:

  1. Platform Key (PK): The top-level key in Secure Boot, typically provided by the motherboard manufacturer. We'll replace the vendor-supplied PK with our own for complete control.
  2. Key Exchange Key (KEK): Used to sign updates for the Signatures Database and Forbidden Signatures Database.
  3. Database Key (DB): Used to sign or verify binaries (bootloaders, boot managers, shells, drivers, etc.).

There's also the Forbidden Signatures Database key (dbx), which is the opposite of the DB key. We won't be generating this key in this guide.

Preparing for Key Enrollment

Before enrolling our keys, we need to put the device in Secure Boot Setup Mode. Verify the status using the bootctl status command. You should see output similar to the following image:

UEFI Setup mode
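If you prefer a shell check to the screenshot, the relevant line of bootctl status should read something like this while in setup mode (the exact wording can vary between systemd versions):

bootctl status
...
   Secure Boot: disabled (setup)
...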

Generating Keys

Follow these instructions from the Arch Wiki to generate the keys manually. You'll need the efitools and openssl packages. I recommend using rsa:2048 as the key size for better compatibility with older firmware.
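Condensed, the procedure for each key looks roughly like this (shown for the PK; the subject name is a placeholder, and the KEK and DB keys are produced the same way, each signed by the key one level above):

# one owner GUID, reused for all three keys
GUID="$(uuidgen)"

# self-signed Platform Key
openssl req -newkey rsa:2048 -nodes -keyout PK.key -new -x509 -sha256 \
    -days 3650 -subj "/CN=My Platform Key/" -out PK.crt

# convert to an EFI signature list, then sign it into an .auth update
cert-to-efi-sig-list -g "$GUID" PK.crt PK.esl
sign-efi-sig-list -g "$GUID" -k PK.key -c PK.crt PK PK.esl PK.auth

# KEK.esl gets signed with PK.key/PK.crt; db.esl with KEK.key/KEK.crt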

After generating the keys, copy all .auth files to the /efi/loader/keys/<hostname>/ folder. For example:

 sudo ls /efi/loader/keys/chamunda
db.auth  KEK.auth  PK.auth

Signing the Bootloader

Sign the systemd-boot bootloader with your new keys:

sbsign --key <path-to db.key> --cert <path-to db.crt> \
   /usr/lib/systemd/boot/efi/systemd-bootx64.efi

Install the signed bootloader using bootctl install. The output should resemble this:

bootctl install

Note

If you encounter warnings about mount options, update your fstab with the `umask=0077` option for the EFI partition.

Verify the signature using sbsign --verify:

sbsign verify

Configuring UKI for Secure Boot

Update the /etc/kernel/uki.conf file with your key paths:

SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt

Signing the UKI Image

On Debian, use dpkg-reconfigure to sign the UKI image for each kernel:

sudo dpkg-reconfigure linux-image-$(uname -r)
# Repeat for other kernel versions if necessary

You should see output similar to this:

sudo dpkg-reconfigure linux-image-$(uname -r)
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
  sbverify --list /boot/vmlinuz-6.10.9-amd64
  sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
  sbverify --list /boot/vmlinuz-6.10.9-amd64
  sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi

Enrolling Keys in Firmware

Use systemd-boot to enroll your keys:

systemctl reboot --boot-loader-menu=0

Select the enroll option with your hostname in the systemd-boot menu.

After key enrollment, the system will reboot into the newly signed kernel. Verify with bootctl:

uefi enabled

Dealing with Lockdown Mode

Secure Boot enables lockdown mode on distro-shipped kernels, which restricts certain features like kprobes/BPF and DKMS drivers. To avoid this, consider compiling the upstream kernel directly, which doesn't enable lockdown mode by default.

As Linus Torvalds has stated, "there is no reason to tie Secure Boot to lockdown LSM." You can read more about Torvalds' opinion on UEFI tied with lockdown.

Next Steps

One thing that remains is automating the signing of systemd-boot on upgrade, which is currently a manual process. I'm exploring dpkg triggers for achieving this, and if I succeed, I will write a new post with details.

Acknowledgments

Special thanks to my anonymous colleague who provided invaluable assistance throughout this process.

24 September, 2024 06:00AM by copyninja

September 23, 2024

hackergotchi for Jonathan McDowell

Jonathan McDowell

The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.

I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you're having to provide power, heating, internet, security/front desk staff etc, you want to make sure you're getting your money's worth. There's no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems about ensuring there are enough desks for everyone who might turn up on a certain day, and you've ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn't true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific "teambuilding" style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it's much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I'm just not in the office regularly enough. You might not care about this ("I just need to put my head down and code, not talk to people"), but don't discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn't ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.

It’s easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn't be true, but are. Remote working has increased the ability of people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That's hard though: there are (rightly) a bunch of rights workers have (I'm writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

23 September, 2024 05:31PM

September 22, 2024

hackergotchi for Adnan Hodzic

Adnan Hodzic

Effortless Linux backups: Power of OpenZFS Snapshots on Ubuntu 24.04

Linux snapshots? Back in the day (mid 2000's) ReiserFS was my go-to Linux filesystem; it was fast & reliable. But then after its creator...

22 September, 2024 04:00PM by Adnan Hodzic

September 21, 2024

Jamie McClelland

How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent.

Now we want to start using them for, among other things, sending email.

I haven't added a new IP address to our mail relays in a while and things seem to change regularly in the world of email, so I'm curious: what's the best 2024 way to warm up IP addresses, particularly using postfix?

SendGrid has a nice page on the topic. It establishes the number of messages to send per day. But I'm not entirely sure how to fit messages per day into our setup.

We use round robin DNS to direct email to one of several dozen email relay servers using postfix. And unfortunately our DNS software (knot) doesn’t have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get).

Postfix has some nice knobs for rate limiting, particularly: default_destination_recipient_limit and default_destination_rate_delay

If default_destination_recipient_limit is greater than 1, then the "destination" is the recipient domain, and default_destination_rate_delay sets the minimum delay between deliveries to the same domain.

So, I'm starting our IP addresses out at 30m, which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it's a start.
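In main.cf terms, the knobs look something like this (values are illustrative, not a recommendation):

# rate limiting on the warming-up relay
default_destination_recipient_limit = 50
default_destination_rate_delay = 30m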

A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can’t fully control the number of messages a given queue receives (due to my inability to control the DNS round robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name.

I know I can temporarily set relayhost to a different queue and flush deferred messages, however, as far as I can tell, it doesn’t work with active messages.

To help mitigate the problem I’m only using our bulk mail queue to warm up IPs, but really, this is not ideal.

Suggestions welcome!

Update #1

If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:

# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid

After just 12 hours we had thousands of messages piling up. This warm up method was never going to work without the ability to move them to a faster queue.

[Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]

Update #2

After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:

421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply

So I will keep it as 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3

Well, this doesn’t seem to be working the way I want it to.

When a message arrives faster than the designated rate limit, it remains in the active queue.

I'm not entirely sure how the timing is supposed to work, but at this point I'm down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can't messages that exceed the rate delay be deferred instead?

Update #4

I think I see why I was confused in Update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer. Reloading the instance resets the timer. Every time you muck with active messages, you should reload.
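In the multi-instance setup above, the reload would look something like this:

postfix -c /etc/postfix-warmingup reload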

21 September, 2024 12:27PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

50 years of queries

This post is a review for Computing Reviews of "50 years of queries", an article published in Communications of the ACM.

The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.

The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.

The second part of the article tells about the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, that is, Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.

The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.

This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

21 September, 2024 05:03AM

September 20, 2024

Sahil Dhiman

Educational and Research Institutions With Own ASN in India

Another one of the ASN lists. This turned out longer than I expected (which is good). If you want to briefly understand what an ASN is, my Personal ASNs From India post carries an introduction to it.

Now, here are the Educational and Research Institutions with their own ASN in India which I could find:

  • AS2697 Education and Research Network
  • AS9885 NKN Internet Gateway
  • AS23770 Tata Institute of Fundamental Research (used as National Centre for Biological Sciences network)
  • AS38021 Network of Indian Institute of Foreign Trade
  • AS38620 National Knowledge Network
  • AS38872 Indian School of Business
  • AS45340 B.M.S College of Engineering
  • AS55296 National Institute of Public Finance and Policy
  • AS55479 Indian Institute of Technology, Kanpur
  • AS55566 Inter University Centre for Astronomy and Astrophysics
  • AS55824 NKN Core Network
  • AS56056 AMITY-IN
  • AS55847 NKN Edge Network
  • AS58703 Amrita Vishwa Vidyapeetham
  • AS58758 Tata Institute of Fundamental Research (used as Homi Bhabha Centre for Science Education (HBCSE) network)
  • AS59163 GLA University
  • AS59193 Indian Institute of Technology, Hyderabad
  • AS131226 Indian Institute of Technology, Roorkee
  • AS131473 SRM University
  • AS132423 Indian Institute of Technology, Bombay
  • AS132524 Tata Institute of Fundamental Research (used as main campus network)
  • AS132749 Indraprastha Institute of Information Technology, Delhi
  • AS132780 Indian Institute of Technology, Delhi
  • AS132984 Uka Tarsadia University
  • AS132785 Shiv Nadar Institution of Eminence Deemed to be University
  • AS132995 South Asian University
  • AS133002 Indian Institute of Tropical Meteorology
  • AS133233 S.N. Bose National Centre for Basic Sciences
  • AS133273 Tata Institute of Social Sciences
  • AS133308 Indira Gandhi Centre For Atomic Research
  • AS133313 Saha Institute of Nuclear Physics
  • AS133552 B.M.S. College of Engineering
  • AS133723 Institute for Plasma Research
  • AS134003 Centre For Cellular And Molecular Platforms
  • AS134023 Aligarh Muslim University
  • AS134322 Tata Institute of Fundamental Research (used as International Centre for Theoretical Sciences (ICTS) network)
  • AS134901 Indian Institute of Science Education And Research
  • AS134934 Institute For Stem Cell Biology And Regenerative Medicine
  • AS135730 Datta Meghe Institute Of Medical Sciences
  • AS135734 Birla Institute of Technology And Science
  • AS135835 Sardar Vallabhbhai Patel National Police Academy
  • AS136005 Raman Research Institute
  • AS136304 Institute of Physics, Bhubaneswar
  • AS136470 B.M.S. College of Engineering
  • AS136702 Physical Research Laboratory
  • AS137136 Indian Agricultural Statistics Research Institute
  • AS137282 Kalinga Institute of Industrial Technology
  • AS137617 Indian Institute of Management, Ahmedabad
  • AS137956 Indian Institute of Technology, Ropar
  • AS138155 Jawaharlal Nehru University
  • AS138231 Indian Institute of Information Technology, Allahabad
  • AS140033 Indian Institute of Technology, Bhilai
  • AS140118 Indian Institute of Technology Banaras Hindu University
  • AS140192 Indian Institute of Information Technology and Management, Kerala
  • AS140200 Panjab University
  • AS141270 Indian Institute Of Technology, Indore
  • AS141340 Indian Institute Of Technology, Madras
  • AS141477 Indira Gandhi National Open University
  • AS141478 Director National Institute Of Technology, Calicut
  • AS141288 National Institute of Science Education And Research Bhubaneswar
  • AS141507 National Institute of Mental Health And Neurosciences
  • AS142493 Sri Ramachandra Institute Of Higher Education And Research
  • AS147239 Lal Bahadur Shastri National Academy of Administration (LBSNAA)
  • AS147258 Dayalbagh Educational Institute
  • AS149607 National Forensic Sciences University
  • AS151086 Amrita Vishwa Vidyapeetham
  • AS152533 National Institute of Technology, Karnataka

Special Mentions

  • AS132926 Allen Career Institute
  • AS141841 Indian Institute of Hardware Technology Limited

Some observations:

Let me know if I’m missing someone.

20 September, 2024 04:29PM

September 19, 2024

hackergotchi for Vasudev Kamath

Vasudev Kamath

Note to Self: Enabling Unified Kernel Image on Debian

Note

These steps may not work on your system if you are using the default Debian installation. This guide assumes that your system is using systemd-boot as the bootloader, which is explained in the post linked below.

A unified kernel image (UKI) is a single executable that can be booted directly from UEFI firmware or automatically sourced by bootloaders with little or no configuration. It combines a UEFI boot stub program like systemd-stub(7), a Linux kernel image, an initrd, and additional resources into a single UEFI PE file.

systemd-boot already provides a hook for kernel installation via /etc/kernel/postinst.d/zz-systemd-boot. We just need a couple of additional configurations to generate the UKI image.

Installation and Configuration

  1. Install the systemd-ukify package:

    sudo apt-get install systemd-ukify
    
  2. Create the following configuration in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    This configuration specifies how to generate the UKI image for the installed kernel and which generator to use.

  3. Define the kernel command line for the UKI image. Create /etc/kernel/uki.conf with the following content (an example of the referenced /etc/kernel/cmdline file is shown after this list):

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    
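The Cmdline= directive points at /etc/kernel/cmdline, which holds the command line to embed in the UKI. A plausible example, consistent with the boot options visible in the verification output further below (adjust root= and the other options for your system):

# /etc/kernel/cmdline
systemd.gpt_auto=no quiet root=LABEL=root_disk ro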

Generating the UKI Image

To apply these changes, regenerate the UKI image for the currently running kernel:

sudo dpkg-reconfigure linux-image-$(uname -r)

Verification

Use the bootctl list command to verify the presence of a "Type #2" entry for the current kernel. The output should look similar to this:

bootctl list
      type: Boot Loader Specification Type #2 (.efi)
     title: Debian GNU/Linux trixie/sid (2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi) (default) (selected)
        id: 2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
    source: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
  sort-key: debian
     linux: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.9-amd64.efi
   options: systemd.gpt_auto=no quiet root=LABEL=root_disk ro systemd.machine_id=2d0080583f1a4127ac0b073b1a9d3e61

      type: Boot Loader Specification Type #2 (.efi)
     title: Debian GNU/Linux trixie/sid (2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi)
        id: 2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
    source: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
  sort-key: debian
     linux: /boot/efi/EFI/Linux/2d0080583f1a4127ac0b073b1a9d3e61-6.10.7-amd64.efi
   options: systemd.gpt_auto=no quiet root=LABEL=root_disk ro systemd.machine_id=2d0080583f1a4127ac0b073b1a9d3e61

      type: Automatic
     title: Reboot Into Firmware Interface
        id: auto-reboot-to-firmware-setup
    source: /sys/firmware/efi/efivars/LoaderEntries-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f

Cleanup and Reboot

Once the "Type #2" entries are generated, remove any "Type #1" entries using the bootctl unlink command, as sketched below. After this, reboot your system to boot from the UKI-based image.
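As a sketch, assuming an old entry id similar to the ones bootctl list reports (the exact id will differ on your system):

# hypothetical entry id; use the id shown by bootctl list for your Type #1 entry
sudo bootctl unlink debian-6.10.9-amd64.conf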

Future Considerations

The primary use case for a UKI image is secure boot. Signing the UKI image can also be configured in the settings above, but this guide does not cover that process as it requires setting up secure boot on your system.

19 September, 2024 06:10AM by copyninja

September 18, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rblpapi 0.3.15: Updated and New BLP Library

bloomberg terminal

Version 0.3.15 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the fifteenth release since the package first appeared on CRAN in 2016. This release updates to the current version 3.24.6 of the Bloomberg API, and rounds out a few corners in the packaging from continuous integration to the vignette.

The detailed list of changes follow below.

Changes in Rblpapi version 0.3.15 (2024-09-18)

  • A warning is now issued if more than 1000 results are returned (John in #377 addressing #375)

  • A few typos in the rblpapi-intro vignette were corrected (Michael Streatfield in #378)

  • The continuous integration setup was updated (Dirk in #388)

  • Deprecation warnings over char* where C++ class Name is now preferred have been addressed (Dirk in #391)

  • Several package files have been updated (Dirk in #392)

  • The request formation has been corrected, and an example was added (Dirk and John in #394 and #396)

  • The Bloomberg API has been upgraded to release 3.24.6.1 (Dirk in #397)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 September, 2024 02:52PM

Jamie McClelland

Gmail vs Tor vs Privacy

A legit email went to spam. Here are the redacted, relevant headers:

[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
	*  1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
	*      [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
	*  3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
	*  2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
	*  0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
        by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
        for <[email protected]>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Sep 2024 15:05:53 -0700 (PDT)

At first I thought a Gmail IP address was listed in Spamhaus - I even opened a ticket. But then I realized it wasn’t the last hop that Spamhaus is complaining about, it’s the first hop, specifically the IP 185.220.101.64, which appears to be a Tor exit node.
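(If you want to check an address like that yourself, a DNSBL query works: reverse the octets and look for an answer in 127.0.0.0/8. A sketch, using the public zen.spamhaus.org zone rather than the paid DQS endpoint visible in the headers:)

# listed addresses return an A record like 127.0.0.x
host 64.101.220.185.zen.spamhaus.org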

The sender is using their own client to relay email directly to Gmail. Like any sane person, they don’t trust Gmail to protect their privacy, so they are sending via Tor. But WTF, Gmail is not stripping the sending IP address from the header.

I’m a big fan of harm reduction and have always considered using your own client to relay email with Gmail as a nice way to avoid some of the surveillance tax Google imposes.

However, it seems that if you pursue this option you have two unpleasant choices:

  • Embed your IP address in every email message or
  • Use Tor and have your email messages go to spam

I suppose you could also use a VPN, but I doubt the IP reputation of most VPN exit nodes is going to be more reliable than Tor’s.

18 September, 2024 12:27PM

September 17, 2024

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

My Chair

I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.

17 September, 2024 10:11PM by Benjamin Mako Hill

hackergotchi for Jonathan Dowland

Jonathan Dowland

ouch, part 2

Things developed since my last post. Some lesions opened up on my ankle which was initially good news: the pain substantially reduced. But they didn’t heal fast enough and so medics decided on surgical debridement. That was last night. It seemed to be successful and I’m in recovery from surgery as I write. It’s hard to predict the near-future, a lot depends on how well and fast I heal.

I’ve got a negative-pressure dressing on it, which is incredible: a constantly maintained suction to aid in debridement and healing. Modern medicine feels like a sci fi novel.

17 September, 2024 12:53PM

ouch, part 3

The debridement operation was a success: nothing bad grew afterwards. I was discharged after a couple of nights with crutches, instructions not to weight-bear, a remarkable, portable negative-pressure "Vac" pump that lived by my side, and some strong painkillers.

About two weeks later, I had a skin graft. The surgeon took some skin from my thigh and stitched it over the debridement wound. I was discharged same-day, again with the Vac pump, and again with instructions not to weight-bear, at least for a few days.

This time I only kept the Vac pump for a week, and after a dressing change (the first time I saw the graft), I was allowed to walk again. Doing so is strangely awkward, and sometimes a little painful. I have physio exercises to help me regain strength and understanding about what I can do.

The donor site remained bandaged for another week before I saw it. I was expecting a stitched cut, but the surgeons have removed the top few layers only, leaving what looks more like a graze or sun-burn. There are four smaller, tentative-looking marks adjacent, suggesting they got it right on the fifth attempt. I'm not sure but I think these will all fade away to near-invisibility with time, and they don't hurt at all.

I've now been off work for roughly 12 weeks, but I think I am returning very soon. I am looking forward to returning to some sense of normality. It's been an interesting experience. I thought about writing more about what I've gone through, in particular my experiences in Hospital, dealing with the bureaucracy and things falling "between the gaps". Hanif Kureishi has done a better job than I could. It's clear that the NHS is staffed by incredibly passionate people, but there are a lot of structural problems that interfere with care.

17 September, 2024 12:53PM

Russ Allbery

Review: The Book That Broke the World

Review: The Book That Broke the World, by Mark Lawrence

Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366

The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory.

At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels come to visit her brother.

I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually-satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together.

The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved.

I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes.

One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it.

In The Book That Broke the World, sadly, this is mostly gone. The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy.

This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain.

It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better.

It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell.

The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance.

The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly.

The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced.

That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the library has me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much.

If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending.

Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025.

Rating: 7 out of 10

17 September, 2024 02:57AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.3.10 on CRAN: Update

A minor update 0.3.10 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release updates one S4 method to accommodate very recent changes in r-devel, about which CRAN had reached out. This concerns the setdiff() method when applied to two nanotime objects. As it only affected R 4.5.0 (due next April) when built in the last two or so weeks, it will not have been visible to that many users, if any. In any event, it now works again for that setup too, and should keep working going forward.

We also retired one demo function from the very early days, as it apparently relied on ggplot2 features that have since moved on. If someone would like to help out and resurrect the demo, please get in touch. We also cleaned out some no longer used tests, and updated DESCRIPTION to what is required now. The NEWS snippet below has the full details.

Changes in version 0.3.10 (2024-09-16)

  • Retire several checks for Solaris in test suite (Dirk in #130)

  • Switch to Authors@R in DESCRIPTION as now required by CRAN

  • Accommodate R-devel change for setdiff (Dirk in #133 fixing #132)

  • No longer ship defunct ggplot2 demo (Dirk fixing #131)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 September, 2024 12:58AM

September 16, 2024

Russ Allbery

Review: The Wings Upon Her Back

Review: The Wings Upon Her Back, by Samantha Mills

Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394

The Wings Upon Her Back is a political steampunk science fantasy novel. If the author's name sounds familiar, it may be because Samantha Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon awards. This is her first novel.

Winged Zemolai is a soldier of the mecha god and the protege of Mecha Vodaya, the Voice. She has served the city-state of Radezhda by defending it against all enemies, foreign and domestic, for twenty-six years. Despite that, it takes only a moment of errant mercy for her entire life to come crashing down. On a whim, she spares a kitchen worker who was concealing a statue of the scholar god, meaning that he was only pretending to worship the worker god like all workers should. Vodaya is unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's wings are ripped from her back and crushed in the hand of the god, and she's left on the ground to die of mechalin withdrawal.

The Wings Upon Her Back is told in two alternating timelines. The main one follows Zemolai after her exile as she is rescued by a young group of revolutionaries who think she may be useful in their plans. The other thread starts with Zemolai's childhood and shows the reader how she became Winged Zemolai: her scholar family, her obsession with flying, her true devotion to the mecha god, and the critical early years when she became Vodaya's protege. Mills maintains the separate timelines through the book and wraps them up in a rather neat piece of symbolic parallelism in the epilogue.

I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a story about a political awakening, in which Zemolai slowly realizes that she has been manipulated and lied to and that she may, in fact, be one of the baddies. The Wings Upon Her Back is more personal than some other books with that theme, since Zemolai was specifically (and abusively) groomed for her role by Vodaya. Much of the book is Zemolai trying to pull out the hooks that Vodaya put in her or, in the flashback timeline, the reader watching Vodaya install those hooks.

The flashback timeline is difficult reading. I don't think Mills could have left it out, but she says in the afterword that it was the hardest part of the book to write and it was also the hardest part of the book to read. It fills in some interesting bits of world-building and backstory, and Mills does a great job pacing the story revelations so that both threads contribute equally, but mostly it's a story of manipulative abuse. We know from the main storyline that Vodaya's tactics work, which gives those scenes the feel of a slow-motion train wreck. You know what's going to happen, you know it will be bad, and yet you can't look away.

It occurred to me while reading this that Emily Tesh's Some Desperate Glory told a similar type of story without the flashback structure, which eliminates the stifling feeling of inevitability. I don't think that would have worked for this story. If you simply rearranged the chapters of The Wings Upon Her Back into a linear narrative, I would have bailed on the book. Watching Zemolai being manipulated would have been too depressing and awful for me to make it to the payoff without the forward-looking hope of the main timeline. It gave me new appreciation for the difficulty of what Tesh pulled off.

Mills uses this interwoven structure well, though. At about 90% through this book I had no idea how it could end in the space remaining, but it reaches a surprising and satisfying conclusion. Mills uses a type of ending that normally bothers me, but she does it by handling the psychological impact so well that I couldn't help but admire it. I'm avoiding specifics because I think it worked better when I wasn't expecting it, but it ties beautifully into the thematic point of the book.

I do have one structural objection, though. It's one of those problems I didn't notice while reading, but that started bothering me when I thought back through the story from a political lens. The Wings Upon Her Back is Zemolai's story, her redemption arc, and that means she drives the plot. The band of revolutionaries are great characters (particularly Galiana), but they're supporting characters. Zemolai is older, more experienced, and knows critical information they don't have, and she uses it to effectively take over. As setup for her character arc, I see why Mills did this. As political praxis, I have issues.

There is a tendency in politics to believe that political skill is portable and repurposable. Converting opposing operatives to the cause is welcomed not only because they indicate added support, but also because they can use their political skill to help you win instead. To an extent this is not wrong, and is probably the most true of combat skills (which Zemolai has in abundance). But there's an underlying assumption that politics is symmetric, and a critical reason why I hold many of the political positions that I do hold is that I don't think politics is symmetric.

If someone has been successfully stoking resentment and xenophobia in support of authoritarians, converts to an anti-authoritarian cause, and then produces propaganda stoking resentment and xenophobia against authoritarians, this is in some sense an improvement. But if one believes that resentment and xenophobia are inherently wrong, if one's politics are aimed at reducing the resentment and xenophobia in the world, then in a way this person has not truly converted. Worse, because this is an effective manipulation tactic, there is a strong tendency to put this type of political convert into a leadership position, where they will, intentionally or not, start turning the anti-authoritarian movement into a copy of the authoritarian movement they left. They haven't actually changed their politics because they haven't understood (or simply don't believe in) the fundamental asymmetry in the positions. It's the same criticism that I have of realpolitik: the ends do not justify the means because the means corrupt the ends.

Nothing that happens in this book is as egregious as my example, but the more I thought about the plot structure, the more it bothered me that Zemolai never listens to the revolutionaries she joins long enough to wrestle with why she became an agent of an authoritarian state and they didn't. They got something fundamentally right that she got wrong, and perhaps that should have been reflected in who got to make future decisions. Zemolai made very poor choices and yet continues to be the sole main character of the story, the one whose decisions and actions truly matter. Maybe being wrong about everything should be disqualifying for being the main character, at least for a while, even if you think you've understood why you were wrong.

That problem aside, I enjoyed this. Both timelines were compelling and quite difficult to put down, even when they got rather dark. I could have done with less body horror and a few fewer fight scenes, but I'm glad I read it.

Science fiction readers should be warned that the world-building, despite having an intricate and fascinating surface, is mostly vibes. I started the book wondering how people with giant metal wings on their back can literally fly, and thought the mentions of neural ports, high-tech materials, and immune-suppressing drugs might mean that we'd get some sort of explanation. We do not: heavier-than-air flight works because it looks really cool and serves some thematic purposes. There are enough hints of technology indistinguishable from magic that you could make up your own explanations if you wanted to, but that's not something this book is interested in. There's not a thing wrong with that, but don't get caught by surprise if you were in the mood for a neat scientific explanation of apparent magic.

Recommended if you like somewhat-harrowing character development with a heavy political lens and steampunk vibes, although it's not the sort of book that I'd press into the hands of everyone I know. The Wings Upon Her Back is a complete story in a single novel.

Content warning: the main character is a victim of physical and emotional abuse, so some of that is a lot. Also surgical gore, some torture, and genocide.

Rating: 7 out of 10

16 September, 2024 02:03AM

September 15, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.3 on CRAN: Updated

A new release 0.0.3 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release turns compilation to the C++20 standard as newer clang++ versions complained about a particular statement (which they took to be C++20) when compiled under C++17. So we obliged.

The NEWS file for this release follows.

Changes in version 0.0.3 (2024-09-15)

  • The package now compiles under the C++20 standard to avoid a warning under clang++-18 (Dirk addressing #9)

  • Minor updates to continuous integration and badges have been made as well

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 September, 2024 11:19PM

Raju Devidas

Setting a local test deployment of moinmoin wiki

~$ mkdir moin-test

~$ cd moin-test

~/d/moin-test►python3 -m venv .                                00:04

~/d/moin-test►ls                                        2.119s 00:04
bin/  include/  lib/  lib64@  pyvenv.cfg

~/d/moin-test►source bin/activate.fish                         00:04


~/d/moin-test►pip install --pre moin                 moin-test 00:04
Collecting moin
  Using cached moin-2.0.0b1-py3-none-any.whl.metadata (4.7 kB)
Collecting Babel>=2.10.0 (from moin)
  Using cached babel-2.16.0-py3-none-any.whl.metadata (1.5 kB)
Collecting blinker>=1.6.2 (from moin)
  Using cached blinker-1.8.2-py3-none-any.whl.metadata (1.6 kB)
Collecting docutils>=0.18.1 (from moin)
  Using cached docutils-0.21.2-py3-none-any.whl.metadata (2.8 kB)
Collecting Markdown>=3.4.1 (from moin)
  Using cached Markdown-3.7-py3-none-any.whl.metadata (7.0 kB)
Collecting mdx-wikilink-plus>=1.4.1 (from moin)
  Using cached mdx_wikilink_plus-1.4.1-py3-none-any.whl.metadata (6.6 kB)
Collecting Flask>=3.0.0 (from moin)
  Using cached flask-3.0.3-py3-none-any.whl.metadata (3.2 kB)
Collecting Flask-Babel>=3.0.0 (from moin)
  Using cached flask_babel-4.0.0-py3-none-any.whl.metadata (1.9 kB)
Collecting Flask-Caching>=1.2.0 (from moin)
  Using cached Flask_Caching-2.3.0-py3-none-any.whl.metadata (2.2 kB)
Collecting Flask-Theme>=0.3.6 (from moin)
  Using cached flask_theme-0.3.6-py3-none-any.whl
Collecting emeraldtree>=0.10.0 (from moin)
  Using cached emeraldtree-0.11.0-py3-none-any.whl
Collecting feedgen>=0.9.0 (from moin)
  Using cached feedgen-1.0.0-py2.py3-none-any.whl
Collecting flatland>=0.8 (from moin)
  Using cached flatland-0.9.1-py3-none-any.whl
Collecting Jinja2>=3.1.0 (from moin)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting markupsafe<=2.2.0 (from moin)
  Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting pygments>=1.4 (from moin)
  Using cached pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB)
Collecting Werkzeug>=3.0.0 (from moin)
  Using cached werkzeug-3.0.4-py3-none-any.whl.metadata (3.7 kB)
Collecting whoosh>=2.7.0 (from moin)
  Using cached Whoosh-2.7.4-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting pdfminer.six (from moin)
  Using cached pdfminer.six-20240706-py3-none-any.whl.metadata (4.1 kB)
Collecting passlib>=1.6.0 (from moin)
  Using cached passlib-1.7.4-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting sqlalchemy>=2.0 (from moin)
  Using cached SQLAlchemy-2.0.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.6 kB)
Collecting XStatic>=0.0.2 (from moin)
  Using cached XStatic-1.0.3-py3-none-any.whl.metadata (1.4 kB)
Collecting XStatic-Bootstrap==3.1.1.2 (from moin)
  Using cached XStatic_Bootstrap-3.1.1.2-py3-none-any.whl
Collecting XStatic-Font-Awesome>=6.2.1.0 (from moin)
  Using cached XStatic_Font_Awesome-6.2.1.1-py3-none-any.whl.metadata (851 bytes)
Collecting XStatic-CKEditor>=3.6.1.2 (from moin)
  Using cached XStatic_CKEditor-3.6.4.0-py3-none-any.whl
Collecting XStatic-autosize (from moin)
  Using cached XStatic_autosize-1.17.2.1-py3-none-any.whl
Collecting XStatic-jQuery>=1.8.2 (from moin)
  Using cached XStatic_jQuery-3.5.1.1-py3-none-any.whl
Collecting XStatic-jQuery-File-Upload>=10.31.0 (from moin)
  Using cached XStatic_jQuery_File_Upload-10.31.0.1-py3-none-any.whl
Collecting XStatic-svg-edit-moin>=2012.11.15.1 (from moin)
  Using cached XStatic_svg_edit_moin-2012.11.27.1-py3-none-any.whl
Collecting XStatic-JQuery.TableSorter>=2.14.5.1 (from moin)
  Using cached XStatic_JQuery.TableSorter-2.14.5.2-py3-none-any.whl.metadata (846 bytes)
Collecting XStatic-Pygments>=1.6.0.1 (from moin)
  Using cached XStatic_Pygments-2.9.0.1-py3-none-any.whl
Collecting lxml (from feedgen>=0.9.0->moin)
  Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (3.8 kB)
Collecting python-dateutil (from feedgen>=0.9.0->moin)
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting itsdangerous>=2.1.2 (from Flask>=3.0.0->moin)
  Using cached itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
Collecting click>=8.1.3 (from Flask>=3.0.0->moin)
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting pytz>=2022.7 (from Flask-Babel>=3.0.0->moin)
  Using cached pytz-2024.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting cachelib<0.10.0,>=0.9.0 (from Flask-Caching>=1.2.0->moin)
  Using cached cachelib-0.9.0-py3-none-any.whl.metadata (1.9 kB)
Collecting typing-extensions>=4.6.0 (from sqlalchemy>=2.0->moin)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting greenlet!=0.4.17 (from sqlalchemy>=2.0->moin)
  Using cached greenlet-3.1.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (3.8 kB)
Collecting charset-normalizer>=2.0.0 (from pdfminer.six->moin)
  Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting cryptography>=36.0.0 (from pdfminer.six->moin)
  Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (5.4 kB)
Collecting cffi>=1.12 (from cryptography>=36.0.0->pdfminer.six->moin)
  Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting six>=1.5 (from python-dateutil->feedgen>=0.9.0->moin)
  Using cached six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting pycparser (from cffi>=1.12->cryptography>=36.0.0->pdfminer.six->moin)
  Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Using cached moin-2.0.0b1-py3-none-any.whl (1.7 MB)
Using cached babel-2.16.0-py3-none-any.whl (9.6 MB)
Using cached blinker-1.8.2-py3-none-any.whl (9.5 kB)
Using cached docutils-0.21.2-py3-none-any.whl (587 kB)
Using cached flask-3.0.3-py3-none-any.whl (101 kB)
Using cached flask_babel-4.0.0-py3-none-any.whl (9.6 kB)
Using cached Flask_Caching-2.3.0-py3-none-any.whl (28 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached Markdown-3.7-py3-none-any.whl (106 kB)
Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
Using cached mdx_wikilink_plus-1.4.1-py3-none-any.whl (8.9 kB)
Using cached passlib-1.7.4-py2.py3-none-any.whl (525 kB)
Using cached pygments-2.18.0-py3-none-any.whl (1.2 MB)
Using cached SQLAlchemy-2.0.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
Using cached werkzeug-3.0.4-py3-none-any.whl (227 kB)
Using cached Whoosh-2.7.4-py2.py3-none-any.whl (468 kB)
Using cached XStatic-1.0.3-py3-none-any.whl (4.4 kB)
Using cached XStatic_Font_Awesome-6.2.1.1-py3-none-any.whl (6.5 MB)
Using cached XStatic_JQuery.TableSorter-2.14.5.2-py3-none-any.whl (20 kB)
Using cached pdfminer.six-20240706-py3-none-any.whl (5.6 MB)
Using cached cachelib-0.9.0-py3-none-any.whl (15 kB)
Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (141 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl (4.0 MB)
Using cached greenlet-3.1.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (626 kB)
Using cached itsdangerous-2.2.0-py3-none-any.whl (16 kB)
Using cached pytz-2024.2-py2.py3-none-any.whl (508 kB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl (4.9 MB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (479 kB)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: XStatic-svg-edit-moin, XStatic-Pygments, XStatic-JQuery.TableSorter, XStatic-jQuery-File-Upload, XStatic-jQuery, XStatic-Font-Awesome, XStatic-CKEditor, XStatic-Bootstrap, XStatic-autosize, XStatic, whoosh, pytz, passlib, typing-extensions, six, pygments, pycparser, markupsafe, Markdown, lxml, itsdangerous, greenlet, emeraldtree, docutils, click, charset-normalizer, cachelib, blinker, Babel, Werkzeug, sqlalchemy, python-dateutil, mdx-wikilink-plus, Jinja2, flatland, cffi, Flask, feedgen, cryptography, pdfminer.six, Flask-Theme, Flask-Caching, Flask-Babel, moin
Successfully installed Babel-2.16.0 Flask-3.0.3 Flask-Babel-4.0.0 Flask-Caching-2.3.0 Flask-Theme-0.3.6 Jinja2-3.1.4 Markdown-3.7 Werkzeug-3.0.4 XStatic-1.0.3 XStatic-Bootstrap-3.1.1.2 XStatic-CKEditor-3.6.4.0 XStatic-Font-Awesome-6.2.1.1 XStatic-JQuery.TableSorter-2.14.5.2 XStatic-Pygments-2.9.0.1 XStatic-autosize-1.17.2.1 XStatic-jQuery-3.5.1.1 XStatic-jQuery-File-Upload-10.31.0.1 XStatic-svg-edit-moin-2012.11.27.1 blinker-1.8.2 cachelib-0.9.0 cffi-1.17.1 charset-normalizer-3.3.2 click-8.1.7 cryptography-43.0.1 docutils-0.21.2 emeraldtree-0.11.0 feedgen-1.0.0 flatland-0.9.1 greenlet-3.1.0 itsdangerous-2.2.0 lxml-5.3.0 markupsafe-2.1.5 mdx-wikilink-plus-1.4.1 moin-2.0.0b1 passlib-1.7.4 pdfminer.six-20240706 pycparser-2.22 pygments-2.18.0 python-dateutil-2.9.0.post0 pytz-2024.2 six-1.16.0 sqlalchemy-2.0.34 typing-extensions-4.12.2 whoosh-2.7.4

~/d/moin-test[1]►pip install setuptools       moin-test 0.241s 00:06
Collecting setuptools
  Using cached setuptools-75.0.0-py3-none-any.whl.metadata (6.9 kB)
Using cached setuptools-75.0.0-py3-none-any.whl (1.2 MB)
Installing collected packages: setuptools
Successfully installed setuptools-75.0.0



~/d/moin-test►moin create-instance --full     moin-test 1.457s 00:06
2024-09-16 00:06:36,812 INFO moin.cli.maint.create_instance:76 Directory /home/raj/dev/moin-test already exists, using as wikiconfig dir.
2024-09-16 00:06:36,813 INFO moin.cli.maint.create_instance:93 Instance creation finished.
2024-09-16 00:06:37,303 INFO moin.cli.maint.create_instance:107 Build Instance started.
2024-09-16 00:06:37,304 INFO moin.cli.maint.index:51 Index creation started
2024-09-16 00:06:37,308 INFO moin.cli.maint.index:55 Index creation finished
2024-09-16 00:06:37,308 INFO moin.cli.maint.modify_item:166 Load help started
Item loaded: Home
Item loaded: docbook
Item loaded: mediawiki
Item loaded: OtherTextItems/Diff
Item loaded: WikiDict
Item loaded: moin
Item loaded: moin/subitem
Item loaded: html/SubItem
Item loaded: moin/HighlighterList
Item loaded: MoinWikiMacros/Icons
Item loaded: InclusionForMoinWikiMacros
Item loaded: TemplateSample
Item loaded: MoinWikiMacros
Item loaded: rst/subitem
Item loaded: OtherTextItems/IRC
Item loaded: rst
Item loaded: creole/subitem
Item loaded: Home/subitem
Item loaded: OtherTextItems/CSV
Item loaded: images
Item loaded: Sibling
Item loaded: html
Item loaded: markdown
Item loaded: creole
Item loaded: OtherTextItems
Item loaded: OtherTextItems/Python
Item loaded: docbook/SubItem
Item loaded: OtherTextItems/PlainText
Item loaded: MoinWikiMacros/MonthCalendar
Item loaded: markdown/Subitem
Success: help namespace help-en loaded successfully with 30 items
2024-09-16 00:06:46,258 INFO moin.cli.maint.modify_item:166 Load help started
Item loaded: video.mp4
Item loaded: archive.tar.gz
Item loaded: audio.mp3
Item loaded: archive.zip
Item loaded: logo.png
Item loaded: cat.jpg
Item loaded: logo.svg
Success: help namespace help-common loaded successfully with 7 items
2024-09-16 00:06:49,685 INFO moin.cli.maint.modify_item:338 Load welcome page started
2024-09-16 00:06:49,801 INFO moin.cli.maint.modify_item:347 Load welcome finished
2024-09-16 00:06:49,801 INFO moin.cli.maint.index:124 Index optimization started
2024-09-16 00:06:51,383 INFO moin.cli.maint.index:126 Index optimization finished
2024-09-16 00:06:51,383 INFO moin.cli.maint.create_instance:114 Full instance setup finished.
2024-09-16 00:06:51,383 INFO moin.cli.maint.create_instance:115 You can now use "moin run" to start the builtin server.



~/d/moin-test►ls                             moin-test 15.295s 00:06
bin/      intermap.txt  lib64@      wiki/        wikiconfig.py
include/  lib/          pyvenv.cfg  wiki_local/



~/d/moin-test►MOINCFG=wikiconfig.py                  moin-test 00:07
fish: Unsupported use of '='. In fish, please use 'set MOINCFG wikiconfig.py'.

~/d/moin-test[123]►set MOINCFG wikiconfig.py         moin-test 00:07


~/d/moin-test[123]►moin account-create --name test --email [email protected] --password test123
Password not acceptable: For a password a minimum length of 8 characters is required.
2024-09-16 00:08:19,106 WARNING moin.utils.clock:53 These timers have not been stopped: total




~/d/moin-test►moin account-create --name test --email [email protected] --password this-is-a-password
2024-09-16 00:08:43,798 INFO moin.cli.account.create:49 User c3608cafec184bd6a7a1d69d83109ad0 ['test'] [email protected] - created.
2024-09-16 00:08:43,798 WARNING moin.utils.clock:53 These timers have not been stopped: total



~/d/moin-test►moin run --host 0.0.0.0 --port 5000 --no-debugger --no-reload
 * Debug mode: off
2024-09-16 00:09:26,146 INFO werkzeug:97 WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.1.2:5000
2024-09-16 00:09:26,146 INFO werkzeug:97 Press CTRL+C to quit

15 September, 2024 06:45PM by rajudev

Russell Coker

Kogan AX1800 Wifi6 Mesh

I previously blogged about the difficulties in getting a good Wifi mesh network setup [1].

I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140; the price has now dropped to $130. It’s only Wifi 6 (not 6E, which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.

I’ve got it running and it’s working really well. One of my laptops has a damaged wire connecting to it’s Wifi device which decreased the signal to a degree that I could usually only connect to wifi when in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can now get decent wifi access in my car in front of my home which covers the important corner case of walking to my car and then immediately asking Google maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal and that would delay getting directions, now I get directions quickly on Google Maps.

I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home which is limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices is for Internet access they have clearly met my reason for purchase and probably meet the requirements for most people as well. Getting that speed is not trivial, my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.

I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.

Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.
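(For reference, the tests were along these lines; a sketch assuming iperf3, with 192.168.0.10 standing in for the wired workstation's address:)

# on the wired workstation
iperf3 -s
# on the laptop under test, 30 second run
iperf3 -c 192.168.0.10 -t 30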

The claims about Wifi 6 vs Wifi 5 speeds are that 6 will be about 3* faster. Three times the 398Mbit I measured on Wifi 5 would be roughly 1.2Gbit, which is about 20% faster than the Gigabit ethernet ports on the wifi nodes. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have, it seems that it might provide a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay, so I’m happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.

For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.

The configuration of this device was quite easy via the built in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect which worked perfectly. The admin of this seemed to be about as good as it gets.

Conclusion

The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.

So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.

15 September, 2024 12:15PM by etbe

September 14, 2024

hackergotchi for Evgeni Golov

Evgeni Golov

Fixing the volume control in an Alesis M1Active 330 USB Speaker System

I've a set of Alesis M1Active 330 USB on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.

They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you moved the volume knob around long enough you might find a position where the left speaker would work a bit, but it'd be quieter than the right one and stop working again after some time. Pretty unacceptable when you want to listen to music.

Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.

So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?

Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.

Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?

Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).

right Alesis M1Active 330 USB speaker with a Makita wrench where the volume knob is

Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.

Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.

right Alesis M1Active 330 USB speaker open

Great, this was the easy part!

The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.

Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!

Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!

Soldering without pulling the cable out of the case was a bit challenging, but I've managed it and now have stereo sound again. Yay!

PS: Don't operate this thing open to try it out. 230V are dangerous!

14 September, 2024 06:38PM by evgeni

September 11, 2024

Jamie McClelland

MariaDB mystery

I keep getting an error in our backup logs:

Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp

It’s a WordPress database having trouble dumping the options table.

The error log has a corresponding message:

Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)

The Internet is full of suggestions, almost all of which focus on either the network connection between the client and the server or the FEDERATED plugin. We aren’t using the federated plugin, and this error happens when connecting via the socket.

Check it out - what is better than a consistently reproducible problem!

It happens if I try to select all the values in the table:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens when I specify one specific offset:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens if I specify the field name explicitly:

root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I specify the key field:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
+-----------+
| option_id |
+-----------+
|  16296351 |
+-----------+
root@mysql007:~#

It does happen if I specify the value field:

root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I query the specific row by key field:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-----------+----------------------+--------------+----------+
| option_id | option_name          | option_value | autoload |
+-----------+----------------------+--------------+----------+
|  16296351 | z_taxonomy_image8905 |              | yes      |
+-----------+----------------------+--------------+----------+
root@mysql007:~#

Hm. Surely there is some funky non-printing character in that option_value right?

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                         0 |
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
| HEX(option_value) |
+-------------------+
|                   |
+-------------------+
root@mysql007:~#

Resetting the value to an empty value doesn’t make a difference:

root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Deleting the row in question causes the error to specify a new offset:

root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1401
root@mysql007:~#

If I put the record I deleted back in, we return to the old offset:

root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp 
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
root@mysql007:~#

I’m losing my little mind. Let’s get drastic and create a whole new table, copy over the data delicately working around the deadly offset:

root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp 
--- There are only 33 more records, not sure how to specify an unlimited limit but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp 
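For the record, the documented idiom for “from this offset to the end of the table” is to pass some very large number as the row count; something like this should have worked as well (untested here):

root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 18446744073709551615 offset 1403;' ic_wp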

Now let’s make sure all is working properly:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;

Now let’s examine which row we are missing:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp 
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Wait, what? I was expecting option_id 16296351.

Oh, now we are getting somewhere. And I see my mistake: when using offsets, you need to use ORDER BY or you won’t get consistent results.

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Now that I have the correct row… what is in it:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Well, that makes a lot more sense. Let’s start over with examining the value:

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                  50814767 |
+---------------------------+
root@mysql007:~#

Wow, that’s a lot of characters. If it were a book, it would be 35,000 pages long (I just discovered this site). It’s a LONGTEXT field so it should be able to handle it. But now I have a better idea of what could be going wrong. The name of the option is “rewrite_rules” so it seems like something is going wrong with the generation of that option.

I imagine there is some tweak I can make to allow MariaDB to cough up the value (read_buffer_size? tmp_table_size?). But I’ll start with checking in with the database owner because I don’t think 35,000 pages of rewrite rules is appropriate for any site.
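If you want to poke at that theory, a first step might be to inspect the relevant server variables (a sketch; max_allowed_packet is the usual suspect for dropped connections while transferring very large values):

root@mysql007:~# mysql --protocol=socket -e 'show variables where Variable_name in ("max_allowed_packet","read_buffer_size","tmp_table_size");'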

11 September, 2024 12:27PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, August 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In August, 16 contributors were paid to work on Debian LTS. Their reports are available:

  • Adrian Bunk did 44.5h (out of 46.5h assigned and 53.5h from previous period), thus carrying over 55.5h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 9.0h (out of 0.0h assigned and 21.0h from previous period), thus carrying over 12.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 12.0h (out of 7.0h assigned and 5.0h from previous period).
  • Emilio Pozuelo Monfort did 22.25h (out of 6.5h assigned and 53.5h from previous period), thus carrying over 37.75h to the next month.
  • Guilhem Moulin did 17.5h (out of 8.75h assigned and 11.25h from previous period), thus carrying over 2.5h to the next month.
  • Lee Garrett did 11.5h (out of 58.0h assigned and 2.0h from previous period), thus carrying over 48.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 4.0h assigned and 20.0h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 8.25h (out of 5.0h assigned and 7.0h from previous period), thus carrying over 3.75h to the next month.
  • Santiago Ruano Rincón did 21.5h (out of 11.5h assigned and 10.0h from previous period).
  • Sean Whitton did 4.0h (out of 2.25h assigned and 3.75h from previous period), thus carrying over 2.0h to the next month.
  • Sylvain Beucler did 42.0h (out of 46.0h assigned and 14.0h from previous period), thus carrying over 18.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 2.5h (out of 7.75h assigned and 4.25h from previous period), thus carrying over 9.5h to the next month.

Evolution of the situation

In August, we released 1 DLA.

During the month of August Debian 11 "bullseye" officially transitioned to the responsibility of the LTS team (on 2024-08-15). However, because the final point release (11.11) was not made until 2024-08-31, LTS contributors were prevented from uploading packages to bullseye until after the point release had been made. That said, the team was not at all idle, and was busy at work on a variety of tasks which impacted both LTS and the broader Debian community, as well as preparing uploads which will be released during the month of September.

Of particular note, LTS contributor Bastien Roucariès prepared updates of the putty and cacti packages for bookworm (1 2) and bullseye (1 2), which were accepted by the old-stable release managers for the August point releases. He also analysed several security regressions in the apache2 package. LTS contributor Emilio Pozuelo Monfort worked on the Rust toolchain in bookworm and bullseye, which will be needed to support the upcoming Firefox ESR and Thunderbird ESR releases from the Mozilla project. Additionally, LTS contributor Thorsten Alteholz prepared bookworm and bullseye updates of the cups package (1 2), which were accepted by the old-stable release managers for the August point releases.

LTS contributor Markus Koschany collaborated with Emmanuel Bourg, co-maintainer of the tomcat packages in Debian. Regressions in a proposed security fix necessitated the updating of the tomcat10 package in Debian to the latest upstream release.

LTS contributors Bastien and Santiago Ruano Rincón collaborated with the upstream developers and the Debian maintainer (Bernhard Schmidt) of the FreeRADIUS project towards addressing the BlastRADIUS vulnerability in the bookworm and bullseye versions of the freeradius package. If you use FreeRADIUS in Debian bookworm or bullseye, we encourage you to test the packages following the instructions found in the call for testers, to help identify any possible regressions that could be introduced with these updates.

Testing is an important part of the work the LTS Team does, and in that vein LTS contributor Sean Whitton worked on improving the documentation and tooling around creating test filesystems which can be used for testing a variety of package update scenarios.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 September, 2024 12:00AM by Roberto C. Sánchez

Valhalla's Things

Two Linen Hoods

Posted on September 11, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a white hood with a deep shoulder covering that ends in a point at about waist height.  The hood is not fitted, has a slight point because of the rectangular construction, and on the torso part one can see the big square gusset that gives it fullness.

I’ve been influenced again into feeling the need for a garment.

It was again a case of multiple sources conspiring in the same direction for unrelated reasons, but I decided I absolutely needed a linen hood, made from the heavy white linen I knew I had in my stash.

Why? I don’t know. I do like the feeling of wearing a hood, and the white linen should give a decent protection from the sun, but I don’t know how often I’m going to wear these instead of just a hat. On the other hand the linen was already there and I needed something small to sew.

A woman wearing a hood in the same shape, but made in tartan-print flannel.  It looks slightly smaller than the linen one.

My first idea was to make a square hood: some time ago I had already made one out of some leftovers of duvet cover, vaguely inspired by the S , because I have a long-term plan of making one a bit more from scratch1.

I like the fact that this pattern is completely made out of squares and rectangles, and while the flannel one is quite fitting, as suitable for a warm garment, I felt that by making it just a cm or two wider it would have worked nicely for a warm weather one, and indeed it did.

A woman wearing some sort of white veil that covers the head and has two scarf-like appendages, one wrapped around the shoulders and falling on the back, and one falling on the front side, kept in place with one arm.

Except, before I even started on the square hood, I started to think that the same square top would also be good for a hood-scarf, one of those long flowy garments that sit on the head, wrap around the neck and fall down, moving with the wind and the movements of the person.

Because, let’s be honest: worn in a way that looks like a veil they feel nice, it’s true. But with the help of a couple of pins you can do this.

A woman wearing what looks like a deep hood over some sort of fabric face mask, showing only the eyes, in a way that resembles characters in a video game.  There are again two scarf-like edges that fall on the front and back.

And no, I’ve never played that game2, and I’m not even 100% sure what it is about, other than killing people, climbing buildings and petting cats3, but that’s not really an issue when making a bit of casual cosplay of something, right?

Anyway, should anybody feel the need to make themselves a hood or ten, the patterns have been released as usual as #FreeSoftWear: square hood and hood scarf.


  1. I’m not going to raise the sheep :D I’m actually not even going to wash and comb the wool, I’ll start from the step just after those :D↩︎

  2. because proprietary software, because somewhat underpowered computers and other related reasons that are somewhat incidental to the game itself.↩︎

  3. at least two out of three things that make it look like a perfectly enjoyable activity.↩︎

11 September, 2024 12:00AM

September 10, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

GS1900-10HP web session hijack

While fiddling around, I found a (fairly serious) vulnerability in Zyxel's GS1900-10HP and related switches; today Zyxel released an advisory with updated firmware, so I can publish my side of it as well. (Unfortunately there's no Zyxel bounty program, but Zyxel PSIRT has been forthcoming all along, which I guess is all you can hope for.)

The CVE (CVE-2024-38270) is sparse on details, so I'll simply paste my original message to Zyxel below:

Hi,

GS1900-10HP (probably also many other switches in the same series),
firmware V2.80(AAZI.0) (also older ones) generate web authentication
tokens in an unsafe way. This makes it possible for an attacker
to guess them and hijack the session.

web_util_randStr_generate() contains code that is functionally
the same as this:

        char token[17];
        struct timeval now;
        gettimeofday(&now, NULL);
        srandom(now.tv_sec + now.tv_usec);
        for (int i = 0; i < 16; ++i) {
                long r = random() % 62;
                char c;
                if (r < 10) {
                        c = r + '0';  // 0..9
                } else if (r < 36) {
                        c = r + ('A' - 10);  // A..Z
                } else {
                        c = r + ('a' - 36);  // a..z
                }
                token[i] = c;
        }
        token[16] = 0;

(random() comes from uclibc, but it has the same generator as glibc,
so the code runs just as well on desktop Linux)

This token is generated on initial login, and stored in a cookie
on the client. This has multiple problems:

First, the clock is a known quantity; even if the switch is not on SNTP,
it is trivial to get its idea of time-of-day by just doing a HTTP
request and looking at the Date header. This means that if an attacker
knows precisely when the administrator logged in (for instance, by observing
a HTTPS login on the network), they will have a very limited range of
possible tokens to check.

Second, tv_sec and tv_usec are combined in an improper way, canceling
out much of the intended entropy. As long as one assumes that the
administrator logged in less than a day ago, the entire range of possible
seeds is contained within the range [now - 86400, now + 999999], i.e.
only about 1.1M possible cookies, which can simply be tried serially
even if one did not observe the original login. There is no brute-force
protection on the web interface.

I have verified that this attack is practical, by simply generating all the
tokens and asking for the status page repeatedly (it is trivial to see
whether it returns an authentication success or failure). The switch can
sustain about one try every 96 ms on average against an attacker on a local
LAN (there is no keepalive or multithreading, so the most trivial code is
seemingly also the best one), which means that an attack will succeed on
average after about 15 hours; my test run succeeded after a bit under three
hours. If there are multiple administrator sessions active, the expected time
to success is of course lower, although the tries are also somewhat slower
because the switch has to deal with the keepalive traffic from the admins.

This is a straightforward case of CWE-330 (Use of Insufficiently Random
Values), with subcategories CWE-331, CWE-334, CWE-335, CWE-337, CWE-339,
CWE-340, CWE-341 and probably others. The suggested fix is simple: Read
entropy from /dev/urandom or another good source, instead of using random().
(Make sure that you don't get bias issues due to the use of modulo; you can
use e.g. rejection sampling.)

Session timeout does help against this attack (by default, it is 3 minutes),
but only as long as the administrator has not kept a tab open. If the tab is
left open, that keeps on making background requests that refreshes the token
every five seconds, guaranteeing a 100% success rate if given a day or two.

There is also _tons_ of outdated software on the switch (kernel from 2008,
OpenSSH from 2013, netkit-telnetd which is no longer maintained, a fork of
a very old NET-SNMP, etc.), but I did not check whether there are any
relevant security holes or whether you have actually backported patches.
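
For illustration, the fix I suggested (read /dev/urandom and rejection-sample to avoid modulo bias) could look something like this — a sketch of mine, not Zyxel's actual patch:

#include <stdio.h>

/* Generate a 16-character [0-9A-Za-z] token from /dev/urandom. */
static int generate_token(char token[17])
{
        static const char alphabet[] =
                "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz";
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f)
                return -1;
        for (int i = 0; i < 16; ) {
                unsigned char b;
                if (fread(&b, 1, 1, f) != 1) {
                        fclose(f);
                        return -1;
                }
                if (b >= 248)  /* 248 = 4 * 62; reject to keep uniformity */
                        continue;
                token[i++] = alphabet[b % 62];
        }
        token[16] = 0;
        fclose(f);
        return 0;
}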

I haven't verified what their fix looks like, but it's probably somewhere there in the GPL dump. :-)

10 September, 2024 07:55AM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in August 2024

10 September, 2024 12:51AM by Ben Hutchings

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Python 3 patches, OpenSSH GSS-API split, rebootstrap, salsa CI, etc. (by Anupa Ann Joseph)

Debian Contributions: 2024-08

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Debian Python 3 patch review, by Stefano Rivera

Last month, at DebConf, Stefano reviewed the current patch set of Debian’s cPython packages with Matthias Klose, the primary maintainer until now. As a result of that review, Stefano re-reviewed the patchset, updating descriptions, etc. A few patches were able to be dropped, and a few others were forwarded upstream.

One finds all sorts of skeletons doing reviews like this. One of the patches had been inactive (fortunately, because it was buggy) since the day it was applied, 13 years ago. One is a cleanup that probably only fixes a bug on HPUX, and is a result of copying code from xfree86 into Python 25 years ago. It was fixed in xfree86 a year later. Others support just Debian-specific functionality, or are good cleanups that only really apply to Debian, and probably never seemed worth forwarding upstream.

A trivial new patch would allow Debian to multiarch co-install Python stable ABI dynamic extensions (like we can with regular dynamic extensions). Performance concerns are stalling it in review, at the moment.

DebConf 24 Organization, by Stefano Rivera

Stefano helped organize DebConf 24, which concluded in early August. The event is run by a large entirely volunteer team. The work involved in making this happen is far too varied to describe here. While Freexian provides funding for 20% of collaborator time to spend on Debian-related work, it only covers a small fraction of contributions to time-intensive tasks like this.

Since the end of the event, Stefano has been doing some work on the conference finances, and initiated the reimbursement process for travel bursaries.

Archive rebuilds on Debusine, by Stefano Rivera

The recent setuptools 73 upload to Debian unstable removed the test subcommand, breaking many packages that were using python3 setup.py test in their Debian packaging. Stefano did a partial archive-rebuild using debusine.debian.net to find the regressions and file bugs.

Debusine will be a powerful tool to do QA work like this for Debian in the future, but it doesn’t have all the features needed to coordinate rebuild-testing yet; those are planned to be fleshed out in the next year. In the meantime, Debusine has the building blocks to work through a queue of package building tasks and store the results; it just needs to be driven from outside the system.

So, Stefano started working on a set of tools using the Debusine client API to perform archive rebuilds, found and tagged existing bugs, and filed many more.

OpenSSH GSS-API split, by Colin Watson

Colin landed the first stage of the planned split of GSS-API authentication and key exchange support in Debian’s OpenSSH packaging. In order to allow for smooth upgrades, the second stage will have to wait until after the Debian 13 (trixie) release; but once that’s done, as upstream puts it, “this substantially reduces the amount of pre-authentication attack surface exposed on your users’ sshd by default”.

OpenSSL vs. cryptography, by Colin Watson

Colin facilitated a discussion between Debian’s OpenSSL team and the upstream maintainers of Python cryptography about a new incompatibility between Debian’s OpenSSL packaging and cryptography’s handling of OpenSSL’s legacy provider, which was causing a number of build and test failures. While the issue remains open, the Debian OpenSSL maintainers have effectively reverted the change now, so it’s no longer a pressing problem.

/usr-move, by Helmut Grohne

There are fewer than 40 source packages left to move files to /usr, so what we’re left with is the long tail of the transition. Rather than fix all of them, Helmut started a discussion on removing packages from unstable and filed a first batch. As libvirt is being restructured in experimental, we’re handling the fallout in collaboration with its maintainer Andrea Bolognani. Since base-files validates the aliasing symlinks before upgrading, it was discovered that systemd has its own ideas, with no solution as of yet. Helmut also proposed that dash check for ineffective diversions of /bin/sh and that lintian warn about aliased files.

rebootstrap, by Helmut Grohne

Bootstrapping Debian for a new or existing CPU architecture is still a quite manual process. The rebootstrap project attempts to automate part of the early stage, but it remains very sensitive to changes in unstable. We had a number of fairly intrusive changes this year already. August included a little more fallout from the earlier gcc-for-host work, where the C include search path would end up being wrong in the generated cross toolchain. A number of packages such as util-linux (twice), libxml2, libcap-ng or systemd had their stage profiles broken. e2fsprogs gained a cycle with libarchive-dev due to having gained support for creating an ext4 filesystem from a tar archive. The restructuring of glib2.0 remains an unsolved problem for now, but libxt and cdebconf should be buildable without glib2.0.

Salsa CI, by Santiago Ruano Rincón

Santiago completed the initial RISC-V support (!523) in the Salsa CI pipeline. The main work started in July, but some review comments had to be taken into account (thanks to Ahmed!), along with some final details in !534. riscv64 is the most recently supported port in Debian, and will be part of trixie. As its name suggests, the new build-riscv64 job makes it possible to test that a package successfully builds on the riscv64 architecture. The RISC-V runner (salsaci riscv64 runner 01) runs on a couple of machines generously provided by lab.rvperf.org. Debian Developers interested in running this job in their projects should enable the runner (salsaci riscv64 runner 01) in Settings / CI / Runners, and follow the instructions available at https://salsa.debian.org/salsa-ci-team/pipeline/#build-job-on-risc-v.

Santiago also took part in discussions about how to optimize the build jobs, and reviewed !537 by Andrea Pappacoda, which makes the build-source job satisfy only the Build-Depends and Build-Conflicts fields. Thanks a lot to him!

Miscellaneous contributions

  • Stefano submitted patches for BeautifulSoup to support the latest soupsieve and lxml.
  • Stefano uploaded pypy3 7.3.17, upgrading the cPython compatibility from 3.9 to 3.10. Then ran into a GCC-14-related regression, which had to be ignored for now as it’s proving hard to fix.
  • Colin released libpipeline 1.5.8 and man-db 2.13.0; the latter included foundations allowing adding an autopkgtest for man-db.
  • Colin upgraded 19 Python packages to new upstream versions (fixing 5 CVEs), fixed several other build failures, fixed a Python 3.12 compatibility issue in zope.security, and made python-nacl build reproducibly.
  • Colin tracked down test failures in python-asyncssh and Ruby resulting from certain odd /etc/hosts configurations.
  • Carles upgraded the packages python-ring-doorbell and simplemonitor to new upstream versions.
  • Carles started discussions and implementation of a tool (still in early days) named “po-debconf-manager”: a way for translators and reviewers to collaborate using git as a backend instead of a mailing list, and to submit the translations via salsa MRs. More information next month.
  • Carles (dog-fooding “po-debconf-manager”) reviewed debconf templates translated by a collaborator.
  • Carles reviewed and submitted the translation of “apt”.
  • Helmut sent 19 patches for improving cross building.
  • Helmut implemented the cross-exe-wrapper proposed by Simon McVittie for use with glib2.0.
  • Helmut detailed what it takes to make Perl’s ExtUtils::PkgConfig suitable for cross building.
  • Helmut made the deletion of the root password work in debvm in all situations and implemented a test case using expect.
  • Anupa attended the Debian Publicity team meeting and is moderating and posting on the Debian Administrators LinkedIn group.
  • Thorsten uploaded package gutenprint to fix a FTBFS with gcc14 and package ipp-usb to fix a /usr-merge issue.
  • Santiago updated bzip2 to fix a long-standing bug that requested to include a pkg-config file. An important impact of this change is that it makes it possible to use Rust bindings for libbz2 by Sequoia, an implementation of OpenPGP.

10 September, 2024 12:00AM by Anupa Ann Joseph

September 09, 2024

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in July 2024

09 September, 2024 11:57PM by Ben Hutchings

hackergotchi for Wouter Verhelst

Wouter Verhelst

NBD: Write Zeroes and Rotational

The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.

I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.

I'd been off and on considering working on the kernel driver so that I could implement these new features, but I never really got anywhere.

A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too. When I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, whether the device is rotational.

To which he replied "Can you send a patch".

That got me down the rabbit hole, and now, for the first time in the 20 years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.

So, what do these things do?

The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.

It's unlikely that this will be of much benefit in most cases (most nbd-server installations will be exporting a file on a filesystem and have the elevator algorithm implemented server side, in which case it doesn't matter whether the device has the rotational flag set), but it's there in case you wish to use it.

The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.

The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that the protocol expects length values in bytes, whereas the kernel works in blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.
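
(For the curious, the conversion is as unglamorous as it sounds: the block layer counts in 512-byte sectors, so getting the byte length the protocol wants is essentially a shift. A sketch of the idea, not the actual patch:)

	/* The protocol wants a byte count; the block layer hands us
	 * 512-byte sectors (SECTOR_SHIFT == 9). */
	u64 len_bytes = (u64)blk_rq_sectors(req) << SECTOR_SHIFT;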

09 September, 2024 03:00PM

September 08, 2024

Thorsten Alteholz

My Debian Activities in August 2024

FTP master

This month I accepted 441 and rejected 15 packages. The overall number of packages that got accepted was 442.

I am ashamed of some occurrences that happened this month and I apologize for this. Unfortunately I have no idea how to prevent this in the future without becoming a solo entertainer.

Debian LTS

This was my hundred-twenty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

  • [#1073518] bookworm-pu: cups 2.4.2-3+deb12u6 has been closed
  • [#1074439] bookworm-pu: cups 2.4.2-3+deb12u7 has been closed
  • [#1073519] bullseye-pu: cups 2.3.3op2-3+deb11u7 has been closed
  • [#1074438] bullseye-pu: cups 2.3.3op2-3+deb11u8 has been closed

Unfortunately Bullseye was not handed over to LTS in August. So I only prepared new packages of asterisk, libvirt and tinyproxy and will upload them next month.

Last but not least I did a week of FD this month.

Debian ELTS

This month was the seventy-third ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1160-1] tiff security update for two CVEs in Jessie and Stretch. The Buster upload was already done before. This upload fixed a segmentation fault and a memory leak.
  • [ELA-1161-1] libvirt security update for six CVEs to fix issues related to use-after-free, an off-by-one, a null pointer dereference, a badly handled mutex, a privilege escalation and breaking out of the sVirt confinement. In this case only Jessie and Stretch needed an update.
  • [ELA-1166-1] frr security update for one CVE in Buster to fix a missing length check.

I also did a week of FD.

Debian Printing

This month I uploaded …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan:

It was so much fun working with Nathan. Unfortunately GSoC is over now, but Nathan will continue working in Debian and become a Debian Maintainer.

misc

This month I uploaded new upstream or bugfix versions of:

I also filed an RM bug against meep-openmpi. As Adrian made me aware, this package is no longer needed.

08 September, 2024 11:37PM by alteholz

Dima Kogan

GNU Make: details regarding intermediate files

Suppose I have this Makefile:

a: b
      touch $@
b:
      touch $@

# A common chain of build steps
%-GENERATED.c: %-generate
      touch $@
%.o: %.c
      touch $@
%.so: %-GENERATED.o
      touch $@
xxx-GENERATED.o: CFLAGS += adsf

# Imitates .d files created with "gcc -MMD". Does not exist on the initial build
ifneq ($(wildcard xxx.so),)
xxx-GENERATED.o: xxx-GENERATED.c
endif

This is all very simple build-system stuff. Let's see how it works:

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
  [start from a clean slate]

$ touch xxx-generate xxx.h
  [Files that would be available in a project exist; xxx-generate is some tool]
  [that would generate xxx-GENERATED.c                                        ]

$ touch a
  ["a" exists but the file "b" it depends on does not]

$ make a xxx.so

  touch b
  touch a
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so
  rm xxx-GENERATED.c

  [It built everything, but then deleted xxx-GENERATED.c]

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It knew to not rebuild "a", but the missing xxx-GENERATED.c caused it to]
  [re-build stuff                                                          ]

Well that's not good. What if we add .SECONDARY: to the end of the Makefile to mark everything as a secondary file?

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
$ touch xxx-generate xxx.h
$ touch a

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It didn't bother rebuilding "a" even though its prerequisite "b" doesn't]
  [exist. But it didn't delete the xxx-GENERATED.c at least                ]

$ make a xxx.so

  remake: 'a' is up to date.
  remake: 'xxx.so' is up to date.

  [It knew to not rebuild anything. Great.]

So it doesn't work right with or without .SECONDARY:, but it's much closer with it. The solution is to mark everything as not an intermediate file. mrbuild cannot do this without a bleeding-edge version of GNU Make, but users of mrbuild can do this by explicitly mentioning specific files in rules. This would suffice:

___dummy___: file1 file2
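
(The bleeding-edge alternative referred to above is GNU Make 4.4's .NOTINTERMEDIATE special target; given with no prerequisites, it marks every file as not intermediate:)

# Requires GNU Make >= 4.4
.NOTINTERMEDIATE: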

Detailed notes are in a commit in mrbuild (mrbuild 1.13) and in a post to LKML by Masahiro Yamada.

08 September, 2024 07:31PM by Dima Kogan

Antonio Terceiro

gotcha: using ccache in Debian package builds

Before I upload packages to Debian, I always do a full build from source under sbuild. This ensures that the package can build from source in a clean environment, implying that the set of build dependencies is complete.

But when iterating on a non-trivial package locally, I will usually build the package directly on my Debian testing system, and I want to take advantage of ccache to cache native (C/C++) code compilation to speed things up. In Debian, the easiest way to enable ccache is to add /usr/lib/ccache to your $PATH. I do this by doing something similar to the following in my ~/.bashrc:

export PATH=/usr/lib/ccache:$PATH

I noticed, however, that my Debian package builds were not using the cache. When building the same small package manually using make, the cache was used, but not when the build was wrapped with dpkg-buildpackage.

I tracked it down to the fact that in compatibility level 13+, debhelper will set $HOME to a temporary directory. For what it's worth, I think that's a good thing: you don't want package builds reaching for your home directory, as that makes it harder to make builds reproducible, among other things.

This behavior, however, breaks ccache. The default cache directory is $HOME/.ccache, but that only gets resolved when ccache is actually used. So we end up starting with an empty cache on each build, get a 100% cache miss rate, and still pay for the overhead of populating the cache.

The fix is to explicitly set $CCACHE_DIR upfront, so that by the time $HOME gets overridden, it doesn't matter anymore for ccache. I did this in my ~/.bashrc:

export CCACHE_DIR=$HOME/.ccache

This way, $HOME will be expanded right there when the shell starts, and by the time ccache is called, it will use the persistent cache in my home directory even though $HOME will, at that point, refer to a temporary directory.
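
A quick way to confirm the cache is actually being used is to compare ccache's hit statistics before and after a build (assuming a ccache version whose -s output labels hits as such, as recent ones do):

$ ccache -s | grep -i hit   # note the hit counts
$ dpkg-buildpackage -us -uc -b
$ ccache -s | grep -i hit   # the counts should have gone up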

08 September, 2024 12:18PM

Jacob Adams

Linux's Bedtime Routine

How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated.

This investigation will be split into a few parts, with the first one going from invocation of hibernation to synchronizing all filesystems to disk.

This article has been written using Linux version 6.9.9, the source of which can be found in many places, but can be navigated easily through the Bootlin Elixir Cross-Referencer:

https://elixir.bootlin.com/linux/v6.9.9/source

Each code snippet will begin with a link to the above giving the file path and the line number of the beginning of the snippet.

A Starting Point for Investigation: /sys/power/state and /sys/power/disk

These two files exist to allow debugging of hibernation, and thus to control the exact state used directly. Writing specific values to the state file controls the exact sleep mode used, while disk controls the specific hibernation mode1.

This is extremely handy as an entry point to understand how these systems work, since we can just follow what happens when they are written to.
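
For example, as root (and assuming hibernation is available and configured), one can pick a hibernation mode and trigger it directly:

# echo shutdown > /sys/power/disk   # power off after writing the image
# echo disk > /sys/power/state      # hibernate now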

Show and Store Functions

These two files are defined using the power_attr macro:

kernel/power/power.h:80

#define power_attr(_name) \
static struct kobj_attribute _name##_attr = {   \
    .attr   = {             \
        .name = __stringify(_name), \
        .mode = 0644,           \
    },                  \
    .show   = _name##_show,         \
    .store  = _name##_store,        \
}

show is called on reads and store on writes.
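
For example, power_attr(state) expands to roughly:

static struct kobj_attribute state_attr = {
	.attr	= {
		.name = "state",
		.mode = 0644,
	},
	.show	= state_show,
	.store	= state_store,
};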

state_show is a little boring for our purposes, as it just prints all the available sleep states.

kernel/power/main.c:657

/*
 * state - control system sleep states.
 *
 * show() returns available sleep state labels, which may be "mem", "standby",
 * "freeze" and "disk" (hibernation).
 * See Documentation/admin-guide/pm/sleep-states.rst for a description of
 * what they mean.
 *
 * store() accepts one of those strings, translates it into the proper
 * enumerated value, and initiates a suspend transition.
 */
static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
			  char *buf)
{
	char *s = buf;
#ifdef CONFIG_SUSPEND
	suspend_state_t i;

	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
		if (pm_states[i])
			s += sprintf(s,"%s ", pm_states[i]);

#endif
	if (hibernation_available())
		s += sprintf(s, "disk ");
	if (s != buf)
		/* convert the last space to a newline */
		*(s-1) = '\n';
	return (s - buf);
}

state_store, however, provides our entry point: if the string “disk” is written to the state file, it calls hibernate().

kernel/power/main.c:715

static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
			   const char *buf, size_t n)
{
	suspend_state_t state;
	int error;

	error = pm_autosleep_lock();
	if (error)
		return error;

	if (pm_autosleep_state() > PM_SUSPEND_ON) {
		error = -EBUSY;
		goto out;
	}

	state = decode_state(buf, n);
	if (state < PM_SUSPEND_MAX) {
		if (state == PM_SUSPEND_MEM)
			state = mem_sleep_current;

		error = pm_suspend(state);
	} else if (state == PM_SUSPEND_MAX) {
		error = hibernate();
	} else {
		error = -EINVAL;
	}

 out:
	pm_autosleep_unlock();
	return error ? error : n;
}

kernel/power/main.c:688

static suspend_state_t decode_state(const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
	suspend_state_t state;
#endif
	char *p;
	int len;

	p = memchr(buf, '\n', n);
	len = p ? p - buf : n;

	/* Check hibernation first. */
	if (len == 4 && str_has_prefix(buf, "disk"))
		return PM_SUSPEND_MAX;

#ifdef CONFIG_SUSPEND
	for (state = PM_SUSPEND_MIN; state < PM_SUSPEND_MAX; state++) {
		const char *label = pm_states[state];

		if (label && len == strlen(label) && !strncmp(buf, label, len))
			return state;
	}
#endif

	return PM_SUSPEND_ON;
}

Could we have figured this out just via function names? Sure, but this way we know for sure that nothing else is happening before this function is called.

Autosleep

Our first detour is into the autosleep system. When checking the state above, you may notice that the kernel grabs the pm_autosleep_lock before checking the current state.

autosleep is a mechanism originally from Android that sends the entire system to either suspend or hibernate whenever it is not actively working on anything.

This is not enabled for most desktop configurations, since it’s primarily for mobile systems and inverts the standard suspend and hibernate interactions.

This system is implemented as a workqueue2 that checks the current number of wakeup events, processes and drivers that need to run3, and if there aren’t any, then the system is put into the autosleep state, typically suspend. However, it could be hibernate if configured that way via /sys/power/autosleep in a similar manner to using /sys/power/state to manually enable hibernation.
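
For example (assuming CONFIG_PM_AUTOSLEEP is set):

# echo disk > /sys/power/autosleep   # opportunistically hibernate when idle
# echo off > /sys/power/autosleep    # turn autosleep back off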

kernel/power/main.c:841

static ssize_t autosleep_store(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       const char *buf, size_t n)
{
	suspend_state_t state = decode_state(buf, n);
	int error;

	if (state == PM_SUSPEND_ON
	    && strcmp(buf, "off") && strcmp(buf, "off\n"))
		return -EINVAL;

	if (state == PM_SUSPEND_MEM)
		state = mem_sleep_current;

	error = pm_autosleep_set_state(state);
	return error ? error : n;
}

power_attr(autosleep);
#endif /* CONFIG_PM_AUTOSLEEP */

kernel/power/autosleep.c:24

static DEFINE_MUTEX(autosleep_lock);
static struct wakeup_source *autosleep_ws;

static void try_to_suspend(struct work_struct *work)
{
	unsigned int initial_count, final_count;

	if (!pm_get_wakeup_count(&initial_count, true))
		goto out;

	mutex_lock(&autosleep_lock);

	if (!pm_save_wakeup_count(initial_count) ||
		system_state != SYSTEM_RUNNING) {
		mutex_unlock(&autosleep_lock);
		goto out;
	}

	if (autosleep_state == PM_SUSPEND_ON) {
		mutex_unlock(&autosleep_lock);
		return;
	}
	if (autosleep_state >= PM_SUSPEND_MAX)
		hibernate();
	else
		pm_suspend(autosleep_state);

	mutex_unlock(&autosleep_lock);

	if (!pm_get_wakeup_count(&final_count, false))
		goto out;

	/*
	 * If the wakeup occurred for an unknown reason, wait to prevent the
	 * system from trying to suspend and waking up in a tight loop.
	 */
	if (final_count == initial_count)
		schedule_timeout_uninterruptible(HZ / 2);

 out:
	queue_up_suspend_work();
}

static DECLARE_WORK(suspend_work, try_to_suspend);

void queue_up_suspend_work(void)
{
	if (autosleep_state > PM_SUSPEND_ON)
		queue_work(autosleep_wq, &suspend_work);
}

The Steps of Hibernation

Hibernation Kernel Config

It’s important to note that most of the hibernate-specific functions below do nothing unless you’ve defined CONFIG_HIBERNATION in your Kconfig4. As an example, hibernate itself is defined as the following if CONFIG_HIBERNATION is not set.

include/linux/suspend.h:407

static inline int hibernate(void) { return -ENOSYS; }

Check if Hibernation is Available

We begin by confirming that we actually can perform hibernation, via the hibernation_available function.

kernel/power/hibernate.c:742

if (!hibernation_available()) {
	pm_pr_dbg("Hibernation not available.\n");
	return -EPERM;
}

kernel/power/hibernate.c:92

bool hibernation_available(void)
{
	return nohibernate == 0 &&
		!security_locked_down(LOCKDOWN_HIBERNATION) &&
		!secretmem_active() && !cxl_mem_active();
}

nohibernate is controlled by the kernel command line; it’s set via either nohibernate or hibernate=no.

security_locked_down is a hook for Linux Security Modules to prevent hibernation. This is used to prevent hibernating to an unencrypted storage device, as specified in the manual page kernel_lockdown(7). Interestingly, either level of lockdown, integrity or confidentiality, locks down hibernation, because with the ability to hibernate you can extract basically anything from memory and even reboot into a modified kernel image.

secretmem_active checks whether there is any active use of memfd_secret, and if so it prevents hibernation. memfd_secret returns a file descriptor that can be mapped into a process but is specifically unmapped from the kernel’s memory space. Hibernating with memory that not even the kernel is supposed to access would expose that memory to whoever could access the hibernation image. This particular feature of secret memory was apparently controversial, though not as controversial as performance concerns around fragmentation when unmapping kernel memory (which did not end up being a real problem).

cxl_mem_active just checks whether any CXL memory is active. A full explanation is provided in the commit introducing this check but there’s also a shortened explanation from cxl_mem_probe that sets the relevant flag when initializing a CXL memory device.

drivers/cxl/mem.c:186

* The kernel may be operating out of CXL memory on this device,
* there is no spec defined way to determine whether this device
* preserves contents over suspend, and there is no simple way
* to arrange for the suspend image to avoid CXL memory which
* would setup a circular dependency between PCI resume and save
* state restoration.

Check Compression

The next check is for whether compression support is enabled, and if so whether the requested algorithm is enabled.

kernel/power/hibernate.c:747

/*
 * Query for the compression algorithm support if compression is enabled.
 */
if (!nocompress) {
	strscpy(hib_comp_algo, hibernate_compressor, sizeof(hib_comp_algo));
	if (crypto_has_comp(hib_comp_algo, 0, 0) != 1) {
		pr_err("%s compression is not available\n", hib_comp_algo);
		return -EOPNOTSUPP;
	}
}

The nocompress flag is set via the hibernate command line parameter, setting hibernate=nocompress.

If compression is enabled, then hibernate_compressor is copied to hib_comp_algo. This synchronizes the current requested compression setting (hibernate_compressor) with the current compression setting (hib_comp_algo).

Both values are character arrays of size CRYPTO_MAX_ALG_NAME (128 in this kernel).

kernel/power/hibernate.c:50

static char hibernate_compressor[CRYPTO_MAX_ALG_NAME] = CONFIG_HIBERNATION_DEF_COMP;

/*
 * Compression/decompression algorithm to be used while saving/loading
 * image to/from disk. This would later be used in 'kernel/power/swap.c'
 * to allocate comp streams.
 */
char hib_comp_algo[CRYPTO_MAX_ALG_NAME];

hibernate_compressor defaults to lzo if that algorithm is enabled, otherwise to lz4 if enabled5. It can be overridden using the hibernate.compressor setting to either lzo or lz4.
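
For example, booting with the following kernel command-line argument selects lz4 (assuming CONFIG_CRYPTO_LZ4 is enabled):

hibernate.compressor=lz4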

kernel/power/Kconfig:95

choice
	prompt "Default compressor"
	default HIBERNATION_COMP_LZO
	depends on HIBERNATION

config HIBERNATION_COMP_LZO
	bool "lzo"
	depends on CRYPTO_LZO

config HIBERNATION_COMP_LZ4
	bool "lz4"
	depends on CRYPTO_LZ4

endchoice

config HIBERNATION_DEF_COMP
	string
	default "lzo" if HIBERNATION_COMP_LZO
	default "lz4" if HIBERNATION_COMP_LZ4
	help
	  Default compressor to be used for hibernation.

kernel/power/hibernate.c:1425

static const char * const comp_alg_enabled[] = {
#if IS_ENABLED(CONFIG_CRYPTO_LZO)
	COMPRESSION_ALGO_LZO,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
	COMPRESSION_ALGO_LZ4,
#endif
};

static int hibernate_compressor_param_set(const char *compressor,
		const struct kernel_param *kp)
{
	unsigned int sleep_flags;
	int index, ret;

	sleep_flags = lock_system_sleep();

	index = sysfs_match_string(comp_alg_enabled, compressor);
	if (index >= 0) {
		ret = param_set_copystring(comp_alg_enabled[index], kp);
		if (!ret)
			strscpy(hib_comp_algo, comp_alg_enabled[index],
				sizeof(hib_comp_algo));
	} else {
		ret = index;
	}

	unlock_system_sleep(sleep_flags);

	if (ret)
		pr_debug("Cannot set specified compressor %s\n",
			 compressor);

	return ret;
}
static const struct kernel_param_ops hibernate_compressor_param_ops = {
	.set    = hibernate_compressor_param_set,
	.get    = param_get_string,
};

static struct kparam_string hibernate_compressor_param_string = {
	.maxlen = sizeof(hibernate_compressor),
	.string = hibernate_compressor,
};

We then check whether the requested algorithm is supported via crypto_has_comp. If not, we bail out of the whole operation with EOPNOTSUPP.

As part of crypto_has_comp we perform any needed initialization of the algorithm, loading kernel modules and running initialization code as needed6.

Grab Locks

The next step is to grab the sleep and hibernation locks via lock_system_sleep and hibernate_acquire.

kernel/power/hibernate.c:758

sleep_flags = lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!hibernate_acquire()) {
	error = -EBUSY;
	goto Unlock;
}

First, lock_system_sleep marks the current thread as not freezable, which will be important later7. It then grabs the system_transition_mutex, which locks taking snapshots or modifying how they are taken, resuming from a hibernation image, entering any suspend state, or rebooting.

The GFP Mask

The kernel also issues a warning if the gfp mask is changed via either pm_restore_gfp_mask or pm_restrict_gfp_mask without holding the system_transition_mutex.

GFP flags tell the kernel how it is permitted to handle a request for memory.

include/linux/gfp_types.h:12

 * GFP flags are commonly used throughout Linux to indicate how memory
 * should be allocated.  The GFP acronym stands for get_free_pages(),
 * the underlying memory allocation function.  Not every GFP flag is
 * supported by every function which may allocate memory.

In the case of hibernation specifically we care about the IO and FS flags, which are reclaim operators, ways the system is permitted to attempt to free up memory in order to satisfy a specific request for memory.

include/linux/gfp_types.h:176

 * Reclaim modifiers
 * -----------------
 * Please note that all the following flags are only applicable to sleepable
 * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them).
 *
 * %__GFP_IO can start physical IO.
 *
 * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the
 * allocator recursing into the filesystem which might already be holding
 * locks.

gfp_allowed_mask sets which flags are permitted to be set at the current time.

As the comment below outlines, preventing these flags from being set avoids situations where the kernel needs to do I/O to allocate memory (e.g. reading/writing swap8) but the devices it needs to read from or write to are not currently available.

kernel/power/main.c:24

/*
 * The following functions are used by the suspend/hibernate code to temporarily
 * change gfp_allowed_mask in order to avoid using I/O during memory allocations
 * while devices are suspended.  To avoid races with the suspend/hibernate code,
 * they should always be called with system_transition_mutex held
 * (gfp_allowed_mask also should only be modified with system_transition_mutex
 * held, unless the suspend/hibernate code is guaranteed not to run in parallel
 * with that modification).
 */
static gfp_t saved_gfp_mask;

void pm_restore_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	if (saved_gfp_mask) {
		gfp_allowed_mask = saved_gfp_mask;
		saved_gfp_mask = 0;
	}
}

void pm_restrict_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	WARN_ON(saved_gfp_mask);
	saved_gfp_mask = gfp_allowed_mask;
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
}

Sleep Flags

After grabbing the system_transition_mutex the kernel then returns and captures the previous state of the thread’s flags in sleep_flags. This is used later to remove PF_NOFREEZE if it wasn’t previously set on the current thread.

kernel/power/main.c:52

unsigned int lock_system_sleep(void)
{
	unsigned int flags = current->flags;
	current->flags |= PF_NOFREEZE;
	mutex_lock(&system_transition_mutex);
	return flags;
}
EXPORT_SYMBOL_GPL(lock_system_sleep);

include/linux/sched.h:1633

#define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */

Then we grab the hibernate-specific semaphore to ensure no one can open a snapshot or resume from it while we perform hibernation. Additionally, this lock is used to prevent hibernate_quiet_exec, which is used by the nvdimm driver to activate its firmware with all processes and devices frozen, ensuring it is the only thing running at that time9.

kernel/power/hibernate.c:82

bool hibernate_acquire(void)
{
	return atomic_add_unless(&hibernate_atomic, -1, 0);
}

Prepare Console

The kernel next calls pm_prepare_console. This function only does anything if CONFIG_VT_CONSOLE_SLEEP has been set.

This prepares the virtual terminal for a suspend state, switching away to a console used only for the suspend state if needed.

kernel/power/console.c:130

void pm_prepare_console(void)
{
	if (!pm_vt_switch())
		return;

	orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
	if (orig_fgconsole < 0)
		return;

	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
	return;
}

The first thing is to check whether we actually need to switch the VT.

kernel/power/console.c:94

/*
 * There are three cases when a VT switch on suspend/resume are required:
 *   1) no driver has indicated a requirement one way or another, so preserve
 *      the old behavior
 *   2) console suspend is disabled, we want to see debug messages across
 *      suspend/resume
 *   3) any registered driver indicates it needs a VT switch
 *
 * If none of these conditions is present, meaning we have at least one driver
 * that doesn't need the switch, and none that do, we can avoid it to make
 * resume look a little prettier (and suspend too, but that's usually hidden,
 * e.g. when closing the lid on a laptop).
 */
static bool pm_vt_switch(void)
{
	struct pm_vt_switch *entry;
	bool ret = true;

	mutex_lock(&vt_switch_mutex);
	if (list_empty(&pm_vt_switch_list))
		goto out;

	if (!console_suspend_enabled)
		goto out;

	list_for_each_entry(entry, &pm_vt_switch_list, head) {
		if (entry->required)
			goto out;
	}

	ret = false;
out:
	mutex_unlock(&vt_switch_mutex);
	return ret;
}

There is an explanation of the conditions under which a switch is performed in the comment above the function, but we’ll also walk through the steps here.

Firstly we grab the vt_switch_mutex to ensure nothing will modify the list while we’re looking at it.

We then examine the pm_vt_switch_list. This list is used to indicate the drivers that require a switch during suspend. They register this requirement, or the lack thereof, via pm_vt_switch_required.

kernel/power/console.c:31

/**
 * pm_vt_switch_required - indicate VT switch at suspend requirements
 * @dev: device
 * @required: if true, caller needs VT switch at suspend/resume time
 *
 * The different console drivers may or may not require VT switches across
 * suspend/resume, depending on how they handle restoring video state and
 * what may be running.
 *
 * Drivers can indicate support for switchless suspend/resume, which can
 * save time and flicker, by using this routine and passing 'false' as
 * the argument.  If any loaded driver needs VT switching, or the
 * no_console_suspend argument has been passed on the command line, VT
 * switches will occur.
 */
void pm_vt_switch_required(struct device *dev, bool required)

Next, we check console_suspend_enabled. This is set to false by the kernel parameter no_console_suspend, but defaults to true.

Finally, if there are any entries in the pm_vt_switch_list, then we check to see if any of them require a VT switch.

Only if none of these conditions apply do we return false.

If a VT switch is in fact required, then we move first the currently active virtual terminal/console10 (vt_move_to_console) and then the current location of kernel messages (vt_kmsg_redirect) to the SUSPEND_CONSOLE. The SUSPEND_CONSOLE is the last entry in the list of possible consoles, and appears to just be a black hole to throw away messages.

kernel/power/console.c:16

#define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)

Interestingly, these are separate functions because you can use TIOCL_SETKMSGREDIRECT (an ioctl11) to send kernel messages to a specific virtual terminal, but by default it’s the same as the currently active console.

The locations of the previously active console and the previous kernel messages location are stored in orig_fgconsole and orig_kmsg, to restore the state of the console and kernel messages after the machine wakes up again. Interestingly, this means orig_fgconsole also ends up storing any errors, so has to be checked to ensure it’s not less than zero before we try to do anything with the kernel messages on both suspend and resume.

drivers/tty/vt/vt_ioctl.c:1268

/* Perform a kernel triggered VT switch for suspend/resume */

static int disable_vt_switch;

int vt_move_to_console(unsigned int vt, int alloc)
{
	int prev;

	console_lock();
	/* Graphics mode - up to X */
	if (disable_vt_switch) {
		console_unlock();
		return 0;
	}
	prev = fg_console;

	if (alloc && vc_allocate(vt)) {
		/* we can't have a free VC for now. Too bad,
		 * we don't want to mess the screen for now. */
		console_unlock();
		return -ENOSPC;
	}

	if (set_console(vt)) {
		/*
		 * We're unable to switch to the SUSPEND_CONSOLE.
		 * Let the calling function know so it can decide
		 * what to do.
		 */
		console_unlock();
		return -EIO;
	}
	console_unlock();
	if (vt_waitactive(vt + 1)) {
		pr_debug("Suspend: Can't switch VCs.");
		return -EINTR;
	}
	return prev;
}

Unlike most other locking functions we’ve seen so far, console_lock needs to be careful to ensure nothing else is panicking and needs to dump to the console, before grabbing the semaphore for the console and setting a couple of flags.

Panics

Panics are tracked via an atomic integer set to the id of the processor currently panicking.

kernel/printk/printk.c:2649

/**
 * console_lock - block the console subsystem from printing
 *
 * Acquires a lock which guarantees that no consoles will
 * be in or enter their write() callback.
 *
 * Can sleep, returns nothing.
 */
void console_lock(void)
{
	might_sleep();

	/* On panic, the console_lock must be left to the panic cpu. */
	while (other_cpu_in_panic())
		msleep(1000);

	down_console_sem();
	console_locked = 1;
	console_may_schedule = 1;
}
EXPORT_SYMBOL(console_lock);

kernel/printk/printk.c:362

/*
 * Return true if a panic is in progress on a remote CPU.
 *
 * On true, the local CPU should immediately release any printing resources
 * that may be needed by the panic CPU.
 */
bool other_cpu_in_panic(void)
{
	return (panic_in_progress() && !this_cpu_in_panic());
}

kernel/printk/printk.c:345

static bool panic_in_progress(void)
{
	return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
}

kernel/printk/printk.c:350

/* Return true if a panic is in progress on the current CPU. */
bool this_cpu_in_panic(void)
{
	/*
	 * We can use raw_smp_processor_id() here because it is impossible for
	 * the task to be migrated to the panic_cpu, or away from it. If
	 * panic_cpu has already been set, and we're not currently executing on
	 * that CPU, then we never will be.
	 */
	return unlikely(atomic_read(&panic_cpu) == raw_smp_processor_id());
}

console_locked is a debug value, used to indicate that the lock should be held, and our first indication that this whole virtual terminal system is more complex than might initially be expected.

kernel/printk/printk.c:373

/*
 * This is used for debugging the mess that is the VT code by
 * keeping track if we have the console semaphore held. It's
 * definitely not the perfect debug tool (we don't know if _WE_
 * hold it and are racing, but it helps tracking those weird code
 * paths in the console code where we end up in places I want
 * locked without the console semaphore held).
 */
static int console_locked;

console_may_schedule is used to see if we are permitted to sleep and schedule other work while we hold this lock. As we’ll see later, the virtual terminal subsystem is not re-entrant, so there are all sorts of hacks in here to ensure we don’t leave important code sections that can’t be safely resumed.

Disable VT Switch

As the comment below lays out, when another program is handling graphical display anyway, there’s no need to do any of this, so the kernel provides a switch to turn the whole thing off. Interestingly, this appears to only be used by three drivers, so the specific hardware support required must not be particularly common.

drivers/gpu/drm/omapdrm/dss
drivers/video/fbdev/geode
drivers/video/fbdev/omap2

drivers/tty/vt/vt_ioctl.c:1308

/*
 * Normally during a suspend, we allocate a new console and switch to it.
 * When we resume, we switch back to the original console.  This switch
 * can be slow, so on systems where the framebuffer can handle restoration
 * of video registers anyways, there's little point in doing the console
 * switch.  This function allows you to disable it by passing it '0'.
 */
void pm_set_vt_switch(int do_switch)
{
	console_lock();
	disable_vt_switch = !do_switch;
	console_unlock();
}
EXPORT_SYMBOL(pm_set_vt_switch);

The rest of the vt_move_to_console function is pretty normal, however, simply allocating space if needed to create the requested virtual terminal and then setting the current virtual terminal via set_console.

Virtual Terminal Set Console

With set_console, we begin (as if we haven’t been already) to enter the madness that is the virtual terminal subsystem. As mentioned previously, modifications to its state must be made very carefully, as other stuff happening at the same time could create complete messes.

All this to say, calling set_console does not actually perform any work to change the state of the current console. Instead it indicates what changes it wants and then schedules that work.

drivers/tty/vt/vt.c:3153

int set_console(int nr)
{
	struct vc_data *vc = vc_cons[fg_console].d;

	if (!vc_cons_allocated(nr) || vt_dont_switch ||
		(vc->vt_mode.mode == VT_AUTO && vc->vc_mode == KD_GRAPHICS)) {

		/*
		 * Console switch will fail in console_callback() or
		 * change_console() so there is no point scheduling
		 * the callback
		 *
		 * Existing set_console() users don't check the return
		 * value so this shouldn't break anything
		 */
		return -EINVAL;
	}

	want_console = nr;
	schedule_console_callback();

	return 0;
}

The check for vc->vc_mode == KD_GRAPHICS is where most end-user graphical desktops will bail out of this change, as they’re in graphics mode and don’t need to switch away to the suspend console.

vt_dont_switch is a flag used by the ioctls[11] VT_LOCKSWITCH and VT_UNLOCKSWITCH to prevent the system from switching virtual terminal devices when the user has explicitly locked it.

VT_AUTO is a flag indicating that automatic virtual terminal switching is enabled[12], and thus deliberate switching to a suspend terminal is not required.

However, if you do run your machine from a virtual terminal, the system records the requested virtual terminal in the want_console variable and schedules a callback via schedule_console_callback.

drivers/tty/vt/vt.c:315

void schedule_console_callback(void)
{
	schedule_work(&console_work);
}

console_work is a workqueue[2] that will execute the given task asynchronously.
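
For reference, the work item is bound to its callback at declaration time; assuming the usual DECLARE_WORK idiom (a sketch, not necessarily verbatim from drivers/tty/vt/vt.c):

/* Bind console_work to console_callback, so that
 * schedule_work(&console_work) runs the callback on a
 * kernel worker thread. */
static DECLARE_WORK(console_work, console_callback);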

Console Callback

drivers/tty/vt/vt.c:3109

/*
 * This is the console switching callback.
 *
 * Doing console switching in a process context allows
 * us to do the switches asynchronously (needed when we want
 * to switch due to a keyboard interrupt).  Synchronization
 * with other console code and prevention of re-entrancy is
 * ensured with console_lock.
 */
static void console_callback(struct work_struct *ignored)
{
	console_lock();

	if (want_console >= 0) {
		if (want_console != fg_console &&
		    vc_cons_allocated(want_console)) {
			hide_cursor(vc_cons[fg_console].d);
			change_console(vc_cons[want_console].d);
			/* we only changed when the console had already
			   been allocated - a new console is not created
			   in an interrupt routine */
		}
		want_console = -1;
	}
...

console_callback first checks whether a console change is wanted via want_console, and then switches to it if it’s not already the current console and has been allocated. Before switching, any cursor state is removed with hide_cursor.

drivers/tty/vt/vt.c:841

static void hide_cursor(struct vc_data *vc)
{
	if (vc_is_sel(vc))
		clear_selection();

	vc->vc_sw->con_cursor(vc, false);
	hide_softcursor(vc);
}

A full dive into the tty driver is a task for another time, but this should give a general sense of how this system interacts with hibernation.

Notify Power Management Call Chain

kernel/power/hibernate.c:767

pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION)

This calls a chain of power management callbacks, passing PM_HIBERNATION_PREPARE; if any callback fails, the callbacks that have already been notified are called again with PM_POST_HIBERNATION to roll back.

kernel/power/main.c:98

int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
{
	int ret;

	ret = blocking_notifier_call_chain_robust(&pm_chain_head, val_up, val_down, NULL);

	return notifier_to_errno(ret);
}

The power management notifier is a blocking notifier chain, which means it has the following properties.

include/linux/notifier.h:23

 *	Blocking notifier chains: Chain callbacks run in process context.
 *		Callouts are allowed to block.

The callback chain is a linked list with each entry containing a priority and a function to call. The function technically takes in a data value, but it is always NULL for the power management chain.

include/linux/notifier.h:49

struct notifier_block;

typedef	int (*notifier_fn_t)(struct notifier_block *nb,
			unsigned long action, void *data);

struct notifier_block {
	notifier_fn_t notifier_call;
	struct notifier_block __rcu *next;
	int priority;
};

The head of the linked list is protected by a read-write semaphore.

include/linux/notifier.h:65

struct blocking_notifier_head {
	struct rw_semaphore rwsem;
	struct notifier_block __rcu *head;
};

Because the list is sorted by priority, registering a callback requires walking the list until an item with lower[13] priority is found, and inserting the new item before it.

kernel/notifier.c:252

/*
 *	Blocking notifier chain routines.  All access to the chain is
 *	synchronized by an rwsem.
 */

static int __blocking_notifier_chain_register(struct blocking_notifier_head *nh,
					      struct notifier_block *n,
					      bool unique_priority)
{
	int ret;

	/*
	 * This code gets used during boot-up, when task switching is
	 * not yet working and interrupts must remain disabled.  At
	 * such times we must not call down_write().
	 */
	if (unlikely(system_state == SYSTEM_BOOTING))
		return notifier_chain_register(&nh->head, n, unique_priority);

	down_write(&nh->rwsem);
	ret = notifier_chain_register(&nh->head, n, unique_priority);
	up_write(&nh->rwsem);
	return ret;
}

kernel/notifier.c:20

/*
 *	Notifier chain core routines.  The exported routines below
 *	are layered on top of these, with appropriate locking added.
 */

static int notifier_chain_register(struct notifier_block **nl,
				   struct notifier_block *n,
				   bool unique_priority)
{
	while ((*nl) != NULL) {
		if (unlikely((*nl) == n)) {
			WARN(1, "notifier callback %ps already registered",
			     n->notifier_call);
			return -EEXIST;
		}
		if (n->priority > (*nl)->priority)
			break;
		if (n->priority == (*nl)->priority && unique_priority)
			return -EBUSY;
		nl = &((*nl)->next);
	}
	n->next = *nl;
	rcu_assign_pointer(*nl, n);
	trace_notifier_register((void *)n->notifier_call);
	return 0;
}

Each callback can return one of the following values.

include/linux/notifier.h:18

#define NOTIFY_DONE		0x0000		/* Don't care */
#define NOTIFY_OK		0x0001		/* Suits me */
#define NOTIFY_STOP_MASK	0x8000		/* Don't call further */
#define NOTIFY_BAD		(NOTIFY_STOP_MASK|0x0002)
						/* Bad/Veto action */

When notifying the chain, if a callback returns NOTIFY_STOP or NOTIFY_BAD, then the callbacks already notified are called again with PM_POST_HIBERNATION[14] and an error is returned.

kernel/notifier.c:107

/**
 * notifier_call_chain_robust - Inform the registered notifiers about an event
 *                              and rollback on error.
 * @nl:		Pointer to head of the blocking notifier chain
 * @val_up:	Value passed unmodified to the notifier function
 * @val_down:	Value passed unmodified to the notifier function when recovering
 *              from an error on @val_up
 * @v:		Pointer passed unmodified to the notifier function
 *
 * NOTE:	It is important the @nl chain doesn't change between the two
 *		invocations of notifier_call_chain() such that we visit the
 *		exact same notifier callbacks; this rules out any RCU usage.
 *
 * Return:	the return value of the @val_up call.
 */
static int notifier_call_chain_robust(struct notifier_block **nl,
				     unsigned long val_up, unsigned long val_down,
				     void *v)
{
	int ret, nr = 0;

	ret = notifier_call_chain(nl, val_up, v, -1, &nr);
	if (ret & NOTIFY_STOP_MASK)
		notifier_call_chain(nl, val_down, v, nr-1, NULL);

	return ret;
}

Each of these callbacks tends to be quite driver-specific, so we’ll leave the discussion here.
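
As an illustration only: a driver can hook into this chain with register_pm_notifier from include/linux/suspend.h. The callback body below is hypothetical:

#include <linux/notifier.h>
#include <linux/suspend.h>

/* Hypothetical callback: quiesce the device before the hibernation
 * image is created, and undo that when rolling back. */
static int example_pm_notify(struct notifier_block *nb,
			     unsigned long action, void *data)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
		/* e.g. stop DMA and flush pending device work */
		return NOTIFY_OK;
	case PM_POST_HIBERNATION:
		/* e.g. resume normal operation */
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}

static struct notifier_block example_pm_nb = {
	.notifier_call = example_pm_notify,
};

/* In the driver's init path: register_pm_notifier(&example_pm_nb); */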

Sync Filesystems

The next step is to ensure all filesystems have been synchronized to disk.

This is performed via a simple helper function that times how long the full synchronization operation, ksys_sync, takes.

kernel/power/main.c:69

void ksys_sync_helper(void)
{
	ktime_t start;
	long elapsed_msecs;

	start = ktime_get();
	ksys_sync();
	elapsed_msecs = ktime_to_ms(ktime_sub(ktime_get(), start));
	pr_info("Filesystems sync: %ld.ld seconds\n",
		elapsed_msecs / MSEC_PER_SEC, elapsed_msecs % MSEC_PER_SEC);
}
EXPORT_SYMBOL_GPL(ksys_sync_helper);

ksys_sync wakes and instructs a set of flusher threads to write out every filesystem: first their inodes[15], then the full filesystems, and finally all block devices, ensuring all pages are written out to disk.

fs/sync.c:87

/*
 * Sync everything. We start by waking flusher threads so that most of
 * writeback runs on all devices in parallel. Then we sync all inodes reliably
 * which effectively also waits for all flusher threads to finish doing
 * writeback. At this point all data is on disk so metadata should be stable
 * and we tell filesystems to sync their metadata via ->sync_fs() calls.
 * Finally, we writeout all block devices because some filesystems (e.g. ext2)
 * just write metadata (such as inodes or bitmaps) to block device page cache
 * and do not sync it on their own in ->sync_fs().
 */
void ksys_sync(void)
{
	int nowait = 0, wait = 1;

	wakeup_flusher_threads(WB_REASON_SYNC);
	iterate_supers(sync_inodes_one_sb, NULL);
	iterate_supers(sync_fs_one_sb, &nowait);
	iterate_supers(sync_fs_one_sb, &wait);
	sync_bdevs(false);
	sync_bdevs(true);
	if (unlikely(laptop_mode))
		laptop_sync_completion();
}

It follows an interesting pattern of using iterate_supers to run both sync_inodes_one_sb and then sync_fs_one_sb on each known filesystem[16]. It also calls both sync_fs_one_sb and sync_bdevs twice: first without waiting for any operations to complete, and then again waiting for completion[17].
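
The per-superblock helpers are small; a sketch of sync_fs_one_sb (close to, though not necessarily verbatim from, fs/sync.c) shows how the wait flag is threaded through:

/* Sketch: invoke the filesystem's ->sync_fs() on each writable
 * superblock, passing through whether to wait for completion. */
static void sync_fs_one_sb(struct super_block *sb, void *arg)
{
	if (!sb_rdonly(sb) && sb->s_op->sync_fs)
		sb->s_op->sync_fs(sb, *(int *)arg);
}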

When laptop_mode is enabled, the system triggers a full synchronization after the configured delay elapses without any disk activity.

mm/page-writeback.c:111

/*
 * Flag that puts the machine in "laptop mode". Doubles as a timeout in jiffies:
 * a full sync is triggered after this time elapses without any disk activity.
 */
int laptop_mode;

EXPORT_SYMBOL(laptop_mode);

However, the writes issued by a synchronization operation will themselves schedule another writeback after the laptop_mode delay. We don’t want the state of the system to change at all while performing hibernation, so we cancel those timers.

mm/page-writeback.c:2198

/*
 * We're in laptop mode and we've just synced. The sync's writes will have
 * caused another writeback to be scheduled by laptop_io_completion.
 * Nothing needs to be written back anymore, so we unschedule the writeback.
 */
void laptop_sync_completion(void)
{
	struct backing_dev_info *bdi;

	rcu_read_lock();

	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
		del_timer(&bdi->laptop_mode_wb_timer);

	rcu_read_unlock();
}

As a side note, ksys_sync is exactly what runs when the sync system call is invoked.

fs/sync.c:111

SYSCALL_DEFINE0(sync)
{
	ksys_sync();
	return 0;
}

The End of Preparation

With that, the system has finished its preparations for hibernation. This is a somewhat arbitrary cutoff, but next the system will begin a full freeze of userspace, then dump memory out to an image, and finally perform the hibernation itself. All of this will be covered in future articles!

  1. Hibernation modes are outside of scope for this article, see the previous article for a high-level description of the different types of hibernation. 

  2. Workqueues are a mechanism for running asynchronous tasks. A full description of them is a task for another time, but the kernel documentation on them is available here: https://www.kernel.org/doc/html/v6.9/core-api/workqueue.html 

  3. This is a bit of an oversimplification, but since this isn’t the main focus of this article this description has been kept to a higher level. 

  4. Kconfig is Linux’s build configuration system that sets many different macros to enable/disable various features. 

  5. Kconfig defaults to the first default found. 

  6. This includes checking whether the algorithm is “larval”, which appears to indicate that it requires additional setup; an interesting choice of name for such a state. 

  7. Specifically process freezing, which we’ll get to in the next article in this series. 

  8. Swap space is outside the scope of this article, but in short it is a buffer on disk that the kernel uses to store memory not currently in use, to free up space for other things. See Swap Management for more details. 

  9. The code for this is lengthy and tangential, thus it has not been included here. If you’re curious about the details, see kernel/power/hibernate.c:858 for the details of hibernate_quiet_exec, and drivers/nvdimm/core.c:451 for how it is used in nvdimm. 

  10. Annoyingly this code appears to use the terms “console” and “virtual terminal” interchangeably. 

  11. ioctls are special device-specific I/O operations that permit performing actions outside of the standard file interactions of read/write/seek/etc. 

  12. I’m not entirely clear on how this flag works, this subsystem is particularly complex. 

  13. In this case a higher number is higher priority. 

  14. Or whatever the caller passes as val_down, but in this case we’re specifically looking at how this is used in hibernation. 

  15. An inode refers to a particular file or directory within the filesystem. See Wikipedia for more details. 

  16. Each active filesystem is registered with the kernel through a structure known as a superblock, which contains references to all the inodes contained within the filesystem, as well as function pointers to perform the various required operations, like sync. 

  17. I’m including minimal code in this section, as I’m not looking to deep dive into the filesystem code at this time. 

08 September, 2024 12:00AM

September 07, 2024

Sergio Durigan Junior

Chatting in the 21st century

Several people have been asking me to explain and/or write about my solution for chatting nowadays. I realize that the current scenario is much more complex than, say, 10 or 20 years ago. Back then, this post would probably be more about the IRC client I used than about different chatting technologies.

I have also spent a non-trivial amount of time setting things up the way I want, so I understand that it’s about time to write about my setup, not only because I think it can be helpful to others, but also because I would like to document things for myself.

The backbone: Matrix

I chose to use Matrix as the place where I integrate everything. Despite there being some heavy (and justified) criticism of the protocol itself, it serves me well for what I need right now. Obviously, I don’t like the fact that I have to provide Matrix and all of its accompanying bridges a VPS with 4GB of RAM and 3 vCPUs, but I think that that ship has sailed, unfortunately.

In an ideal world, I would be using XMPP and dedicating only a fraction of the resources I’m using today to have a full chat system. And since I have been running my personal XMPP server for more than a decade now, I did try to find a solution that would allow me to keep using it, but unfortunately the protocol became almost a hobbyist thing, so there’s that.

A few disclaimers

I self-host everything, including my Matrix server. Much of what I did won’t work if you don’t self-host Matrix, so keep that in mind.

This won’t be a post teaching you how to deploy the services. My intention is to describe what I use and for what purpose.

Also, as much as I try to use Debian packages for everything I do, I opted to deploy all services using a community-maintained Ansible playbook which is very well written and organized: matrix-docker-ansible-deploy.

Last but not least, as I said above, you will likely need a machine with a good amount of RAM, CPU and storage, especially if you deploy Synapse as your Matrix homeserver (which is what I recommend if you plan to use the bridges I’ll mention). My current VPS has 4GB of RAM, 3 vCPUs and 80GB of storage (of which I’m currently using approximately 55GB).

Problem #1: my Matrix client(s)

There are a lot of clients that can talk the Matrix protocol, but most of them are either web clients or GUI programs. I live on the terminal, more specifically inside Emacs, so I settled for the amazing ement.el Emacs mode. It works surprisingly well, but unfortunately doesn’t support end-to-end encryption out of the box; for that, you have to hook it up with pantalaimon. Unfortunately, that project seems abandoned, and therefore I don’t recommend using it. I don’t use it myself.

When I have to reply to some E2E-encrypted message from another user, I go to my web browser and use my self-hosted Element client. It’s a nuisance, but one that I’m willing to accept because of security concerns.

If you’re into web clients and don’t want to use Element (because it is heavy), you can try Cinny. It’s lightweight and supports a decent set of features.

If you’re a terminal lover but don’t use Emacs, you may want to try gomuks or iamb.

Problem #2: IRC bridging

There are basically two types of IRC bridges for Matrix:

  • The regular and most used matrix-appservice-irc. This bridge takes Matrix to IRC (think of IRC users with the [m] suffix appended to their nicknames), and is what the matrix.org and other big homeservers (including matrix.debian.social) use. It’s a complex service that allows thousands of Matrix users to connect to IRC networks, but it unfortunately has complex problems of its own and is only worth using if you intend to host a community server.

  • A bouncer-like bridge called Heisenbridge. This is what I use personally. It takes IRC to Matrix, which means that people on IRC will not know that you’re using Matrix. This bridge is much simpler, and because it acts like a bouncer it’s pretty much impossible for it to cause problems with the IRC network.

Due to the fact that I sometimes like to use other IRC clients, I still run a regular ZNC bouncer, and I use Heisenbridge to connect to my ZNC. This means that I can use, e.g., ERC inside Emacs and my Matrix bridge at the same time. But you don’t necessarily need to run another bouncer; you can simply use Heisenbridge and connect directly to the IRC network(s) you want.

A word of caution, though: unlike ZNC, Heisenbridge doesn’t support per-user configuration when you use it in bouncer mode. This is the reason why you need to self-host it, and why it’s not possible to offer the service to other users (they would have access to your IRC network configuration otherwise).

It’s also worth talking about logs. I find that keeping logs of everything that goes on IRC has saved me a bunch of times, and so I find it really important to continue doing that. Unfortunately, neither ement.el nor Element support logging things out of the box (at least not that I know). This is also one of the reasons why I still keep my ZNC around: I configure it to log everything.

Problem #3: Telegram

I don’t use Telegram myself, but unfortunately several people from the Debian community do, especially in Brazil. There is a whole Debian community on Telegram, and I wanted to be able to bridge our Debian Matrix channels to their Telegram counterparts.

I am currently using mautrix-telegram for that, and it’s working great. You need someone with a Telegram account to configure their credentials so that the bridge can connect to it, but afterwards it’s really easy to bridge channels together.

Problem #4: GitLab webhooks

Something else I wanted to be able to do was to receive notifications regarding new issues, merge requests and other activities from Salsa. For this, I’m using maubot, which is awesome and has a huge list of plugins. I’m using the gitlab one.

Final thoughts

Overall, I’m satisfied with the setup I have now. It has certainly taken some time and effort to find the right tool for each problem I needed to solve, and I still feel like there are some rough edges to soften (like the fact that my Emacs client doesn’t support E2E encryption out of the box, or the whole logging situation), but otherwise things are working fine and I haven’t had any big problems with the deployment. You do have to be much more careful about stuff (for example, when I installed an unrelated service that “hijacked” my Apache configuration and made Matrix’s federation silently stop working), though.

If you have more specific questions about any part of my setup, shoot me an email and I’ll do my best to help.

Happy chatting!

07 September, 2024 09:25PM

Paul Wise

FLOSS Activities August 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Regressions in adequate (1 2 3 4)

Review

  • Debian publicity: reviewed Debian birthday post

Administration

  • Debian servers: contributed to a review of current Debian Partners

Communication

  • Respond to queries from Debian users and contributors on IRC

Sponsors

All work was done on a volunteer basis.

07 September, 2024 04:56AM

September 05, 2024

Sandro Tosi

TL;DR belongs at the top of an article

 TL;DR

  • if you are writing an article and plan to add a TL;DR section, then put it at the very top, right after the title.
  • that's it, no excuses, end of discussion.
It has probably happened to everyone: you read an article, reach the end of it only to see a TL;DR section right at the bottom, and think: "eeh, I wish this had been at the top so I didn't have to read (DR) this long article (TL) to gather its core ideas".

If the reason for "Too Long; Didn't Read" to exist is to spare the reader from going through the whole article to get its main points, then the natural place to present it is at the very top of said article.

So if you're planning on writing something and adding a TL;DR section (you don't have to, of course, but if you do take on that work too) then please position it at the very beginning of your work.

05 September, 2024 06:21AM by Sandro Tosi ([email protected])

September 04, 2024

Reproducible Builds

Reproducible Builds in August 2024

Welcome to the August 2024 report from the Reproducible Builds project!

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. LWN: The history, status, and plans for reproducible builds
  2. Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
  3. Distribution news
  4. Mailing list news
  5. diffoscope
  6. Website updates
  7. Upstream patches
  8. Reproducibility testing framework

LWN: The history, status, and plans for reproducible builds

The free software newspaper of record, Linux Weekly News, published an in-depth article based on Holger Levsen’s talk, Reproducible Builds: The First Eleven Years which was presented at the recent DebConf24 conference in Busan, South Korea.

Titled The history, status, and plans for reproducible builds and written by Jake Edge, LWN’s article not only summarises Holger’s talk and clarifies its message but it links to external information as well. Holger’s original talk can also be watched on the DebConf24 webpage (direct .webm link and his HTML slides are available also). There are also a significant number of comments on LWN’s page as well.

Holger Levsen also headed a scheduled discussion session at DebConf24 on Preserving *other* build artifacts addressing a topic where a number of Debian packages produce (or would like to produce) results that are neither the .deb files, the build logs nor the logs of CI tests. This is an issue for reproducible builds, as artifacts of this “4th type” are typically shipped within the binary .deb packages, and are invariably non-deterministic, thus making the .deb files unreproducible. (A direct .webm link and HTML slides are available).


Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs

Peter Eisentraut wrote a detailed blog post on the subject of “The new PostgreSQL 17 make dist”. Like many projects, the PostgreSQL database has previously shipped pre-built parts of its GNU Autotools build system: “the reason for this is a mix of convenience and traditional practice”. Peter astutely notes that this arrangement in the build system is “quite tricky” as:

You need to carefully maintain the different states of “clean source code”, “partially built source code”, and “fully built source code”, and the commands to transition between them.

However, Peter goes on to mention that:

… a lot more attention is nowadays paid to the software supply chain. There are security and legal reasons for this. When users install software, they want to know where it came from, and they want to be sure that they got the right thing, not some fake version or some version of dubious legal provenance.

He also cites the XZ Utils backdoor as a reason to care about transparent and reproducible ways of distributing and communicating a source tarball and its provenance. Because of this, intermediate build artifacts are henceforth essentially disallowed from PostgreSQL distribution tarballs.

Distribution news

In Debian this month, 30 reviews of Debian packages were added, 17 were updated and 10 were removed, adding to our knowledge about identified issues. One issue type was added by Chris Lamb, too. []

In addition, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org.


In Arch Linux this month, Jelle van der Waa published a short blog post on the topic of Investigating creating reproducible images with mkosi, motivated by the desire to make it possible for anyone to “re-recreate the official Arch cloud image bit-by-bit identical on their own machine as per [the] reproducible builds definition.” In addition, Jelle filed a patch for pacman, the Arch Linux package manager, to respect the SOURCE_DATE_EPOCH environment variable when installing a package.


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In Android news, the IzzyOnDroid project added 49 new rebuilder recipes and now features 256 total reproducible applications representing 21% of the total offerings in the repository. IzzyOnDroid is “an F-Droid style repository for Android apps[:] applications in this repository are official binaries built by the original application developers, taken from their resp. repositories (mostly GitHub).”


Mailing list news

From our mailing list this month:

  • Bernhard M. Wiedemann posted a brief message to the list with some helpful information regarding nondeterminism within Rust binaries, suggesting that the codegen-units = 16 default is involved, which resulted in a bug being filed in the Rust issue tracker. []

  • Bernhard also wrote to the list, following up to a thread in November 2023, on attempts to make the LibreOffice suite of office applications build reproducibly. In the thread from this month, Bernhard could announce that the four patches previously mentioned have landed in LibreOffice upstream.

  • Fay Stegerman linked the mailing list to a thread she made on the Signal issue tracker regarding whether “device-specific binaries [can] ever be considered meaningfully reproducible”. In particular: “the whole part about ‘allow[ing] multiple third parties to come to a consensus on a “correct” result’ breaks down completely when ‘correct’ is device-specific and not something everyone can agree on.” []

  • Developer kpcyrd posted an update for the source code indexing project whatsrc.org, announcing that it is now importing packages from live-bootstrap (“a usable Linux system [that is] created with only human-auditable, and wherever possible, human-written, source code”) into its database of provenance data.

  • Lastly, Mechtilde Stehmann posted an update to an earlier thread about how Java builds are not reproducible on the armhf architecture, enquiring how they might gain temporary access to such a machine in order to perform some deeper testing. []


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb released versions 274, 275, 276 and 277, uploaded these to Debian, and made the following changes as well:

  • New features:

    • Strip ANSI escapes—usually colour codes—from the output of the Procyon Java decompiler. []
    • Factor out a method for stripping ANSI escapes. []
    • Append output from dumppdf(1) in more cases, avoiding situations where we fallback to a binary diff. []
    • Add support for versions of Perl’s IO::Compress::Zip version 2.212. []
  • Bug fixes:

    • Also catch RuntimeError exceptions when importing the PyPDF library so that it, or, crucially, its transitive dependencies, cannot cause diffoscope to traceback at runtime and build time. []
    • Do not call marshal.load(…) of precompiled Python bytecode as it is, alas, inherently unsafe. Replace it for now with a brief summary of the code section of .pyc files. [][]
    • Don’t include excessive debug output when calling dumppdf(1). []
  • Testsuite-related changes:

    • Don’t bother to check version number in test_python.py: the fixture for this test is fixed. [][]
    • Update test_zip text fixtures and definitions to support new changes to the Perl IO::Compress library. []

In addition, Mattia Rizzolo updated the available architectures for a number of test dependencies [], Sergei Trofimovich fixed an issue to avoid diffoscope crashing when hashing directory symlinks [], and Vagrant Cascadian proposed GNU Guix updates for diffoscope versions 275, 276 and 277.


Website updates

There were a rather substantial number of improvements made to our website this month, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:

  • Temporarily install the openssl-provider-legacy package for the Debian unstable environments for running diffoscope due to Debian bug #1078944. [][][][]
  • Mark Debian armhf architecture nodes as being down due to their proxy being down. [][]
  • Detect proxy failures. [][][]
  • Run the index-buildinfo for the builtin-pho script with the -q switch. []
  • Disable all Arch Linux reproducible jobs. []

In addition, Mattia Rizzolo updated the website configuration to install the ruby-jekyll-sitemap package as it is now used in the website [], Roland Clobus updated the script to build Debian ‘live’ images to treat openQA issues as warnings [], and Vagrant Cascadian marked the cbxi4b node as down [].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

04 September, 2024 01:27PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

loading (unintended consequences?)

For their 30th anniversary (ish; the Covid pandemic pushed the date out a bit) British electronic music duo Orbital released the compilation 30 something. The track list mostly looks like a greatest-hits list, which — given their prior compilation celebrating 20 years looks much the same — would appear superfluous. However, they’ve rearranged and re-recorded all their songs for 30, to reflect their live arrangements. The reworkings are sufficiently distinct from the original versions (in some cases I prefer them) and elevate the release. The couple of new tracks are also fun, and many of the remixes on the second disc are worth a listen too.

cover art from Orbital - 30 Something

But what I actually sat down to write about was the cover artwork. They often have designs which riff on the notion of a circle (given their name) and the 30-something art (both for the album and single takes from it) adapts a “loading” spinner-like device from computing (I suppose it most closely resembles the spinner from macOS).

A possibly unintended effect of the pattern occurs when you view it on a display which is adjusting its brightness, such as if you’re listening to it on a phone, the screen is off, and you pick it up. The brightest part of the spinner is visible first, and the rest fade into visibility in sequence. The first time you see this is unexpected and very cool. (I've tried to recreate it in the picture below, but I don't think it's worked.)

Although I've suffixed the title of this post with "unintended consequences?", it's quite possible this was deliberate.

screenshot of the artwork displayed on my phone

I’ve got the pattern on a t-shirt and my kids love to call out “Daddy’s loading!” In my convalescence it’s taken on a special sort of resonance because at times I’ve felt I’m in a holding state: waiting for an appointment to be made; waiting a polite interval before chasing an appointment; waiting for treatment to start after attending an appointment. Thankfully I’m at the end of that now, I hope.

04 September, 2024 09:06AM

hackergotchi for Samuel Henrique

Samuel Henrique

DebConf24 was fun!: Security, curl, wcurl, Debian's quality

A picture of a badger2040w with Samuel's badge and the curl manpage PCB on the side

tl;dr

DebConf24 was fun!

A playlist of all of my talks, with subtitles (en, pt-br) and chapters is available on YouTube.

Overview

DebConf24 was held in Busan, South Korea, between Sunday July 28th to Sunday August 4th 2024.

As usual for DebConfs, I had a great time meeting my friends, but also met new people and got to learn a bit about the interesting things they're working on.

I ended up getting too excited during the talk submission stage of the conference and as a result I presented 5 different activities (3 talks, 1 BoF and 1 lightning talk).

Since I was too busy with the presentations, I did not have a lot of time to actually hang out with folks, or even to go out in the city, I guess I've learned my lesson for next time.

The main purpose of this post is to write about all of the things I presented at the conference. I did want to list some of the interesting talks I watched, but I would not be able to do them justice, as I'm sure I would miss some.

You can get the schedule and the recordings of any talks from the conference's website: https://debconf24.debconf.org/schedule/

wcurl Lightning Talk

The most fun of my presentations: during the second-to-last day of the conference, I asked Sergio Durigan Junior <sergiodj> for help setting up a URL containing whitespace and redirecting it to wcurl's manpage.

I then did a little demo to showcase why I (and many others) struggle with downloading things with curl, and how wcurl solves that.

https://www.youtube.com/watch?v=eM8M5qa4pPM

Fixing CVEs on Debian: Everything you probably know already

I've always felt like DebConf was missing security-related talks, so I decided to do something about it and presented a few of the things I've learned when fixing CVEs for Debian.

This is an area where we don't get a lot of new contributors; I'm trying to change that, and this talk can be used to introduce newcomers to it.

https://www.youtube.com/watch?v=XzNVVILVyUM

The secret sauce of Debian

Debian is not very vocal about all of the nice things it has regarding quality-assurance, testing, or CI, even though it's at the state-of-the-art for a lot of things.

This talk is an initial step towards making people aware of the cool things happening behind the scenes. Ideally we should have it well-documented somewhere.

https://www.youtube.com/watch?v=x_X2IBnpjic

"I use Debian BTW": fzf, tmux, zoxide and friends

One of my earliest good memories of Debian was when it started coming with a colored PS1 by default, I still remember the feeling of relief whenever I jumped into a Debian server and didn't have to deal with a black and white PS1.

There's still a lot of room for Debian to ship better defaults, and I think some of them can actually happen.

This talk is a bit of a silly one where I'm just making people aware of the existence of a few Golang/Rust CLI tools, and also some dotfiles configurations that should probably be the default.

https://www.youtube.com/watch?v=tfto3Seokn4

curl

The curl project does such a great job with their security advisories that it will likely never receive the amount of praise it deserves, but I did my best at mentioning it throughout my CVEs talk.

Maybe I will write more extensively about this someday, but in case I don't:


There's no other project that so consistently mentions the exact range of commits that are affected by a given CVE.

Forget about whether the versions are EOL; curl doesn't have LTS releases, yet they do such a great job at clearly documenting their CVEs that I would take that over having LTS releases anytime (that's for curl at least, I acknowledge some types of projects have a different need for LTS releases).

Not only that, but they are also always careful about explaining alternative mitigations such as configuration changes, build flags that defuse the exploitation, or parameters that you should not use.


Just like we tend to do every time we meet, the other Debian curl maintainers and I spent the first 2 or 3 days of the conference talking about how we wanted to eventually meet up to discuss the package.

It was going to be informal, maybe during the Cheese and Wine party, but then I realized we should make it part of the official schedule, which would also give us the recordings for later.

And so the "curl maintainers BoF" happened, where we spoke about HTTP3, GnuTLS, wcurl and other things.

https://www.youtube.com/watch?v=fL7hSypUTdM

wcurl

Right after that BoF, Daniel Stenberg asked if we were interested in having wcurl adopted into curl, which we definitely were, so wcurl is now part of the curl project.

Daniel was also kind enough to design a logo for the project, which makes me especially happy because I can stop with my own attempts at a logo (which I had to redo every few days):

A laptop with a curl and a GoHorse sticker, there's a 'w' handwritten with a marker on the right side of the curl sticker, making it 'wcurl'

And here is the new logo:

'wcurl' written with the same font and colors as the curl logo, with the 'w' being green instead of blue, and a download icon at the end

Much better, I would say :)

curl Swag

DebConf24 was my chance at forwarding some curl swag items to the other curl maintainers, so both Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles> got the curl-up t-shirt and the very cool curl PCB coaster, both gifted by Daniel Stenberg.

Unfortunately I didn't have any of that for DebConf attendees, but I did drop loads of curl stickers at the stickers table, they were gone very quickly.

A table full of different stickers, curl stickers can be seen over the whole table

For the future

I used to think the most humbling experience you could have as someone who presented a talk was to have to watch it yourself: you notice a lot of mistakes and instantly think about things that should be done differently.

It turns out the most humbling thing to do is actually to write subtitles for your talks; I noticed every single mistake, often multiple times.

So after spending more than 30 hours writing the subtitles for both English and Brazilian Portuguese for my talks, I feel like it's going to be much easier to avoid committing the same mistakes again. After some time you stop feeling shame about those mistakes and you're just left with feelings of annoyance, and at that point it becomes easier to consciously avoid them.

I am collecting a list of things I wish I had done differently on all of those talks, so if I end up presenting any one of them again, it will be an improved version.

A picture from the top of a group of conference attendees, there's about 150 people in the picture

04 September, 2024 12:00AM by Unknown

September 02, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Free and open source software and other market failures

This post is a review for Computing Reviews of Free and open source software and other market failures, an article published in Communications of the ACM.

Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS’s history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the “steam” it once had.

The author presents several examples from different industries where, as happened with FOSS in computing, fundamental innovations came about not because the leading companies of each field were attentive to customers’ needs but, to a certain degree, despite them not even considering those needs, typically due to the hubris that comes from being a market leader.

Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible Unix systems of the 1980s. Different companies had set out to implement their particular flavor of “open Unix computers,” but with clear examples of vendor lock-in techniques. He speculates that, “if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs … nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby.” He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.

The focus of the article shifts then to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the “serious market” and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion?

Kamp suggests that the strength and vitality that LUGs had ultimately reflected the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement diluted. “The frustrations and anger of [information technology, IT] in 2024,” Kamp writes, “are entirely different from those of 1991.” As an example, the author closes by citing the difficulty of maintaining–despite having the resources to do so–an aging legacy codebase that needs to continue working year after year.

02 September, 2024 07:08PM

hackergotchi for Jonathan Carter

Jonathan Carter

Debian Day South Africa 2024

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV – nice for presentations, demos, etc. But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially as a band started practicing and doing sound tests for their performance that evening). It also turned out our space was double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them, so we had to keep it short. Next time we'll probably have some party line open that any tester can join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year; I guess, like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time, and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There’s still a few minor annoyances, but over all, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces them sooner rather than later. Also, KDE Plasma 6 hasn’t yet made its way into unstable; I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submission period for trixie ends in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us at -localgroups on the OFTC IRC network, also plumbed on Matrix at -localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

02 September, 2024 01:01PM by jonathan

September 01, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

these are my bits from DPL for August.

Happy Birthday Debian

On the 16th of August, Debian celebrated its 31st birthday. Since I'm unable to write a better text than our great publicity team, I'm simply linking to their article for those who might have missed it:

https://bits.debian.org/2024/08/debian-turns-31.html

Removing more packages from unstable

Helmut Grohne argued for more aggressive package removal and sought consensus on a way forward. He provided six examples of processes where packages that are candidates for removal are consuming valuable person-power. I’d like to add that the Bug of the Day initiative (see below) also frequently encounters long-unmaintained packages with popcon votes sometimes as low as zero, and often fewer than ten.

Helmut's email included a list of packages that would meet the suggested removal criteria. There was some discussion about whether a popcon vote should be included in these criteria, with arguments both for and against it. Although I support including popcon, I acknowledge that Helmut has a valid point in suggesting it be left out.

While I’ve read several emails in agreement, Scott Kitterman made a valid point "I don't think we need more process. We just need someone to do the work of finding the packages and filing the bugs." I agree that this is crucial to ensure an automated process doesn’t lead to unwanted removals. However, I don’t see "someone" stepping up to file RM bugs against other maintainers' packages. As long as we have strict ownership of packages, many people are hesitant to touch a package, even for fixing it. Asking for its removal might be even less well-received. Therefore, if an automated procedure were to create RM bugs based on defined criteria, it could help reduce some of the social pressure.

In this aspect the opinion of Niels Thykier is interesting: "As much as I want automation, I do not mind the prototype starting as a semi-automatic process if that is what it takes to get started."

Charles Plessy put the urgency of the problem of removing packages into words: "So as of today, it is much less work to keep a package rotting than removing it." My observation when trying to fix the Bug of the Day exactly matches this statement.

I would love for this discussion to lead to more aggressive removals that we can agree upon, whether they are automated, semi-automated, or managed by a person processing an automatically generated list (supported by an objective procedure). To use an analogy: I’ve found that every image collection improves with aggressive pruning. Similarly, I’m convinced that Debian will improve if we remove packages that no longer serve our users well.

DEP14 / DEP18

There are two DEPs that affect our workflow for maintaining packages—particularly for those who agree on using Git for Debian packages. DEP-14 recommends a standardized layout for Git packaging repositories, which benefits maintainers working across teams and makes it easier for newcomers to learn a consistent repository structure.

DEP-14 stalled for various reasons. Sam Hartman suspected it might be because 'it doesn't bring sufficient value.' However, the assumption that git-buildpackage is incompatible with DEP-14 is incorrect, as confirmed by its author, Guido Günther. One of the two key tools for Debian Git repositories (besides dgit), it fully supports DEP-14, though the migration from the previous default is somewhat complex.

Some investigation into mass-converting older formats to DEP-14 was conducted by the Perl team, as Gregor Hermann pointed out.

The discussion about DEP-14 resurfaced with the suggestion of DEP-18. Guido Günther proposed the title ‘Encourage Continuous Integration and Merge Request-Based Collaboration for Debian Packages’, which more accurately reflects the DEP's technical intent.

Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good summary of the current status. He also assembled a very helpful overview of Git and GitLab usage in other Linux distros.

More Salsa CI

As a result of the DEP-18 discussion, Otto Kekäläinen suggested implementing Salsa CI for our top popcon packages.

I believe it would be a good idea to enable CI by default across Salsa whenever a new repository is created.

Progress in Salsa migration

In my campaign, I stated that I aim to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;).

After a third of my DPL term (OMG), we've made significant progress, reducing the amount in question (369 packages) by nearly half. I'm pleased with the support from the DDs who moved their packages to Salsa. Some packages were transferred as part of the Bug of the Day initiative (see below).

Bug of the Day

As announced in my 'Bits from the DPL' talk at DebConf, I started an initiative called Bug of the Day. The goal is to train newcomers in bug triaging by enabling them to tackle small, self-contained QA tasks. We have consistently identified target packages and resolved at least one bug per day, often addressing multiple bugs in a single package.

In several cases, we followed the Package Salvaging procedure outlined in the Developers Reference. Most instances were either welcomed by the maintainer or did not elicit a response. Unfortunately, there was one exception where the recipient of the Package Salvage bug expressed significant dissatisfaction. The takeaway is to balance formal procedures with consideration for the recipient’s perspective.

I'm pleased to confirm that the Matrix channel has seen an increase in active contributors. This aligns with my hope that our efforts would attract individuals interested in QA work. I’m particularly pleased that, within just one month, we have had help with both fixing bugs and improving the code that aids in bug selection.

As I aim to introduce newcomers to various teams within Debian, I also take the opportunity to learn about each team's specific policies myself. I rely on team members' assistance to adapt to these policies. I find that gaining this practical insight into team dynamics is an effective way to understand the different teams within Debian as DPL.

Another finding from this initiative, which aligns with my goal as DPL, is that many of the packages we addressed are already on Salsa but have not been uploaded, meaning their VCS fields are not published. This suggests that maintainers are generally open to managing their packages on Salsa. For packages that were not yet on Salsa, the move was generally welcomed.

Publicity team wants you

The publicity team has decided to resume regular meetings to coordinate their efforts. Given my high regard for their work, I plan to attend their meetings as frequently as possible, which I began doing with the first IRC meeting.

During discussions with some team members, I learned that the team could use additional help. If anyone interested in supporting Debian with non-packaging tasks reads this, please consider introducing yourself to [email protected]. Note that this is a publicly archived mailing list, so it's not the best place for sharing private information.

Kind regards, Andreas.

01 September, 2024 10:00PM by Andreas Tille