Running a Collabora server behind caddy-docker-proxy
Just a quick cheatsheet-ish post as I figure out how to move my home server off nginx and onto caddy-docker-proxy, which I hope will make configuration easier in the long run. Figuring out what labels to attach to get Caddy to do what I want is a minor stumbling block.
In this case, it’s working with the Collabora server that my Nextcloud instance uses to enable online editing of office-type files. The sticking point is that Collabora complains if Nextcloud connects over plain HTTP, but Caddy (through which Nextcloud will connect) complains about the Collabora server’s self-signed SSL certificate.
This docker-compose.yml is what I ended up using:
services:
  collabora:
    image: collabora/code
    container_name: collabora
    restart: unless-stopped
    environment:
      extra_params: --o:ssl.enable=true
    networks:
      - www
    labels:
      caddy: collabora.alfter.us
      caddy.reverse_proxy: https://collabora.www:9980
      caddy.reverse_proxy.transport: http
      caddy.reverse_proxy.transport.tls_insecure_skip_verify:
networks:
  www:
    name: www
    external: true
The “www” network connects Nginx and Caddy (eventually just Caddy) to all of the containers to be proxied. The first two labels are pretty normal, but the last two are what tell Caddy to ignore Collabora’s self-signed certificate. The part of the Caddyfile that handles collabora.alfter.us ends up looking something like this:
collabora.alfter.us {
    reverse_proxy https://collabora.www {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
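A quick way to confirm the whole chain works is to hit Collabora’s WOPI discovery endpoint through Caddy; the hostname comes from the example above, and /hosting/discovery is the standard Collabora Online discovery path:

curl -sI https://collabora.alfter.us/hosting/discovery
# an HTTP 200 here means Caddy reached Collabora despite the self-signed certificate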
A quick little project
This is a single-board computer that boots to a BASIC prompt, like the 8-bitters of ~40 years ago. It’s built around a Raspberry Pi Pico, produces VGA video output, and has an SD-card slot for program storage and a PS/2 keyboard jack. It’s a PicoMiteVGA implementation that I knocked together fairly quickly and sent out for manufacturing. The only thing missing is the keyboard, though I threw some one-liners at it through the USB port, which shows up as a serial interface.
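For reference, the USB serial port can be opened with any terminal program; the device name below is just the usual guess for a Pico’s CDC port on Linux, not something specific to this board:

screen /dev/ttyACM0 115200

From the MMBasic prompt that greets you, a one-liner like FOR i=1 TO 5:PRINT "HELLO":NEXT runs immediately.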
I have parts to build four more. They might show up on Tindie, once I verify that the keyboard input works. The KiCad files are over here.
z88dk: a quick way to get it running
I’m looking to bring up my RC2014-compatible computer project, now that I’ve assembled the minimal set of boards to have a working computer (CPU board, RAM/ROM board, and serial I/O board). The toolchain for building software for these systems is z88dk. It has a non-standard build system, and getting it to build on Gentoo Linux looks to be a bit hairy. Fortunately, there’s a Docker image with all of the tools, and a handful of wrapper scripts makes it usable as though it were installed locally. Running the following from within /usr/local/bin should do the trick:
(echo '#!/usr/bin/env bash'; echo 'docker run -v .:/src -it --rm z88dk/z88dk "$@"') | sudo tee z88dk >/dev/null
sudo chmod +x z88dk
for i in zcc z88dk-sccz80 z88dk-zsdcc z88dk-z80asm z88dk-z80nm z88dk-zobjcopy \
         z88dk-appmake z88dk-ticks z88dk-gdb z88dk-dis z88dk-lib \
         z88dk-zx0 z88dk-zx7 z88dk-dzx0 z88dk-dzx7; do
  (echo '#!/usr/bin/env bash'; echo 'z88dk '$i' "$@"') | sudo tee $i >/dev/null
  sudo chmod +x $i
done
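With the wrappers in place, the tools run from a project directory as though they were installed locally; only the current directory is mounted into the container (and the relative .:/src mount may need a newer Docker CLI; older ones want an absolute path like $(pwd)). The target and options below are just an illustration, not from any particular project:

cd ~/src/hello                          # hypothetical project directory containing hello.c
zcc +cpm hello.c -o hello -create-app   # should leave a CP/M-ready HELLO.COM alongside the source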
Disabling RAID on HP Smart Array P420i controllers
At work, I have an old HP ProLiant DL385p Gen8 that had been running VMware ESXi for over a decade. We’ve migrated VMs off of it onto newer servers, so it was sitting idle. Broadcom’s recent acquisition of VMware and their plans for the product (the end of perpetual licensing, in particular) prompted a search for a replacement.
I nuked VMware off the server and installed Proxmox. I got it up and running with a mix of Windows and Linux VMs (including Windows 11 VMs running on the Opterons in this server, with no gripes about CPU compatibility, TPM availability, or anything…nice!) and a Docker host running some containers. Then I wanted to enable Ceph on Proxmox, but the RAID controller (an HP Smart Array P420i) was getting in the way. Ceph doesn’t like to work with hardware RAID.
I deleted the existing array (eight 600GB drives in RAID-6) and set about disabling RAID on the controller. This requires booting into an OS of some sort (such as a Linux distro on a USB stick) and running a command-line tool…no big deal. When I first tried it, though, it didn’t work; the firmware on the card was probably whatever shipped when we bought the server all those years ago, and information I’d read online suggested that a firmware upgrade would add support for HBA mode. So I set out to take care of that. This post covers two topics:
- Upgrading the controller firmware to make HBA mode available. (I’m targeting the Smart Array P420i, but the methods should work for other controllers as long as you can find the firmware and it’s not paywalled by HPE.)
- Toggling HBA mode on.
Upgrading the firmware
You’ll need a Windows PE bootable distribution (I used Hiren’s BootCD PE, but any would likely work) and the online firmware flash tool for Windows. (If you have a different controller, you can probably find it through the search box at the second link.) Put each of them on separate USB sticks.
Boot your WinPE stick. Unplug it (it’s running in RAM once you get to the desktop), plug in the stick with the firmware flash tool, and run the tool. It’ll take a few minutes to do its thing, and the server’s fans will start screaming for a bit while the controller firmware is being updated. When done, you’ll be prompted to reboot. Eject the stick and reboot.
Disabling RAID
For this, you’ll need a Debian live install image. Blast it onto a USB stick with the tool of your choice (these days, I normally use the Raspberry Pi Imager for this, even for stuff that’s not ultimately running on a Raspberry Pi). Debian offers several desktop options to choose from; LXDE works and should be lighter than most.
Boot your Debian stick and wait for it to get to the desktop. Open a shell window and enter the following to install ssacli, the command-line tool that manages Smart Array controllers:
sudo bash -l
apt update && apt install -y wget
echo "deb http://downloads.linux.hpe.com/SDR/repo/mcp jammy/current non-free" > /etc/apt/sources.list.d/hp-mcp.list
wget -q -O - https://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/hpPublicKey1024.gpg
wget -q -O - https://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/hpPublicKey2048.gpg
wget -q -O - https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/hpPublicKey2048_key1.gpg
wget -q -O - https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/hpePublicKey2048_key1.gpg
apt update && apt install -y ssacli
With ssacli installed, you can now use it to enumerate the installed controllers:
ssacli ctrl all show
Odds are you’ll just have one controller in slot 0:
Smart Array P420i in Slot 0 (Embedded) (sn: <REDACTED>)
To enable HBA mode, use this:
ssacli ctrl slot=0 modify hbamode=on
With the 3.xx firmware that was on there previously, this wouldn’t work…it’d gripe about HBA mode not being supported. With the v8.32-0 firmware it’s now running, it should work, and once the card is switched over, any attached drives should now show up as /dev/sdb, /dev/sdc, etc.
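To double-check that the switch took, and to see the newly exposed drives, something like this works (the exact wording of ssacli’s HBA-mode line escapes me, so grep loosely):

ssacli ctrl slot=0 show | grep -i hba   # should report HBA mode as enabled
lsblk                                   # the physical drives should now be listed individually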
Something to keep in mind
I’ve read the constitutions of a number of countries, including the Soviet Union’s. Now, some people are surprised to hear that they have a constitution, and it even supposedly grants a number of freedoms to its people. Many countries have written into their constitution provisions for freedom of speech and freedom of assembly. Well, if this is true, why is the Constitution of the United States so exceptional?
Well, the difference is so small that it almost escapes you, but it’s so great it tells you the whole story in just three words: We the people.
In those other constitutions, the Government tells the people of those countries what they’re allowed to do. In our Constitution, we the people tell the Government what it can do, and it can do only those things listed in that document and no others. Virtually every other revolution in history has just exchanged one set of rulers for another set of rulers. Our revolution is the first to say the people are the masters and government is their servant.
Ronald Reagan, State of the Union Address, 27 January 1987
Fun with the new server
Over the past couple of months, I’ve been migrating this site (and others) off of an older virtual server that ran Gentoo Linux onto a newer one running Alpine Linux and Docker. I have a pile of scripts that set up and maintain each service on the new server, and Ouroboros keeps most things up to date faster than an emerge -uNDv world && emerge --depclean ever did. For the services that Ouroboros doesn’t keep up to date (because they’re containers I generated myself), there’s a script that regenerates them with whatever upstream sources are current. So far, I’ve only been stung once when an upgrade of MariaDB to 11.3 broke most of my websites, but rolling it back to 11.2 set things right again.
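The regeneration script doesn’t need to be anything fancy; a minimal sketch along these lines (with a made-up directory layout, one compose project per self-built container) covers the idea:

#!/usr/bin/env bash
# rebuild self-built containers against whatever upstream sources are current
for d in /srv/containers/*/; do
  (cd "$d" && docker compose build --pull && docker compose up -d)
done
docker image prune -f   # drop the superseded image layers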
There are also some things that got revisited in the move that needed to be brought up to date. For instance, there used to be a mailing list called the Homebrew Digest. It’s been gone for some time now, but it had nearly 100k posts to it going back as far as the late ’80s covering all aspects of homebrewing. Someone else had an archive up, but it’s also succumbed to bit rot. 15 or so years ago, I put up my own searchable archive, but it had suffered from neglect and the resulting breakage from moving to newer versions of PHP. I’ve recently dusted it off, tweaked it where needed, and put it up again at hbd.beerandloafing.org. It was previously integrated into another website I’d not changed much since 2004, but has been broken out into its own site running in its own container.
That other site, www.beerandloafing.org, is now home to a BrewBlogger instance. That’s another piece of web code that hasn’t been maintained much…it still needs PHP 5 to run. Putting it in its own container made it easy to cater to its needs while the other PHP-based sites I have (including the HBD archive) can run on more up-to-date versions. (If you’re interested in running BrewBlogger on Docker, you can pull it from Docker Hub or grab the Dockerfile from my GitLab.) Once I got it running, I imported the database from the old server into it. Most of my recipes are still in ProMash, an even older piece of software and the main reason I have WINE running on my computers. (ProMash was written for Windows, but my computers spend 99%+ of their time running some form of Linux.) As I get around to it, I’ll transcribe the recipes and brew sessions from ProMash into BrewBlogger.
Cheatsheet: Raspberry Pi + Alpine Linux + OctoPrint
The last upgrade to Raspberry Pi OS (formerly Raspbian) broke the WiFi connection to my 3D printer in an inconvenient way, as it runs headless with just a 4-pin power-and-UART connection. I’ve been having pretty good luck lately with Alpine Linux on various systems (including a Docker host and a cluster of Raspberry Pi 4s I knocked together to try to wrap my head around Kubernetes), so I thought I’d put it on the Compute Module 4 that drives my printer. I backed up the OctoPrint config that was on it and got a replacement configuration running on a spare Raspberry Pi 3 at first. Here’s what needs to be done to put Alpine Linux on a Raspberry Pi and bring up OctoPrint on the resulting system.
- Download the Alpine Linux tarball that’s appropriate for your Raspberry Pi from here. (For the RPi 3 and up, you most likely want the aarch64 image. This is especially true for the RPi 4 and up with 4 GB or more of RAM.) Instructions on how to do this are available elsewhere, but the short version is that you want to unpack the tarball to a FAT-formatted MicroSD card (or USB stick if you’re targeting a Compute Module 4 with onboard eMMC), boot from the device, and run setup-alpine to install. Toward the end, you want to install it in “sys” mode. Reboot to bring up the new system, and then log in. You would’ve needed a monitor and keyboard to set up Alpine, but if you’ve set it up right, you can ssh into it from here on out.
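In case "unpack the tarball to a FAT-formatted MicroSD card" needs spelling out, it amounts to something like this on the machine used to prep the card (device name and Alpine version are placeholders):

sudo mount /dev/sdX1 /mnt                                # the card's FAT partition
sudo tar -xzf alpine-rpi-3.19.1-aarch64.tar.gz -C /mnt   # substitute the current release
sudo umount /mnt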
- Edit /etc/apk/repositories to enable the community repository. (This is needed for vcgencmd.)
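You can do this by hand with vi, or with a one-liner along these lines (the sed pattern assumes the community line is present but commented out, which is the default after setup-alpine):

doas sed -i 's,^#\(.*/community\)$,\1,' /etc/apk/repositories
doas apk update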
- Install needed software:
doas apk add gcc make musl-dev linux-headers libffi-dev nginx raspberrypi-utils-vcgencmd python3-dev
- Install OctoPrint:
python -m venv --upgrade-deps octoprint
octoprint/bin/pip install https://gitlab.alfter.us/salfter/marlin-binary-protocol/-/archive/v0.0.8/marlin-binary-protocol-v0.0.8.tar.gz
octoprint/bin/pip install octoprint
(The marlin-binary-protocol installation is needed if you want to use the Firmware Updater plugin and need to use its marlinbft driver. My printer uses a BTT SKR 1.4 Turbo, an LPC1769-based board that can use this driver for firmware uploads.)
- Edit /etc/fstab to disable tmpfs on /tmp. (Restoring backups from another OctoPrint instance will probably fail if you don’t.)
- Copy the following to /etc/nginx/nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream "octoprint" {
        server 127.0.0.1:5000;
    }

    upstream "mjpg-streamer" {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://octoprint/;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Scheme $scheme;
            proxy_http_version 1.1;
            client_max_body_size 0;
        }

        location /webcam/ {
            proxy_pass http://mjpg-streamer/;
        }

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
- Create /etc/local.d/octoprint.start with the following and make it executable:
#!/usr/bin/env ash
su - salfter -c "nohup /home/salfter/octoprint/bin/octoprint serve 2>&1 >/dev/null &"
(Change “salfter” to whatever user you’re using.)
- Create /etc/local.d/octoprint.stop with the following and make it executable:
#!/usr/bin/env ash
pkill octoprint
- Check /etc/inittab to make sure the serial console isn’t enabled.
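A quick way to check is to grep for a getty on the serial port; anything uncommented that turns up should be commented out or removed (the port may appear as ttyS0 or ttyAMA0):

grep -E 'ttyAMA0|ttyS0' /etc/inittab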
- Add the following to /boot/usercfg.txt:
enable_uart=1
gpu_mem=16
dtoverlay=pi3-disable-bt
[cm4]
otg_mode=1
Reboot so the changes take effect.
- Enable the nginx and local startup scripts:
doas rc-update add nginx
doas rc-update add local
- Add the user under which OctoPrint runs (in my case, that would be salfter) to whatever group /dev/ttyAMA0 belongs to (root, in my case).
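Finding the group and adding the user amounts to something like this (substitute your own user and whatever group ls reports; busybox adduser with two arguments adds an existing user to a group):

ls -l /dev/ttyAMA0
doas adduser salfter root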
- Edit /etc/doas.d/doas.conf to allow doas to work without a password (needed so OctoPrint can restart itself):
permit nopass :wheel
- Reboot and wait for OctoPrint to come up on port 80. Restore your backup (if you have one) and you’re done!
Cheatsheet: convert WRL to STEP with open-source tools
I had some electronic component models (obtained via easyeda2kicad) that I needed to convert to STEP so they’d show up when I brought a board using these models into FreeCAD with KicadStepUp. I tried doing the job in FreeCAD alone, but while WRL files are basically meshes (not too different in theory from STL files), FreeCAD’s mesh-to-shape conversion wanted nothing to do with them. I went searching for alternatives.
I ended up using Wings3D to convert from WRL to STL, and then used FreeCAD to convert from STL to STEP:
- Import the WRL file into Wings3D. Check the “swap X & Y axes” box and set the import scale to 2.54. Export to STL.
- Import the STL file you just created into FreeCAD.
- (optional) Switch to the mesh workbench and decimate the mesh (Meshes -> Decimation…).
- Switch to the part workbench, create a shape from the mesh (Part -> Create shape from mesh…, then make sure “sew shape” is checked), and convert the shape to a solid (Part -> Convert to solid).
- Export the solid to STEP.
These models definitely won’t look as nice as models created from a proper CAD workflow (you can even import from OpenSCAD, export that to STEP, and get something pretty decent), but if you just need a model of your PCB for mechanical integration, it’ll get the job done.