Ephes Blog

Miscellaneous things. Mostly Weeknotes and links I stumbled upon.


Weeknotes 2022-12-26

, Jochen
'Man goes to doctor. Says he is CEO of AI startup but has no idea how to become profitable. Doctor says solution is simple. Advanced model GPT-4 is in town. Ask it how to profit, and it will surely know the answer. Man bursts into tears. “But doctor” he says “you are GPT-4”' --@utsu__kun

Lots of Christmas preparations and the usual work stuff. Attended the Django Meetup Cologne in person for the first time in a while, and it was great. Reported a bug on the takahē Discord and saw this PR 9 minutes later 😮. This PR for Django 4.2 looks great! Maybe I can get rid of the monkey-patching stuff I do in django_fileresponse 🥳. Wrote a TIL about how to deploy takahē without docker.

Someone implemented a business model very similar to one I've been thinking about implementing for a while: just let people rent software deployed somewhere for an hourly/daily/monthly fee (a Mastodon instance, a JupyterHub server, ...): Replace expensive per-seat SaaS with tap-to-install open-source apps - I just have to set up a landing page and collect some email addresses, no?

Using a Time Machine backup from an Intel-based MacBook to restore onto a new M2-based MacBook led to some unexpected consequences:

  • Printing didn't work anymore (printing is always the first thing to stop working, btw). The printer queue complained about a paper jam, but I guess that was just a mismatched error code; the real problem was that the Intel-based printer driver didn't work on ARM. Installing the old Intel driver from the manufacturer's website (the latest one targets a macOS version from a few years ago) triggered the Rosetta installation, and then it worked.
  • GnuPG stopped working because of the same problem, and installing a native version via Homebrew didn't help either, because the new version couldn't read the old version's keys. Solved by creating a new key and copying the sensitive data around in plain text, meh (a cleaner export/import route is sketched below).
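In hindsight, exporting and re-importing the keys would probably have been the cleaner route than copying secrets around manually - an untested sketch, running the export with the old gpg and the import with the new one:

# with the old (Rosetta) gpg:
gpg --armor --export-secret-keys > secret-keys.asc
gpg --export-ownertrust > ownertrust.txt
# with the new (native) gpg:
gpg --import secret-keys.asc
gpg --import-ownertrust ownertrust.txt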

Articles

Twitter / Mastodon

Newsletters

Weeklogs

Software

Podcasts

Out of Context Images


TIL: Deploying Takahē

, Jochen

Setting up Takahē for Local Development

Clone the repository from GitHub:

git clone git@github.com:jointakahe/takahe.git && cd takahe

Initialize a Postgres data directory and add a postgres process to ProcfileDev:

mkdir databases && pg_ctl initdb -D databases/postgres

echo "postgres: postgres -D databases/postgres" > ProcfileDev

Copy the environment variables from the test environment via cp test.env .env and change the database URL in .env to TAKAHE_DATABASE_SERVER="postgres://takahe@localhost/takahe". Also add ProcfileDev to the .env file, because Procfile is already used for deployment stuff 😕 (isn't heroku dead already?):

echo "PROCFILE=ProcfileDev" >> .env

Now it should be possible to start up the database server with:

honcho start

Create the database and database user used by the Django application:

createdb takahe && createuser takahe

Create a virtualenv and activate it. This step probably looks different for you, but for me it is:

vf new takahe && vf connect
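vf here is virtualfish; if you don't use it, a plain stdlib virtual environment works just as well - a sketch with an arbitrary directory name:

python -m venv .venv && source .venv/bin/activate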

First Caveat

On my M1 Mac, the lxml wheel seems to be broken when I just install the requirements via python -m pip install -r requirements-dev.txt. Trying to import etree after installing the dependencies like this yields this traceback:

>>> from lxml import etree
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: dlopen(/Users/jochen/.virtualenvs/takahe/lib/python3.11/site-packages/lxml/etree.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_exsltDateXpathCtxtRegister'
>>>

But installing lxml on its own with python -m pip install lxml works as expected. Really weird. Anyway, installing lxml before the other dependencies and forcing a build from source (no binary wheel) worked for me:

python -m pip install lxml --no-binary :all:

Installing the Development Dependencies

Install the development dependencies:

python -m pip install -r requirements-dev.txt

Now it's time to run the Django migrations:

python manage.py migrate

Add the Django application and stator to the Procfile:

echo "django: PYTHONUNBUFFERED=true python manage.py runserver 0.0.0.0:8000" >> ProcfileDev

echo "worker: python manage.py runstator" >> ProcfileDev

If you now stop your running honcho and restart it, you should be able to reach the takahē web interface on port 8000.

Running Tests

If you want to be able to run the tests, you have to add the CREATEDB privilege to the takahe database user:

psql -c "ALTER USER takahe CREATEDB;" -d takahe

Now it should be possible to run the tests:

pytest

Production Deployment

This is a little bit more complicated. For an ansible example, have a look at the deploy directory in my takahē fork (without_docker branch). You have to replace the hosts in the inventory, but then this should work:

ansible-playbook deploy.yml --limit production
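The inventory is a standard ansible inventory; a minimal sketch with a placeholder hostname (use your own) would be:

[production]
takahe.example.com

The --limit production flag then restricts the playbook run to that group.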


Weeknotes 2022-12-19

, Jochen
modern js frameworks are only possible in a negative real interest rate environment --htmx.org

Got more stuff done last week - getting better I guess 😀:

Articles

Weeklogs

Videos

Mastodon / Twitter

Software

Podcasts

Out of Context Images


TIL: Flush the FRITZ!Box DNS Cache

, Jochen

I tried to change some DNS records today, and even after the TTL had expired, I couldn't get the updated records on my notebook. I flushed the DNS cache on macOS, but it didn't work. After some debugging, I found that my FRITZ!Box might be the culprit.
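Flushing the macOS cache usually means something like this, by the way - which is exactly what didn't help here:

sudo dscacheutil -flushcache && sudo killall -HUP mDNSResponder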

How to do it

Just go to:

Heimnetz > Netzwerk > Netzwerkeinstellungen > IPv4-Einstellungen (roughly: Home Network > Network > Network Settings > IPv4 Settings)

And then deactivate DHCP and reactivate it afterward. This will flush the DNS cache without the need for a restart. My changed DNS records were available right after doing this 🤪.


TIL: Change Owner of Postgres Database Objects Using Ansible

, Jochen

For my very modest deployment needs, I use ansible. After restoring a postgres database backup using the community.general.postgresql_db module, the normal deployment of a Django app didn't work anymore. The python manage.py migrate command failed, complaining that it couldn't modify a table not owned by the user running the migration. It turns out that after restoring the database, all the tables were owned by the postgres user.

Ok, let's just add an ansible task that changes the owner using the postgresql_owner module. But reassign_owned_by does not work if the source user is postgres, and specifying all the tables, sequences, and views manually seems just wrong. So my solution was to copy and paste a solution from StackOverflow into two ansible tasks.
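For context: the straightforward fix would be a one-liner like the following, but Postgres rejects it when the source role is postgres, because that role also owns the system catalogs (database and user names are placeholders):

psql -d your_database -c 'REASSIGN OWNED BY postgres TO your_app_user;'

Hence the two tasks below, which generate one ALTER statement per table, sequence, and view instead.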

- name: Create postgres function to be able to change owner of db objects
  postgresql_query:
    db: "{{ postgres_database }}"
    query: |
      CREATE FUNCTION exec(text) returns text language plpgsql volatile
        AS $f$
          BEGIN
            EXECUTE $1;
            RETURN $1;
          END;
      $f$;
  become: true
  become_user: postgres
  ignore_errors: true

- name: "Change owner of all tables in db to {{ postgres_user }}"
  postgresql_query:
    db: "{{ postgres_database }}"
    query: |
      SELECT exec('ALTER TABLE ' || quote_ident(s.nspname) || '.' ||
                  quote_ident(s.relname) || ' OWNER TO "{{ postgres_user }}"')
        FROM (SELECT nspname, relname
                FROM pg_class c JOIN pg_namespace n ON (c.relnamespace = n.oid)
               WHERE nspname NOT LIKE E'pg\\_%' AND
                     nspname <> 'information_schema' AND
                     relkind IN ('r','S','v') ORDER BY relkind = 'S') s;
  become: true
  become_user: postgres
Update: This should no longer be an image. Code blocks ftw!

Sorry for the image; I still have to implement code blocks for Wagtail. I added those two tasks to my database restore playbook, and now all database objects belong to the original user. Maybe there's a better way to do this, but maybe this is also helpful for someone. If you know how to do this properly: let me know 🙂.