Blog content is held in a Fossil repository with a running Fossil server to support content pushing.
Each component runs in a Docker container.
Caddy is an open source
HTTP/2 web server.
caddy-docker-proxy is a plugin for Caddy enabling Docker integration - when an appropriately
configured Docker container or service is brought up, caddy-docker-proxy
generates a Caddy site specification entry for it and reloads Caddy. With
Caddy's built-in Let's Encrypt functionality, this allows the new
container/service to run over HTTPS seamlessly.
Below is my docker-compose.yml for Caddy. I built Caddy with the
caddy-docker-proxy plugin from source and named the resulting Docker image
samadhiweb/caddy. The Docker network caddynet is the private network for
Caddy and the services it is proxying. The Docker volume caddy-data is for
persistence of data such as cryptographic keys and certificates.
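The original compose file isn't reproduced here; as a sketch of the setup described above (the socket mount, published ports, and compose version are my assumptions):

```yaml
version: "3.7"

services:
  caddy:
    image: samadhiweb/caddy          # Caddy built from source with caddy-docker-proxy
    networks:
      - caddynet
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # caddy-docker-proxy watches container labels via the Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock
      # Persist cryptographic keys and certificates across restarts.
      - caddy-data:/data

networks:
  caddynet:
    external: true

volumes:
  caddy-data:
```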
Here's the docker-compose.yml snippet for the blog engine:
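The snippet itself is not shown here; a minimal sketch, using caddy-docker-proxy's current (Caddy 2) label syntax with a placeholder hostname and port:

```yaml
services:
  blog:
    image: samadhiweb/blog           # hypothetical image name
    networks:
      - caddynet
    labels:
      # caddy-docker-proxy turns these labels into a Caddy site entry.
      caddy: blog.example.com        # placeholder hostname
      caddy.reverse_proxy: "{{upstreams 8080}}"

networks:
  caddynet:
    external: true
```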
Of interest are the caddy.* labels from which caddy-docker-proxy generates
the following in-memory Caddy site entry:
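The generated entry is not reproduced above. Assuming the container is labelled with a hostname and a reverse_proxy directive, it would look roughly like this in current Caddyfile syntax, with the upstream address resolved to the container's IP on caddynet:

```Caddyfile
# Sketch of the generated site entry; hostname and address are placeholders.
blog.example.com {
    reverse_proxy 172.19.0.3:8080
}
```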
Also note the ulimits section, which sets the suggested limits for the
Pharo VM heartbeat thread. These limits must be set in the docker-compose
file or on the docker command line - copying a prepared file into
/etc/security/limits.d/pharo.conf does not work when run in a Docker container.
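The ulimits section itself isn't shown above; a sketch, assuming the usual rtprio limit suggested for the Pharo VM:

```yaml
services:
  blog:
    # Grant the real-time priority the Pharo VM heartbeat thread wants.
    # Equivalent to: docker run --ulimit rtprio=2 ...
    ulimits:
      rtprio: 2
```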
Take a fresh Pharo 7 alpha image; as of yesterday's download that is
5f13ae8. Launch it and run the following snippet in a Playground:
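The post's Playground snippet isn't reproduced here. As a sketch, loading Glorp with its SQLite binding via Metacello would look something like the following; the repository path is a placeholder, not the post's actual source:

```smalltalk
"Load Glorp and its SQLite driver. github://<user>/<repo> is a
placeholder; substitute the actual GlorpSQLite repository."
Metacello new
    baseline: 'GlorpSQLite';
    repository: 'github://<user>/<repo>:master';
    load.
```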
Run the Glorp tests in TestRunner. The result should be green, with all
891 tests passed and 12 tests skipped. The database file is sodbxtestu.db
in your image directory. Tested on 32- and 64-bit Ubuntu 18.04.
This is the second post in a short series on the topic.
The last post
looked at the tables GROUPS and TEAMS in the OpenFootball relational
database schema. There is also the table GROUPS_TEAMS, usually known as a
link table, which, ahem, "relates" the GROUPS and TEAMS tables. GROUPS_TEAMS
has the following schema:
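The schema listing is not reproduced above; as a sketch consistent with the description, and with the shared id/created_at/updated_at columns noted later in this series:

```sql
-- Sketch of the link table, assuming the usual OpenFootball conventions.
CREATE TABLE groups_teams (
  id         INTEGER PRIMARY KEY,
  group_id   INTEGER NOT NULL REFERENCES groups(id),
  team_id    INTEGER NOT NULL REFERENCES teams(id),
  created_at DATETIME NOT NULL,
  updated_at DATETIME NOT NULL
);
```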
A row in GROUPS_TEAMS with group_id of XXX and team_id of YYY means that
the team represented by team_id YYY is in the group with group_id XXX.
Let's modify the Smalltalk class OFGroup to handle the linkage, by adding
the inst-var 'teams' and creating accessors for it.
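Sketched out, assuming the shared superclass is called OFObject and eliding the other inst-vars:

```smalltalk
OFObject subclass: #OFGroup
    instanceVariableNames: 'teams'
    classVariableNames: ''
    package: 'OpenFootball-Glorp'.

"Plain accessors:"
OFGroup >> teams
    ^ teams

OFGroup >> teams: aCollection
    teams := aCollection
```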
Next, modify the mapping for OFGroup in OFDescriptorSystem:
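A sketch of the additions, using stock Glorp's link-table support on a ToManyMapping; the class's existing attributes and mappings are elided:

```smalltalk
OFDescriptorSystem >> classModelForOFGroup: aClassModel
    "...existing attributes elided..."
    aClassModel newAttributeNamed: #teams collectionOf: OFTeam

OFDescriptorSystem >> descriptorForOFGroup: aDescriptor
    "...existing mappings elided..."
    (aDescriptor newMapping: ToManyMapping)
        attributeName: #teams;
        referenceClass: OFTeam;
        useLinkTable
```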
It is now necessary to add the table GROUPS_TEAMS to OFDescriptorSystem:
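A sketch of the table definition, following Glorp's tableForX: convention; the foreign keys are what let Glorp infer the join through the link table:

```smalltalk
OFDescriptorSystem >> tableForGROUPS_TEAMS: aTable
    | groupId teamId |
    groupId := aTable createFieldNamed: 'group_id' type: platform integer.
    teamId := aTable createFieldNamed: 'team_id' type: platform integer.
    aTable addForeignKeyFrom: groupId
           to: ((self tableNamed: 'GROUPS') fieldNamed: 'id').
    aTable addForeignKeyFrom: teamId
           to: ((self tableNamed: 'TEAMS') fieldNamed: 'id')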
Now let's fetch the OFGroup instances with their linked OFTeam instances.
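The fetching snippet itself isn't reproduced above. A minimal sketch, assuming a logged-in GlorpSession in 'session':

```smalltalk
"Read every OFGroup; enumerating #teams fires one extra SELECT per group
unless the query prefetches them."
| groups |
groups := session read: OFGroup.
groups do: [:group |
    Transcript show: group printString; cr.
    group teams do: [:team | Transcript show: '  ', team printString; cr]]
```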
The above snippet produces the following output:
In the snippet, logging is enabled, and the SQL generated by Glorp is
displayed in the Transcript (with whitespace inserted for readability).
What we see is the infamous "N+1 selects problem" in action - the first
SELECT fetches the GROUPS rows, then, for each group_id, there is a
corresponding SELECT to fetch the TEAMS rows.
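Schematically (this is the shape of the pattern, not literal Glorp output):

```sql
-- One SELECT for the groups...
SELECT t1.id, t1.created_at, t1.updated_at FROM GROUPS t1;
-- ...then one SELECT per group for its teams, N in total:
SELECT t1.* FROM TEAMS t1, GROUPS_TEAMS t2
 WHERE t2.team_id = t1.id AND t2.group_id = 1;
SELECT t1.* FROM TEAMS t1, GROUPS_TEAMS t2
 WHERE t2.team_id = t1.id AND t2.group_id = 2;
-- ...and so on for each group_id.
```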
Fortunately Glorp is cleverer than this, and provides a way to avoid the
N+1 problem, by using the message #alsoFetch:.
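A sketch of such a query, again assuming a logged-in session:

```smalltalk
"Prefetch the linked teams in the same round trip with #alsoFetch:."
| query groups |
query := Query read: OFGroup.
query alsoFetch: [:each | each teams].
groups := session execute: query.
```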
Same output as before, but this time the SQL (pretty-printed by hand for
readability) is much shorter: a single SELECT that joins through the link
table instead of N+1 separate queries.
Using OpenFootball-Glorp for
illustration, this post is the first in a series on mapping an existing
normalized database schema and other fun Glorp stuff. As usual, I'm using
SQLite for the database.
Consider the tables GROUPS and TEAMS.
As it happens, every table in OpenFootball has columns "id", "created_at"
and "updated_at", where "id" is that table's primary key. Let's take
advantage of Smalltalk's inheritance and class hierarchy to map these
columns and tables:
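One way to sketch that shared superclass and a field helper for the descriptor system; the names OFObject and commonFieldsFor: are my assumptions, not necessarily the post's:

```smalltalk
"An abstract superclass holding the columns common to every table."
Object subclass: #OFObject
    instanceVariableNames: 'id createdAt updatedAt'
    classVariableNames: ''
    package: 'OpenFootball-Glorp'.

"A helper in the descriptor system to add the shared columns to any table."
OFDescriptorSystem >> commonFieldsFor: aTable
    (aTable createFieldNamed: 'id' type: platform integer) bePrimaryKey.
    aTable createFieldNamed: 'created_at' type: platform timestamp.
    aTable createFieldNamed: 'updated_at' type: platform timestamp
```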
By convention, the Glorp mapping is encapsulated in the class OFDescriptorSystem,
which has these supporting methods:
The mapping for OFGroup is as follows:
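A sketch of the conventional trio of Glorp methods for OFGroup (class model, table, descriptor), restricted to the shared columns since the post doesn't list the group-specific ones here:

```smalltalk
OFDescriptorSystem >> classModelForOFGroup: aClassModel
    aClassModel newAttributeNamed: #id.
    aClassModel newAttributeNamed: #createdAt.
    aClassModel newAttributeNamed: #updatedAt

OFDescriptorSystem >> tableForGROUPS: aTable
    (aTable createFieldNamed: 'id' type: platform integer) bePrimaryKey.
    aTable createFieldNamed: 'created_at' type: platform timestamp.
    aTable createFieldNamed: 'updated_at' type: platform timestamp

OFDescriptorSystem >> descriptorForOFGroup: aDescriptor
    | table |
    table := self tableNamed: 'GROUPS'.
    aDescriptor table: table.
    (aDescriptor newMapping: DirectMapping)
        from: #id to: (table fieldNamed: 'id').
    (aDescriptor newMapping: DirectMapping)
        from: #createdAt to: (table fieldNamed: 'created_at').
    (aDescriptor newMapping: DirectMapping)
        from: #updatedAt to: (table fieldNamed: 'updated_at')
```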
The mapping for OFTeam is similar; I've omitted it for brevity.
To round out the scene setting, OFDatabase, the "database interface" class,
has class-side convenience methods to run snippets like so:
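As a sketch, such a convenience method might open a session against the SQLite file, evaluate a block, and log out. The platform class, the exact Login accessors, and the database file name all depend on the Glorp version and SQLite driver in use, and are assumptions here:

```smalltalk
"Class-side helper on OFDatabase: run aBlock with a logged-in session.
UDBCSQLite3Platform and 'openfootball.db' are placeholders."
OFDatabase class >> do: aBlock
    | login session |
    login := Login new
        database: UDBCSQLite3Platform new;
        host: '';
        port: '';
        username: '';
        password: '';
        databaseName: 'openfootball.db';
        yourself.
    session := OFDescriptorSystem sessionForLogin: login.
    session login.
    [aBlock value: session] ensure: [session logout]
```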