Gitea

14 July 2019

Gitea is an open source GitHub lookalike written in Go. Building Gitea from source is straightforward; the output is a single executable, gitea. Pre-built binaries and Docker images are also available.

Once configured appropriately, gitea runs an HTTP server, which provides the GitHub-ish user interface, and a built-in SSH server, which is used for synchronizing Git repos.
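
What "configured appropriately" means depends on the installation; here's a minimal sketch of the relevant [server] section of Gitea's app.ini, with example port numbers:

[server]
DOMAIN           = localhost
HTTP_PORT        = 3000
ROOT_URL         = http://localhost:3000/
START_SSH_SERVER = true
SSH_PORT         = 2222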

In Pharo, Iceberg works with a locally running Gitea just like it works with GitHub.
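
With the built-in SSH server on a non-standard port, the remote URL to register with Iceberg looks something like this ('myuser' and 'mytool' are placeholders; 2222 matches the sketch above):

ssh://git@localhost:2222/myuser/mytool.git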

I've been using Monticello for version control of my little tools. Monticello works well when requirements are simple, but some of the tools have grown enough to need a VCS with good branching and merging capabilities.

Telemon: Pharo metrics for Telegraf

3 July 2019

In my previous post on the TIG monitoring stack, I mentioned that Telegraf supports a large number of input plugins. One of these is the generic HTTP plugin, which collects from one or more HTTP(S) endpoints that produce metrics in one of the supported input data formats.

I've implemented Telemon, a Pharo package that produces Pharo VM and application-specific metrics in a format that the Telegraf HTTP input plugin can consume.
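
On the Telegraf side, the hookup is a stanza in telegraf.conf along these lines; the URL is an assumption that matches the Zinc sketch further below:

[[inputs.http]]
  urls = ["http://localhost:8181/"]
  data_format = "influx"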

Telemon works as a Zinc ZnServer delegate and emits metrics in the InfluxDB line protocol format. By default, it reports the metrics generated by VirtualMachine>>statisticsReport, and its output looks like this:

TmMetricsDelegate new renderInfluxDB
"pharo uptime=1452854,oldSpace=155813664,youngSpace=2395408,memory=164765696,memoryFree=160273136,fullGCs=3,fullGCTime=477,incrGCs=9585,incrGCTime=9656,tenureCount=610024"

As per the InfluxDB line protocol - measurement name, optional tag set, then field set - 'pharo' is the name of the measurement, and the key=value items form the field set.
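
Hooking the delegate up to Zinc is a one-liner. A minimal sketch, with port 8181 chosen arbitrarily; Telegraf then polls this endpoint at its configured interval:

(ZnServer on: 8181)
  delegate: TmMetricsDelegate new;
  start.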

To add a tag to the measurement:

| tm |
tm := TmMetricsDelegate new. 
tm tags at: 'host' put: 'telemon-1'.
tm renderInfluxDB
"pharo,host=telemon-1 uptime=2023314,oldSpace=139036448,youngSpace=5649200,memory=147988480,memoryFree=140242128,fullGCs=4,fullGCTime=660,incrGCs=14291,incrGCTime=12899,tenureCount=696589"

Above, the tag set consists of "host=telemon-1".

Here's another invocation that adds two user-specified fields but no tag.

| tm |
tm := TmMetricsDelegate new.
tm fields
  at: 'meaning' put: [ 42 ];
  at: 'newMeaning' put: [ 84 ].
tm renderInfluxDB
"pharo uptime=2548014,oldSpace=139036448,youngSpace=3651736,memory=147988480,memoryFree=142239592,fullGCs=4,fullGCTime=660,incrGCs=18503,incrGCTime=16632,tenureCount=747211,meaning=42,newMeaning=84"

Note that the field values are Smalltalk blocks, which are evaluated anew each time the metrics are rendered.
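
To make that concrete, here's a contrived sketch - the counter and the 'renders' field are made up for illustration - in which the field's value changes on every render:

| tm count |
count := 0.
tm := TmMetricsDelegate new.
"The block is re-evaluated on each render, bumping the counter."
tm fields at: 'renders' put: [ count := count + 1 ].
tm renderInfluxDB.
tm renderInfluxDB
"the second result should end with ...,renders=2"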


When I was reading the specifications for Telegraf's plugins, the InfluxDB line protocol, etc., it all felt rather dry. I imagine this short post reads much the same so far to a reader who isn't familiar with how the TIG components work together. So here are teaser screenshots of the Grafana panels for the Pharo VM and blog-specific metrics for this blog, which I will write about in the next post.

This Grafana panel shows a blog-specific metric named 'zEntity Count'.

[Image: Grafana Pharo app metrics]

This next panel shows the blog-specific metric 'zEntity Memory' together with the VM metric 'Used Memory' which is the difference between the 'memory' and 'memoryFree' fields.

[Image: Grafana Pharo VM and app metrics]

This blog runs in a Docker container. The final panel below shows the resident set size (RSS) of the container as reported by the Docker engine.

[Image: Grafana Pharo Docker metrics]

Smalltalk's Greatest Performance Issue

29 June 2019

The title of this post comes from an interesting discussion that took place on the VA Smalltalk mailing list. Notable points:

  • VA Smalltalk has threaded FFI: "I have used our crypto libraries to get 6 to 7 threads of encryption streams going at once and it works great."

  • A GemTalk customer runs "something like 300 processes hosting a Seaside application with a Gemstone/S database." By context, "processes" here should mean OS processes.

  • With VA Smalltalk: "We routinely do 1-3MB image deployments on tiny IoT devices that are running full Seaside web servers and SST object remoting."

  • Another approach: "All my communication solutions to/from Smalltalk are done with 0MQ. 0MQ has its own working thread pool and so a large amount of CPU time is returned to Smalltalk for logic execution."

Also, two implementations of Smalltalk on .NET/DLR were mentioned. Going by their websites and repositories, neither has been updated in several years.

TIG: Telegraf InfluxDB Grafana Monitoring

13 June 2019

I've set up the open source TIG stack to monitor the services running on these servers. TIG = Telegraf + InfluxDB + Grafana.

  • Telegraf is a server agent for collecting and reporting metrics. It comes with a large number of input, processing and output plugins. Telegraf has built-in support for Docker.

  • InfluxDB is a time series database.

  • Grafana is a feature-rich metrics dashboard supporting a variety of backends including InfluxDB.

Each of the above runs in a Docker container. Architecturally, Telegraf writes the metrics it collects into InfluxDB, and Grafana generates visualizations from the data it reads back out of InfluxDB.
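
For completeness, here's a minimal docker-compose sketch of such a stack - not necessarily my exact configuration; networks, credentials and data volumes are omitted:

version: "3"
services:
  influxdb:
    image: influxdb
  telegraf:
    image: telegraf
    volumes:
      # Telegraf's own configuration, plus the host's Docker socket
      # so that the docker input plugin can query the engine.
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"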

Here are the CPU and memory visualizations for this blog, which runs on Pharo 7 within a Docker container. The data is collected by Telegraf by querying the host's Docker engine.
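
Querying the engine is the job of Telegraf's docker input plugin, which talks to the Docker socket mounted into the Telegraf container (see the compose sketch above). Minimally:

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"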

[Image: Grafana Pharo CPU]

[Image: Grafana Pharo Memory]

The following comes to mind:

  • While Pharo runs on the server, historically I've kept its GUI running, accessible via RFBServer. I haven't had to VNC in for a long time now, though. Running Pharo in true headless mode may reduce Pharo's CPU usage.

  • In terms of memory, ~10% usage by a single application is a lot on a small server. Currently this blog stores everything in memory once loaded/rendered. But with the blog's low volume, there really isn't a need to cache; all items can be read from disk and rendered on demand.

Only one way to find out - modify software, collect data, review.