Open Source Rewards

This is a bit of a rambling piece around some things that have been rattling round in my brain for a while. I’ve written it out mainly to just get it out of my head.

There has been a lot of noise around Open Source projects and cloud companies making huge profits when running these as services. To the extent that some projects have even changed their licenses to either prevent this, to force the cloud providers to publish all the supporting code that allows the projects to be run at scale, or to put new features under non-Open Source licenses.

Most of the cases that have been making the news have been around projects that have an organisation behind them, e.g. Elasticsearch, which also sells support and hosted versions of the project. While I’m sympathetic to the arguments of the projects I’m not sure the license changes work, and the cloud companies do tend to commit developers to the projects (OK, usually with an aim of getting the features they want implemented, but they do fix bugs as well). Steve O’Grady from RedMonk has a good piece about this here.

I’m less interested in the big firms fighting over this sort of thing; I’m more interested in the other end of the scale, the little “guys/gals”.

There have also been cases involving single developers who have built core components that underpin huge amounts of Open Source software. This is especially clear in the NodeJS world, where hundreds of tiny npm modules come together to build thousands of more complex modules that then make up everybody’s applications.

A lot of Open Source developers do it for the love of it, or to practise their skills on things they enjoy, but when a project becomes popular the expectations start to stack up. There are too many stories of entitled users expecting the same levels of support/service that they would get from a large enterprise when paying for a support contract.

The problem comes when the developer at the bottom of the stack gets fed up with everybody that depends on their module raising bugs and not contributing fixes, or just gets bored and walks away. We end up with what happened to the event-stream module: the developer got bored and handed ownership over to somebody else (the first person who asked), and that somebody later injected a bunch of code that would steal cryptocurrency private keys.

So what are the options to allow a lone developer to work on their projects and get the support they deserve?

Get employed by somebody that depends on your code

If your project is of value to a medium/large organisation it can be in their interests to bring support for that project in house. I’ve seen this happen with projects I’ve been involved with (if only on the periphery) and it can work really well.

The thing that can be tricky is balancing the needs of the community that has grown up around a project and the developer’s new “master”, who may well have their own ideas about what direction the project should take.

I’ve also seen work projects be open sourced and their developers continuing to drive them and get paid to do so.

Set up your own business around the project

This is sort of the ultimate version of the previous one.

It can be done by selling support/services around a project, but it can also make some of the problems I mentioned earlier worse, as some users will expect even more now that they are paying for it.

Paypal donations or Github sponsorship/Patreon/Ko-Fi

Adding a link on the project’s About page to a PayPal account or a Patreon/Github sponsorship page can let people show their appreciation for a developer’s work.

PayPal links work well for one-off payments, whereas the Patreon/Github/Ko-Fi sponsorship model is a little bit more of a commitment, but it can be a good way to cover ongoing costs without needing to charge for a service directly. With a little work the developer can make use of the APIs these platforms provide to offer bespoke content/services to users who choose to donate.

I have included a PayPal link on the about page of some of my projects; I have set the default amount to £5 with the suggestion that I will probably use it to buy myself a beer from time to time.

I have also recently signed up to the Github sponsorship programme to see how it works. Github lets you set different monthly amounts between $1 and $20,000; at this time I only have 1 level, set to $1.

Adverts/Affiliate links in projects

If you are building a mobile app or run a website then there is always the option of including adverts in the page/app. With this approach the user of the project doesn’t have to do anything apart from put up with some hopefully relevant adverts.

There is a balance that has to be struck with this as too many adverts or for irrelevant things can annoy users. I do occasionally post Amazon affiliate links in blog posts and I keep track of how much I’ve earned on the about page.

This is not just a valid model for open source projects; many (most) mobile games have adopted this sort of model, even if it is just as a starting tier before either allowing users to pay a fee to remove the ads or to buy in-game content.

Amazon Wishlists

A slightly different approach is to publish a link to something like an Amazon wishlist. This allows users to buy the developer a gift as a token of appreciation. The list allows the developer to pick things they actually want and to select a range of items at different price points.

Back when Amazon was closer to its roots as an online book store (and people still read books to learn new things) it was a great way to get a book about a new subject to start a new project.

Other random thoughts

For another very interesting take on some of this please watch this video from Tom Scott for the Royal Institution about science communication in the world of YouTube and social media. It has a section in the middle about parasocial relationships which is really interesting in this context (as is the rest of the video).

Conclusion

I don’t really have one at the moment; as I said at the start, this is a bit of a stream of consciousness post.

I do think that there isn’t a one-size-fits-all model, nor are the options I’ve listed above all of them; they were just the ones that came to mind as I typed.

If I come up with anything meaningful, I’ll do a follow up post, and if somebody wants to sponsor me $20,000 a month on Github to come up with something, drop me a line ;-).

Alternate PPPoE Server (Access Concentrator)

Earlier this year I had a short series of posts where I walked through building a tiny (fantasy) ISP.

I’ve been using the Roaring Penguin version of the PPPoE Server that was available by default in Raspbian, as I am running all of this on a Raspberry Pi 4. It worked pretty well but I had to add the traffic shaping manually; at the time this was useful as it gave me an excuse to finally get round to learning how to do some of those things.

I’ve been looking for a better accounting solution, one that I can reset the data counters on at a regular interval without forcing the connection to drop. While digging around I found an alternative PPPoE implementation called accel-ppp.

accel-ppp supports a whole host of tunnelling protocols such as pptp, l2tp, sstp and ipoe as well as pppoe. It also has a built-in traffic shaper module. It builds easily enough on Raspbian so I thought I’d give it a try.
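
For reference, the build was roughly the following; this is from memory so the exact cmake options (and the build dependencies) may differ a little between versions, the project README has the details.

sudo apt-get install build-essential cmake libpcre3-dev libssl-dev
git clone https://github.com/accel-ppp/accel-ppp.git
cd accel-ppp
mkdir build && cd build
cmake -DRADIUS=TRUE -DSHAPER=TRUE ..
make
sudo make install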

The documentation for accel-ppp isn’t great; it’s incomplete and a mix of English and Russian, which is not the most helpful. But it is possible to piece enough together to get things working. Here is the configuration I ended up with:

[modules]
log_file
pppoe
auth_pap
radius
shaper
#net-snmp
#logwtmp
#connlimit
ipv6_nd
#ipv6_dhcp
#ipv6pool

[core]
log-error=/var/log/accel-ppp/core.log
thread-count=4

[common]
#single-session=replace
#sid-case=upper
#sid-source=seq
#max-sessions=1000
check-ip=1

[ppp]
verbose=1
min-mtu=1280
mtu=1400
mru=1400
#accomp=deny
#pcomp=deny
#ccp=0
#mppe=require
ipv4=require
ipv6=allow
ipv6-intf-id=0:0:0:1
ipv6-peer-intf-id=0:0:0:2
ipv6-accept-peer-intf-id=1
lcp-echo-interval=20
#lcp-echo-failure=3
lcp-echo-timeout=120
unit-cache=1
#unit-preallocate=1

[auth]
#any-login=0
#noauth=0

[pppoe]
verbose=1
ac-name=PPPoE7
#service-name=PPPoE7
#pado-delay=0
#pado-delay=0,100:100,200:200,-1:500
called-sid=mac
#tr101=1
#padi-limit=0
#ip-pool=pppoe
#ifname=pppoe%d
#sid-uppercase=0
#vlan-mon=eth0,10-200
#vlan-timeout=60
#vlan-name=%I.%N
interface=eth0

[dns]
#dns1=172.16.0.1
#dns2=172.16.1.1

[radius]
dictionary=/usr/local/share/accel-ppp/radius/dictionary
nas-identifier=accel-ppp
nas-ip-address=127.0.0.1
gw-ip-address=192.168.5.1
server=127.0.0.1,testing123,auth-port=1812,acct-port=1813,req-limit=50,fail-timeout=0,max-fail=10,weight=1
#dae-server=127.0.0.1:3799,testing123
verbose=1
#timeout=3
#max-try=3
#acct-timeout=120
#acct-delay-time=0
#acct-on=0
#attr-tunnel-type=My-Tunnel-Type

[log]
log-file=/var/log/accel-ppp/accel-ppp.log
log-emerg=/var/log/accel-ppp/emerg.log
log-fail-file=/var/log/accel-ppp/auth-fail.log
copy=1
#color=1
#per-user-dir=per_user
#per-session-dir=per_session
#per-session=1
level=3

[shaper]
vendor=RoaringPenguin
attr-up=RP-Upstream-Speed-Limit
attr-down=RP-Downstream-Speed-Limit
#down-burst-factor=0.1
#up-burst-factor=1.0
#latency=50
#mpu=0
#mtu=0
#r2q=10
#quantum=1500
#moderate-quantum=1
#cburst=1534
#ifb=ifb0
up-limiter=police
down-limiter=tbf
#leaf-qdisc=sfq perturb 10
#leaf-qdisc=fq_codel [limit PACKETS] [flows NUMBER] [target TIME] [interval TIME] [quantum BYTES] [[no]ecn]
#rate-multiplier=1
#fwmark=1
#rate-limit=2048/1024
verbose=1

[cli]
verbose=1
telnet=127.0.0.1:2000
tcp=127.0.0.1:2001
#password=123
#sessions-columns=ifname,username,ip,ip6,ip6-dp,type,state,uptime,uptime-raw,calling-sid,called-sid,sid,comp,rx-bytes,tx-bytes,rx-bytes-raw,tx-bytes-raw,rx-pkts,tx-pkts

[snmp]
master=0
agent-name=accel-ppp

[connlimit]
limit=10/min
burst=3
timeout=60

To use the same RADIUS attributes as before I had to copy the Roaring Penguin dictionary to /usr/local/share/accel-ppp/radius/ and edit it to add BEGIN-VENDOR and END-VENDOR tags.
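
I don’t have the exact diff to hand, but the edited dictionary ends up looking something like this; the vendor id and attribute numbers below are placeholders, the real values come from the copied Roaring Penguin dictionary.

VENDOR          RoaringPenguin          10055
BEGIN-VENDOR    RoaringPenguin
ATTRIBUTE       RP-Upstream-Speed-Limit     1   integer
ATTRIBUTE       RP-Downstream-Speed-Limit   2   integer
END-VENDOR      RoaringPenguin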

With this configuration it is a straight drop-in replacement for the Roaring Penguin version, with no need to change anything in LDAP or RADIUS, and it doesn’t need the `ip-up` script to set up the traffic shaping.

I’m still playing with some of the extra features, like SNMP support and the command line/telnet support for sending management commands.
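
The CLI is reachable on the telnet port set in the [cli] section above; show sessions and show stat are the commands I’ve found most useful so far.

$ telnet 127.0.0.1 2000
show sessions
show stat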

Pi4 USB-C Gadget

Pi4 Gadget

I’ve previously blogged about using Pi Zero (and Zero W) devices as USB Gadgets. This allows them to be powered and accessed via one of the micro USB sockets and they show up as both a CD drive and an ethernet device.

A recent update to the Raspberry Pi 4 bootloader not only enables the low power mode for the USB hardware and allows network boot to be enabled, it also enables data over the USB-C port. The lower power draw means it should run (without any HATs) with the power supplied from a laptop.

Details of how to check/update the bootloader can be found here.
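
For my own reference, checking and updating on an up to date Raspbian looks something like this:

vcgencmd bootloader_version
sudo rpi-eeprom-update        # reports whether a newer bootloader is available
sudo rpi-eeprom-update -a     # applies it, then reboot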

Given that the Pi4 has a Gigabit Ethernet adapter, WiFi and 4 USB sockets (need to keep the power draw low to be safe) and up to 4GB RAM to go with its 4 x 1.5GHz cores, it makes for a very attractive plug-in compute device.

With this enabled all the same scripts from the Pi Zeros should just work, but here is the updated version for Raspbian Buster.

  • Add dtoverlay=dwc2 to the /boot/config.txt
  • Add modules-load=dwc2 to the end of /boot/cmdline.txt
  • Add libcomposite to /etc/modules
  • Add denyinterfaces usb0 to /etc/dhcpcd.conf
  • Install dnsmasq with sudo apt-get install dnsmasq
  • Create /etc/dnsmasq.d/usb with following content
interface=usb0
dhcp-range=10.55.0.2,10.55.0.6,255.255.255.248,1h
dhcp-option=3
leasefile-ro
  • Create /etc/network/interfaces.d/usb0 with the following content
auto usb0
allow-hotplug usb0
iface usb0 inet static
  address 10.55.0.1
  netmask 255.255.255.248
  • Create /root/usb.sh
#!/bin/bash
cd /sys/kernel/config/usb_gadget/
mkdir -p pi4
cd pi4
echo 0x1d6b > idVendor # Linux Foundation
echo 0x0104 > idProduct # Multifunction Composite Gadget
echo 0x0100 > bcdDevice # v1.0.0
echo 0x0200 > bcdUSB # USB2
echo 0xEF > bDeviceClass
echo 0x02 > bDeviceSubClass
echo 0x01 > bDeviceProtocol
mkdir -p strings/0x409
echo "fedcba9876543211" > strings/0x409/serialnumber
echo "Ben Hardill" > strings/0x409/manufacturer
echo "PI4 USB Device" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
# Add functions here
# see gadget configurations below
# End functions
mkdir -p functions/ecm.usb0
HOST="00:dc:c8:f7:75:14" # "HostPC"
SELF="00:dd:dc:eb:6d:a1" # "BadUSB"
echo $HOST > functions/ecm.usb0/host_addr
echo $SELF > functions/ecm.usb0/dev_addr
ln -s functions/ecm.usb0 configs/c.1/
udevadm settle -t 5 || :
ls /sys/class/udc > UDC
ifup usb0
service dnsmasq restart
  • Make /root/usb.sh executable with chmod +x /root/usb.sh
  • Add /root/usb.sh to /etc/rc.local before exit 0 (I really should switch this to a systemd unit at some point; a rough sketch of one is below)
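
For anybody that wants to do the systemd version now, a minimal unit is only a few lines; this is an untested sketch (the file name /etc/systemd/system/usb-gadget.service is just my choice) that would be enabled with systemctl enable usb-gadget.

[Unit]
Description=Pi4 USB gadget setup
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/root/usb.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target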

With this setup the Pi4 will show up as an ethernet device with an IP address of 10.55.0.1 and will assign the device you plug it into an IP address via DHCP. This means you can just ssh to pi@10.55.0.1 to start using it.

Addendum

Quick note: not all USB-C cables are equal, it seems. I’ve been using this one from Amazon and it works fine.

Updated AWS Lambda NodeJS Version checker

I got another of those emails from Amazon this morning that told me that the version of the NodeJS runtime I’m using in the Lambda for my Node-RED Alexa Smart Home Skill is going End Of Life.

I’ve previously talked about wanting a way to automate checking what version of NodeJS was in use across all the different AWS Availability Regions. But when I tried my old script it didn’t work.

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .FunctionName + " - " + .Runtime';
done

This is most likely because the output from the awscli tool has changed slightly.

The first change looks to be in the listing of the available regions

$ aws ec2 describe-regions --output text
REGIONS	ec2.eu-north-1.amazonaws.com	opt-in-not-required	eu-north-1
REGIONS	ec2.ap-south-1.amazonaws.com	opt-in-not-required	ap-south-1
REGIONS	ec2.eu-west-3.amazonaws.com	opt-in-not-required	eu-west-3
REGIONS	ec2.eu-west-2.amazonaws.com	opt-in-not-required	eu-west-2
REGIONS	ec2.eu-west-1.amazonaws.com	opt-in-not-required	eu-west-1
REGIONS	ec2.ap-northeast-2.amazonaws.com	opt-in-not-required	ap-northeast-2
REGIONS	ec2.ap-northeast-1.amazonaws.com	opt-in-not-required	ap-northeast-1
REGIONS	ec2.sa-east-1.amazonaws.com	opt-in-not-required	sa-east-1
REGIONS	ec2.ca-central-1.amazonaws.com	opt-in-not-required	ca-central-1
REGIONS	ec2.ap-southeast-1.amazonaws.com	opt-in-not-required	ap-southeast-1
REGIONS	ec2.ap-southeast-2.amazonaws.com	opt-in-not-required	ap-southeast-2
REGIONS	ec2.eu-central-1.amazonaws.com	opt-in-not-required	eu-central-1
REGIONS	ec2.us-east-1.amazonaws.com	opt-in-not-required	us-east-1
REGIONS	ec2.us-east-2.amazonaws.com	opt-in-not-required	us-east-2
REGIONS	ec2.us-west-1.amazonaws.com	opt-in-not-required	us-west-1
REGIONS	ec2.us-west-2.amazonaws.com	opt-in-not-required	us-west-2

This looks to have added something extra to the start of each line, so I need to change which field I select with the cut command by changing -f3 to -f4.

The next problem looks to be with the JSON that is output for the list of functions in each region.

$ aws --region $r lambda list-functions
{
    "Functions": [
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            }, 
            "Version": "$LATEST", 
            "CodeSha256": "wUnNlCihqWLXrcA5/5fZ9uN1DLdz1cyVpJV8xalNySs=", 
            "FunctionName": "Node-RED", 
            "VpcConfig": {
                "SubnetIds": [], 
                "VpcId": "", 
                "SecurityGroupIds": []
            }, 
            "MemorySize": 256, 
            "RevisionId": "4f5bdf6e-0019-4b78-a679-12638412177a", 
            "CodeSize": 1080463, 
            "FunctionArn": "arn:aws:lambda:eu-west-1:434836428939:function:Node-RED", 
            "Handler": "index.handler", 
            "Role": "arn:aws:iam::434836428939:role/service-role/home-skill", 
            "Timeout": 10, 
            "LastModified": "2018-05-11T16:20:01.400+0000", 
            "Runtime": "nodejs8.10", 
            "Description": "Provides the basic framework for a skill adapter for a smart home skill."
        }
    ]
}

This time it looks like there is an extra level of array in the output; this can be fixed with a minor change to the jq filter.

$ aws lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'
"Node-RED - nodejs8.10"

Putting it all back together we get

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'; 
done

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"

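As an aside, the awscli can also do the filtering itself with a JMESPath --query expression, which avoids depending on the exact nesting of the JSON that jq sees; something like this (untested sketch) should produce the same list.

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions --query 'Functions[].[FunctionName,Runtime]' --output text;
done
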
Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all; after poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So began the deep dive into i2c devices and Device Tree. At first it all seemed so easy, just a short little overlay to set the device’s address and to set a GPIO pin to act as an interrupt, e.g. something like this:

/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard-wired i2c device attached to a permanent system i2c bus, and this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.
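
As a quick sanity check that the adapter and the touchscreen were visible from userspace, the i2c-tools package is handy; the bus number (3 in my case) depends on what else is present.

$ sudo apt-get install i2c-tools
$ i2cdetect -l          # lists the i2c buses, the i2c-tiny-usb adapter shows up here
$ i2cdetect -y 3        # scans bus 3, the touchscreen should appear at 0x38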

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.

I tried all kinds of different overlays, but no joy. The Raspberry Pi even already has a Device Tree entry for a USB device, because the onboard Ethernet is actually a permanently wired USB device and has an entry in the Device Tree. I tried copying this pattern and adding an entry for the i2c_tiny_usb device and then the i2c device, but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the i2c-tiny-usb github page that hopefully somebody will eventually answer.

Userspace

Having wasted a week and got nowhere, this morning I decided to take a different approach (mainly for the sake of my sanity). This is a basic i2c device with a single GPIO pin that acts as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()

This is a good start but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module that exposes an API especially for userspace input devices and there is the python-uinput module.
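
The uinput kernel module and the Python bindings need to be in place first; on Raspbian that is probably just something like this (package/module names as I remember them, so double check).

sudo modprobe uinput
sudo pip install python-uinput
# add a line reading "uinput" to /etc/modules to load it at every boot

With those installed the script can be extended to push proper input events: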

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X,
  uinput.ABS_Y
])

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
  else:
    device.emit(uinput.BTN_TOUCH, 0)   

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()

This injects touchscreen coordinates directly into the /dev/input system, the syn=False in the X axis value tells the uinput code to batch the value up with the Y axis value so it shows up as an atomic update.

This is a bit of a hack, but it should be more than good enough for what I need it for, though I’m tempted to keep chipping away at the Device Tree stuff as I’m sure it will come in handy someday.

Basic traffic shaping

So, I thought this would be a lot harder than it ended up being1.

Over the last few posts I’ve been walking through the parts needed to build a super simple miniature ISP and one of the last bits I think (I know I’ll have forgotten something) we need is a way to limit the bandwidth available to the end users.

Normally this is mainly done by a step in the chain we have been missing out: the actual DSL link between the user’s house and the exchange. The length of the telephone line imposes some technical restrictions, as does the encoding scheme used by the DSL modems. In the case I’ve been talking about we don’t have any of that, as it’s all running directly over Gigabit Ethernet.

Limiting bandwidth is called traffic shaping. One of the reasons to apply traffic shaping is to make sure all the users get a consistent experience, e.g. to stop one user maxing out all the backhaul bandwidth (streaming many 4k Netflix episodes) and preventing all the other users from being able to even just browse basic pages.

Home broadband connections tend to have an asymmetric bandwidth profile; this is because most of what home users do is dominated by information being downloaded rather than uploaded, e.g. requesting a web page consists of a request (a small upload) followed by a much larger download (the content of the page). So as a starting point I will assume the backhaul for our ISP is going to be configured in a similar way and set each user up with a similar asymmetric setup of 10mb down and 5mb up.

Initially I thought it might just be a case of setting a couple of variables in the RADIUS response. While looking at the dictionary for the RADIUS client I came across the dictionary.roaringpenguin file that includes the following two attribute types

  • RP-Upstream-Speed-Limit
  • RP-Downstream-Speed-Limit

Since Roaring Penguin is the name of the package that provided the pppoe-server I wondered if this meant it had bandwidth control built in. I updated the RADIUS configuration files to include these alongside where I’d set Acct-Interim-Interval so they are sent for every user.

post-auth {

	update reply {
		Acct-Interim-Interval = 300
		RP-Upstream-Speed-Limit = 5120
		RP-Downstream-Speed-Limit = 10240
	}
        ...
}

Unfortunately this didn’t have any noticeable effect so it was time to have a bit of a wider look.

Linux has a traffic shaping tool called tc. The definitive guide is included in a document called the Linux Advanced Routing and Traffic Control HowTo and it is incredibly powerful. Luckily for me what I want is relatively trivial so there is no need to dig into all of its intricacies.

Traffic shaping is normally applied to outbound traffic so we will deal with that first. In this case outbound is relative to the machine running the pppoe-server so we will be setting the limits for the user’s download speed. Section 9.2.2.2 has an example we can use.

# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540

This limits the outgoing connection on device ppp0 to 220kbit. We can adjust the rate to 10240kbit (10mbit) to get the right speed.
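
So for the 10mb download limit on a connection it would be something like:

# tc qdisc add dev ppp0 root tbf rate 10240kbit latency 50ms burst 1540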

Traffic coming into the device is controlled with ingress rules and is called policing. The tc-policing man page has example for limiting incoming traffic.

 # tc qdisc add dev eth0 handle ffff: ingress
 # tc filter add dev eth0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 1mbit burst 100k

We can change the device to ppp0 and the rate to 5mbit and we have what we are looking for.
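
Which for the 5mb upload limit ends up as:

 # tc qdisc add dev ppp0 handle ffff: ingress
 # tc filter add dev ppp0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 5mbit burst 100k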

Automation

Setting this up on the command line once the connection is up and running is easy enough, but it really needs to be done automatically whenever a user connects. The pppd daemon that gets started for each connection has a hook that can be used to do this. The /etc/ppp/ip-up script is called when the link comes up, and in turn this calls all the scripts in /etc/ppp/ip-up.d, so we can include a script in there to do the work.

The next trick is where to find the settings. When setting up the pppoe-server we added the plugin radattr.so line to the /etc/ppp/options file; this causes all the RADIUS attributes to be written to a file when the connection is created. The file is /var/run/radattr.ppp0 (with the interface name at the end changing for each connection).

Framed-Protocol PPP
Framed-Compression Van-Jacobson-TCP-IP
Reply-Message Hello World
Framed-IP-Address 192.168.5.2
Framed-IP-Netmask 255.255.255.0
Acct-Interim-Interval 300
RP-Upstream-Speed-Limit 5120
RP-Downstream-Speed-Limit 10240

With a little bit of sed and awk magic we can tidy that up (environment variable names can’t contain -, and we need to wrap values containing spaces in ") and turn it into environment variables for a script that sets up the traffic shaping.

#!/bin/sh

eval "$(sed 's/-/_/g; s/ /=/' /var/run/radattr.$PPP_IFACE | awk -F = '{if ($0  ~ /(.*)=(.* .*)/) {print $1 "=\"" $2  "\""} else {print $0}}')"

if [ -n "$RP_Upstream_Speed_Limit" ];
then

#down (egress towards the user, so use the downstream limit)
tc qdisc add dev $PPP_IFACE root tbf rate ${RP_Downstream_Speed_Limit}kbit latency 50ms burst 1540

#up (ingress from the user, so police with the upstream limit)
tc qdisc add dev $PPP_IFACE handle ffff: ingress
tc filter add dev $PPP_IFACE parent ffff: u32 \
          match u32 0 0 \
          police rate ${RP_Upstream_Speed_Limit}kbit burst 100k

else
	echo "no rate info"
fi

Now when we test the bandwidth with iperf we see the speeds limited to what we are looking for.

Advanced

1 This is a super simple version that probably has lots of problems I’ve not yet discovered. It would be good to try and set up something that would allow a single user to get bursts of speed above a simple total/number-of-users share of the bandwidth if nobody else is wanting to use it. So it’s back to reading the LARTC guide to dig out some of the more advanced options.

Static IP Addresses and Accounting

Over the last few posts I’ve talked about how to set up the basic parts needed to run a small ISP.

In this post I’m going to cover adding a few extra features such as static IP addresses, Bandwidth accounting and Bandwidth limiting/shaping.

Static IP Addresses

We can add a static IP address by adding a field to the user’s LDAP entry. To do this we first need to add the Freeradius schema to the set of schemas that the LDAP server understands. The Freeradius schema files can be found in /usr/share/doc/freeradius/schemas/ldap/openldap/ and have been gzipped. I unzipped them and copied them to /etc/ldap/schema, then imported the schema with

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/freeradius.ldif

Now we have the schema imported we can add the radiusprofile objectClass to the user, along with a radiusFramedIPAddress entry, with the following ldif file.

dn: uid=isp1,ou=users,dc=hardill,dc=me,dc=uk
changetype: modify
add: objectClass
objectClass: radiusprofile
-
add: radiusFramedIPAddress
radiusFramedIPAddress: 192.168.5.2

We then use ldapmodify to update the isp1 user’s record

$ ldapmodify -f addIPAddress.ldif -D cn=admin,dc=hardill,dc=me,dc=uk -w password

Now we have the static IP address stored against the user, we have to get the RADIUS server to pass that information back to the PPPoE server after it has authenticated the user. To do this we need to edit the /etc/freeradius/3.0/mods-enabled/ldap file. Look for the `update` section and add the following

update {
  ...
  reply:Framed-IP-Address     := 'radiusFramedIPAddress'
}

Running radtest will now show Framed-IP-Address in the response message and when pppoe-server receives the authentication response it will use this as the IP address for the client end of the connection.

Accounting

Out of the box pppoe-server will send accounting messages to the RADIUS server at the start and end of the session.

Sat Aug 24 21:35:17 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Start
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 21:35:17 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566678917

Sat Aug 24 23:08:53 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Stop
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 5616
	Acct-Output-Octets = 2328
	Acct-Input-Octets = 18228
	Acct-Output-Packets = 32
	Acct-Input-Packets = 297
	NAS-Port-Type = Virtual
	Acct-Terminate-Cause = User-Request
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 23:08:53 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566684533

The Stop message includes the session length (Acct-Session-Time) in seconds and the number of bytes downloaded (Acct-Output-Octets) and uploaded (Acct-Input-Octets).

Historically, in the days of dial-up, that probably would have been sufficient as sessions would only last for hours at a time, not the weeks/months of a DSL connection. pppoe-server can be told to send updates at regular intervals; this setting is also controlled by a field in the RADIUS authentication response. While we could add this to each user, it can be added to all users with a simple update to the /etc/freeradius/3.0/sites-enabled/default file in the post-auth section.

post-auth {
   update reply {
      Acct-Interim-Interval = 300
   }
   ...
}

This sets the update interval to 5mins and the log now also contains entries like this.

Wed Aug 28 08:38:56 2019
	Acct-Session-Id = "5D62ACB7070100"
	User-Name = "isp1"
	Acct-Status-Type = Interim-Update
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 230105
	Acct-Output-Octets = 10915239
	Acct-Input-Octets = 17625977
	Acct-Output-Packets = 25918
	Acct-Input-Packets = 31438
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 28 2019 08:38:56 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "f36693e4792eafa961a477492ad83f8c"
	Timestamp = 1566977936

Having this data written to a log file is useful, but if you want to trigger events based on it (e.g. create a rolling usage graph or restrict speed once a certain allowance has been passed) then something a little more dynamic is useful. Freeradius has a native plugin interface, but it also has plugins that let you write Perl and Python functions that are triggered at particular points. I’m going to use the Python plugin to publish the data to a MQTT broker.

To enable the Python plugin you need to install the freeradius-python package

$ sudo apt-get install freeradius-python

Then we need to symlink mods-available/python to mods-enabled and edit the file. First we need to set the path that the plugin will use to find Python modules and files, and then enable the events we want to pass to the module.
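
The symlink is just the usual pattern (assuming the default Raspbian paths used throughout these posts):

$ cd /etc/freeradius/3.0/mods-enabled
$ sudo ln -s ../mods-available/python .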

python {
    python_path = "/etc/freeradius/3.0/mods-config/python:/usr/lib/python2.7:/usr/local/lib/python/2.7/dist-packages"
    module = example

    mod_instantiate = ${.module}
    func_instantiate = instantiate

    mod_accounting = ${.module}
    func_accounting = accounting
}

The actual code follows; it publishes the number of bytes used in the session to the topic isp/[username]/usage. Each callback gets passed a tuple containing all the values available.

import radiusd
import paho.mqtt.publish as publish

def instantiate(p):
  print "*** instantiate ***"
  print p
  # return 0 for success or -1 for failure

def accounting(p):
  print "*** accounting ***"
  radiusd.radlog(radiusd.L_INFO, '*** radlog call in accounting (0) ***')
  print
  print p
  d = dict(p)
  if d['Acct-Status-Type'] == 'Interim-Update':
      topic = "isp/" + d['User-Name'] + "/usage"
      usage = d['Acct-Output-Octets']
      print "publishing data to " + topic
      publish.single(topic, usage, hostname="hardill.me.uk", retain=True)
      print "published"
  return radiusd.RLM_MODULE_OK

def detach():
  print "*** goodbye from example.py ***"
  return radiusd.RLM_MODULE_OK
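
To check the data is actually arriving, subscribing with mosquitto_sub (from the mosquitto-clients package) to the broker used in the script should show the retained usage values, something like:

$ mosquitto_sub -h hardill.me.uk -t 'isp/+/usage' -v
isp/isp1/usage 10915239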

I was going to talk about traffic shaping next, but that turns out to be real deep magic and I need to spend some more time playing before I have something to share.

PPPoE Server

With the working RADIUS authentication setup from the last post, it’s time to install and set up the PPPoE server for the users to connect to. As well as the pppoe package we will need libradcli4, as this provides the RADIUS client library.

$ sudo apt-get install pppoe libradcli4

First we need to stop the dhcpcd daemon from trying to allocate an IP address for the interface we are going to use for PPPoE. As I’m running this on a Raspberry Pi 4 I’ll be using the eth0 port and then using wlan0 for the backhaul. To get dhcpcd to ignore eth0 we add the following to /etc/dhcpcd.conf

denyinterfaces eth0

With that out of the way we can start setting things up for the pppoe-server. We will start by editing the /etc/ppp/options file. We need to add the plugins to link it to the RADIUS server and tweak a couple of settings.

mtu 1492
proxyarp
...
plugin radius.so
plugin radattr.so
radius-config-file /etc/radcli/radiusclient.conf

Next up, create /etc/ppp/pppoe-server-options and make sure it outputs logs:

# PPP options for the PPPoE server
# LIC: GPL
require-pap
login
lcp-echo-interval 10
lcp-echo-failure 2
debug
logfile /var/log/pppoe/pppoe-server.log

and finally in /etc/ppp/pap-secrets we need to add the following:

# INBOUND connections

# Every regular user can use PPP and has to use passwords from /etc/passwd
#*	hostname	""	*
* * "" *

That’s it for the PPP options; we just need to finish setting up radcli. Here we need to add the password for the RADIUS server in the /etc/radcli/servers file

localhost/localhost				testing123

and then we can update /etc/radcli/radiusclient.conf to point to the RADIUS server on localhost

authserver 	localhost
acctserver 	localhost

The current version of PPP available with Raspbian Buster has been built against an older version of the radius client library, so to get things to work we also have to add the following 2 lines to /etc/radcli/radiusclient.conf and run touch /etc/ppp/radius-port-id-map

seqfile /var/run/radius.seq
mapfile /etc/ppp/radius-port-id-map

And we need to edit the /etc/radcli/dictionary file to comment out all the lines that include ipv6addr and also change all instances of ipv4addr to ipaddr. There is a patch which fixes some of this but requires a rebuild of all of PPP. I’m going to give that a go later to get IPv6 working properly.

We should now be able to start the pppoe-server.

# pppoe-server -I eth0 -T 60 -N 127 -C PPPoE -S PPPoE -L 192.168.5.1 -R 192.168.5.128 -F
  • -I sets the interface to listen on
  • -T sets the timeout for a connection
  • -N sets the maximum number of connections
  • -C sets the “name” of the server instance
  • -S sets the “name” of the PPP Service
  • -L sets the IP address for the server
  • -R sets the first address of the range for the remote device
  • -F tells pppoe-server to run in the foreground (only used for testing)

If we make sure the server is set to masquerade and forward IP packets then any client that connects should now be able to reach the internet via the server.
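
On this setup that just means enabling IP forwarding and masquerading out of the wlan0 backhaul, along these lines:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE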

In the next post I’ll cover how to customise connections for different users by adding data to their LDAP entry. And also how to do traffic shaping to ensure equal use of the available bandwidth along with basic accounting so we know what to bill each user.

LDAP & RADIUS

As mentioned in the last post, I’m building a PoC ISP and to do this I need to set up both LDAP and RADIUS servers.

I’m going to run all of this on the latest version of Raspbian Buster.

LDAP

Lets start by installing the LDAP server.

$ sudo apt-get install ldap-server

This will install OpenLDAP. The first thing to do is to set the admin password and configure the base dn. To do this we first create a hashed version of the password with slappasswd

$ slappasswd
New password:
Re-enter new password: 
{SSHA}FRtFAY09RdZN76rZiVfgyqs2F3J9jXPN

We can then create the following ldif file called config.ldif. This sets the admin password and updates the base dn

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}TXcmvaldskl312012cKsPK1cY2321+aj
-
replace: olcRootDN
olcRootDN: cn=admin,dc=hardill,dc=me,dc=uk
-
replace: olcSuffix
olcSuffix: dc=hardill,dc=me,dc=uk

We then apply these changes with the ldapmodify command

$ ldapmodify -a -Q -Y EXTERNAL -H ldapi:/// -f config.ldif

Now we have the admin user set up we can start to add the normal users. Again we need to use the slappasswd command to create a hashed password that we can use in the user.ldif file. I’ve added the inetOrgPerson objectClass to the user entry so I can also include the mail attribute.

dn: uid=isp1,ou=users,dc=hardill,dc=me,dc=uk
objectClass: top
objectClass: person
objectClass: inetOrgPerson
displayName: Joe Blogs
cn: Joe
sn: Blogs
mail: isp1@hardill.me.uk
uid: isp1
userPassword: {SSHA}rozJD+T37NqRQp36myXf1KJ35+7tf2LN

And since we’ve set the admin password we need to change the ldapmodify command as well

$ ldapmodify -f user.ldif -D cn=admin,dc=hardill,dc=me,dc=uk -w password 

RADIUS

Next we need to install the RADIUS server

$ sudo apt-get install freeradius

Once installed we need to enable the LDAP module and configure it to use the server we have just set up. To do this we need to symlink the ldap file from /etc/freeradius/3.0/mods-available to /etc/freeradius/3.0/mods-enabled. Next edit the identity, password and base_dn in the ldap config file to match the settings in config.ldif.
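
The symlink and a restart look like this:

$ cd /etc/freeradius/3.0/mods-enabled
$ sudo ln -s ../mods-available/ldap .
$ sudo systemctl restart freeradius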

...
	#  additional schemes:
	#  - ldaps:// (LDAP over SSL)
	#  - ldapi:// (LDAP over Unix socket)
	#  - ldapc:// (Connectionless LDAP)
	server = 'localhost'
#	server = 'ldap.rrdns.example.org'
#	server = 'ldap.rrdns.example.org'

	#  Port to connect on, defaults to 389, will be ignored for LDAP URIs.
#	port = 389

	#  Administrator account for searching and possibly modifying.
	#  If using SASL + KRB5 these should be commented out.
	identity = 'cn=admin,dc=hardill,dc=me,dc=uk'
	password = password

	#  Unless overridden in another section, the dn from which all
	#  searches will start from.
	base_dn = 'ou=users,dc=hardill,dc=me,dc=uk'

	#
	#  SASL parameters to use for admin binds
...

Once we’ve restarted freeradius we can test if we can authenticate the isp1 user with the radtest command.

$ radtest isp1 secret 127.0.0.1 testing123
Sent Access-Request Id 159 from 0.0.0.0:42495 to 127.0.0.1:1812 length 78
	User-Name = "isp1"
	User-Password = "secret"
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Message-Authenticator = 0x00
	Cleartext-Password = "secret"
Received Access-Accept Id 159 from 127.0.0.1:1812 to 127.0.0.1:42495 length 51

testing123 is the default password for a RADIUS client connecting from 127.0.0.1, you can change this and add more clients in the /etc/freeradius/3.0/clients.conf file.

In the next post I’ll talk about setting up PPPoE.