Monetising the Web without Adverts

As a follow-on from my ramble about rewards for Open Source contributors, I’m having a look at some other options.

One of those came about after Terence Eden retweeted the following:

I know Terence has worked with the W3C on web standards, so I was curious as to what this might be about. A bit of digging turned up that the plugin in question was part of a trial of a proposed new standard to allow web content to be monetised without the need for adverts. Details of the proposed standard can be found on the Web Monetization Org website.

The number of sites looking to make use of the new standard is growing, e.g. a recent addition was an advert-free version of imgur.

How it Works

High Level

The basic starting point is to add a meta tag to the head section of your content.

<meta name="monetization" content="$">

The content attribute holds what’s known as a payment pointer: a hostname that represents the entity that holds your wallet, plus a unique id for that wallet. The tag can be included in as many sites as you want.
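Under the payment pointer spec, the pointer is shorthand for an HTTPS URL: the leading $ is swapped for https://, and a bare hostname defaults to a /.well-known/pay endpoint. A minimal sketch of that mapping (the wallet hostname here is made up, not a real provider):

```javascript
// Resolve a payment pointer to the HTTPS URL it is shorthand for.
// '$wallet.example.com/alice' is a hypothetical pointer, not a real wallet.
function resolvePaymentPointer(pointer) {
  if (!pointer.startsWith('$')) {
    throw new Error('Payment pointers must start with $');
  }
  const rest = pointer.slice(1);
  // A bare hostname defaults to the /.well-known/pay endpoint
  return rest.includes('/') ? `https://${rest}` : `https://${rest}/.well-known/pay`;
}

console.log(resolvePaymentPointer('$wallet.example.com/alice'));
// -> https://wallet.example.com/alice
```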

A browser plugin (there are versions available for Chrome & Firefox, and it’s built into the Puma browser for Android and iOS) reads this when the page is loaded and triggers a payment from the reader’s wallet to the content creator’s wallet.

The transactions are intended to not allow anybody to track what content readers are consuming.

Low level

At the backend the system uses the Interledger protocol to make payments between the reader and the website. A full description can be found here.

Flow diagram for Webmonetization.


The first implementation of the proposed standard is by a company called Coil, who host users’ wallets. They have also created a WordPress plugin that handles inserting the meta tag and a few other things.

I have been beta testing the plugin for the last few months and it is simple to install and configure.

Coil act as the reader’s wallet; their business model is that you create a single account with them, pay a small fee ($5 a month), and this gets you access to all the monetised content.

The WordPress plugin is for content creators and lets you set a few levels of operation.

  • Off – It does nothing
  • Public content and monetisation enabled – This lets anybody read the content, but if they have the browser plugin they seamlessly make a payment.
  • Protected content – This shows the blog post title but presents a paywall-type pop-up asking the reader to install the plugin and allow payment to see the content.

For the third option you can even enable this on a block-by-block basis (using WordPress’ Gutenberg editor), e.g. only show an introductory summary to everybody with the bulk of the post protected (a bit like how scientific journals work, not that I actually approve of having to pay to read the output of state-sponsored research), or protect just the next paragraph.

Hello, and Thank You for contributing towards this content. Also next week’s lottery numbers are 7, 13, 19, 27, 31, 52 😉

The selective masking of content works on a JavaScript API that can also be used for a bunch of other things. The most obvious is removing adverts from pages when the reader is using a Web Monetization plugin. Another example is adding a counter to the page so the reader can see how much they’ve paid so far.
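As a sketch of how that API might be used, the draft exposes payment events on document.monetization. Assuming the draft event names monetizationstart and monetizationprogress, and made-up element ids, hiding an advert slot and showing a running total could look something like:

```javascript
// Sketch of the draft Web Monetization JavaScript API: hide an advert
// container once payment starts, and keep a running total of what the
// reader has paid. 'ad-slot' and 'paid-counter' are hypothetical element ids.
let totalPaid = 0;

function formatTotal(amount, assetScale, assetCode) {
  // Amounts arrive as integers scaled by 10^assetScale (e.g. 1000000 at scale 9)
  return `${(amount / 10 ** assetScale).toFixed(assetScale > 6 ? 6 : assetScale)} ${assetCode}`;
}

if (typeof document !== 'undefined' && document.monetization) {
  document.monetization.addEventListener('monetizationstart', () => {
    document.getElementById('ad-slot').style.display = 'none';
  });
  document.monetization.addEventListener('monetizationprogress', (event) => {
    totalPaid += Number(event.detail.amount);
    document.getElementById('paid-counter').textContent =
      formatTotal(totalPaid, event.detail.assetScale, event.detail.assetCode);
  });
}
```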

While I trial this I’m probably just going to stick with letting everybody read all my posts and leaving the meta tag working in the background, but I may experiment with mixed content.

Coil are currently paying a rate of $0.36 per hour of content viewed. Given my current viewing figures for posts I doubt I’ll be getting rich with this. I need to look at how this compares to rates for embedded adverts.
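To get a rough sense of scale from that quoted rate, a back-of-envelope calculation (assuming the $0.36/hour figure holds and payment accrues per second of viewing):

```javascript
// Back-of-envelope earnings at Coil's quoted rate of $0.36 per hour viewed.
const RATE_PER_HOUR = 0.36; // USD, from Coil's published rate
const ratePerSecond = RATE_PER_HOUR / 3600;

function earningsUSD(seconds) {
  return ratePerSecond * seconds;
}

// A reader spending five minutes on a post pays about 3 cents
console.log(earningsUSD(5 * 60).toFixed(4)); // 0.0300
```

So a post would need a lot of reading time before it adds up, which matches my expectation of not getting rich from this.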

I can see time based payments working well for streaming games and video, but I’m still not sure about web content.

For technical content I tend to leave tabs open for a long time to refer back to regularly, and they may stay in my tab bar for multiple days while I work on a project. I’ve asked what the browser plugins do in this situation, and I’m told they only pay out while the tab has focus, which seems like the right approach.

Coil also support Twitch, YouTube and Cinnamon as well as web pages.

There are currently four wallet providers for content creators

  • XRP Tip Bot (sort of merged with Uphold)
  • Stronghold
  • GateHub XRP
  • Uphold

All the payouts from Coil are in XRP, and the different wallet providers let you exchange that for different currencies, though they may charge withdrawal fees. At the moment Uphold looks to be the only one that directly supports GBP as an output currency, but hopefully if the standard takes off then more providers will come online offering different options, and competition may help to drive the fees down. Since Uphold supports paying out in £, once I have enough to make a withdrawal worthwhile I’ll give it a go.


I’ve spelt monetisation with the British spelling in the title and with the American spelling where it’s used in respect to the proposed W3C standard, because that’s just how the web tends to work these days…

Raspberry Pi Streaming Camera

I talked about using an ONVIF camera to stream to a Chromecast earlier, because they come with an open, well-documented interface for pulling video from them (as well as pan/tilt/zoom control if available).

If you don’t have a camera that supports ONVIF you can build something similar with a Raspberry Pi and the Camera module.

This should work with pretty much all of the currently available Raspberry Pi models (with the exception of the basic Pi Zero, which doesn’t have WiFi).

  1. Flash a SD card with the Raspbian Lite image
  2. Insert the camera ribbon cable into both the camera module and the Pi
  3. Once the Pi has booted use the raspi-config command to enable the Camera interface
  4. Install ffmpeg: sudo apt-get install ffmpeg
  5. Create a script with the following content

#!/bin/bash
v4l2-ctl --set-ctrl video_bitrate=300000

ffmpeg -f video4linux2 -input_format h264 -video_size 640x360 -framerate 30 -i /dev/video0  -vcodec copy -an -f flv rtmp://
  • This script sets the max video bitrate to 300kbps (300000 bits/s)
  • If you need to rotate the video you can insert v4l2-ctl --set-ctrl=rotate=180 before ffmpeg to rotate 180 degrees
  • ffmpeg uses the video4linux2 driver to read from the attached camera (/dev/video0)
  • It takes the h264-encoded feed at 640x360 and 30 frames per second and outputs it, without re-encoding (-vcodec copy), to the same nginx instance that I mentioned in my previous post. The feed is called pi

ffmpeg uses the on-board hardware support for the video encoding, so even a Pi Zero W runs at about 5% CPU load. This means that if you only have one camera you could probably run nginx on the same device, else you can have multiple cameras all feeding to a central video streaming server.
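If the camera should come back up by itself after a reboot, the script above could be run from a small systemd unit. This is just a sketch — the unit name and script path are made up, so adjust them to wherever you saved the script:

```
# /etc/systemd/system/picam.service -- hypothetical unit name and script path
[Unit]
Description=Stream the Pi camera to the nginx RTMP server
After=network-online.target

[Service]
ExecStart=/home/pi/stream-camera.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now picam.service and systemd will restart ffmpeg if it ever falls over.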

If you want a kit that comes with the Pi Zero W, Camera and a case to mount it to a window have a look at the Pimoroni OctoCam.

The instructions should also work for pretty much any USB (or built in) camera attached to a Linux machine.

Streaming Camera to Chromecast

I have a little cheap WiFi camera I’ve been meaning to do something with for a while. The on-board web access doesn’t really work any more because it only supports viewing the stream via Flash or a Java applet.

But the camera supports the ONVIF standard, so it offers an rtsp:// feed that I can point Linux apps like mplayer at to see the stream.

The camera is currently sat on my window sill looking out over the valley which is a pretty good view.

View from upstairs window

I thought it would be interesting to stream the view to the TV in my Living room while I’m working from home at the moment. It is also a way to check the weather without having to get up in the morning and open the blind.

I have a Chromecast in the back of both TVs, so using these seemed like it would be the easiest option.


Chromecasts support a number of different media types, but for video there are two common codecs that will work across all the currently available device types.

  • H264
  • VP8

And we have three options to deliver the video stream

  • HLS
  • DASH
  • SmoothStreaming

These are all basically the same: they chop the video up into short segments and generate a playlist that points to the segments in order, and the consumer downloads each segment. When it reaches the end of the list it downloads the list again, which will now hold the next set of segments.
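To make that concrete, an HLS playlist is just a short text file. A hypothetical snapshot part-way through a live stream (the segment names and sequence number here are illustrative) looks something like:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:5
#EXTINF:5.000,
stream-42.ts
#EXTINF:5.000,
stream-43.ts
#EXTINF:5.000,
stream-44.ts
```

The media sequence number moves forward as old segments fall off the front of the list, so when the player re-fetches the playlist it can tell which segments it has already played.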

There is a plugin for Nginx that supports both HLS and DASH which looked like a good place to start.


I’m running this whole stack on a Raspberry Pi 4 running Raspbian Buster.

$ sudo apt-get install nginx libnginx-mod-rtmp ffmpeg

Once the packages are installed the following needs to be added to the end of the /etc/nginx/nginx.conf file. This sets up an RTMP listener that we can stream the video to, which will then be turned into both an HLS and a DASH stream to be consumed.

rtmp {
  server {
    listen 1935; # Listen on standard RTMP port
    chunk_size 4000;

    application show {
      live on;
      # Turn on HLS
      hls on;
      hls_type live;
      hls_path /var/www/html/hls/;
      hls_fragment 5s;
      hls_playlist_length 20s;
      # Turn on DASH
      dash on;
      dash_path /var/www/html/dash/;
      dash_fragment 5s;
      dash_playlist_length 20s;

      # disable consuming the stream from nginx as rtmp
      deny play all;
    }
  }
}

The playlist and video segments get written to /var/www/html/hls and /var/www/html/dash respectively. Because they will be short lived and replaced very regularly it’s a bad idea to write these to an SD card as they will just cause excessive flash wear.

To get round this I’ve mounted tmpfs filesystems at those points with the following entries in /etc/fstab

tmpfs	/var/www/html/dash	tmpfs	defaults,noatime,size=50m
tmpfs	/var/www/html/hls	tmpfs	defaults,noatime,size=50m

Now we have the playlists and segments being generated in a sensible way we need to serve them up. I added the following to the /etc/nginx/sites-enabled/default file

server {
  listen 8080;
  listen [::]:8080;

  sendfile off;
  tcp_nopush on;
  directio 512;
  default_type application/octet-stream;

  location / {
    add_header 'Cache-Control' 'no-cache';
    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Expose-Headers' 'Content-Length';

    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Credentials' 'true';
      add_header 'Access-Control-Max-Age' 1728000;
      add_header 'Content-Type' 'text/plain charset=UTF-8';
      add_header 'Content-Length' 0;
      return 204;
    }

    types {
      application/dash+xml mpd;
      application/vnd.apple.mpegurl m3u8;
      video/mp2t ts;
    }

    root /var/www/html/;
  }
}

Now we have the system to stream the content in an acceptable format we need to get the video from the camera into nginx. We can use ffmpeg to do this.

ffmpeg -re -rtsp_transport tcp -i rtsp:// -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv rtmp://localhost/show/stream

This reads from the RTSP stream rtsp:// and streams it into rtmp://localhost/show/stream. The show part is the name of the application declared in the rtmp section of nginx.conf, and stream will be the name of the HLS or DASH stream. In this case the following:

  • HLS ->
  • DASH ->

If you change the end of the `rtmp://localhost/show/XXXX` URL you can create multiple streams from different sources with different names (just make sure the tmpfs mounts have enough space for all the streams).


Node-RED Chromecast flow

I’ve been using the Node-RED Chromecast node to test the streams. The DASH stream is working pretty well, but HLS is a bit more fragile for some reason. Latency is currently about 20-30 seconds, which appears mainly to be determined by the size and number of chunks in the playlist; the stream struggles if I wind the fragment size down any lower than 5s or the playlist length below 20s.


Now it’s basically working the next steps are to add support for Camera devices to my Google Assistant Node-RED service so I can request the stream via voice and have it show on my Google Home Hub as well. I’m also building a standalone Google Smart Home Action just for Camera feeds using an all Google stack just as a learning exercise in Firebase.

At the moment the stream is only available from inside my network. I’ll probably proxy it to my external web server as well and add authentication. The Google Assistant can be given a Bearer Auth Token along with the URL, which means I’ll be able to view the stream on my phone while out. While not important for this stream, it would be for other security-camera-type applications.
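When I get to that, the external proxy could look something like the sketch below, assuming nginx on the external server; the internal hostname and the token value are placeholders, not values from this setup.

```
# Hypothetical reverse proxy with a Bearer token check; the token and the
# internal hostname are placeholders.
location ~ ^/(hls|dash)/ {
    if ($http_authorization != "Bearer replace-with-a-long-random-token") {
        return 401;
    }
    proxy_pass http://pi4.internal:8080;
}
```

Checking the Authorization header like this keeps the playlists and segments from being publicly scrapeable while still letting the Assistant (which sends the token) pull the stream.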