Building a Linux kernel driver on Ubuntu using dkms


Recently, I ran into a situation where the driver for my Ethernet controller had a bug. In my case, the igc driver did not update the /proc/net/dev stats, so all of the stats persistently remained at 0 for that particular Ethernet interface. While debugging this issue and developing a fix, I needed a way to compile the driver and load it alongside the Ubuntu kernel.

Dynamic Kernel Module Support (dkms)

DKMS allows you to dynamically load a kernel module (e.g., a driver for a graphics card or Ethernet controller) into a pre-compiled kernel, such as a kernel provided by a Linux distribution (e.g., Ubuntu, Fedora, Arch, etc.).[1]See Wikipedia. DKMS also automatically compiles and installs the module when a new kernel is installed. For example, when installing a DKMS module, the source code of the module may be copied to /usr/src, which is where DKMS will look for the source code to rebuild the module when a new kernel is installed.

The Arch wiki pages on compiling a kernel module and dkms are helpful, but this article aims to merge both of these topics: preparing or tweaking your own kernel module while using dkms to manage the installation process. The CentOS wiki also provides helpful instructions on how to build a kernel module using dkms, but does not cover some issues specific to Ubuntu.


You may need to install some packages:

sudo apt install dkms dpkg-dev git

Step 1. Prepare the project folder for the dkms build

Under this tutorial, the project folder for the driver source code will have the following structure:

  • driver-project/
    • dkms.conf
    • README
    • src/
      • driver.h
      • driver.c

Make the directories:

mkdir driver-project
cd driver-project
mkdir src

Of course, change the name of driver-project to suit your preferences or needs.

You will be customizing the dkms.conf, README, and LICENSE files as further described in this tutorial.

The README and LICENSE files are optional, but you may want to review the license of any source code you will be modifying in case you need to include that license with your derivative module.

The source code will reside in src/, and the files driver.h and driver.c are just examples.

Download source code using git

If you are going to be modifying or forking an existing driver included in a Linux kernel, you will need to download the source code.

For git, you can download the entire source code of the kernel with the clone command, for example:

git clone <repository-url>

Alternatively, you can download the source code as a ZIP file from the repository's web interface.


You can also download a specific subdirectory using svn, for example:

svn export <url://repo.git>/<branch>/<subdirectory>

The syntax for the repo URL of the svn export command is broken into three parts:


  1. url://repo.git is the project URL.
  2. branch represents the branch of the project to which the source code belongs. trunk is used for the master branch, and other branches use their respective title. You can list the branches available with svn list url://repo.git
  3. subdirectory is the specific subdirectory you would like to download that is residing under the master branch (for example, drivers/net/ethernet/intel/igc).
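As a concrete illustration, the three parts can be composed into the export URL like this (the repository URL and subdirectory below are only example values; substitute your own):

```shell
# Compose the svn export URL from its three parts (example values only).
repo="url://repo.git"
branch="trunk"                              # trunk = the master branch
subdir="drivers/net/ethernet/intel/igc"

url="${repo}/${branch}/${subdir}"
echo "svn export ${url}"
```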

Download source code of an Ubuntu package

For an Ubuntu/Debian package, you can download the source with:

apt source package

Most likely, you will need to enable the source repository for that package. You can do that by first identifying the repository that hosts the package:

apt policy package

In /etc/apt/sources.list, uncomment the deb-src lines corresponding to the repositories listed by the apt policy command above. For example, the deb-src line below

deb focal main restricted
# deb-src focal main restricted

will change to:

deb focal main restricted
deb-src focal main restricted
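If you would rather script this edit, a sed one-liner can uncomment the deb-src lines. The demonstration below runs on a sample file with a hypothetical mirror URL; on a real system, run the sed command with sudo against /etc/apt/sources.list, and note that it enables every deb-src line, not just the ones you need:

```shell
# Create a sample sources list (hypothetical mirror URL).
cat > /tmp/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu focal main restricted
# deb-src http://archive.ubuntu.com/ubuntu focal main restricted
EOF

# Uncomment every commented deb-src line in place.
sed -i 's/^# deb-src/deb-src/' /tmp/sources.list

# Show the now-enabled source repository lines.
grep '^deb-src' /tmp/sources.list
```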

Then you will download the source code of the Ubuntu package with:

apt source package

Often, you will see a message after running apt source providing a notice that the “packaging is maintained in the ‘Git’ version control system” with a URL to the git package.

Download source code of an Ubuntu kernel

To download the source code of an Ubuntu kernel, the process is similar to that described above for a package. Most likely, you will want the source code of the kernel that you are currently running, so enable the deb-src lines for the updates and security repositories in /etc/apt/sources.list:

deb focal-updates main restricted
deb-src focal-updates main restricted
. . .
deb focal-security main restricted
deb-src focal-security main restricted

Then you will download the source code of the Ubuntu kernel:

apt source linux-image-unsigned-`uname -r`

As noted above, apt source may print the URL of the git repository where the packaging is maintained. You could use git clone next time, if you want to pull the latest source code.

Also, other Ubuntu-based distributions may not require linux-image-unsigned. For example, to download the latest Pop!_OS kernel source you can use:

apt source linux-image-`uname -r`

Copy source code to src/

Once the source code is downloaded, copy or move it to the src/ directory of your driver project directory:

cd driver-project/
cp -rv source-code-dir/* src/.

where source-code-dir is the directory where the source code resides. For example, suppose you downloaded the 5.6 kernel and wanted the source code for a particular Intel ethernet controller, such as the igc driver. You would copy that source code as follows:

cp -rv linux-5.6.19/drivers/net/ethernet/intel/igc/* src/.

Step 2. Configure DKMS

Create the dkms.conf file:

cd driver-project/
vi dkms.conf

Let’s populate the dkms.conf with the following (the name and version values below are placeholders; adjust them for your module):

PACKAGE_NAME="driver"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="driver"
BUILT_MODULE_LOCATION[0]="src"
DEST_MODULE_LOCATION[0]="/updates/dkms"
MAKE="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build/src modules"
CLEAN="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build/src clean"
AUTOINSTALL="yes"

You will be modifying the following parameters:

  • PACKAGE_NAME – This directive is used to give the name associated with the entire package of modules. This is the same name that is used with the -m option when building, adding, etc. and may not necessarily be the same as the MODULE_NAME. This directive must be present in every dkms.conf.[2]See the dkms man page
  • PACKAGE_VERSION – This directive is used to give the version associated with the entire package of modules being installed within that dkms package. This directive must be present in every dkms.conf.[3]See the dkms man page
  • BUILT_MODULE_NAME[0] – This directive gives the name of the module just after it is built. If your DKMS module package contains more than one module to install, this is a required directive for all of the modules. This directive should explicitly not contain any trailing “.o” or “.ko”.[4]See the dkms man page

BUILT_MODULE_LOCATION[0]="src" indicates that the source code of the driver is located in the src/ directory.

As an example, the dkms.conf may have the following settings for the igc driver:

PACKAGE_NAME="igc"
PACKAGE_VERSION="5.4.0-7642.46"
BUILT_MODULE_NAME[0]="igc"
BUILT_MODULE_LOCATION[0]="src"
DEST_MODULE_LOCATION[0]="/updates/dkms"
MAKE="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build/src modules"
CLEAN="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build/src clean"
AUTOINSTALL="yes"

In this case, the PACKAGE_VERSION indicates that the source code for the igc driver derived from the 5.4.0-7642.46 Pop!_OS kernel.

Step 3. Install the driver

Set the version of the driver module using the VERSION variable. The VERSION variable should match the PACKAGE_VERSION set in the dkms.conf above. For example:

VERSION=5.4.0-7642.46

Add the module to the DKMS tree:

cd driver-project
dkms add .

Build the module using DKMS:

dkms build igc -v ${VERSION}

Install the module using DKMS:

dkms install --force igc -v ${VERSION}

The --force option can be used if you want this version of the driver to take priority over the same driver built into the current kernel.

Check that the module has been installed using the modinfo command:

modinfo igc

If the filename attribute contains updates/dkms in the path, then you have successfully installed the driver.
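If you want to script that check, it reduces to a substring test on the path reported by modinfo -n. A sketch, using a hypothetical module path in place of the real modinfo output:

```shell
# On a real system you would use: module_path=$(modinfo -n igc)
# The path below is a hypothetical example of a DKMS-installed module.
module_path="/lib/modules/5.4.0-42-generic/updates/dkms/igc.ko"

case "$module_path" in
  */updates/dkms/*) echo "DKMS-installed driver is active" ;;
  *)                echo "in-tree driver is active" ;;
esac
```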

Step 4. Load the driver

You may need to remove the existing module from the kernel with:

rmmod igc

Load the driver with modprobe:

modprobe igc

Scripts to automate installation and uninstallation


To install:

dkms add .
dkms build igc -v ${VERSION}
dkms install --force igc -v ${VERSION}


To uninstall:

dkms remove -m igc -v ${VERSION} --all


Syncing Podcasts to Garmin Fenix on Ubuntu/Mac

After the Garmin Connect outage in July 2020, I started looking into alternative options for syncing podcasts to my Garmin Fenix without using Garmin Express. Since I have a Mac, macOS does not natively support MTP to access the filesystem of the Garmin smartwatch. You can use the Android File Transfer application, but I have not been impressed with the functionality of that application.

This tutorial should also work for the following Garmin devices, which use MTP for accessing the filesystem:

  • Forerunner 945 and 645
  • Fenix 6 Series
  • Garmin Fenix 5 Plus Series
  • Garmin MARQ series, and
  • Vivoactive 4 series.

In my case, I have a Garmin Fenix 5 Plus, and I’ve implemented this using Xubuntu 20.04.

Also, you can easily use this method for syncing music, not just podcasts. But because podcasts need to be synced more frequently with an up-to-date playlist, this may be more applicable to podcasts.


This method uses the following tools:

  • gPodder for downloading the most recent podcast episodes
  • Mutagen for tagging the downloaded MP3 with a title
  • glib, which bundles gio, providing access to the filesystem of the smartwatch via MTP
    • Under Ubuntu, glib should come pre-installed for Gnome-based flavors. There is klib for KDE-based flavors.
    • Under Mac, you can install glib using homebrew.
  • rsync for identifying which podcasts to transfer or remove from the smartwatch

Step 1. Install and configure gPodder

On Ubuntu, you can install gPodder with:

sudo apt install gpodder

Also, install Mutagen to enable id3 tagging post-download:

sudo apt install python3-mutagen

Configure gPodder to manage your podcast subscriptions. For example, add a podcast using the subscribe command (or do this with the GUI):

gpo subscribe <podcast-url>

Enable two post-download extensions that make the downloaded files suitable for syncing to the Garmin Fenix:

  • Rename episodes after download
  • Tag downloaded files using Mutagen

gpo extension enable rename_download
gpo extension enable tagging

In my case, I also edited a module in gPodder to tweak how the episode titles are “sanitized.” These next few steps are optional.

Open the module:

sudo vi /usr/lib/python3/dist-packages/gpodder/

In the function sanitize_filename, replace the line that sets the filename with this:

    # custom sanitize expression
    filename = re.sub(r"[\"“”.*/:<>?\\|]", "", filename)

If you have multiple podcast subscriptions, create a folder where we can stage the podcasts for syncing. We will be hardlinking the podcasts to this common folder, so it shouldn’t use any more storage space.

For example, I created a Podcasts folder in my home directory:

mkdir ~/Podcasts

Step 2. Access the Garmin Fenix with GIO

Under system settings on the watch (Settings > System > USB Mode), set the device to use MTP mode. Next, connect your Garmin smartwatch to your computer via the USB charging cable. You can verify that the Garmin smartwatch is detected with the lsusb command:

lsusb | grep Garmin
Bus 001 Device 043: ID 091e:4b54 Garmin International

Now, with the Garmin smartwatch plugged in, we can mount it:

gio mount -li | grep mtp | awk -F= '{print $2}' | xargs -I {} gio mount {}

You can also mount the smartwatch using your desktop environment. In my case, Thunar displays the smartwatch as a phone icon. Thunar still uses GIO to access the smartwatch. Regardless of which method you use, the smartwatch will be mounted in the same location as described below.

GIO will mount the smartwatch’s filesystem to a directory that begins with mtp under /run/user/$UID/gvfs/.


Some observations about GIO:

  • I have noticed that the gio copy and remove commands fail unless they are run locally from within the mtp directory.
  • GIO does provide a list command, but I have not had any trouble with ls.
  • The Linux commands cp and rm, as well as rsync, will fail to copy or remove files on the filesystem of the smartwatch.

The directory structure of the watch is as follows:

  • Primary
    • Audiobooks
    • GARMIN
    • Music
    • Podcasts

I’ve noticed that Garmin Express will load most of my podcasts into Music. It does not really matter whether you store the podcasts in the Music or the Podcasts directory. This tutorial assumes the Podcasts directory is used for storing podcasts.

Step 3. Sync the Podcasts

Set some variables:

MTP_DIR=/run/user/$UID/gvfs/mtp:host=<your smartwatch id>
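The commands below also reference a few variables (DOWNLOADS_DIR, PODCAST_DIR, PLAYLIST, src, dest, and the temp-file names) whose definitions appear to have been lost from this snippet. A plausible set of definitions, assuming gPodder's default download location:

```shell
DOWNLOADS_DIR=~/gPodder/Downloads        # gPodder's default download folder
PODCAST_DIR=~/Podcasts                   # staging folder created in Step 1
PLAYLIST=${PODCAST_DIR}/podcasts.m3u     # playlist file to sync to the watch

src=${PODCAST_DIR}/                      # rsync source (staging folder)
dest=${MTP_DIR}/Primary/Podcasts/        # rsync destination (the watch)

sync_files=/tmp/sync_files.txt           # files rsync decides to transfer
delete_files=/tmp/delete_files.txt       # files rsync decides to delete
```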

Download any new episodes:

gpo update
gpo download

Sync the gPodder subscription download folders to the staging folder (e.g., ~/Podcasts):

find ${DOWNLOADS_DIR} -mindepth 1 -type d -print0 |  xargs --null -I {} rsync -av --exclude="folder.jpg" --link-dest={}/ {}/ $PODCAST_DIR

Remove any podcasts older than 2 weeks:

find ~/Podcasts -type f -mtime +14 -name '*mp3' -print0 | xargs -r0 rm -v --

Adjust the -mtime option to keep podcasts for a longer or shorter time.

Create the playlist file used on the Garmin smartwatch:

cd ~/Podcasts
ls -t *.mp3 | sed 's/^/Podcasts\//' > ${PLAYLIST}

Sync the podcasts and playlist file to the Garmin smartwatch:

options="-n --omit-dir-times --no-perms --recursive --inplace --size-only"
rsync ${options} --out-format="%n" --exclude=".*" ${src} ${dest} > ${sync_files}
cat ${sync_files}
xargs -a ${sync_files} -d '\n' -I {} gio copy -p {} ${MTP_DIR}/Primary/Podcasts/.

Remove old podcasts from the Garmin smartwatch:

options_delete="-n  --omit-dir-times --no-perms --recursive --inplace --size-only --delete"
rsync ${options_delete} --out-format="%n" --exclude=".*" ${src} ${dest} | grep deleting | sed 's/deleting //' > ${delete_files}
cat ${delete_files}
xargs -a ${delete_files} -d '\n' -I {} gio remove ${MTP_DIR}/Primary/Podcasts/{}

Color prompt for screen under byobu

When using the GNU screen window manager under byobu, the default .bashrc file in Ubuntu (releases 18.04 and 20.04) does not recognize screen as supporting a color prompt. [1]You can find the default .bashrc file in /etc/skel. At first I thought the lack of color was due to byobu not loading the .bashrc file. But lo and behold, it was related to the case statement which identifies whether your terminal supports color.

The default case statement that checks the value of $TERM to see whether the terminal supports a color prompt is as follows:

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color|*-256color) color_prompt=yes;;

Under screen, the value of $TERM is as follows:

user@computer:~$ echo $TERM
screen
As you can see, the conditions xterm-color and *-256color in the case statement will not set the color_prompt flag to yes when using screen.

I opted for expanding the *-256color condition with another asterisk:

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    screen*|xterm|xterm-color|*-256color*) color_prompt=yes;;

I also included the conditions xterm and screen* to allow for a color prompt in Putty.
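A quick way to sanity-check the expanded pattern is to wrap it in a small function and feed it sample TERM values (the values below are just examples):

```shell
# Return yes/no depending on whether the expanded case pattern matches.
supports_color() {
  case "$1" in
    screen*|xterm|xterm-color|*-256color*) echo yes ;;
    *) echo no ;;
  esac
}

supports_color screen          # yes (screen under byobu)
supports_color xterm           # yes (Putty)
supports_color xterm-256color  # yes
supports_color vt100           # no
```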



Cheat sheet for upgrading Nextcloud (manually)


This cheat sheet assumes the following setup:

  • Ubuntu
  • MySQL database
  • Apache

Set your variables:


date=`date +%F`
version=`grep VersionString ${nc_path}/version.php | awk -F\' '{print $2}'`
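Several later commands reference variables (nc_path, htuser, db_name, backup_path, db_backup, nc_old) whose definitions seem to have been dropped from this snippet. A plausible set, with hypothetical paths you should adjust to your installation:

```shell
date=$(date +%F)                                 # as above

nc_path=/var/www/nextcloud                       # Nextcloud install directory
htuser=www-data                                  # web server user
db_name=nextcloud                                # MySQL database name

backup_path=/var/backups/nextcloud-${date}       # server-files backup target
db_backup=/var/backups/nextcloud-db-${date}.sql  # database dump file
nc_old=${nc_path}-old-${date}                    # where the old install is moved
```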

Enable maintenance mode:

# put server in maintenance mode
cd ${nc_path}
sudo -u ${htuser} php occ maintenance:mode --on

Verify the current version:

# version
grep VersionString ${nc_path}/version.php | awk -F\' '{print $2}'

Make backups:

# backup nextcloud server files
mkdir -pv ${backup_path}
cp -prv ${nc_path}/* ${backup_path}

# backup nextcloud database
mysqldump -u root -p ${db_name} > ${db_backup}
gzip ${db_backup}

If your data folder is outside of your /nextcloud directory, back up your data files separately. I prefer using rsync-time-backup, which provides a wrapper around rsync (rsync_tmbackup.sh is the script that project provides):

rsync_tmbackup.sh /source/data /destination/backup

Stop the web server:

# stop web server
service apache2 stop

Download the latest release:

You can use this PHP script to easily get the URL to the latest release and automatically download the archive. Name this PHP script get-update-url.php.


        $updaterUrl = '';

        $version = $OC_Version;
        $version['installed'] = '';
        $version['updated'] = '';
        $version['updatechannel'] = $OC_Channel;
        $version['edition'] = '';
        $version['build'] = '';
        $version['php_major'] = PHP_MAJOR_VERSION;
        $version['php_minor'] = PHP_MINOR_VERSION;
        $version['php_release'] = PHP_RELEASE_VERSION;
        $versionString = implode('x', $version);

        //fetch xml data from updater
        $url = $updaterUrl . '?version=' . $versionString;
        echo $url;

        # Example update url:

See the source of versionCheck.php to determine the correct format of the update URL.

Change to the directory where Nextcloud is installed:

cd $nc_path; cd ..

Download the latest release using the get-update-url.php script:

wget `php -f get-update-url.php | xargs curl 2> /dev/null | grep url | awk -F "[<>]" '{print $3}'`

Move the current installation:

mv ${nc_path} ${nc_old}

Unpack Nextcloud archive:

unzip nextcloud-*.zip

Restore the configuration file:

cp -pv ${backup_path}/config/config.php ${nc_path}/config/.

Set the permissions and owner:

data_path='/somewhere/data'  # or ${nc_path}/data if the data folder is inside the Nextcloud directory

find ${nc_path}/ -type f -print0 | xargs -0 chmod 0640
find ${nc_path}/ -type d -print0 | xargs -0 chmod 0750

chown -R root:${htuser} ${nc_path}/
chown -R ${htuser}:${htuser} ${nc_path}/apps/
chown -R ${htuser}:${htuser} ${nc_path}/config/
chown -R ${htuser}:${htuser} ${data_path}
chown -R ${htuser}:${htuser} ${nc_path}/themes/

chown root:${htuser} ${nc_path}/.htaccess
chown root:${htuser} ${data_path}/.htaccess

chmod 0644 ${nc_path}/.htaccess
chmod 0644 ${data_path}/.htaccess


Restart the web server:

service apache2 start

Perform the upgrade:

cd ${nc_path}
sudo -u ${htuser} php occ upgrade

Disable maintenance mode:

sudo -u ${htuser} php occ maintenance:mode --off

Check the installation:

Check the installed Apps:

sudo -u ${htuser} php occ app:list

Check the two factor state of a user:

sudo -u ${htuser} php occ twofactorauth:state <username>

On one occasion, I’ve had to reinstall twofactor_totp:

sudo -u ${htuser} php occ app:install twofactor_totp

RPi Compiling Icecast with support for SSL

In the Raspbian repositories, the Icecast2 package does NOT support encrypted connections via openssl. If you try to use the ssl tags in the /etc/icecast2/icecast.xml configuration file, Icecast will fail to start.

You’ll see something like this in /var/log/icecast2/error.log:

[2016-10-15 20:41:45] INFO connection/get_ssl_certificate No SSL capability.

To remedy this, you need to compile Icecast with openssl support enabled. I recommend installing Icecast2 from the repositories and then removing it. This builds all the configuration files in /etc/icecast2, creates a daemon user and group called icecast2 and icecast, respectively, and provides the init scripts necessary to start Icecast automatically during the boot process.

Make sure your repository cache is up-to-date:

sudo apt-get update

Install Icecast2 from the repositories:

sudo apt install icecast2

It will ask you to set three passwords. These will be stored as plain text in /etc/icecast2/icecast.xml, so choose your passwords wisely.

Remove Icecast2, but don’t purge:

sudo apt remove icecast2

Optionally, you can check whether the configuration files are still there:

ls -l /etc/init.d/ /etc/ | grep icecast

Install the development tools required to build Icecast from source:

sudo apt install git gcc build-essential

Note: I’m not positive these are all the development tools. Leave me a comment if you need help with this.

Now let’s get some of the dependencies required to compile Icecast from source. As of Icecast v. 2.4, it requires the following packages: libxml2, libxslt, curl (>= version 7.10 required), and ogg/vorbis (>= version 1.0 required). You’ll also need libssl-dev (of course).

sudo apt install libcurl4-openssl-dev libxslt1-dev libxml2-dev \
libogg-dev libvorbis-dev libflac-dev libtheora-dev libssl-dev

If apt reports you already have these installed, no worries. Let’s get compiling!

The development libraries provided above are only the bare minimum necessary to compile Icecast with SSL support. You can also install other libraries to extend the functionality of Icecast. Once you have the Icecast source downloaded, you can run ./configure -h to see some of the extra packages that are supported. For example, you can install the Speex library to provide support for this speech codec:

sudo apt install libspeex-dev

Make a folder that we can use to compile the source code.

cd /home/pi/; mkdir src; cd src

Clone the latest release of Icecast (See Downloads):

git clone --recursive <repository-url>

Move into the source directory and prepare the configuration script:

cd icecast-server; ./

Configure the source code with SSL support enabled:

./configure --with-curl --with-openssl

The configure script will not report that SSL was enabled; it will only report if it is disabled. You can check that the configuration was successful by running:

grep lssl config.status

Grep should output a line similar to this:

S["XIPH_LIBS"]=" -lssl -lcrypto -L/usr/lib/arm-linux-gnueabihf -lcurl -lspeex -ltheora -lvorbis -logg -lm -lxslt -lxml2 "

If so, then openssl has been successfully enabled for compilation. Alternatively, you can look for “configure: SSL disabled!” near the end of the configure script output.

If the SSL library was successfully enabled, compile Icecast:

If you have a 4-core ARM, let’s use all 4 of them:

make -j 4

Otherwise, stick with your single core 🙁

make


Compiling Icecast only takes about 3 minutes with 4-cores enabled on the RPi 3. This is a breeze compared to FFMPEG, which can take over 90 minutes.

Install Icecast:

sudo make install

Create a self-signed SSL certificate to be used for encryption:

sudo mkdir /etc/icecast2/ssl
sudo openssl req -x509 -nodes -days 1095 -newkey rsa:2048 \
-keyout /etc/icecast2/ssl/icecast.pem -out /etc/icecast2/ssl/icecast.pem

This command will provide you with several prompts to answer. Each one is optional, but I recommend filling in at least the Country, State or Province, and Organization.

Configure Icecast to use the newly minted SSL certificate. You need to tell Icecast to use SSL only on a particular port and where the SSL certificate is located:

sudo nano /etc/icecast2/icecast.xml

<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>
...
<paths>
    ...
    <ssl-certificate>/etc/icecast2/ssl/icecast.pem</ssl-certificate>
</paths>

Since I was streaming with Darkice, I also needed to create another listen socket. This port will allow Darkice to communicate with Icecast. Icecast will stream to the world with the encrypted socket (port 8443), but communicate locally unencrypted with Darkice using port 8000.
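The additional socket for Darkice would be another listen-socket block in icecast.xml; a sketch, using the port numbers described above:

```xml
<!-- Local, unencrypted socket for the Darkice source client -->
<listen-socket>
    <port>8000</port>
</listen-socket>
```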

Create symbolic links to the old repository version of Icecast2, so that we can use the /etc files:

sudo ln -s /usr/local/bin/icecast /usr/bin/icecast2 
sudo ln -s /usr/local/share/icecast /usr/share/icecast2 

Now, let’s start it up:

sudo service icecast2 start

And test whether Icecast is hosting via a browser:

https://<server ip>:8443/server_version.xsl

Update (2016-10-31): Fixed symbolic link commands, added pre-requisites for building, and added a comment on adding optional packages to the build based on the comment from acrawford.