Syncing Podcasts to Garmin Fenix on Ubuntu/Mac

After the Garmin Connect outage in July 2020, I started looking into alternative options for syncing podcasts to my Garmin Fenix without using Garmin Express. Since I have a Mac, macOS does not natively support MTP to access the filesystem of the Garmin smartwatch. You can use the Android File Transfer application, but I have not been impressed with the functionality of that application.

This tutorial should also work for the following Garmin devices, which use MTP for accessing the filesystem:

  • Forerunner 945 and 645
  • Fenix 6 series
  • Fenix 5 Plus series
  • MARQ series
  • Vivoactive 4 series

In my case, I have a Garmin Fenix 5 Plus, and I’ve implemented this using Xubuntu 20.04.

You can also use this method for syncing music, not just podcasts. However, because podcasts need to be synced more frequently with an up-to-date playlist, this workflow is most applicable to podcasts.

Prerequisites

  • gPodder for downloading the most recent podcast episodes
  • Mutagen for tagging the downloaded MP3 with a title
  • glib, which bundles gio, providing access to the filesystem of the smartwatch via MTP
    • Under Ubuntu, glib should come pre-installed on GNOME-based flavors. KDE-based flavors use KIO instead.
    • Under Mac, you can install glib using Homebrew (see the example after this list).
  • rsync for identifying which podcasts to transfer or remove from the smartwatch
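For example, on macOS a minimal installation of glib (which provides the gio command) with Homebrew might look like this, assuming Homebrew is already set up:

brew install glib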

Step 1. Install and configure gPodder

On Ubuntu, you can install gPodder with:

sudo apt install gpodder

Also, install Mutagen to enable ID3 tagging after download:

sudo apt install python3-mutagen

Configure gPodder to manage your podcast subscriptions. For example, add a podcast using the subscribe command (or do this with the GUI):

gpo subscribe http://feeds.feedburner.com/pod-save-america

Enable two post-download extensions that make the downloaded files suitable for syncing to the Garmin Fenix:

  • Rename episodes after download
  • Tag downloaded files using Mutagen

gpo extension enable rename_download
gpo extension enable tagging

In my case, I also edited the util.py module in gPodder to tweak how the episode titles are “sanitized.” These next few steps related to util.py are optional.

Open the util.py module:

sudo vi /usr/lib/python3/dist-packages/gpodder/util.py

In the function sanitize_filename, replace the line that sets the filename with this:

    # custom sanitize expression
    filename = re.sub(r"[\"“”.*/:<>?\\|]", "", filename)

If you have multiple podcast subscriptions, create a folder where we can stage the podcasts for syncing. We will be hardlinking the podcasts to this common folder, so it shouldn’t use any more storage space.

For example, I created a Podcasts folder in my home directory:

mkdir ~/Podcasts

Step 2. Access the Garmin Fenix with GIO

Under the system settings on the watch (Settings > System > USB Mode), set the device to use MTP mode. Next, connect your Garmin smartwatch to your computer via the USB charging cable. You can verify that the Garmin smartwatch is detected with the lsusb command:

lsusb | grep Garmin
Bus 001 Device 043: ID 091e:4b54 Garmin International

Now, with the Garmin smartwatch plugged in, we can mount it:

gio mount -li | grep mtp | awk -F= '{print $2}' | xargs -I {} gio mount {}

You can also mount the smartwatch using your desktop environment. In my case, Thunar displays the smartwatch as a phone icon. Thunar still uses GIO to access the smartwatch. Regardless of which method you use, the smartwatch will be mounted in the same location as described below.

GIO will mount the smartwatch’s filesystem to a directory that begins with mtp under /run/user/$UID/gvfs/. For example, my Garmin Fenix mounts to this directory:

/run/user/1000/gvfs/mtp:host=091e_4b54_0000ed6c2317

Some observations about GIO:

  • I have noticed that the gio copy and remove commands fail unless they are run from within the mtp directory (see the sketch after this list).
  • GIO also provides a list command, but I have not had any trouble with ls.
  • The Linux commands cp and rm, as well as rsync, will fail to copy or remove files on the filesystem of the smartwatch.
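As an illustration, here is a minimal sketch of working from inside the MTP mount (the episode filename is only a hypothetical example):

cd /run/user/$UID/gvfs/mtp*/Primary/Podcasts
gio list .
gio copy -p ~/Podcasts/example-episode.mp3 .
gio remove ./example-episode.mp3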

The directory structure of the watch is as follows:

  • Primary
    • Audiobooks
    • GARMIN
    • Music
    • Podcasts

I’ve noticed that Garmin Express loads most of my podcasts into Music. It does not really matter whether you store the podcasts in the Music or the Podcasts directory; this tutorial assumes the Podcasts directory is used for storing podcasts.

Step 3. Sync the Podcasts

Set some variables:

MTP_DIR=/run/user/$UID/gvfs/mtp:host=<your smartwatch id>
PODCAST_DIR=/home/user/Podcasts/
DOWNLOADS_DIR=/home/user/gPodder/Downloads/
PLAYLIST=/home/user/Podcasts/Podcasts.m3u8

Download any new episodes:

gpo update
gpo download

Sync the gPodder subscription download folders to the staging folder (e.g., ~/Podcasts):

find ${DOWNLOADS_DIR} -mindepth 1 -type d -print0 |  xargs --null -I {} rsync -av --exclude="folder.jpg" --link-dest={}/ {}/ $PODCAST_DIR

Remove any podcasts older than 2 weeks:

find ~/Podcasts -type f -mtime +14 -name '*mp3' -print0 | xargs -r0 rm -v --

Adjust the -mtime option to keep podcasts for a longer or shorter time.
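For example, to keep a month of episodes instead, you might run this variation of the command above:

find ~/Podcasts -type f -mtime +30 -name '*mp3' -print0 | xargs -r0 rm -v --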

Create the playlist file used on the Garmin smartwatch:

cd ${PODCAST_DIR}
ls -t *mp3 | sed 's/^/Podcasts\//' > ${PLAYLIST}

Sync the podcasts and playlist file to the Garmin smartwatch. Note that rsync is run with -n (dry run) only to determine which files need to be transferred; the actual copying is performed with gio copy:

sync_files=/tmp/fenix-sync-files.log
src=${PODCAST_DIR}
dest=${MTP_DIR}/Primary/Podcasts/
options="-n --omit-dir-times --no-perms --recursive --inplace --size-only"
rsync ${options} --out-format="%n" --exclude=".*" ${src} ${dest} > ${sync_files}
cat ${sync_files}
xargs -a ${sync_files} -d '\n' -I {} gio copy -p {} ${MTP_DIR}/Primary/Podcasts/.

Remove old podcasts from the Garmin smartwatch. Again, the rsync dry run (this time with --delete) only identifies which files should be removed; gio remove performs the deletion:

delete_files=/tmp/fenix-delete-files.log
options_delete="-n  --omit-dir-times --no-perms --recursive --inplace --size-only --delete"
rsync ${options_delete} --out-format="%n" --exclude=".*" ${src} ${dest} | grep deleting | sed 's/deleting //' > ${delete_files}
cat ${delete_files}
xargs -a ${delete_files} -d '\n' -I {} gio remove ${MTP_DIR}/Primary/Podcasts/{}
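When you are done syncing, you can unmount the smartwatch with GIO as well; this sketch simply mirrors the mount command from Step 2:

gio mount -li | grep mtp | awk -F= '{print $2}' | xargs -I {} gio mount -u {}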

Color prompt for screen under byobu

When using the GNU screen window manager under byobu, the default .bashrc file in Ubuntu (releases 18.04 and 20.04) does not recognize screen as supporting a color prompt. (You can find the default .bashrc file in /etc/skel.) At first I thought the lack of color was due to byobu not loading the .bashrc file. But lo and behold, it was related to the case statement that identifies whether your terminal supports color.

The default case statement that checks the value of $TERM to see whether the terminal supports a color prompt is as follows:

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color|*-256color) color_prompt=yes;;
esac

Under screen, the value of $TERM is as follows:

user@computer:~$ echo $TERM
screen-256color-bce

As you can see, the conditions xterm-color and *-256color in the case statement will not set the color_prompt flag to yes when using screen.

I opted for expanding the *-256color condition with another asterisk:

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    screen*|xterm|xterm-color|*-256color*) color_prompt=yes;;
esac

I also included the conditions xterm and screen* to allow for a color prompt in PuTTY.
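As a quick sanity check, you can test the expanded pattern against your current $TERM before relying on it in .bashrc; a minimal sketch:

case "$TERM" in
    screen*|xterm|xterm-color|*-256color*) echo "color prompt supported";;
    *) echo "no color prompt";;
esac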


Cheat sheet for upgrading Nextcloud (manually)

Prerequisites:

  • Ubuntu
  • MySQL database
  • Apache

Set your variables:

nc_path='/var/www/nextcloud'
nc_old="${nc_path}-old"
backup_root='/somewhere/backups'
htuser='www-data'
db_name='your-nextcloud-database-name'

date=`date +%F`
version=`grep VersionString ${nc_path}/version.php | awk -F\' '{print $2}'`
backup_path="${backup_root}/nc_${version}_${date}"
db_backup="${backup_root}/nc_${version}_${date}.sql"

Enable maintenance mode:

# put server in maintenance mode
cd ${nc_path}
sudo -u ${htuser} php occ maintenance:mode --on

Verify the current version:

# version
grep VersionString ${nc_path}/version.php | awk -F\' '{print $2}'

Make backups:

# backup nextcloud server files
mkdir -pv ${backup_path}
cp -prv ${nc_path}/* ${backup_path}

# backup nextcloud database
mysqldump -u root -p ${db_name} > ${db_backup}
gzip ${db_backup}

If your data folder is outside of your /nextcloud directory, back up your data files separately. I prefer using rsync-time-backup, which provides a wrapper around rsync:

rsync_tmbackup.sh /source/data /destination/backup

Stop the web server:

# stop web server
service apache2 stop

Download the latest release:

You can use this PHP script to build the update-server URL for your installed version; the response from that URL contains the download link for the latest release. Name this PHP script get-update-url.php.

#!/usr/bin/php
<?php
        include("nextcloud/version.php");

        $updaterUrl = 'https://updates.nextcloud.com/updater_server/';

        $version = $OC_Version;
        $version['installed'] = '';
        $version['updated'] = '';
        $version['updatechannel'] = $OC_Channel;
        $version['edition'] = '';
        $version['build'] = '';
        $version['php_major'] = PHP_MAJOR_VERSION;
        $version['php_minor'] = PHP_MINOR_VERSION;
        $version['php_release'] = PHP_RELEASE_VERSION;
        $versionString = implode('x', $version);

        //fetch xml data from updater
        $url = $updaterUrl . '?version=' . $versionString;
        echo $url;

        # Example update url:
        # https://updates.nextcloud.com/updater_server/?version=18x0x1x3xxxstablexxx7x4x3
?>

See the source of versionCheck.php to determine the correct format of the update URL.

Change to the directory where Nextcloud is installed:

cd $nc_path; cd ..

Download the latest release using the get-update-url.php script:

wget `php -f get-update-url.php | xargs curl 2> /dev/null | grep url | awk -F "[<>]" '{print $3}'`

Move the current installation:

mv ${nc_path} ${nc_old}

Unpack Nextcloud archive:

unzip nextcloud-*.zip

Restore the configuration file:

cp -pv ${backup_path}/config/config.php ${nc_path}/config/.

Set the permissions and owner:

#!/bin/bash
nc_path='/var/www/nextcloud'
data_path='/somewhere/data'  # or ${nc_path}/data if the data folder is inside the Nextcloud directory
htuser='www-data'

find ${nc_path}/ -type f -print0 | xargs -0 chmod 0640
find ${nc_path}/ -type d -print0 | xargs -0 chmod 0750

chown -R root:${htuser} ${nc_path}/
chown -R ${htuser}:${htuser} ${nc_path}/apps/
chown -R ${htuser}:${htuser} ${nc_path}/config/
chown -R ${htuser}:${htuser} ${data_path}
chown -R ${htuser}:${htuser} ${nc_path}/themes/

chown root:${htuser} ${nc_path}/.htaccess
chown root:${htuser} ${data_path}/.htaccess

chmod 0644 ${nc_path}/.htaccess
chmod 0644 ${data_path}/.htaccess

Save the script above as permissions.sh so you can rerun it after future upgrades.
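For example, assuming you saved it as permissions.sh:

sudo bash permissions.sh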

Restart the web server:

service apache2 start

Perform the upgrade:

cd ${nc_path}
sudo -u ${htuser} php occ upgrade

Disable maintenance mode:

sudo -u ${htuser} php occ maintenance:mode --off

Check the installation:

Check the installed Apps:

sudo -u ${htuser} php occ app:list

Check the two factor state of a user:

sudo -u ${htuser} php occ twofactorauth:state <username>

On one occasion, I had to reinstall twofactor_totp:

sudo -u ${htuser} php occ app:install twofactor_totp

RPi: Compiling Icecast with SSL support

In the Raspbian repositories, the Icecast2 package does NOT support encrypted connections via openssl. If you try to use the ssl tags in the /etc/icecast2/icecast.xml configuration file, Icecast will fail to start.

You’ll see something like this in /var/log/icecast2/error.log:

[2016-10-15 20:41:45] INFO connection/get_ssl_certificate No SSL capability.

To remedy this, you need to compile Icecast with OpenSSL support enabled. I recommend installing Icecast2 from the repositories and then removing it. This creates all the configuration files in /etc/icecast2, creates a daemon user and group (called icecast2 and icecast, respectively), and provides the init scripts necessary to start Icecast automatically during the boot process.

Make sure your repository cache is up-to-date:

sudo apt-get update

Install Icecast2 from the repositories:

sudo apt install icecast2

It will ask you to set three passwords. These will be stored as plain text in /etc/icecast2/icecast.xml, so choose your passwords wisely.

Remove Icecast2, but don’t purge:

sudo apt remove icecast2

Optionally, you can check whether the configuration files are still there:

ls -l /etc/init.d/ /etc/ | grep icecast

Install the development tools required to build Icecast from source:

sudo apt install git gcc build-essential

Note: I’m not positive these are all the development tools. Leave me a comment if you need help with this.

Now let’s get some of the dependencies required to compile Icecast from source. As of Icecast v2.4, it requires the following packages: libxml2, libxslt, curl (>= 7.10), and ogg/vorbis (>= 1.0). You’ll also need libssl-dev (of course).

sudo apt install libcurl4-openssl-dev libxslt1-dev libxml2-dev \
libogg-dev libvorbis-dev libflac-dev libtheora-dev libssl-dev

If apt reports you already have these installed, no worries. Let’s get compiling!

The development libraries provided above are only the bare minimum necessary to compile Icecast with SSL support. You can also install other libraries to extend the functionality of Icecast. Once you have the Icecast source downloaded, you can run ./configure -h to see some of the extra packages that are supported. For example, you can install the Speex library to provide support for this speech codec:

sudo apt install libspeex-dev

Make a folder that we can use to compile the source code.

cd /home/pi/; mkdir src; cd src

Clone the latest release of Icecast (See Icecast.org Downloads):

git clone --recursive https://git.xiph.org/icecast-server.git

Move into the source directory and prepare the configuration script:

cd icecast-server; ./autogen.sh

Configure the source code with SSL support enabled:

./configure --with-curl --with-openssl

The configure script will not report that SSL was enabled; it will only report if it is disabled. You can check that SSL was successfully enabled by running this:

grep lssl config.status

Grep should output a line similar to this:

S["XIPH_LIBS"]=" -lssl -lcrypto -L/usr/lib/arm-linux-gnueabihf -lcurl -lspeex -ltheora -lvorbis -logg -lm -lxslt -lxml2 "

If so, then openssl has been successfully enabled for compilation. Alternatively, you can look for “configure: SSL disabled!” near the end of the configure script output.

If the SSL library was successfully enabled, compile Icecast:

If you have a 4-core ARM, let’s use all 4 of them:

make -j 4

Otherwise, stick with your single core 🙁

make

Compiling Icecast only takes about 3 minutes with all 4 cores on the RPi 3. This is a breeze compared to FFmpeg, which can take over 90 minutes.

Install Icecast:

sudo make install

Create a self-signed SSL certificate to be used for encryption:

sudo mkdir /etc/icecast2/ssl
sudo openssl req -x509 -nodes -days 1095 -newkey rsa:2048 \
-keyout /etc/icecast2/ssl/icecast.pem -out /etc/icecast2/ssl/icecast.pem

This command will provide you with several prompts to answer. Each one is optional, but I recommend filling in at least the Country, State or Province, and Organization.

Configure Icecast to use the newly minted SSL certificate. You need to tell Icecast to use SSL only on a particular port and where the SSL certificate is located:

sudo nano /etc/icecast2/icecast.xml

<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>
...
<paths>
    ...
    <ssl-certificate>/etc/icecast2/ssl/icecast.pem</ssl-certificate>
</paths>

Since I was streaming with Darkice, I also needed to create another listen socket. This port will allow Darkice to communicate with Icecast. Icecast will stream to the world with the encrypted socket (port 8443), but communicate locally unencrypted with Darkice using port 8000.
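For example, the additional listen socket might look something like this (the bind address is optional and only illustrates restricting the unencrypted port to localhost; adjust the port to match your Darkice configuration):

<listen-socket>
    <port>8000</port>
    <bind-address>127.0.0.1</bind-address>
</listen-socket>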

Create symbolic links so that the init scripts and /etc files from the repository version of Icecast2 work with the newly compiled binary and its shared files:

sudo ln -s /usr/local/bin/icecast /usr/bin/icecast2 
sudo ln -s /usr/local/share/icecast /usr/share/icecast2 

Now, let’s start it up:

sudo service icecast2 start

And test whether Icecast is hosting via a browser:

https://<server ip>:8443/server_version.xsl

Update (2016-10-31): Fixed symbolic link commands, added pre-requisites for building, and added a comment on adding optional packages to the build based on the comment from acrawford.