23: Under construction
04 Aug 2023
Rowan and Andy discuss how they built the dotnet rambles podcast website.
The term “podcasting” was coined in February 2004 by journalist Ben Hammersley in a newspaper article for The Guardian.
In the article, Ben suggests that all the ingredients were there for a new boom in amateur radio: MP3 players like Apple’s iPod were everywhere, audio production software had become cheap or free, and weblogs (blogs) were a well-established part of the internet. “But what to call it? Audioblogging? Podcasting? GuerillaMedia?”
Although internet audio / internet radio already existed, Adam Curry, a former MTV video jockey, and software developer Dave Winer coded a program known as iPodder, which enabled them to download internet radio broadcasts to their iPods.
This was the first Podcast downloading software.
Adam Curry went on to host The Daily Source Code, a popular, long-running podcast.
In 2005, big companies started recognising an opportunity in podcasting. Apple led the way with iTunes 4.9 providing support for podcasts, and George W. Bush became the first president to deliver his weekly address as a podcast.
2005 is also the year in which the word “podcast” was declared “Word of the Year” by the New Oxford American Dictionary.
In 2006 Apple CEO Steve Jobs demonstrated how to create a podcast using GarageBand, and Ricky Gervais set a Guinness World Record for the most downloaded podcast, with over 250,000 downloads per episode of ‘The Ricky Gervais Show’ during its first month. This apparently massively exceeded expectations.
How we built our podcast website in a couple of days (and then tweaked it over the next couple of weeks).
We started recording podcasts after a conversation about how it would be useful to record all the chats we had about dev stuff. We thought that other devs might find something useful in all the rubbish we talk about, and dotnet rambles was born!
After we’d recorded a few episodes and mentioned several tools and websites, it occurred to us that having a website would probably be fairly useful so that we could have show notes. So we decided to start building it, and podcast.dotnetrambles.com was born!
How we split up the tasks…
Rowan wrote the F# code
Andy wrote the page templates (because he can’t get his head around F#! listen to Podcast 18: F# what is it good for? Absolutely everything! - if you want to know more)
- Bootstrap 5 used for the site layout - because who doesn’t use bootstrap and I’m (Andy) no designer
- GitHub for source code
- GitHub Actions to build the site (runs the F# code) and push to Netlify
- F# - used to process the RSS feed, show notes and images
- HandlebarsDotNet, Slugify, Markdig
- Standalone search API
- Anchor RSS feed - provides all the data for the podcasts
- Markdown - used for the show notes
- Netlify - used to host the podcast website - all static files!
- Azure - web application for the search API - free tier
- Algolia - for searching the episodes
- Cloudflare - for domain registration and DNS management
Process to get a new Episode onto the site
- Episode is published and appears in the RSS feed
- Push show notes, images and keywords to GitHub, which triggers a GitHub Action
- The GitHub Action runs the F# code, builds the site and populates the search index, then pushes the generated pages to Netlify
F# code - .NET 6
- A simple console app that has the following workflow:
- Use the rss feed to drive the construction of a model of episodes pulling episode hero image / html snippet, alt tags and show notes
- Feed the constructed model to the handlebars templates to generate the html files
- Make a publishable bundle containing generated html and other static resources required for the site
- Builds the Lucene search index from podcast title, description, show notes and keywords
- Populates a search index to enable a search on the website
- Pushes to Netlify
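In F#, the steps above compose naturally as one async pipeline. Here’s a minimal sketch - the types and stage functions are illustrative stubs, not the real dotnet rambles code:

```fsharp
type Episode = { Title: string; Slug: string; ShowNotes: string }

// Stubbed stages - the real app reads the Anchor RSS feed, renders
// Handlebars templates and writes a Netlify-ready bundle of static files.
let loadEpisodes (rssUrl: string) =
    async { return [ { Title = "Ep 23"; Slug = "ep-23"; ShowNotes = "..." } ] }
let renderPage (ep: Episode) = sprintf "<h1>%s</h1>" ep.Title
let writeBundle (dir: string) (pages: string list) = async { return () }
let populateSearchIndex (episodes: Episode list) = async { return () }

let buildSite rssUrl outputDir =
    async {
        let! episodes = loadEpisodes rssUrl           // 1. RSS feed drives the episode model
        let pages = episodes |> List.map renderPage   // 2. feed the model to the templates
        do! writeBundle outputDir pages               // 3. publishable bundle of static html
        do! populateSearchIndex episodes              // 4. index titles, notes and keywords
    }
```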
A simple search API - one endpoint that queries the Lucene search index and returns the results as JSON - a minimal API written in F#.
- All searching is now handled by Algolia
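The original search API really was tiny - something along these lines (a sketch; the Lucene query is stubbed out here):

```fsharp
open System
open Microsoft.AspNetCore.Builder

// Stub standing in for the MultiFieldQueryParser query against the Lucene index
let search (q: string) : string list =
    [ sprintf "episodes matching '%s'" q ]

[<EntryPoint>]
let main args =
    let app = WebApplication.CreateBuilder(args).Build()
    // One endpoint: GET /search?q=... returns the results as json
    app.MapGet("/search", Func<string, string list>(search)) |> ignore
    app.Run()
    0
```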
Packages and APIs in use
- FSharp.Data - XmlProvider over the RSS feed - provides a strongly typed model around the RSS content in one line!
- FSharpx.Async - The entire workflow is async, so this provides helper functions like Async.Map and Async.Bind to help the code flow nicely
- Handlebars.Net - to be able to generate static html files from handlebars templates
- Markdig - to be able to generate html from our show notes, which are written in markdown
- Slugify.Core - to be able to generate our url friendly episode slugs
- Lucene.Net - core Lucene functionality
- Lucene.Net.Analysis.Common - StandardAnalyzer
- Lucene.Net.QueryParser - MultiFieldQueryParser for the search API - queries all fields in the Lucene document and allows weighting of fields
We are using Lucene 4.8 beta - it is extremely stable, has more than 7,800 passing unit tests, and integrates with .NET 6.0, .NET 5.0, .NET Core 2+, .NET Standard 2.1 and 2.0, and Framework 4.5+
- Algolia.Search - to populate the search index - uses the API to do a complete update of the search index every week. Very simple to use, nice website where you can tweak settings and choose what should be searchable, ranking and sorting. Their free tier allows up to 10k searches per month which should be enough for us for a while!
- Argu - for parsing command line arguments - we pass the Algolia key and app id during the build process
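To give a feel for how little code these packages need, here are hedged one-liners for a few of them (the feed URL and template are placeholders, and XmlProvider generates its types at compile time from a real sample):

```fsharp
open FSharp.Data
open HandlebarsDotNet
open Markdig
open Slugify

// FSharp.Data: a strongly typed model over the RSS feed in one line
// (placeholder URL - the provider needs a reachable sample at compile time)
type Podcast = XmlProvider<"https://example.com/podcast/rss">

let notesHtml = Markdown.ToHtml "Show notes in **markdown**"   // Markdig
let slug = SlugHelper().GenerateSlug "F# what is it good for"  // Slugify.Core
let render = Handlebars.Compile "<h1>{{title}}</h1>"           // Handlebars.Net
let page = render {| title = "Episode 23" |}
```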
GitHub Build Action - generates and deploys the site
- Runs F# console application
- Deploys the published bundle to Netlify to update the site
- Set up a scheduled deployment on Fridays at 10am GMT
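A workflow file for this could look roughly like the following - the project path, secret names and deploy step are assumptions for illustration, not our actual config:

```yaml
name: build-and-deploy
on:
  push:
    branches: [ main ]
  schedule:
    - cron: '0 10 * * 5'   # Fridays at 10:00 - picks up newly published episodes
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '6.0.x'
      # Run the F# console app: builds the pages and populates the search index
      - run: >
          dotnet run --project src/SiteBuilder --
          --algolia-app-id ${{ secrets.ALGOLIA_APP_ID }}
          --algolia-api-key ${{ secrets.ALGOLIA_API_KEY }}
      # Push the generated bundle to Netlify
      - run: npx netlify-cli deploy --dir=output --prod
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
```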
After it went live
- Spent some time tweaking the site to improve the lighthouse scores
- Removing unused JS and CSS - we had the whole minified Bootstrap 5 file included but were using less than 80% of it, so we stripped out the unused rules - used https://purifycss.online/ to do this
- Reducing image sizes and changing image formats to WebP (https://en.wikipedia.org/wiki/WebP) - WebP is an image file format developed by Google, intended as a replacement for the JPEG, PNG and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency.
- Lazy load images - lazysizes.min.js (https://github.com/aFarkas/lazysizes) - this only loads images that are visible; as the user scrolls down the page, other images are loaded
- Added to Google index
- Google analytics added to the site - recently updated to use Squeaky.ai
- Added missing security headers
- Added twitter meta tags
- Added search functionality - using a simple API built on top of a Lucene search index
- Replaced home grown search with Algolia
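The lazysizes markup is worth showing - per its README, you swap src for data-src and add the lazyload class:

```html
<script src="lazysizes.min.js" async></script>

<!-- lazysizes swaps data-src into src as the image approaches the viewport -->
<img data-src="episode-hero.webp" class="lazyload" alt="Episode hero image" />
```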
Costs to date £0
- Anchor podcast hosting - Free
- GitHub source code - Free
- Netlify website hosting - Free
- Cloudflare - Free
- Algolia - Free (10k searches per month), then $1 per 1k searches
- Azure - Free (we think!)
- Overall cost FREE!!!
Why did we opt for static generation for main site?
- Free hosting
- Didn’t need anything fancy - it’s static content
- Fast to get it up and running
- Nothing to manage (infrastructure)
Why didn’t we use X, Y or Z?
- Probably because we didn’t know about them
- If you have other things we could have used let us know via Twitter (@dotnet_rambles)
Things still to do - like all good software, it’s never going to be finished! Here’s our product backlog
- Suggest an episode page - where listeners can suggest content for an episode
- Suggest a random fact / OS utility of the week
- Audio timelining / jump straight to content in audio from show notes
- Backoffice metrics around show listens, managing suggested episodes
- Create a Random Fact and OS utility repository where you can view / search all Random Facts and OS utilities
- Visual design spruce up, when our resident designer has time 😄 - Hey what’s wrong with the design??!!
OS project/utility of the week
Lucene.Net is a port of the Lucene search library, written in C# and targeted at .NET runtime users.
It is a mature, super performant, flexible search library. Lucene.Net can be included in your code by simply installing the required NuGet packages.
Apache Lucene is the basis for products like Solr and Elasticsearch, two of the most popular open source search engine platforms in existence.
It has been around for a long time, has bucket loads of functionality and there are many books written on it.
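As a taste of the API, here’s a hedged F# sketch of indexing and searching a single document with Lucene.Net 4.8 (an in-memory RAMDirectory and a single-field QueryParser here, rather than the MultiFieldQueryParser we use on the site):

```fsharp
open Lucene.Net.Analysis.Standard
open Lucene.Net.Documents
open Lucene.Net.Index
open Lucene.Net.QueryParsers.Classic
open Lucene.Net.Search
open Lucene.Net.Store
open Lucene.Net.Util

let demo () =
    let ver = LuceneVersion.LUCENE_48
    use dir = new RAMDirectory()
    use analyzer = new StandardAnalyzer(ver)
    use writer = new IndexWriter(dir, IndexWriterConfig(ver, analyzer))

    // Index one document with a single searchable, stored field
    let doc = Document()
    doc.Add(TextField("title", "F# what is it good for?", Field.Store.YES))
    writer.AddDocument doc
    writer.Commit()

    // Query the index and print the number of hits
    use reader = DirectoryReader.Open(dir)
    let searcher = IndexSearcher(reader)
    let query = QueryParser(ver, "title", analyzer).Parse("good")
    let hits = searcher.Search(query, 10)
    printfn "%d hit(s)" hits.TotalHits
```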