

This project builds on many other amazing projects and would not be possible without them.

Bootstrap CSS

Writing CSS is not a trivial task. For many projects I have been using the Bootstrap CSS framework, which provides sensible default values, a 12-column grid system and a lot of components. Using it, I don't have to write any CSS myself and just attach a couple of classes to HTML elements.


coloredlogs

Log messages in multiple colors are neat, and the coloredlogs package provides them with almost no effort.


fitdecode

For reading FIT files I use the fitdecode library, which handles all the parsing of this binary file format for me.


Flask

The webserver is implemented with Flask which provides a really easy way to get started. It also ships with a development webserver which is enough for this project at the moment.
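A minimal sketch of such a server; the route and its payload are made up for illustration and are not this project's actual endpoints:

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/activity/<int:activity_id>")
def activity(activity_id):
    # Hypothetical endpoint returning JSON for a single activity.
    return jsonify({"id": activity_id, "kind": "ride"})


if __name__ == "__main__":
    # Flask's bundled development server; not meant for production use.
    app.run(debug=True)
```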


GeoJSON

Transferring geographic geometry data from the Python code to Leaflet is easiest using the GeoJSON format. The official standard RFC is a bit hard to read; rather have a look at the Wikipedia article. And there is an online viewer that you can try out.
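For a flavor of the format, a track can be expressed as a Feature with a LineString geometry using nothing but the standard library. The coordinates here are made up, and note that GeoJSON orders them longitude first:

```python
import json

# A GeoJSON Feature: a geometry plus arbitrary properties.
# Coordinates are (longitude, latitude) pairs, as the format requires.
track = {
    "type": "Feature",
    "properties": {"name": "Morning ride"},
    "geometry": {
        "type": "LineString",
        "coordinates": [[6.96, 50.94], [6.97, 50.95], [6.99, 50.96]],
    },
}

geojson_text = json.dumps(track)
```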


GitHub

For a smooth open source project one needs a place to share the code and collect issues. GitHub provides all of this for free.


gpxpy

For reading GPX files I use the gpxpy library. This allows me to read those files without having to fiddle with the underlying XML format.


Leaflet

The interactive maps on the website are powered by Leaflet, a very easy-to-use JavaScript library for embedding interactive Open Street Map maps. It can also display GeoJSON geometries natively, which I make heavy use of.


MkDocs

Writing documentation is more fun with a nice tool, therefore I use MkDocs together with Material for MkDocs. They power this documentation.
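A minimal `mkdocs.yml` sketch for that combination; the site name and pages are placeholders, not this project's actual configuration:

```yaml
site_name: Example Project Docs  # placeholder
theme:
  name: material  # selects Material for MkDocs
nav:
  - Home: index.md
  - Acknowledgments: acknowledgments.md
```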

Open Street Map

All the maps displayed use tiles from the amazing Open Street Map. The map is created by volunteers, and the tile hosting is free of charge. Without these maps this project would be quite boring.


Pandas

Working with thousands of activities, thousands of tiles and millions of points makes it necessary to have a good library for number crunching structured data. Pandas offers this and gives good performance and many features.
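A toy example of the kind of aggregation this enables; the column names and values are made up for illustration:

```python
import pandas as pd

# A small stand-in for a per-activity summary table.
activities = pd.DataFrame(
    {
        "kind": ["ride", "run", "ride", "ride"],
        "distance_km": [20.0, 5.0, 30.0, 12.5],
    }
)

# Aggregate the total distance per activity kind.
totals = activities.groupby("kind")["distance_km"].sum()
```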


Parquet

I need to store the intermediate data frames that I generate with Pandas. Storing them as JSON has disadvantages: dates are not properly encoded, and it is a verbose text format. The Parquet format is super fast and memory efficient.


Poetry

For managing all the Python package dependencies I use Poetry, which makes it very easy to do all the Python project housekeeping with one tool.


Python

Almost all of the code here is written in Python, a very nice and versatile programming language with a vast ecosystem of packages.


Requests

For doing HTTP requests I use the Requests library. It provides a really easy-to-use interface for GET and POST requests.
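For illustration, a POST request can be built and inspected without sending anything; the URL and payload are placeholders, not endpoints this project actually talks to:

```python
import requests

# Build a POST request but do not send it; the URL is a placeholder.
prepared = requests.Request(
    "POST",
    "https://example.com/api/upload",
    json={"activity_id": 42},
).prepare()

# The prepared request carries the final method, headers and encoded body;
# a requests.Session would be used to actually transmit it.
```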


Scikit-learn

Finding out which cluster is the largest one can be framed either as a graph search problem or as a data science problem. Using the Scikit-learn library I can easily apply the DBSCAN algorithm to find the clusters of explorer tiles.
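A toy version of that clustering step; the tile coordinates are made up, and `eps`/`min_samples` are illustrative parameters rather than this project's actual values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy explorer-tile coordinates: one cluster of four and one of two.
tiles = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [10, 10], [10, 11]])

# Tiles within eps of each other join the same cluster; -1 marks noise.
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(tiles)

# Pick the cluster with the most tiles, ignoring noise.
clusters = [label for label in labels if label != -1]
largest = max(set(clusters), key=clusters.count)
```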


Statshunters

The Statshunters page allows one to import activities from Strava and run analyses like explorer tiles, the Eddington number and many other things. It has served as inspiration for this project.


Strava

Although I have recorded some of my bike rides for a while, I only really started to record all of them when I started using Strava. It is a nice platform to track all activities and also offers a social network feature, which I don't really use. They provide some analyses of the data, but lack others, which I have now implemented in this project.


stravalib

Strava has an API, and with stravalib there exists a nice Python wrapper. This makes it much easier to interface with Strava.

Strava local heatmap


Vega & Altair

Velo Viewer