31 Jan 2021   coding  0   api, mastodon, javascript_modules

A look at how I implemented a simple microblog feed using the API of a Mastodon instance.

You’ll probably want to skip ahead if you already know what Mastodon is. Mastodon is a Twitter-like microblogging service. Where it differs from Twitter is that it is a decentralized service, i.e. there is no central place where all toots (the equivalent of tweets) from every user in the Mastodon universe, also known as the Fediverse, live. Instead, your toots live with one instance of Mastodon, the instance that you signed up with.

Maybe you’d ask me, “Why not just use Twitter and their API?” The reason is two-fold:

  • Despite Mastodon being Twitter-like, I find it more comfortable to actually write and share things on Mastodon, while I find myself staying quiet on Twitter.
  • Twitter’s API requires jumping through some hoops. They aren’t high hoops, but going through them felt somewhat arduous. Mastodon (particularly mastodon.online) just felt way easier.

Without further ado, let’s dive into what I did.

The Tech

And that’s it. What did I leave out that you might expect to be there? I am using Gulp as my task runner for various chores, but it is rather uninvolved in this process. In fact, for this module, I decided to write plain native JS without putting it through a transpiler, or even a minifier.

The Code

The idea is straightforward, and definitely simple for those used to consuming APIs: grab contents → wrangle data → filter (if necessary) → truncate → show the data!
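As a toy illustration of that chain (on dummy data, not the real Mastodon payload), the steps above look like:

```javascript
// The pipeline on dummy data: grab → wrangle → filter → truncate → show
const raw = [{ text: 'a' }, { text: 'skip' }, { text: 'b' }, { text: 'c' }];

const shown = raw
  .map((item) => item.text.toUpperCase()) // wrangle
  .filter((text) => text !== 'SKIP') // filter (if necessary)
  .slice(0, 2); // truncate

console.log(shown.join(', ')); // A, B
```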

First, this is my function for grabbing the API response.

```js
import { get } from './utils/get.js';

/**
 * Gets and returns the raw feed data from Mastodon's API
 */
async function getNewMastodonFeed() {
  const BASEURL = 'https://mastodon.online/api/v1';
  const USER_ID = 177072;

  const STATUSES_API_URL = BASEURL + '/accounts/' + USER_ID + '/statuses';

  return get(STATUSES_API_URL, 'json');
}
```

The get function here is just a promisified XMLHttpRequest, a helper that I’ve been using for the other API-driven content on the site (the LastFM recent songs list).
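The original helper isn’t shown here, but a minimal sketch of such a promisified XMLHttpRequest wrapper, assuming the `get(url, responseType)` signature used above (and not the site’s actual code), might look like this:

```javascript
// Sketch of a promisified XMLHttpRequest helper; the get(url, responseType)
// signature is assumed from how it's called above.
function get(url, responseType) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = responseType; // e.g. 'json' parses the body for us
    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(xhr.response);
      } else {
        reject(new Error(`Request failed with status ${xhr.status}`));
      }
    };
    xhr.onerror = () => reject(new Error('Network error'));
    xhr.send();
  });
}
```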

To reduce API calls to the server, a very simple caching mechanism is implemented. The following function wraps the API call above, and uses the cached data if it is available and relatively fresh. The “freshness” window is set to 12 hours, which is rather long but also fairly harmless.
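For reference, the magic number in the freshness check below is just 12 hours expressed in milliseconds:

```javascript
// 12 hours in milliseconds, matching the 43200000 in the freshness check
const TWELVE_HOURS_MS = 12 * 60 * 60 * 1000;
console.log(TWELVE_HOURS_MS); // 43200000
```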

```js
export async function getFeed() {
  const POST_LIMIT = 5;
  const SEPARATOR = '===SEPARATOR===';

  // Check the cache and see if the cached content is still usable
  const cachedDate = window.localStorage.getItem('feed-cache-date');
  if (cachedDate) {
    // Use the cache if it was refreshed within the last 12 hours
    if (new Date() - new Date(cachedDate) < 43200000) {
      return window.localStorage.getItem('feed').split(SEPARATOR);
    }
  }

  const newFeed = await getNewMastodonFeed();

  const formattedFeed = newFeed
    .slice(0, POST_LIMIT) // Show only N posts
    .map(formatPost) // See below
    .filter((item) => item !== false);

  window.localStorage.setItem('feed', formattedFeed.join(SEPARATOR));
  window.localStorage.setItem('feed-cache-date', new Date().toISOString());

  return formattedFeed;
}
```

We avoid quite a bit of unnecessary client-side computation by storing a formatted copy that can readily be shown. The formatting is done by the formatPost function mentioned above. The function is as follows:

```js
/**
 * Formats each item from the API response.
 * This is used as part of a `.map` operation.
 * (The closing markup and the post-body expression were reconstructed;
 * the original snippet was truncated.)
 */
function formatPost(item) {
  const { content, reblog, account } = item;

  const postCard = `<article class="microblog-card">
    ${
      reblog
        ? `<header class="microblog-card-prepend">
          <span class="fas fa-retweet" aria-label=""></span>
          ${account.display_name} boosted this
        </header>`
        : ''
    }
    <div class="microblog-card-body ${reblog ? 'reblogged' : ''}">
      <div class="microblog-card-author-image">
        <a href="${!reblog ? account.url : reblog.account.url}" target="_blank">
          <img src="${!reblog ? account.avatar : reblog.account.avatar}" />
        </a>
      </div>
      <article class="microblog-card-content">
        <h6 class="microblog-card-author">
          <a href="${
            !reblog ? account.url : reblog.account.url
          }" target="_blank">
            ${!reblog ? account.display_name : reblog.account.display_name}
          </a>
        </h6>
        <div class="microblog-card-post">
          ${!reblog ? content : reblog.content}
        </div>
      </article>
    </div>
  </article>`;

  return postCard;
}
```

To initiate the feed, we prepare the function below and run it when the script is loaded.

```js
async function showMastodonFeed() {
  const feed = await getFeed();
  const feedList = document.querySelector('.microblog-feed');

  feedList.innerHTML = feed.join('');
}

// Initiate the feed on load
// Since this is an async function, it's also non-blocking!
showMastodonFeed();
```

Since the refresh timer is rather long (12 hours, as mentioned above), I added a button to force a fresh load of data from the API.

```js
function refreshMastodonFeed() {
  const feedList = document.querySelector('.microblog-feed');
  feedList.innerHTML = 'Loading...';

  // Invalidate the cache so that getFeed fetches fresh data
  // (reconstructed; the original function body was truncated)
  window.localStorage.removeItem('feed-cache-date');
  showMastodonFeed();
}

// Add the refresh button listener on DOMContentLoaded event
// (the '.microblog-feed-refresh' selector is a placeholder;
// the original class name was lost)
document.addEventListener('DOMContentLoaded', function () {
  document
    .querySelector('.microblog-feed-refresh')
    .addEventListener('click', refreshMastodonFeed);
});

// Remove the refresh button listener when the window unloads
window.addEventListener('unload', function () {
  document
    .querySelector('.microblog-feed-refresh')
    .removeEventListener('click', refreshMastodonFeed);
});
```

The Experience

Mastodon’s API documentation

It’s good enough to give you a fair idea of what endpoints are available, but it would be fantastic if the possible values and types of the fields in the response body were made clear. Almost immediately after figuring out my user ID, I found myself inferring things from the payload instead of consulting the documentation.
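To illustrate what that inference looked like, here’s an abbreviated, hypothetical status object with just the fields this feed reads (the real payload carries many more):

```javascript
// An abbreviated, hypothetical status from /accounts/:id/statuses,
// trimmed down to the fields the feed code actually reads.
const sampleStatus = {
  content: '<p>Hello, Fediverse!</p>',
  reblog: null, // a nested status object when the toot is a boost
  account: {
    display_name: 'Japorized',
    url: 'https://mastodon.online/@japorized',
    avatar: 'https://example.com/avatar.png',
  },
};

// Dumping the keys was often faster than reading the docs
console.log(Object.keys(sampleStatus).join(', ')); // content, reblog, account
```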

Writing native JS with ES6 syntax

Being able to just write in native JS and have it run directly in the browser felt great. I didn’t have to muck around with webpack (as I did for most other pieces of JS on the site), or do weird gymnastics just to run some JS in an HTML file.
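For context, “running directly in the browser” here just means loading the file as a native ES module, which needs nothing beyond a script tag (the filename below is assumed for illustration):

```html
<!-- Native ES modules only need type="module"; the filename is assumed -->
<script type="module" src="/js/mastodon-feed.js"></script>
```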

Of course, there are downsides to this:

  • Old browsers, or quirky browsers that don’t support the latest standards, won’t be happy. For instance, the feed won’t work on suckless’ surf.
  • My code isn’t minified, which degrades load times somewhat. This can be improved by putting the JS file through a minifier, which is easy to do through Gulp.
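Such a minification task would only be a few lines in the gulpfile — a sketch assuming the gulp-terser plugin and made-up paths, not the site’s actual build setup:

```js
// Sketch of a Gulp minify task; gulp-terser and the paths are assumptions
const { src, dest } = require('gulp');
const terser = require('gulp-terser');

function minifyFeedScript() {
  return src('js/mastodon-feed.js') // source path is hypothetical
    .pipe(terser())
    .pipe(dest('dist/js'));
}

exports.minifyFeedScript = minifyFeedScript;
```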

Writing the feed posts like writing a component

This has been somewhat lacklustre. I was basically writing HTML in a JS template literal, which can be described as trying to drive straight off-road while avoiding obstacles.

But! Perhaps this is where I buckle up and start learning how to write standard web components, which I believe is the future that we should head towards.

- Japorized -

