Asko Nõmm

I make digital products.

ITYPD is now self-hosted

I've done it, and ITYPD now runs on a $7 per month VPS from DigitalOcean, making the whole thing cheaper by roughly $20 per month. I've been using DigitalOcean's services for nearly a decade now, and I've always been a happy customer. The whole ITYPD platform itself is composed of five Docker services, and things seem smooth thus far.

One notable change is that blogs no longer get a sub-domain blog address, but an itypd.com/@{blog} address instead. This is really due to me not bothering with the extra steps necessary to get wildcard certs from Let's Encrypt. That said, custom domains are now a possibility! All that's left is for me to implement the NGINX parts and an API to talk to those parts.

Thoughts on custom domains on ITYPD

Currently each blog gets a {name}.itypd.com domain, which is fine, but I definitely want people to also own their content and brand identity, so having custom domains coupled with fully exportable data is critical. Fully exportable data is a much easier thing to do, but custom domains, well, not so much.

ITYPD is hosted on Laravel Cloud right now, and while it's a bit pricey, I've been very happy with the convenience of that product. Namely, I've not needed to edit a config file, not even once. It just works. However, it has strict limits on custom domains, and I also need to add them programmatically, so Laravel Cloud quickly becomes a blocker for that feature.

I figure I need to spin up my own server where I can control the cert generation parts myself, and then write an API that lets me do all that magic from the service itself (or a service worker). I wonder how (if at all) I can avoid having to restart the HTTP server in order to create new server configuration reflecting any new domains (perhaps I can make it dynamic?), but I think that's the way.
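For what it's worth, NGINX can re-read its configuration gracefully with nginx -s reload, which doesn't drop existing connections, so a full restart shouldn't be needed. Here's a minimal sketch of what such an API call might do on the server side; the vhost template, file paths and proxy target are all my assumptions for illustration, not how ITYPD actually does it:

(ns domains
  (:require [clojure.java.shell :refer [sh]]))

(defn- vhost-config
  "Renders a minimal NGINX server block for a custom domain.
  The cert paths and proxy target here are hypothetical."
  [domain]
  (str "server {\n"
       "  listen 443 ssl;\n"
       "  server_name " domain ";\n"
       "  ssl_certificate /etc/letsencrypt/live/" domain "/fullchain.pem;\n"
       "  ssl_certificate_key /etc/letsencrypt/live/" domain "/privkey.pem;\n"
       "  location / { proxy_pass http://127.0.0.1:8080; }\n"
       "}\n"))

(defn add-domain!
  "Obtains a cert, writes the vhost, then reloads NGINX gracefully."
  [domain]
  (sh "certbot" "certonly" "--nginx" "-d" domain) ; hypothetical certbot invocation
  (spit (str "/etc/nginx/conf.d/" domain ".conf") (vhost-config domain))
  (sh "nginx" "-s" "reload")) ; re-reads config without a restart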

This does mean a lot more work on my side though - deployment pipelines, database management, worker management, security - things I'm not all that great at, but I think it's worth the effort.

ITYPD

I'm happy to say: no more social media platforms for me. No more algo-driven, AI-riddled, shallow platforms driven by people longing for attention, for people longing for attention. I'm making my very own, human-centered platform: ITYPD. The blogging of old, if you will, and I'll try to make sure no AI slop or bot content ever gets on the platform.

You can follow ITYPD's own blog for updates related to it here.

Routing with Ruuter in a Reagent / Re-frame project

Ruuter, my zero-dependency Clojure(Script) router, can also be used as a general-purpose router, without any HTTP server. This is true for both Clojure and ClojureScript, and, because the router has no dependencies, for Babashka and NBB as well. That's exactly what I did in a Reagent / Re-frame project recently, and here's how I did it.

At the core of it all are your routes; let's define them as something simple:

(def routes
  [{:path "/"
    :response (fn [_]
                [:div "Hello, World"])}
   {:path "/hello/:who"
    :response (fn [{params :params}]
                [:div "Hello, " (:who params)])}])

Unlike with an HTTP server such as HTTP-Kit, we don't need the route to have a :method, nor do we need it to return a response map. We can have it return anything we want, which in this case is a Reagent component.

Now let's create a Re-frame event for setting the URI path:

(ns events
  (:require
    [re-frame.core :refer [reg-event-fx]]))

(reg-event-fx
  :set-path
  (fn [{db :db} [_ path]]
    (.pushState (.-history js/window) nil "" path)
    {:db (assoc db :path path)}))

This allows us to call a :set-path event whenever we want to change the current route in-place, and it will also update the URL visible in the browser.
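For example, a minimal link component (hypothetical, assuming dispatch is required from re-frame.core) could dispatch :set-path instead of letting the browser do a full page load:

(defn link
  "An anchor that routes in-place by dispatching :set-path
  instead of triggering a full page load."
  [path & children]
  (into
    [:a {:href path
         :on-click (fn [e]
                     (.preventDefault e)
                     (dispatch [:set-path path]))}]
    children))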

Then let's create a Re-frame subscription, so we can listen to said path:

(ns subs
  (:require
    [re-frame.core :refer [reg-sub]]))

(reg-sub
  :path
  (fn [db _]
    (-> db :path)))

And finally let’s put it all to work in our core component:

(ns core
  (:require
    [reagent.core :as r]
    [reagent.dom :as rd]
    [re-frame.core :refer [dispatch dispatch-sync subscribe]]
    [ruuter.core :as ruuter]
    [events]
    [subs]))

(def routes
  [{:path "/"
    :response (fn [_]
                [:div "Hello, World"])}
   {:path "/hello/:who"
    :response (fn [{params :params}]
                [:div "Hello, " (:who params)])}])

(defn- app []
  (let [popstate-fn #(dispatch [:set-path (-> js/window .-location .-pathname)])
        path (subscribe [:path])]
    (r/create-class
      {:component-did-mount
       (fn [_]
         (dispatch-sync [:initialise-db])
         (.addEventListener js/window "popstate" popstate-fn))
       :component-will-unmount
       (fn [_]
         (.removeEventListener js/window "popstate" popstate-fn))
       :reagent-render
       (fn []
         (when @path
           (ruuter/route routes {:uri @path})))})))

(defn ^:export init []
  (rd/render [app] (.querySelector js/document "#app")))

As you can see, when the Reagent app loads, it adds an event listener for popstate, which fires when the user navigates browser history (for example with the back and forward buttons). Thus, if the user changes the URL that way, the app will call :set-path on its own. Regardless of whether you call the :set-path event yourself or the popstate event prompts that call, the end result is the same: it re-renders the app component, which then runs Ruuter again, matching against the new path and loading the corresponding component.

So if you now navigate to /hello/John, it should render “Hello, John” on the page. Oh, and currently when you visit the page via a direct link, it won't load the correct component, because the default path isn't set, so I recommend you set it via your Re-frame db initialisation, like so:

(ns events
  (:require
    [re-frame.core :refer [reg-event-fx]]))

(def default-db
  {:path (-> js/window .-location .-pathname)})

(reg-event-fx
  :initialise-db
  (fn [_ _]
    {:db default-db}))

And that's how you can use Ruuter to do any type of routing, whether on the Clojure side, in ClojureScript, or even in Babashka and NBB.
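As a quick taste of the non-browser case, here's a minimal sketch (my own, not from Ruuter's docs) of the very same ruuter/route call dispatching an arbitrary path with no HTTP server involved; the route simply returns a string:

(require '[ruuter.core :as ruuter])

(def routes
  [{:path "/hello/:who"
    :response (fn [{params :params}]
                (str "Hello, " (:who params)))}])

;; any string works as the URI; nothing here is tied to a browser or a server
(ruuter/route routes {:uri "/hello/John"})
;; => "Hello, John"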

Correcting Markdown: Newlines

Part of the upcoming 2.0 release of Clarktown are Correctors. Correctors, as the name suggests, correct inputted Markdown. They are the middlemen that the input goes through before the Markdown gets passed to the Parsers, which then do the job of converting Markdown into HTML.

In the future there will probably be many different types of Correctors, but at the time of writing there's only one type: Block Separation Correctors. These correctors ensure that there are empty newlines where needed, so that the Parsers get correct blocks, because in Clarktown everything is a block, separated by two newlines (\n\n, or \newline\newline in Clojure).

The problem

Take for example the following Markdown:

This is some paragraph text.
# This is some heading.

Since there's only one \newline between these two lines, Clarktown will think of it as one block, and the block Matcher (which identifies a block) will start from the beginning, see regular text, conclude the whole thing is just a paragraph, and render HTML like this:

<p>This is some paragraph text.
# This is some heading.</p>

Where instead what should be the end result is this:

<p>This is some paragraph text.</p>

<h1>This is some heading.</h1>

Now, while I personally do not write Markdown like that and always nicely add two newlines between blocks myself, some users will not, and for them the end result will be broken.

The solution

The solution to this problem is to create correctors. Essentially, we'll split the entire Markdown input into a vector of lines and go over each line. We then run the correctors over each of those lines, and they determine whether a fix is needed. Should there be a \newline above or below the current line? Perhaps both? A corrector will answer these questions.

The type of heading block that starts with a hash is called an ATX heading block, so let's create a function that determines whether we should have an extra \newline on top of the block by feeding it all the lines, the current line, and the current index, like this:

(defn empty-line-above?
  [lines line index])

First, let's make sure that this line is indeed an ATX heading block line:

(clojure.string/starts-with? line "#")

Then let's make sure that this is not the very first line, because if it is then there's no need to add anything above.

(> index 0)

Finally, the important bit: checking whether an extra \newline is actually required:

(not (= (-> (nth lines (- index 1))
            clojure.string/trim)
        ""))

You see, clojure.string/trim removes surrounding whitespace (newlines included), so if the line previous to the current one trims down to an empty string, there's already an empty line above us and no correction is needed; if it trims down to anything else, a \newline has to be added.

And so our final empty-line-above? corrector would be:

(defn empty-line-above?
  [lines line index]
  (and (clojure.string/starts-with? line "#")
       (> index 0)
       (not (= (-> (nth lines (- index 1))
                   clojure.string/trim)
               ""))))

There's a bit more to the corrector of an ATX heading block, such as the empty-line-below? function, as well as detecting whether we're in a code block (because we do not want to correct anything inside of a code block), but this here is the gist of it.
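For symmetry, a hypothetical empty-line-below? could do the mirror-image check; this is my sketch of it, not Clarktown's actual implementation:

(defn empty-line-below?
  [lines line index]
  (and (clojure.string/starts-with? line "#")
       (< index (dec (count lines)))       ; not the very last line
       (not (= (-> (nth lines (+ index 1)) ; is the next line non-empty?
                   clojure.string/trim)
               ""))))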

Bundling the correctors

Once we have a bunch of correctors, we don't want to manually integrate them, so we'd rather create a map, like this:

(def block-separation-correctors
  {:newline-above [...]
   :newline-below [...]})

The vectors of each will include references to functions like the one we created above (the empty-line-above? function).
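Filled in with the two correctors from above, that map would look something like this:

(def block-separation-correctors
  {:newline-above [empty-line-above?]   ; should a \newline be added above?
   :newline-below [empty-line-below?]}) ; should a \newline be added below?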

And we'll use these by running them over each line in our inputted Markdown, like so:

(let [lines (clojure.string/split-lines "our markdown goes here")
      above-correctors (:newline-above block-separation-correctors)
      below-correctors (:newline-below block-separation-correctors)]
  (->> lines
       (map-indexed
         (fn [index line]
           (let [add-newline-above? (some #(true? (% lines line index)) above-correctors)
                 add-newline-below? (some #(true? (% lines line index)) below-correctors)]
             (cond
               (and add-newline-above?
                    (not add-newline-below?))
               (str \newline line)

               (and add-newline-below?
                    (not add-newline-above?))
               (str line \newline)

               (and add-newline-above?
                    add-newline-below?)
               (str \newline line \newline)

               :else line))))))
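Note that map-indexed leaves us with a sequence of lines, so to get corrected Markdown back out you'd re-join them. Here's a minimal sketch that wraps the logic above into a hypothetical correct function (the str/when combination is equivalent to the cond):

(defn correct
  "Runs the block-separation correctors over `markdown` and returns
  the corrected string, with the lines re-joined by newlines."
  [markdown]
  (let [lines (clojure.string/split-lines markdown)
        above-correctors (:newline-above block-separation-correctors)
        below-correctors (:newline-below block-separation-correctors)]
    (->> lines
         (map-indexed
           (fn [index line]
             (let [above? (some #(true? (% lines line index)) above-correctors)
                   below? (some #(true? (% lines line index)) below-correctors)]
               ;; prepend and/or append a newline as needed
               (str (when above? \newline) line (when below? \newline)))))
         (clojure.string/join \newline))))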

And that mostly concludes how the \newline Markdown corrections are done in Clarktown. You can read more in the engine.clj file.

A contentEditable, pasted garbage and caret placement walk into a pub

Pasted garbage says to contentEditable: "Hey! I'd really like to become part of you," and contentEditable says back: "Not so fast, you! First we've got to rinse you down!" And thus begins a story of how to make contentEditable take in a good ol' paste, parse that paste for anything we might not want, put the result of that parsing into the right place in the contentEditable, and place the caret just after that paste. Sounds easy, right? Right.

The filthy default of contentEditable behaviour

By default, contentEditable takes in just about anything you'd like to give it. If you copy text from anywhere that also has mark-up and styles (like a Word document) and then paste it into the contentEditable, it will gladly take all that mark-up and styles as well. But this isn't a great user experience if you're building a content editor like I am, so the best solution is to parse that paste and remove anything you might not want - which in my case meant removing all styles and allowing only certain mark-up.

Rinsing down the paste

Alright, so let's create a simple contentEditable that also listens to the paste event. I'll be doing this in ClojureScript as it is my favourite language, using Reagent for the React goodness since this is a React app, but all of this applies to good ol' regular JS and React.js as well.

(defn contentEditable []
  [:div
   {:contentEditable true
    :on-paste #(on-paste! %)}])

Don't you just love how little code you have to write to make a React component in ClojureScript? I sure do, and this is totally NOT (wink wink) my way of saying you should try ClojureScript. Anyway, let's create the on-paste! function as well.

(defn on-paste! [event])

Oh shoot, it's empty! Yeah, so, I wanted to stop here because, little did I know, there's now a standard Clipboard API that you should use to get the pasted user content. It comes with a gotcha, though: as soon as you try to use it, the browser will ask the user to give your page permission to read clipboard data. I found that not very user friendly for something as simple as pasting text into an input, seeing as the browser won't ask that when you paste text into an input using the default behaviour, but anyway, c'est la vie.

So, retrieving the pasted content with the Clipboard API would look like this:

(defn on-paste! [event]
  (.then (.readText (.-clipboard js/navigator))
         (fn [clip]
           ;; `clip` contains the pasted content
           )))

Now the clip is the actual paste, along with all of its horrible formatting and styles, so I went along and used the sanitize-html NPM package to clean it right up (I do want to build a native Clojure version of this at one point, but for now this works just swell!). So, with that package, the on-paste! function would look like this:

(defn on-paste! [event]
  (.then (.readText (.-clipboard js/navigator))
         (fn [clip]
           (let [pasted-content (parse-html clip)]
             ;; do something with `pasted-content` here
             ))))

And the parse-html function would look like this:

(ns your-app
  (:require ["sanitize-html" :as sanitize-html]))

(defn parse-html [html]
  (sanitize-html
   html
   (clj->js
    {:allowedTags ["b" "strong" "i" "em" "a" "u"]
     :allowedAttributes {"a" ["href"]}})))

Which, as I'm sure you can tell, only allows the tags b, strong, i, em, a and u, and only allows attributes on the a tag, and only if that attribute is href. Pretty cool, right? I sure think so.
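To illustrate, here's what I'd expect based on sanitize-html's documented defaults (disallowed tags are stripped but their text is kept, while script contents are discarded entirely):

(parse-html "<p style=\"color: red\">Hi <b>there</b><script>alert(1)</script></p>")
;; => "Hi <b>there</b>"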

Putting the paste in the right place

Woah! That rhymed! Maybe I could have a career in hip hop after all, haha! Right, so now that we have the paste and we've successfully cleaned it of any garbage it might have had, we have to somehow put that paste into our contentEditable.

How do we do that? Do we simply insert it into the DOM element? That's not very React-y now, is it? What if we create a local state for the content and just modify that? That sounds a lot better, actually. Let's do just that by going back to our React component and changing it to look like this:

(ns your-app
  (:require [reagent.core :as r]))
  
(defn contentEditable []
  (let [content (r/atom "")]
    (fn []
      [:div
       {:contentEditable true
        :on-paste #(on-paste! content %)
        :on-input #(reset! content (.-innerHTML (.-target %)))
        :dangerouslySetInnerHTML {:__html @content}}])))

As you can see, we create a Reagent atom initialised to an empty string, which we then dereference into the contentEditable content using the :dangerouslySetInnerHTML attribute. On every change to the content (the :on-input event), we update the content atom so that it is always up-to-date with what is actually inside the contentEditable. Finally, notice the on-paste! call: we now pass the content along to it as well, so that the on-paste! function is aware of the current content.

So now, all we need to do to paste the content into the right place is change the on-paste! function to be aware of where the caret was when the paste happened, and insert the paste there. The on-paste! function will then look like this:

(defn on-paste! [content event]
  ;; stop the default paste so the browser doesn't also insert the raw content
  (.preventDefault event)
  (.then (.readText (.-clipboard js/navigator))
         (fn [clip]
           (let [pasted-content (parse-html clip)
                 selection (.getSelection js/window)
                 offset (.-anchorOffset selection)
                 new-content (string->string @content pasted-content offset)]
             (reset! content new-content)))))

So check this out: we get the current selection via (.getSelection js/window), which then allows us to get the caret offset using (.-anchorOffset selection), and that offset is key! It's how many characters from the beginning of the text your caret was at when you made the paste, and so it's also where we need to put the pasted content. I made a helper function called string->string for exactly that, and it looks like this:

(defn string->string [string inserted-string index]
  (let [split-beginning (subs string 0 index)
        split-end (subs string index)]
    (str split-beginning inserted-string split-end)))

Which takes the original content as string, then the content you want to insert into it as inserted-string and finally the index at which you want to insert that new content. It would then return the final string.
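For example, inserting at offset 5:

(string->string "Hello world" ", cruel" 5)
;; => "Hello, cruel world"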

And as you saw at the end of the on-paste! function, we called reset!, which basically just overwrites the content atom with the new content, prompting a re-render of the component. The contentEditable now has the pasted content, with all of the garbage removed, in the right place, as desired.

Why you got to Caret me like that?

One thing you may have noticed is that when pasting content, the caret itself will end up in the wrong place - or rather the right place, which is to say that the caret will stay where it was - but you probably expect it to end up just AFTER the pasted content, as that's how it usually works. This happens because while the content of the contentEditable changed, the caret position did not, so we have to change it ourselves.

Thankfully this is easier than one would think: we just have to take the current caret offset and add to it the number of characters in the pasted content. Let's say that your caret was at offset 10 and the pasted string has a length of 7; then naturally we want 10 + 7 = 17, which places the caret right after the 17th character.

To do this, we have to turn our component into a class component, because that's how you get lifecycle events in Reagent. Why? Because we need to be able to place the caret AFTER the component has rendered, not before, as we won't yet have the updated text in the contentEditable otherwise, and caret placement will throw an error for the index being out of bounds. So, with that in mind, the updated component would look like this:

(ns your-app
  (:require [reagent.core :as r]))

(defn contentEditable []
  (let [ref (r/atom nil)
        content (r/atom "")
        caret-location (r/atom nil)]
    (r/create-class
     {:component-did-update
      #(place-caret! ref content caret-location)
      :reagent-render
      (fn []
        [:div
         {:contentEditable true
          ;; store the actual DOM element so we can later place the caret in it
          :ref (fn [el] (reset! ref el))
          :on-paste #(on-paste! content caret-location %)
          :on-input #(reset! content (.-innerHTML (.-target %)))
          :dangerouslySetInnerHTML {:__html @content}}])})))

Aye! You can see that we're also passing the on-paste! function a new state variable called caret-location, which by default is nil; we'll use it to know where to put the caret with our place-caret! function, which is called from within the :component-did-update lifecycle event. We also create a new state called ref, which holds the actual DOM element of our contentEditable, so that we know which element to focus the cursor in.

Our updated on-paste! function should look like this now:

(defn on-paste! [content caret-location event]
  ;; stop the default paste so the browser doesn't also insert the raw content
  (.preventDefault event)
  (.then (.readText (.-clipboard js/navigator))
         (fn [clip]
           (let [pasted-content (parse-html clip)
                 selection (.getSelection js/window)
                 offset (.-anchorOffset selection)
                 new-content (string->string @content pasted-content offset)]
             (reset! content new-content)
             (reset! caret-location (+ offset (count pasted-content)))))))

So now the caret-location will hold whatever the offset was when you pasted, plus the length of the pasted content, so the caret should appear right after the paste. Well, not yet - we still have to create our place-caret! function, so let's go ahead and create it, looking like this:

(defn place-caret! [ref content caret-location]
  (when (and (not (nil? @caret-location))
             (>= (count @content) @caret-location)
             (.-firstChild @ref))
    (let [selection (.getSelection js/window)
          range (.createRange js/document)]
      ;; NodeLists aren't seqable in ClojureScript, so use .-firstChild
      ;; to place the range inside the element's first (text) node
      (.setStart range (.-firstChild @ref) @caret-location)
      (.collapse range true)
      (.removeAllRanges selection)
      (.addRange selection range)
      (.focus @ref)
      (reset! caret-location nil))))

What this function does is take a ref (the DOM element, i.e. our contentEditable) and the content and caret-location states. It then makes sure that caret-location doesn't exceed the length of the content (because if it does, we won't be able to change the caret location, as the index would be out of bounds), and it checks that caret-location is not nil - it's nil by default, so that we only invoke caret placement when we want to, which in our case is during paste.

Once all is good, we get the current selection, create a new range, set the start of the range to our caret-location, collapse the range, remove all existing ranges from the selection and add our new one instead, and then we focus the ref element and reset the caret-location state.

Browsers decode images differently

I'd like to put down some thoughts about how browsers decode images - and how they do it differently, which can make things a bit tricky for you if you want to deliver the same user experience for every user of your application.

So what does this mean exactly? Well, let's say that you have a single img tag on your web page, but you update its src attribute via JavaScript, and you do this often enough to trigger this bug in Firefox. You can easily trigger it if you hook a scroll event to switching the src attribute, so that the image source updates on each scroll. This works just fine in Chrome, but in Firefox the image will start blinking.

Why does it start blinking? Well, it has everything to do with image decoding. The reason it blinks in Firefox is that the image hasn't yet been decoded when your scroll event is triggered, but you are already attempting to display it - hence the blink. There's a pretty easy solution for this, which I also wrote about on the bug report, but the gist of the matter is that HTMLImageElement has a decode() method that returns a Promise, and you should not replace the src attribute until the decode finishes, which you can do like this:

const imgUrl = 'yournewimage.png'; // your new image
const img = new Image(); // create temporary image

img.src = imgUrl; // add your new image as src on the temporary image

img.decode().then(() => { // wait until temporary image is decoded
    document.querySelector('img').src = imgUrl; // replace your actual element now
});

You see, in Firefox, once the decode happens - even if it happens on an image element other than the one you are updating - the decoded result of that image is cached, and with it, the bug is resolved.

So I should always listen to the decode promise, right?

Technically yes; MDN recommends it as the way to know when it is safe to add the image to the DOM. But what happens in Chrome with this code? Well, turns out it slows to a crawl, and you're better off not using it. Now, I don't think Chrome has implemented this feature in any way differently from Firefox, except that for some unknown reason it is a lot slower, but I do think the two browsers decode images in a different way.

While in Firefox you will see an artifact in the form of a blink while the decode is taking place, in Chrome I think you simply don't see an updated image until that image has been decoded; thus there's no blink, and everything feels smoother, even if it's probably much the same under the hood. I tried to find more information on the differences but was unsuccessful, so if you do know something, please get in touch. For now, without knowing more, my best recommendation in such a case is simply to write one implementation targeting Firefox and another targeting Chrome, like this:

const firefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1;
const imgUrl = 'yournewimage.png';
const img = new Image();

img.src = imgUrl;

if (firefox) {
   img.decode().then(() => {
      document.querySelector('img').src = imgUrl;
   });
} else {
   document.querySelector('img').src = imgUrl;
}

In Firefox we wait for the decode Promise to tell us when we can safely update the image src attribute, as MDN recommends. Otherwise, we just update the src right away, without waiting for the decode to happen.

And that's how I unified the experience across Firefox and Chrome for this particular issue. It's actually funny, because just recently I remember thinking that browsers have come such a long way in the past 10 years that if you write something in one, it always works in the others. Well, almost always.