HTTP server in bash

@lokxii.bsky.social

HTTP server… in bash? My journey of learning web dev with strange technologies

Image: my web app with minimal HTML, CSS and JavaScript; screenshot taken from my iPhone.

The problem

Recently I bought a refurbished 14" M2 Pro MacBook Pro and installed Asahi Linux on it. I have been programming for a few years, and I have only ever programmed on macOS. I have spent enough time on Unix, so I decided it's time to move to Linux. I had a great time setting up the environment. Slowly but surely, I am getting used to the Linux environment.

See, I am a typical person trapped in the Apple ecosystem. I have a MacBook, an iPhone and an iPad (I don't have AirPods because I hate wireless headphones). Let's be honest, clipboard sharing and AirDrop are pretty awesome, except the times when they refuse to work properly. So realizing that I couldn't clipboard-share text from my iPhone to the Mac or AirDrop between the devices bothered me a lot.

I remember one time my friend sent me a URL to take a look at. Some functions on the website required a hardware keyboard to work properly, so I hand-typed the URL (with some long url-encoded UTF-8 string) into Firefox on my Mac. Maybe I could have opened the WhatsApp client in the browser, but then I would have to scan the QR code and wait for it to sync, which often takes ages.

I need a solution to this.

Reinventing the wheel, that's how you learn as a programmer

I didn't do any research on how clipboard sharing is done on Apple devices, but I can imagine there is a daemon running in the background monitoring clipboard events and broadcasting them to other devices on the network. I'm not sure if I can install a daemon on iOS without jailbreaking and broadcast clipboard events to Asahi Linux. Even if I could, the project scope would grow indefinitely and I might get burnt out by Apple documentation before finishing the project.

I bet there are existing solutions, but we programmers always love to reinvent the wheel.

Maybe this can be done by building a simple web app that only has 2 buttons: one for copying from my computer and one for pasting to it. Maybe I can write the frontend with plain JavaScript, HTML and CSS. Maybe I can write the backend in Bash only.

Ok calm down, calm down. Let me explain.

You see, I have never written a fullstack web app. My past web development experience amounts to writing a backend for a project with my friends using Python and Flask, and writing bsky bots hosted on Render. The frontend world is a chaotic dumpster fire where different frameworks argue over which is the best solution to frontend, solving problems that I don't even understand. I figured that I should start simple and understand the problems before adopting the solutions. In any case, using a framework to build a single-page website with only 2 buttons is definitely overkill.

But why Bash?

During the course of getting used to the Linux workflow, I have built a habit of writing small Bash tools for my daily workflow. For example, I have a script called wifi-toggle which is pretty self-explanatory. I imagined there would be a server listening for requests, piping them to wl-copy or responding with wl-paste (Asahi Linux uses Wayland). Maybe a simple Bash script can do the job. This is also a good chance for me to learn how HTTP actually works. I remember watching a video about someone writing a Minecraft server in Bash. If one can write a Minecraft server in Bash, why can't I write an HTTP server?

It won't be that difficult right?

What is a "line"

The first thing to do is to bind and listen to a port. We can use netcat to do the job. Upon Reading The Friendly Manual, I realized that all I have to do is write the request router.

# This starts a server listening on port 5000, piping each request to
# router.sh and responding with router.sh's output
ncat -lk 5000 -e ./router.sh

I decided to write a file-based router. The files will be either HTML, JS and CSS files, or scripts to be executed. All I have to do is parse the HTTP request headers line by line, get the path and return the results.

In the world of HTTP requests, a line is terminated by "\r\n", but the read command sees a line by seeking "\n" only. After a few hours of trial and error and googling (no ChatGPT used), I came up with the following way to parse the headers and the payload.

header=()
content_length=""
# read -d uses only the first character of its argument, so -d $'\r' stops
# at the CR; the bare read then consumes the trailing LF
while IFS= read -rd $'\r' line && read -r && [[ -n "$line" ]]; do
    echo "$line" >> "$REQUEST_FILE"  # save the request for logging / later use
    header+=("$line")

    # bash regexes don't support \s or \d, so use POSIX character classes
    if [[ "$line" =~ ^[Cc]ontent-[Ll]ength:[[:space:]]*([0-9]+) ]]; then
        content_length="${BASH_REMATCH[1]}"
    fi
done

# Only try to read the payload when Content-Length is specified
if [[ -n "$content_length" ]]; then
    dd ibs=1 count="$content_length" of="$BODY_FILE" 2>/dev/null
fi

METHOD=$(printf '%s' "${header[0]}" | awk '{ print $1 }')

Notice the use of dd. ncat doesn't seem to close stdin after the whole request has been written to it. If we use cat to read the payload into a file, it will wait indefinitely for more incoming data. I should have returned 411 Length Required when Content-Length is not specified, but I believe most browsers will include the header. I don't need the router to be perfect anyway.

Constructing the response

After parsing the headers, I set up a pipeline to process the request and return the file.

get_route |
    route_to_file_path |
    construct_body |
    construct_http_response

get_route and route_to_file_path are pretty self-explanatory: they turn the request route into an actual file path on my computer. construct_body tries to read the requested file, or execute it depending on the file's permissions, and returns the HTTP response status, the payload, and the Content-Type.
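The first two stages aren't shown here, but a minimal sketch might look like this. The saved request file, the routes/ directory layout and the query-string handling are my assumptions, not the project's exact code:

```shell
# Hypothetical sketch of the first two pipeline stages
get_route() {
    # the request line looks like: GET /clipboard?x=1 HTTP/1.1
    # print the second field and drop any query string
    awk 'NR==1 { print $2 }' "$REQUEST_FILE" | cut -d'?' -f1
}

route_to_file_path() {
    read -r route
    # map "/foo" to "$BASE/routes/foo"
    echo "$BASE/routes$route"
}
```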

construct_body() {
    read -r path
    status=200

    if [[ -d "$path" ]]; then
        path+="/index.html"
    fi
    path=$(readlink -f "$path")

    if [[ ! -e "$path" ]]; then
        status=404
        echo "Not Found" > "$SERVER_OUT_FILE"
        path=$SERVER_OUT_FILE

    elif [[ ! $(dirname "$path") =~ $BASE/routes.* ]]; then
        dirname "$path" >&2
        echo "$BASE/routes" >&2

        status=403
        echo "Forbidden" > "$SERVER_OUT_FILE"
        path=$SERVER_OUT_FILE

    elif [[ -x "$path" ]]; then
        # executable routes read the payload on stdin and exit with the status code
        "$path" "$REQUEST_FILE" "$BASE" "$ADDITIONAL_HEADER" < "$BODY_FILE" > "$SERVER_OUT_FILE"
        status=$?
        path=$SERVER_OUT_FILE
    fi

    echo "$status"
    echo "$path"
    echo "$path" | mime_type
}

Finally, the results are used to construct the HTTP response, which is written to stdout for ncat to send back to the client.
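The final stage could look something like the following sketch. The exact header set and the reason phrases are my assumptions; it just reads the three lines construct_body emits and assembles a response:

```shell
# Hypothetical sketch of the last pipeline stage
construct_http_response() {
    read -r status
    read -r path
    read -r mime

    local reason
    case "$status" in
        200) reason="OK" ;;
        403) reason="Forbidden" ;;
        404) reason="Not Found" ;;
        *)   reason="" ;;
    esac

    printf 'HTTP/1.1 %s %s\r\n' "$status" "$reason"
    printf 'Content-Type: %s\r\n' "$mime"
    # Content-Length is essential: without it browsers wait forever for more data
    printf 'Content-Length: %s\r\n' "$(wc -c < "$path" | tr -d ' ')"
    printf '\r\n'
    cat "$path"
}
```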

See? It isn't really that difficult. It only took me 3 days to write 145 lines of Bash, including spending a few hours realizing I have to add Content-Length to the HTTP response header so that browsers don't wait indefinitely for more data.

iOS sucks

HTML isn't really that hard, at least for a page that only has a title and 2 buttons. The real hard part is the CSS and JavaScript. I quickly wrote the HTML (15 mins) and CSS (5 hours; how do you center flex-wrapped items?). But when I started writing the JavaScript, I soon ran into a few problems.

  1. The clipboard API requires HTTPS
  2. The classic NotAllowedError: The request is not allowed by the user agent or the platform in the current context

Adding TLS to HTTP is pretty simple. Although ncat has a --ssl option, it somehow refuses to handle simultaneous requests when SSL is on. So I decided to add a TLS reverse proxy. I chose stunnel, but there are plenty of options out there.
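For reference, a minimal stunnel configuration for this setup might look like the following sketch; the port numbers and certificate paths are assumptions:

```ini
; terminate TLS on port 8443 and forward plaintext HTTP to ncat on port 5000
[https]
accept = 8443
connect = 127.0.0.1:5000
cert = /etc/stunnel/cert.pem
key = /etc/stunnel/key.pem
```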

The second problem is a bit more complicated. To write to the system clipboard, we have to use navigator.clipboard.write(), which takes an array of ClipboardItem. To construct one, we have to pass a dictionary with the MIME type as the key and a blob as the value. So to "copy" data from the server, I make a request to the endpoint responsible for retrieving the system clipboard on Linux, and pass the payload as a blob to ClipboardItem.

Here is how it would look:

setTimeout(async () => {
    const res = await fetch(uri);
    const blob = await res.blob();
    await navigator.clipboard.write([
        new ClipboardItem({
            [blob.type]: blob
        })
    ]);
});

But on Safari, you have to think different. It seems to require the blob in ClipboardItem to be a promise wrapping the fetch call. That means I have to first make an HTTP OPTIONS request to get the Content-Type, and then make a second HTTP GET request for the blob content and pass it to the clipboard API.

Implementing the behaviour isn't that difficult, but guessing the "correct" way to do things is the most frustrating part. Think different, huh.

Know when to stop

The paste button is pretty straightforward: you get the clipboard content and send it to the server. I added an upload progress bar with XMLHttpRequest, although most of the time it just disappears instantly.

Now that clipboard sharing was done, I could have stopped working on this project. But then I thought, can I also implement AirDrop? AirDrop is just file uploading and downloading. Surely I can implement it in one day, right?

Downloading is pretty easy. Make an endpoint, create an <a> tag that links to the endpoint, click it, Done!

But uploading is not that straightforward. Uploading can be done with <input type="file" />, but that means I have to parse form data on the server side. When the form's enctype is set to multipart/form-data, there's a boundary string separating the input fields and uploaded files. Luckily, the form data I'm sending contains only one file, so I can just use simple maths to extract the file from the form data; otherwise I would need to find some way to parse it properly.
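With only one part in the body, the "simple maths" can be sketched roughly like this. The helper name and the assumption that no header line is a bare CR are mine, not the project's code:

```shell
# Hypothetical sketch: extract the single uploaded file from a
# multipart/form-data body, assuming exactly one part
extract_upload() {
    local body=$1 out=$2
    # the first line is "--boundary" followed by CRLF
    local boundary
    boundary=$(head -n1 "$body" | tr -d '\r')
    local total header_end start trailer count
    total=$(wc -c < "$body")
    # byte offset of the blank CRLF line that terminates the part headers
    header_end=$(grep -abm1 -x $'\r' "$body" | cut -d: -f1)
    start=$((header_end + 2))        # file bytes begin right after that "\r\n"
    trailer=$(( ${#boundary} + 6 ))  # "\r\n" + "--boundary--" + "\r\n"
    count=$((total - start - trailer))
    tail -c +$((start + 1)) "$body" | head -c "$count" > "$out"
}
```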

I added a button in the top right corner of the page to switch to the file sharing page. The new page has the same layout as the clipboard sharing page to keep things simple.

Now everything is done. I could have sat down to brainstorm more features to add to the page, but I decided to end here so that I don't keep working on this project forever. Knowing when to stop is very important.

To host the server, I wrote a simple script to start a background process on my computer. My devices are connected using ZeroTier private network. Everything worked fine first try (translation: another few hours of debugging) and finally the project came to an end (translation: hopefully I won't find more bugs in the future).

Conclusion

So the HTTP server is just a file-based router with minimal processing (most of the time it is just parsing an HTTP request and returning a file). You may wonder about the performance.

Going to https://localhost:5000 on my computer and looking at the network timings, the waiting time for each request was 80ms on average. When I open the page on my iPhone, each request generally takes 100ms. I don't know how fast or slow that is, but loading the page feels instant from my iPhone. Maybe there are ways I could optimize the server, but the UX is good enough. I don't think I will touch the codebase anymore, as long as it doesn't break some day.

It was pretty fun writing the web app. I learnt a lot about the basics of full-stack development, especially CSS and HTTP requests. I'm glad that I didn't use any frameworks or libraries for this project.

Update

So I finally cleaned up the mess a bit and open-sourced the project: https://github.com/lokxii/clipboard-server

Feel free to enjoy the madness of a quickly-pieced-together project.

