
Wasm: The Hero We Need

22 Feb, 2023

To say that I am a huge proponent of WebAssembly (Wasm) and its stated outcomes would be an understatement. I am thoroughly convinced that what it brings to the table is a considerable step forward for the software industry, and I am extremely enthusiastic about a future influenced by Wasm.

For the uninitiated: Wasm has grown beyond the browser, piqued the interest of some, and sparked an enthusiastic community of builders. While I could regurgitate everything about Wasm and what makes it exciting, I will refrain from doing so here. Such content already exists, and I encourage you to sift through the morass of information surrounding Wasm and form your own opinion.

It is, however, important for me to look past the hype-generating content and to recapitulate the essence of Wasm for the topic at hand: namely, that Wasm is not only ideal for software beyond the browser, but that it arrived just in time.

A Blast From The Past

When boiled down, Wasm is intended to be an intermediate binary format (sketched briefly after the list below) that is:

  • small
  • fast
  • secure
  • highly portable
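
To make those goals concrete, here is a minimal, purely illustrative sketch, not taken from any official documentation: a plain Rust function compiled, once, into a Wasm module. It assumes a Rust toolchain with the wasm32-unknown-unknown target added via rustup; any language with a Wasm backend would do just as well.

```rust
// add.rs: a hypothetical, minimal module.
// Build once with (assuming the target has been installed):
//   rustup target add wasm32-unknown-unknown
//   rustc --target wasm32-unknown-unknown --crate-type cdylib -O add.rs
// The resulting add.wasm is a compact, sandboxed module exposing a single
// exported function that any compliant Wasm host can call, regardless of
// the operating system or CPU underneath.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

That one small file speaks to all four goals: the module is tiny, it runs at near-native speed under a JIT or AOT runtime, it executes inside a sandbox with no ambient access to the host, and it is tied to no particular instruction set.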

If you parse this carefully, it may become clear that the goals behind Wasm, and by extension Wasm itself, are not at all novel. We have, in fact, already attempted to achieve such idyllic goals many times over. Some of you may even have a lofty and oft-quoted tagline from Sun Microsystems surfacing in your mind: "Write once, run anywhere."

Wasm is indeed analogous to Java bytecode, or to the .NET Common Language Runtime's intermediate language. This very fact is a common criticism levelled at Wasm.

Anyone sceptical of Wasm might then ask: "What makes Wasm different and worthy of our attention this time?" The difference, if you ask me, is not even technical – even though such differences do exist and may be convincing in their own right. No, the difference I see lies in the timing.

Timing is Everything

We’re in

Software is nearly ubiquitous and pervades practically every aspect of human life. We are at a point where it is extremely unlikely that we can undertake any task without some software executing somewhere in the chain of causation. As a result, we appear to be heading toward a world where we may be fully dependent on, or inextricably linked to, technology enabled by software.

Yet we have been unable to come to terms with the fact that, every day, the body of improperly maintained and/or misconfigured software outpaces our efforts to keep up. We are seeing the implications crop up in the news with increasing frequency. There may very well be a software reckoning, with the bytes of the past coming back to haunt us, unless we do something about it.

Doomsaying aside, addressing software security is an ever-increasing concern, one that has birthed entire specialisations in our industry, a mark of just how important it has become.

Everything Everywhere, All At Once

Speaking of that which is ubiquitous: adoption of the Internet of Things (IoT) and its applications has risen and is likely to continue to do so.

Every day more and more devices join an ever-expanding global network. This produces a host of challenges, ones we would do well to get ahead of before they inevitably yield unsatisfactory outcomes. Last I checked, we don’t have the necessary skills and manpower to create and maintain software that operates on the myriad of devices with specific architectures and constraints that currently exist, let alone those that await us in the future.

Also worth considering is that IoT could be the instigator of blurring the line between the cloud, the edge, and everything in-between. Given the vast network of devices, does it make sense to make expensive and unreliable network calls to some distant cloud or edge node? Assuming we had a way to run any software quickly and securely on virtually any device that advertises its support, would it not be beneficial to run said software on a nearby device?

This may seem speculative, and it is indeed nuanced; the question, however, remains: are we ready to tackle a world in which there are no designated tiers of compute, but simply an amorphous, heterogeneous pool of devices on which to run software?

Even now we are encountering situations where portability would be ideal. In the cloud, the relatively recent introduction of ARM-based CPUs is a major boon; however, not everyone can take advantage of it, since native binaries built for x86 must be recompiled, and their dependencies revalidated, for the new architecture.
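
To illustrate, here is a sketch, under stated assumptions rather than a definitive recipe, of a host program embedding a Wasm module using the wasmtime and anyhow crates, two commonly used Rust libraries. The exact API differs between wasmtime versions, so treat the details loosely.

```rust
// A hypothetical host: it loads the add.wasm module sketched earlier and
// invokes it. The same .wasm file runs unchanged whether this host is an
// x86_64 server or an ARM-based cloud instance; the runtime compiles it
// for the local CPU at load time.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "add.wasm")?;
    let mut store = Store::new(&engine, ());

    // No imports are needed; the module is self-contained.
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;

    println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}
```

The artefact you ship is the same on every architecture; only the host runtime is platform-specific, and that is somebody else's well-tested problem.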

The Way We Are

Another time-related factor to consider is how our roles in, and views on, software development have evolved. To put it succinctly, they have been consistent only in their inconsistency.

We have, however, mostly settled on certain design principles and approaches that have, more often than not, been deemed desirable for software in a contemporary context. Here I am referring to the buzzwords that tended to stick around, such as statelessness, service-oriented architectures, event-driven systems, continuous integration and continuous delivery/deployment, and agile processes, just to name a few.

Importantly, this has been driven by our shifting the bulk of our compute back to the mainframe, or rather, the cloud. The flexibility and relative ease of use afforded by automatable infrastructure and the everything-as-a-service model have left an indelible mark on how we develop software.

We have even had some time to (hopefully) learn from our past mistakes, and we can make more informed decisions about what works under which conditions. The problem is that these solutions are sometimes relatively complex and fraught with numerous moving parts.

Tricking Rocks Into Thinking

Then there is Moore’s law. Can we confidently say that the law will continue to hold?

This is a much-debated topic, and there is immense room for interpretation. I remain unconvinced, however, that we will continue to see the same pace of improvement for much longer.

We are still likely to see performance increases from upcoming processors, but there are signs that we may well be squeezing the last drops from this particular lemon.

That is to say, we may have to look elsewhere for improvements in computing, and we cannot simply continue to ride the wave of increased processor performance we have taken for granted over the years. It seems to me that the software industry has room to improve and get creative.

Green Threads

Also at the forefront of our minds ought to be the environment. With the backdrop of climate change, record-setting temperatures, and increasing weather anomalies, there is a marked uptick in environmental conscientiousness.

We, or at least the majority of us in the software industry, rarely, if ever, give a second thought to how our software contributes to the overall rise in global carbon emissions, or otherwise impacts the environment. We should, perhaps, look into those wasted CPU cycles.

Enter Stage Left: Wasm

So then, will Wasm arrive to save the day? Much to my dismay, I am unable to predict the future, and I cannot definitively state that Wasm is the solution to all our problems; but then again, who could? I would still bet on Wasm succeeding rather than failing, at least insofar as it may be a stepping stone for further development.

Is it truly unreasonable to assume that it may well fully achieve its stated outcomes and help alleviate our security woes, given that security was inherent to Wasm's conception?

Facing an expanding ocean of devices that are joining global networks, each with unique constraints and architectures, would Wasm not alleviate future headaches?

Given our modern proclivity towards rapidly delivered and preferably small pieces of software in increasingly complex systems that are ideally event-driven in nature, would Wasm not flourish?

At a time when we are reaching the physical limits on the number of transistors we can cram onto a single silicon die, would Wasm not afford us the flexibility to get creative while maintaining acceptable levels of raw performance?

In the wake of rising eco-conscientiousness, would Wasm not enable us to run software only on an as-needed basis where it is most efficient?

If you are inclined to disagree with the above points on a purely technical basis, I would draw your attention to the fact that these are the current trends in our industry. These points are worthy of debate, yet I do not think I am being naive in stating that good technology alone does not guarantee adoption; for that matter, I could easily be convinced that even bad technology has been adopted for far more spurious reasons.

Wrapping Up

Of course Wasm is not some panacea, and of course more work needs to be done, work that could drastically alter the outlook. In fact, it is worth noting that my enthusiasm surrounding Wasm is not necessarily for Wasm itself, but for the initiatives and projects it enables. My point is that Wasm not only has technical merit and is more collaborative and open than previous, similar attempts, but that its distinguishing factor is its timing. The timing is such that it seems we may very well need Wasm to help solve some of the problems we are facing.

Besides, I do not think we have much to lose given what is at stake. In the worst case scenario, we can simply toss Wasm onto the pile of "failed" technologies that we will inevitably revisit and learn from.

Image by stevepb from Pixabay.

Jan-Justin van Tonder
Cool, calm, contemplative and a constant tinkerer. I am always on the lookout for new, strange and wonderful challenges to overcome. After chasing ever-expanding horizons, I unwind by seeking out truly compelling stories, be it between the pages of a book or rendered on a screen.