I’m one of those people with too many interests and no single overriding passion. This naturally makes me a generalist rather than a specialist. Over the past 15+ years, in both academia and industry, I’ve worked across a range of domains: from formal verification of embedded systems and software product lines to building distributed, data-intensive microservices, ML infrastructure, privacy-preserving frameworks, and solutions for securing software supply chains.
What I’ve aimed to keep constant throughout is delivering excellence. Being a generalist doesn’t mean lacking depth; rather, I focus on acquiring deep knowledge in the areas that matter most, i.e., those that yield the highest impact and form the core substance of a given field.
This approach to learning is perhaps the most valuable skill I gained during my PhD in empirical software engineering, and through years of collaboration with leading researchers. Constructive skepticism, a demand for solid evidence, and the willingness to discard anything that doesn’t meet the bar have since shaped my professional practice.
During my time as the CTO of Testify AS, the question that occupied my mind the most was: how can one make good decisions quickly? I’ve since come to believe that rapid decision-making is only possible when you’re repeating familiar patterns. Breakthroughs, by contrast, require tolerating ambiguity and swinging between belief and doubt over extended periods of time. Both modes of work are rewarding in their own way, but can become cumbersome over time. The ideal work strikes a balance between the two.
In this role:
In this role, I was part of project-oak, focused on building a Trusted Runtime for running privacy-preserving applications.
Based on an analysis of prior user feedback, I proposed and implemented a mechanism to improve the chatbot.
In the context of the project for modernizing the Norwegian population registry:
In this role, I led the Model-Fusion project, co-supervised a PhD candidate, and contributed to building a software change recommendation system, in the context of the evolveIt project.
My PhD project started with a focus on integration problems in cyber-physical and integrated control systems (e.g., subsea systems). These problems surfaced during system integration testing. A root-cause study at an industry partner identified the sheer number of configuration parameters as a major culprit. I studied software product-line architectures and proposed and developed a number of techniques, some using constraint programming, for the semi-automated, consistent configuration of large embedded software systems. See my dissertation.
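To give a flavor of the constraint-based configuration idea, here is a toy sketch. The feature names and constraints are hypothetical, and a brute-force enumeration stands in for a real constraint solver; the actual techniques in the dissertation are far more involved.

```python
from itertools import product

# Hypothetical feature flags for an embedded control system.
FEATURES = ["redundant_sensor", "safety_monitor", "low_power_mode", "fast_telemetry"]

# Cross-parameter constraints: each takes a configuration dict and
# returns True if the configuration satisfies it.
CONSTRAINTS = [
    # A redundant sensor requires the safety monitor to arbitrate readings.
    lambda c: not c["redundant_sensor"] or c["safety_monitor"],
    # Low-power mode is incompatible with fast telemetry.
    lambda c: not (c["low_power_mode"] and c["fast_telemetry"]),
]

def valid_configurations():
    """Enumerate all feature assignments that satisfy every constraint."""
    for values in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, values))
        if all(check(config) for check in CONSTRAINTS):
            yield config

def complete(partial):
    """Semi-automated configuration: extend a partial user choice
    to all full, consistent configurations."""
    return [c for c in valid_configurations()
            if all(c[f] == v for f, v in partial.items())]
```

For example, `complete({"redundant_sensor": True})` returns only configurations in which the safety monitor is also enabled, so an engineer never has to track that dependency by hand.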
In this project, we relied on verifiable data structures to build an end-to-end release process that traces software binaries back to their source code in a transparent and verifiable manner. The idea is very similar to Certificate Transparency.
See the project's GitHub repository.
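The core data structure behind such transparency logs is a Merkle tree with inclusion proofs. The sketch below follows the hashing scheme of RFC 6962 (the Certificate Transparency spec) and is not the project's actual code; the entry names are made up.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# RFC 6962 uses distinct prefixes for leaves and interior nodes
# to prevent second-preimage attacks.
def leaf_hash(entry: bytes) -> bytes:
    return _h(b"\x00" + entry)

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)

def _split(n: int) -> int:
    """Largest power of two strictly less than n."""
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def merkle_root(entries):
    if len(entries) == 1:
        return leaf_hash(entries[0])
    k = _split(len(entries))
    return node_hash(merkle_root(entries[:k]), merkle_root(entries[k:]))

def inclusion_proof(entries, index):
    """Sibling hashes (with their side) needed to recompute the root."""
    if len(entries) == 1:
        return []
    k = _split(len(entries))
    if index < k:
        return inclusion_proof(entries[:k], index) + [("R", merkle_root(entries[k:]))]
    return inclusion_proof(entries[k:], index - k) + [("L", merkle_root(entries[:k]))]

def verify(entry, proof, root) -> bool:
    acc = leaf_hash(entry)
    for side, sibling in proof:
        acc = node_hash(sibling, acc) if side == "L" else node_hash(acc, sibling)
    return acc == root

# A log of three release binaries (hypothetical digests).
entries = [b"app-1.0.0", b"app-1.0.1", b"app-1.1.0"]
root = merkle_root(entries)
assert verify(entries[1], inclusion_proof(entries, 1), root)
```

A client holding only the signed root can thus check, with a logarithmic number of hashes, that a given binary really appears in the log.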
Talks and publications:
Relying on a number of cutting-edge technologies, including Trusted Execution Environments (TEEs), Remote Attestation, and sandboxing (e.g., using WebAssembly), the team developed a Trusted Runtime. We used Rust as the main programming language.
The goal of this project was to generate synthetic, statistically representative population data for use in testing. I suggested framing the problem as a language-modeling problem, built a proof of concept by training a char-RNN, and proposed the overall architecture.
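To illustrate the language-modeling framing: records are flattened into character sequences, a model learns the next-character distribution, and sampling from it yields synthetic records. A char-RNN is too large for a short sketch, so this toy version uses a character bigram model as a stand-in; the record format and names are invented.

```python
import random
from collections import defaultdict

def train_bigram(records):
    """Count next-character frequencies; '^' and '$' mark record start/end."""
    counts = defaultdict(lambda: defaultdict(int))
    for rec in records:
        chars = ["^"] + list(rec) + ["$"]
        for prev, nxt in zip(chars, chars[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_record(counts, rng, max_len=40):
    """Generate one synthetic record by sampling characters until '$'."""
    out, prev = [], "^"
    while len(out) < max_len:
        choices, weights = zip(*counts[prev].items())
        nxt = rng.choices(choices, weights=weights)[0]
        if nxt == "$":
            break
        out.append(nxt)
        prev = nxt
    return "".join(out)

# Hypothetical flattened registry records: "name;birth-year".
records = ["Kari;1984", "Ola;1991", "Kari;1975", "Nora;2002"]
model = train_bigram(records)
rng = random.Random(0)
synthetic = [sample_record(model, rng) for _ in range(5)]
```

The generated records are statistically plausible but not copies of any real record, which is exactly the property needed for privacy-friendly test data; the char-RNN replaces the bigram counts with a learned recurrent model that captures much longer-range structure.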
Talks and publications:
Thesis: A Model-Based Approach to the Software Configuration of Integrated Control Systems
Thesis: Improving Model Checking Using Reinforcement Learning
Thesis: Modeling and Verifying Timed Systems Using Rebeca