Interview: Matthias Puech (GRM)
Matthias Puech is a French composer, instrument designer, and researcher from Paris. He currently works as a research and development scientist at the French experimental music institution INA-GRM, where he heads the development of the legendary GRM Tools suite. He is also the lead developer of the new GRM Atelier software, a modular sound processing sandbox that recently celebrated its 1.0 release.
Impressed by the sound and capabilities of Atelier, we reached out to Puech to chat about a wide variety of topics, including his background in academic computer science and early love for GRM Tools, his Eurorack collaborations with 4ms, the history of GRM, DSP design philosophy, handling feature requests, and the upcoming development roadmap for Atelier.
Stromkult: Can you say a little bit about your background? You originally started out as a researcher in academic computer science, right?
Matthias Puech: So I have a background in pure theoretical computer science, which is basically the branch of mathematics that deals with computing and reasoning. But you know, I fell in love with music and computers simultaneously, for me they have always been very related. And the start of it all was really a result of the fact that I grew up in Paris, and we are very lucky to have two big centers for experimental and electronic music here – the GRM and IRCAM – and I was introduced to them from very early on.
My mom basically, quite randomly, took me to IRCAM one Saturday afternoon somewhere around 1996 when I was twelve, and I think she quickly saw that I was very interested in what was going on there. So I ended up going to IRCAM quite often as a child, and even took a few of their classes – and it’s funny looking back, because those were classes meant to introduce actual working composers to the new digital tools that were being developed at the time, and there I was, as a twelve-year-old (laughs). Obviously, I didn’t really fully understand what was going on, but I knew that this was something that interested me and that I wanted to learn more about.
I also had a great and supportive university professor who introduced me to the field of formal reasoning, which lies somewhere between computer science, philosophy and mathematics. And then I suppose the combination of all of that eventually led to me pursuing an academic career in pure computer science, which I stayed in professionally for over a decade. But over time, I just got very bored with the academic world – and so around 2015/2016, when I was doing my first DSP work with the Parasites firmware, I was already really on my way out of academia and looking for something else, which then eventually led me to my current work here at GRM.

Image Credit: Nils Maisonneuve
Do you remember the first piece of audio software you were using when you were young?
Oh man, that takes me all the way back (laughs)! I remember ReBirth was a big thing at the time in the late 90s. There were also quite a few plugins already, and I think a pre-Apple version of Logic was the first DAW I used. There was also Max/MSP of course, but I didn’t have access to it since you needed a Mac to run it at the time, and for a kid like me, having a Mac was more like something you dreamed about, an unattainable fantasy (laughs). But you know, looking back from what I’m doing now, the software that really stuck with me the most when I was young actually was the GRM Tools suite. I don’t know why, but I got really into it from very early on, and I remember using those plugins on literally everything I was making as a fourteen-year-old (laughs).
Now I’m dying to know what kind of music you were making with GRM Tools as a fourteen-year-old (laughs).
God damn, that’s embarrassing (laughs) – although, I suppose I actually wasn’t doing anything too different from what I am doing now (laughs). In terms of my musical influences, I remember I was listening to a lot of IDM from that era, Aphex Twin and Autechre and so on, so that definitely influenced what I was making. I was also very into the whole French Touch thing that was big in Paris at that time.
That’s amazing – I’m pretty sure you’re the only person in the world that has ever tried to make a Thomas Bangalter-style French house banger with GRM Tools (laughs).
That’s probably true (laughs)! But you know, even at that time, I was also already very much into contemporary electro-acoustic music, the kind of stuff the composers at GRM were making, as well as experimental labels like Editions Mego and Raster-Noton that were coming out of the fringes of the techno scene at that time.
And you’ve always kept making music, even during your academic career?
Yes, I was always making music on the side. So I never really stopped making music, but I think I eventually reached a point where – because I was spending so much time staring at a screen during my academic day job – making music on a computer in the evenings could feel very fatiguing sometimes.
That’s why I think getting into modular synthesis in the early 2010s actually really reinvigorated my love for making music, because it allowed me to get away from the screen; and it also solved the big problem I always had with Max/MSP, which is that it requires you to have a plan beforehand, and I prefer a more intuitive process. So discovering the world of Eurorack synthesis was a big deal for me, in that it combined a screen-less hardware workflow with this very intuitive and free capacity for experimentation that I enjoy.
Do you remember what your first modules were?
Of course, how could you forget! There are just things in life you will always remember: the names of your children; your first modules (laughs). My first modules were a MATHS from Make Noise and a Peaks and a Braids from Mutable Instruments.
And then it quickly spiralled from there (laughs)?
Yes (laughs)! I was living in Montreal at the time, I had a postdoc position at McGill University. And the work was quite dry most of the time, so the highlight of my day was always passing by the Moog Audio store on my way home. I was stopping by there almost every day, just seeing what’s new and trying everything out.

Image Credit: Nils Maisonneuve
Did getting into modular also influence your decision to take your programming skills into the world of DSP?
Yes, for sure. You know, I was always interested in DSP from a theoretical angle, but I didn’t really have a lot of practical experience with it. And then getting into modular and discovering the open-source nature of the digital Mutable Instruments modules specifically made it really easy and inviting to start tinkering with things.
I think I was actually one of the first people to get a Clouds unit, and I really fell in love with it, but there were also a few things I found a bit frustrating. And that then made me go, “well, I have a background in computer science, I know how to program; maybe I can play around with the code and modify it to make it more to my liking”. Which then very quickly led to me spending a lot of my free time at the Montreal public library, just going through all of the available academic literature on DSP (laughs).
Coming from the academic world where you can spend forever on a single paper, programming DSP just immediately felt very creative and empowering to me, because you can just take something from a book and then have a working prototype the next day that you can play with. That made it very fun to explore and experiment, which I think was really what led to me learning the basics of DSP quite quickly.
I then got into contact with the guys from 4ms – I think it started because I sent them an email with a technical question – and we got talking, which ultimately led to a collaborative project, the Tapographic Delay, a digital delay module for which I wrote the DSP and firmware, which was then followed up by the Ensemble Oscillator quite quickly.
What was it like for you to be working on DSP for an actual commercial hardware project for the first time, rather than just playing around with code?
It was really … an obsession (laughs), it very quickly became this almost all consuming thing for me, because I just had so many ideas I wanted to try out from my previous DSP experiments. And Dan from 4ms was incredibly open-minded about everything, he really gave me a lot of freedom to experiment without immediately thinking about whether it would lead to a market-viable product.
Can you say a little bit about the inspirations behind the Tapographic Delay and the Ensemble Oscillator?
You know, the Tapographic Delay was actually very inspired by the GRM suite (laughs) – it had this multi-tap delay plugin that had some unusual features that I remember being very excited by as a teenager. So the Tapographic Delay is almost like a modern Eurorack take on that old GRM delay plugin.
With the Ensemble Oscillator, that was very inspired by a particular piece from the spectralist composer Gérard Grisey, the piece “Partiels”. I was really into this kind of spectral music at the time and fascinated by how it plays with the concept of sonic illusions – like the way your ear can get tricked into hearing an “ensemble” when encountering a big stack of sine-like sounds. So I rigged up this additive synth prototype that made sounds somewhere between a chord and a harmonic series, and Dan from 4ms was really into it, and that then ended up becoming the Ensemble Oscillator.
How did you then go from your collaborations with 4ms to working at GRM?
I started doing the 4ms collaborations on the side while still in academia, but by 2019 I was just very exhausted with the academic world overall, and my musical activities were also taking off a bit with my live shows and my work with 4ms, and so I decided to take a leave from my academic position and step away from academia for at least a few years and try something else. I then got into contact with GRM through some commissioned work I was doing for them as a composer, and it just so happened that the prior R&D lead of the GRM Tools suite went into retirement at that time, and so they ultimately ended up offering me the position.
After hearing your story, that really sounds like a childhood dream come true – being able to work on the same software that inspired you so much when you were younger (laughs).
Yes, it absolutely is a dream (laughs)!
What exactly is your role at GRM?
I would say my job is split about 90-10 between developing the new GRM Atelier software and maintaining the old GRM Tools 3 suite – which is actually far from a trivial task, because the old suite is a 30+ year old piece of software. It’s probably one of the oldest pieces of audio software that people still commonly use in their music and it’s never had a clean rewrite, so it actually requires a bit of work to keep it running well on modern systems.

On the topic of Atelier – I was curious about the references to [GRM-founder and musique concrète pioneer] Pierre Schaeffer in your Atelier press release. Can you say a little bit about how his work has influenced the development of Atelier?
I think the Schaeffer inspiration really lies less in the specific technologies he was using and building in the 1950s and 1960s and more with the general philosophy behind his compositions. You know, we usually take it for granted today, but he was the one who invented the whole idea of composing music by directly interfacing with the recorded sound material [what Schaeffer refers to as the “concrete”], which was an incredibly revolutionary idea at that time.
Because when you’re working with concrete sounds, you’re not just plotting out a composition on a piece of paper that will then later be realized by musicians in front of an audience, but it’s you yourself that is the first listener of your own piece, and you are then re-integrating this listening experience into the ongoing compositional process. I think that’s really the biggest takeaway of Schaeffer’s legacy, this idea of composing music in real time by forming an ongoing dialogue with the sound material.
I’m also very curious about the SYTER [“SYstème TEmps Réel”] system mentioned in the press release. As far as I understand it, it was this very early real time audio computer system that was basically a precursor to the later GRM Tools suite, but there’s not a lot of information about it online.
SYTER was one of the first computers that could do real time processing of audio in the late 1970s and 1980s. And obviously, real time digital audio processing is a bit of a trivial thing today, but up to that point, computer audio processing pretty much always required offline rendering time – there was always a gap between programming a sound and actually hearing the results, because the processing power required for real time audio just wasn’t there yet. So this whole idea of manipulating audio on a computer in real time really was a very novel thing at the time, and there was almost a kind of arms race (laughs) between IRCAM and GRM about who could develop the first real time audio computation system.
That’s the background that led to GRM developing the SYTER system, which was hand-wired, custom-built computer hardware running software specifically designed and programmed for that exact hardware. I think the last remaining system is now stored somewhere in the outskirts of Paris (laughs), and I myself actually don’t know that much about the technical details, but from looking at the old manuals we have here, I found the whole concept of it very striking. Because, even though it was one of the first real time tools, it was very performance oriented from the beginning and it had a very elegant and immediate design with just a few well-chosen algorithms that could work on all kinds of music. So what I took from SYTER was really that approach to design, of making a piece of software that can be very immediate and playable.
Around what time was this?
This was the late 1970s and 1980s. And then in the 90s, there was another revolution when it finally became possible to run real time audio on personal computers with the help of DSP acceleration cards. So you could suddenly make computer music at home without the need for a huge system like SYTER, and there was now also a viable market for real-time audio software.
This then led to the development of the GRM Tools suite designed for personal computers that still exists today. What’s funny is that I heard from the original developer of GRM Tools that the first versions running on personal computers actually had a lot less processing power than the big SYTER system had – so people at first almost saw it as this toy version of SYTER, it wasn’t taken very seriously. But that of course changed very quickly as personal computers got more and more powerful.
That’s really interesting, I had no idea! And how did you then go from working on the old GRM Tools suite to the development of the new Atelier software?
Atelier actually originally started out as an attempt to do a complete clean rewrite of the existing GRM Tools – but then along the way we got sidetracked with all these new features we were putting in and it just started to make more sense to turn it into its own thing. You know, offering people something new with a different concept rather than trying to replace this very expansive legacy software with thirty years of ideas and features already baked into it.
Something I’ve found quite striking about Atelier is that it is available not just as a plugin, but also as a functional standalone application, which is quite unusual these days. Was that important to you?
You’re very right about that! Having a standalone application was actually something that was very dear to me personally. I really enjoy being able to open it super quickly and just play around with it while I’m on a train or something. It’s very different from being in a typical DAW environment where you blink and suddenly there’s 50 plugin windows open (laughs). I really wanted the standalone application to be this creative playground that’s very open and inviting but also has real limitations, it’s not meant to be a DAW replacement.
And of course, when you open up a DAW, there’s already all these things baked into it that make it really easy to go “okay, let’s start with a 4/4 kickdrum at 120 BPM, I guess” (laughs). Whereas when you’re in this standalone application that doesn’t have an arrangement timeline or a piano roll, it’s a very different way of working with sound and you come up with different ideas.
Exactly – yeah, that’s really the essence of it, I think. It was also really important for me to make it very easy to record and play back audio. There’s a simple, but very fun tape-like audio player – the “play” module – as well as a dedicated record button that invites you to continually record snippets while you’re working. And the combination of those two things then makes it very quick and fun to go back and forth between playing and recording and iterating on your sounds in real time. Which is a way of working that’s very central to my own musical practice and also what GRM stands for musically in a way, I think.

How did you arrive at the DSP modules that currently exist in Atelier like the Pitch, Time and Comb modules?
I think a part of it was me looking at what already existed historically in SYTER and GRM Tools and getting inspiration from that. The other half came from my interest in these very basic, foundational elements of digital signal processing – essentially taking a “DSP 101” book and just taking the first thing from every chapter (laughs). Like, the concept of a comb filter may sound exotic to some musicians, but from a DSP perspective, it’s one of the very first things you learn about because on a technical level, it’s really just a very short delay. And almost all of the DSP in Atelier so far is very much this kind of “white box” thing, there’s no complex proprietary algorithms. But if you then add a few extra bells and whistles to these very simple concepts and combine them with the built-in modulation system of Atelier, it suddenly becomes something very unique and interesting.
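The "comb filter is just a very short delay" idea Puech mentions can be sketched in a few lines of Python. This is a generic textbook feedback comb, not Atelier's actual code:

```python
def comb_filter(signal, delay_samples, feedback=0.7):
    """Feedback comb: each output sample is the input plus a scaled
    copy of the output from `delay_samples` samples ago."""
    buffer = [0.0] * delay_samples  # circular delay line
    pos = 0
    out = []
    for x in signal:
        y = x + feedback * buffer[pos]  # input + delayed feedback
        buffer[pos] = y                 # write back into the delay line
        pos = (pos + 1) % delay_samples
        out.append(y)
    return out
```

Feeding in a single impulse with a 5-sample delay and 0.5 feedback produces decaying echoes of 0.5, 0.25, … at samples 5, 10, …; at audio-rate delay lengths those echoes fuse into a pitched resonance, which is what makes the comb musically interesting despite its simplicity.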
Was the modulation in Atelier inspired by your own journey into the world of Eurorack modular?
It was definitely inspired by modular synthesis in a general sense. But more specifically, I was taking inspiration from this lesser-known feature from GRM Tools 3 called “Agitation” that basically adds random modulation to parameters. But there’s a trick to it: you can connect the same modulator to multiple parameters, and each destination receives its own slightly different random modulation signal. In Atelier we call this “polyadic modulation”, and I think it’s a really interesting concept – I guess you could technically replicate it with hardware, but it would be extremely impractical and expensive (laughs).
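As a rough illustration of the polyadic modulation concept – purely hypothetical code, not Atelier's real implementation or API – one modulator object could hand every connected destination its own derived random stream:

```python
import random

class PolyadicModulator:
    """One modulator, many decorrelated outputs: every connected
    destination gets its own RNG derived from the shared seed, so a
    single "patch cable" yields a different random signal per target.
    (Illustrative sketch only.)"""

    def __init__(self, seed=0, amount=1.0):
        self.seed = seed
        self.amount = amount
        self.destinations = []

    def connect(self, name):
        # Derive a distinct integer seed per destination so the
        # per-destination streams differ but stay reproducible.
        rng = random.Random(self.seed + 7919 * len(self.destinations))
        self.destinations.append((name, rng))

    def step(self):
        # One shared modulator tick produces a different value
        # for each destination.
        return {name: self.amount * rng.uniform(-1.0, 1.0)
                for name, rng in self.destinations}

mod = PolyadicModulator(seed=42)
mod.connect("pitch")
mod.connect("cutoff")
values = mod.step()  # same modulator, two decorrelated values
```

The design choice mirrors what Puech describes: patching one modulator to "pitch" and "cutoff" does not send them identical values, which is trivial in software but would indeed require one physical random source per destination in hardware.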
How are you using Atelier in your own musical practice?
I’m obviously biased here because Atelier is my baby (laughs), but I currently have a live project where I’m basically performing live with just a computer, using Reaper as a digital mixer and then multiple instances of Atelier for all sound generation and processing. With my recorded music, it always tends to be a real mix of sounds and processes – you know, for me, it’s often the sounds I randomly find on my hard drive that are the most interesting, even though I made them years ago and have no idea where and how I made them.
Those are always the best sounds (laughs)!
Exactly! And I think that actually goes back to Pierre Schaeffer, this idea of “de-correlating” a sound from its source that he developed – trying to make people hear the sound of a train not as an “image”, but as a pure sound that has no connection to the real-world object of a train. To me, that’s always been a very crucial part of electronic music, this idea of taking sound material out of its original context. I think that’s when a sound really comes alive, when it’s taken on a certain distance to its original creation and has become something else.

Image Credit: Jean-Baptiste Garcia
Going back to Atelier – can you say a little bit about the future of Atelier? Is there a development roadmap for upcoming features?
So we have a lot of plans for Atelier as an ongoing project, but we’re mostly taking things step-by-step right now. Our main priority currently is getting the Windows version of Atelier out somewhere in Q1 of 2026, since so far it is only available on Mac. You know, we’ve actually managed to get a Windows build running for the first time today – so it’s still a lot of work, but we know it’s possible and we can make it happen for you (laughs).
Once that’s wrapped up, we’ll start focusing on new core features, which includes some of the more popular feature requests we have gotten so far that revolve around sync and quantization, since Atelier is currently completely “off the grid” – which we maybe took a bit too much pride in initially, like “we’re GRM, we don’t need notes and grids” (laughs). But we do recognize that these are necessary features for a lot of people in the modern music landscape.
After that, we’ll then start working on adding additional processing modules. We don’t really have a concrete roadmap for the modules yet, but we do have a lot of cool ideas we can’t wait to start working on.
Do you have plans to bring some of the FFT and spectral processing tools and techniques from GRM tools to Atelier?
Yes! Spectral processing definitely will be in Atelier at some point, since it is such a big part of what made the GRM Tools famous in the first place. That said, we’re really aiming for more of an “inspired-by” approach rather than implementing direct ports, so Atelier will have a slightly different take on spectral processing than what’s available in GRM Tools.
That sounds great – I think there’s definitely demand for more cool spectral tools. You know, there’s this thread on KVR Audio called the “death of spectral plugins”, because it really seems like every single spectral plugin eventually ends up becoming abandonware (laughs). There aren’t that many current options these days.
It’s funny, because I remember back in the 2000s, spectral processing was all the rage, it was all over the experimental music of the time. So to me there’s almost a bit of a risk of it sounding a bit dated or something now, because, you know, it’s on every Mego record between 1995 and 2005 (laughs).
But then again, everything from 2005 is now old enough to be “retro” and cool again, so all the 19 year olds are probably really into spectral processing now (laughs).
Exactly (laughs)!
What does your development process usually look like from a creative perspective?
You know, it‘s funny, because my software development and compositional processes are actually quite similar. What‘s always worked for me is to take certain technological concepts that I‘m curious about as a jumping off point. So for example, I‘ll get interested in understanding how a phase vocoder works in DSP, and I‘ll read a bunch of academic papers, throw together a prototype and then twist it into something new and different over time.
So in a way, it is really a bit similar to how I like to interact with sound in my musical process, this back-and-forth between me and the code or music. I’m not someone that lays out a meticulously planned design document and then just implements it, I enjoy a more gradual interactive process where the design develops over time.
What has the feedback you have received for Atelier so far been like?
The most surprising thing to me has really been just how much feedback we’ve gotten. It can be a lot sometimes (laughs), but it’s great to see people adopting and using it already. It’s been really interesting for me to see it get picked up by people from very different backgrounds than ours who are approaching it from a very different angle.
For example, we really underestimated the demand for a Windows version because in our little world, everyone has a Mac, we hadn’t seen a Windows machine in ten years (laughs). And like I said earlier, all the feedback regarding sync and quantization has also changed our mind a bit in that respect. So that’s been one of the most interesting parts of finally putting the software out in the world, but you also have to stay true to your vision and capabilities of course, you can’t just add everything.
There‘s always too many feature requests, never too few (laughs).
Obviously, you always get a lot of user requests that are just not feasible to implement (laughs). But in general, seeing users actually use the software and actually make music with it is really the whole reward of the job, it’s what you do it for. So it’s been incredible to finally get it out to the public. And Atelier really took 110% of my energy for the past few years, and the launch and all the feedback was just very, very intense. I had to step back a bit for the first few weeks because it was too much (laughs). But I think we’re now back to it and I’m also having the time to make more music again. You know, somehow, I’ve actually released three records in 2025 – I don’t know how that happened! – and have some interesting stuff coming up, so please check them out if you would like.
You can find out more about Matthias' work over on his website and Bandcamp and find a demo of Atelier over at the GRM website.