2022.01.31
Yuzo Koshiro "opsix" Interview
In the musical instrument industry, FM synthesis is usually associated with digital synthesizers such as Yamaha's DX7, but Yamaha's FM sound chips have also had a major impact on the world of computer game music and mobile phone ringtones.
We asked composer Yuzo Koshiro, who started his musical career on the piano and later mastered FM not with synthesizers but with sound chips, about his impressions of the modern FM synthesizer "opsix".
Yuzo Koshiro
Composer and game producer who mainly works on music for computer games. President of the game production company Ancient Corp. His representative works include "Ys", "Ys II", "Sorcerian", "ActRaiser", "Shenmue", "Wangan Midnight Maximum Tune", "Etrian Odyssey", and others.
Q: Please tell us about your first encounter with synths and Korg.
The first one I bought was the Korg Delta. I came from a piano background, so I was looking for a poly synth. I happened to stumble across the Delta, so I persuaded my parents to buy it for me. I played around with it all through junior high school, and even tried to recreate some of my favorite video game music on it. However, I didn't have a computer or access to a sequencer, so I played everything by hand.
When I entered high school, I bought a computer. It was the first generation of the PC-8801 (NEC), and it didn't have any music production software. During that time, a company called AMDEK (now Roland DG) came out with the CMU-800, and I was able to control it easily with a computer. I couldn’t stop playing around with it, and I remember programming songs from "The Tower of Druaga" and so on.
It wasn't until a bit later that I really got into synthesizers. Until then, I used the PC-88SR (NEC PC-8801 mkII SR), which let me use an FM sound source. I began producing video game music with it and applied for a job at Falcom, where I started working on "Ys" and "Sorcerian". For a couple of years, I used a Yamaha chip (the YM2203) to create music on my computer.
However, I started to want more dazzle and richness in my sound. When I consulted an acquaintance about this, he recommended the Korg M1, which he said was very good. I bought the keyboard version of the M1, making my second synth also a KORG!
Q: I heard that you’ve been involved in PC music since its emergence, but how do you go about programming music for a game in the first place?
It's not much different from programming a game, in the sense that I have to create a sound driver.
Q: Since we're here today to talk about opsix, I'd like to ask you a few more questions about FM sound sources. Is the Yamaha chip sound source a four-operator system?
Yes, four operators.
Q: Were the parameters the same as on something like the Yamaha DX21 that you own?
As for the DX21, it's almost the same. However, its MIDI controls are limited, whereas the chip can do a lot of tricky things when controlled directly. I had been working on something similar to what's now known as a modulation matrix. In retrospect, I guess I was sort of a frontrunner in this field!
At first, the maximum number of voices was three FM sounds and three PSG (Programmable Sound Generator) sounds. Since I started out as a piano player, not being able to use ten fingers felt very restricting, and I thought I wouldn't be able to voice my chords as I wished. However, I had no synths at all at the time, and I had developed a habit of composing music by typing values directly into the computer via the keyboard, so I naturally gravitated towards a monophonic approach rather than thinking in polyphony or in chords. This really helped me understand how a single change of a note can affect the overall mood of a composition.
Q: In that sense, it may be close to the music of the Baroque era.
I think music school students learn things like counterpoint, but I realized these things on my own and worked on them. For me, it was a matter of taste more than theory. The SNES (Super Nintendo Entertainment System) had eight sampled voices, and "ActRaiser" was a very famous game at the time because it was able to reproduce an orchestra even within such limitations.
Q: I heard that you started with the piano, that your mother was a piano teacher, and that Joe Hisaishi also taught you.
My mother was a high school teacher for a long time, and she was a teacher of Joe Hisaishi's wife. That's how I got to know Mr. Hisaishi. What he taught me was improvisation. Mr. Hisaishi would play a phrase and then say, "Play the rest of the phrase." I'd say you can't get any more practical than that! Nowadays, if you go take lessons, you'd probably start with the basics: harmony, chords, and rhythm. Mr. Hisaishi, though, would just tell me to "play instinctively", without proper training or theory…
On the other hand, my mother's style was classical and orthodox, taking it step by step, from playing exercises to playing sonatas.
Q: Looking through your social media, we got the impression that you're the type of person who instinctively acts on whatever you say you want to do. Maybe Mr. Hisaishi's education rubbed off on you?
If I have an idea in my head, by the time I notice, I'm usually already working on it. That said, it can get a bit tough when working on a song, so I do try to take breaks in between, but I can't really go out as often because of the pandemic. So instead I gather up synths and make various sounds, and that's how I take a break.
Q: Why have you returned to using hardware synths?
The main reason is that hardware synths have their own "personalities". Of course, it's partly because I've been tied to writing songs on the computer, and I began to notice that the sound of a software synth is the same no matter what kind of synth you buy. It's very good in the sense that you can quickly load up what you've saved before and get to work, but there's no physical connection between what's in your head and the music being produced. Once you begin looking for true "uniqueness", you can't go wrong with the sound of hardware synths, especially the old ones. The components are not so good, but they are engineered to produce the best sounds possible even with that limited technology and those components. I believe that great craftsmanship is truly demonstrated on hardware synths.
Q: What is your favorite piece of equipment?
I sold my M1 at one point, so I bought an M1R, and of course I also bought the software version of the M1 (the old KORG Legacy Collection, now the KORG Collection). Not to talk down the software, but the sound seemed different from the M1 that I used to own. It's true that they sound almost identical, but the feeling of depth is completely different. That's why buying the M1R brought back some memories of the past. So when I make a song using the M1, I first build the parts with the software version and then record the actual take with the hardware version. The software has the convenience of having all the sound cards built in.
Q: Now that you’ve touched opsix, what do you think?
I've been programming Yamaha chips for a long time, so I'm familiar with their internal structure. The moment I looked at the opsix, I knew immediately that it was designed to let me tweak the operators, and that the knobs above them would be for selecting the ratio. It embodies the distinctive features of FM and is very easy to tweak. You can tweak the sound while a sequence is running. In that sense, I think the opsix is probably the first time that FM has been brought to the surface in such a bold way. The LEDs on the operator mixer also change color, so I knew which were carriers and which were modulators as soon as I saw them. I didn't have to look at the manual at all; I just checked the envelope.
Q: Since you haven't seen the manual, did you find out about the five operator modes? (*This interview took place before the opsix v2.0 release.)
Of course, I knew right away that I could switch the overtones there, and that it was a way to expand the variation of the sound. There are two ways to think about it. First of all, FM is about how much you can create with a sine wave, so I don't often choose other waveforms when I create my own sounds. On the other hand, the recent trend in software synths is to incorporate wavetables and wave shaping. I felt that the opsix incorporates the latter, but personally that section is not so much "FM" to me.
Q: To be honest, FM is quite difficult to fully understand in the beginning. I think being hands-on and tweaking your own algorithms is the first step to understanding FM. But with the opsix, since you can select different waveforms and manipulate the filter, it's somewhat like a normal analog synth in a way. In a sense, this is a lighter version of FM. What do you think is important when really diving deep into the world of FM?
When it's FM, I treat it as FM, and when it's not, I see it as additive plus wave-shaping synthesis. If I don't apply any FM modulation, it's additive synthesis with six oscillators, and I can assign various waveforms to them, so it's just like additive synthesis with filters. It's a bit like a virtual analog. So these are two completely different approaches.
The first time I tried to do something that wasn't FM on the opsix was to make a Super SAW: I made a bunch of sawtooth waves, detuned them, and it worked. That's when I felt like I was finally in the 21st century (and no longer in the '80s)!
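For illustration, here is a minimal Python/NumPy sketch of that detuned-sawtooth idea, not a transcription of Mr. Koshiro's actual patch; the voice count and detune spread are arbitrary example values.

import numpy as np

def saw(freq, t):
    """Naive (non-band-limited) sawtooth oscillator in the range [-1, 1]."""
    return 2.0 * ((freq * t) % 1.0) - 1.0

def super_saw(freq, duration=1.0, sr=44100, voices=7, detune_cents=12.0):
    """Sum several slightly detuned sawtooth voices, as in a 'Super Saw' pad.

    On the opsix, the equivalent move would be six operators set to sawtooth
    with no FM applied; here the voices are plain software oscillators.
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for i in range(voices):
        # Spread the voices symmetrically around the centre pitch.
        cents = detune_cents * (i - (voices - 1) / 2)
        out += saw(freq * 2 ** (cents / 1200.0), t)
    return out / voices  # normalise so the sum stays within [-1, 1]

signal = super_saw(220.0)  # an A3 pad, ready to write to a WAV file or play back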
Q: You said that you would like to use a two-operator software synthesizer as well.
With two operators, it's easiest to understand: there is just one carrier and one modulator. You can only choose whether to connect them in parallel or in series, but it's easy to grasp the basic idea of modulating and changing the harmonics of the output. With more operators, the way I think about the connections changes depending on whether they're additive, FM, and so on. Two operators are very simple, and I can still create a variety of sounds. It's not only the original sine wave, either: since the OPL2 (*equivalent to the YM3812), you can choose from four different waveforms, so I can take the sawtooth-like waveform that divides the sine wave into four parts, detune the two voices, and combine them to make a rich string sound. I can't do that with just sine waves, even with four operators. But with the opsix, this can be done if I just select a waveform other than a sine wave.
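As a rough sketch of the two-operator case he describes, here is a minimal phase-modulation example in Python (the usual digital implementation of "FM"); the ratio and modulation-index values are arbitrary examples, not settings from the interview.

import numpy as np

def two_op_fm(carrier_freq, ratio=2.0, index=3.0, duration=1.0, sr=44100):
    """Series connection: one modulator driving one carrier.

    ratio : modulator frequency as a multiple of the carrier frequency
            (integer ratios give harmonic spectra, non-integer ones sound bell-like).
    index : modulation depth; higher values push energy into more sidebands.
    """
    t = np.arange(int(duration * sr)) / sr
    modulator = np.sin(2 * np.pi * carrier_freq * ratio * t)
    # Phase modulation: the modulator signal is added to the carrier's phase.
    return np.sin(2 * np.pi * carrier_freq * t + index * modulator)

def two_op_parallel(freq, ratio=2.0, duration=1.0, sr=44100):
    """Parallel connection: both operators act as carriers and are simply mixed,
    which is additive rather than FM."""
    t = np.arange(int(duration * sr)) / sr
    return 0.5 * (np.sin(2 * np.pi * freq * t) + np.sin(2 * np.pi * freq * ratio * t))

tone = two_op_fm(440.0, ratio=1.0, index=5.0)  # a bright, harmonically rich 440 Hz tone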
Q: With analog synths, the structure is easy to understand: oscillator, filter, amp, and so on. A two-operator setup is a little closer to that.
The ARP ODYSSEY also lets you combine two oscillators and apply FM, so I think that kind of thinking has been around for a long time, but the opsix reproduces it in a simpler way.
Q: This is the kind of thing that becomes really interesting when you're planning out how to design your sounds. And it might even be useful when making your own sounds on the opsix!
The opsix has good sliders (in the operator mixer). If I want to work with just two operators, I can simply turn the other four sliders down. It's easy to understand: you create a sound here and then layer it like this. It's like the drawbars on an organ, where you add overtones.
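As a small illustration of the drawbar analogy (an assumed model for the sake of example, not a description of the opsix's internals), the slider positions can be thought of as per-harmonic amplitudes in an additive mix; the level values below are arbitrary.

import numpy as np

def drawbar_tone(freq, levels, duration=1.0, sr=44100):
    """Additive mix of harmonics, with 'levels' playing the role of the
    operator-mixer sliders (or organ drawbars): one amplitude per harmonic."""
    t = np.arange(int(duration * sr)) / sr
    out = sum(level * np.sin(2 * np.pi * freq * (n + 1) * t)
              for n, level in enumerate(levels))
    total = sum(levels)
    return out / total if total else out

# Six "sliders": fundamental loud, upper harmonics progressively quieter.
tone = drawbar_tone(220.0, levels=[1.0, 0.6, 0.4, 0.25, 0.15, 0.1])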
Q: You are also using wavestate right?
I use the wavestate more often than the opsix because the patches are so rich and "complete". Whenever I'm asked to create a specific soundscape, I can easily find a sound that matches that image. When I want a background sound, I start up the wavestate, select a preset, and then tweak it to fit the piece even better. It may come as a surprise, but I'm pretty much a "preset person". It looks like I've done a lot of sound creation, but when it comes to work, I choose the preset that best fits the client's image and tweak the patch. If I start making sounds from scratch, I end up consuming an infinite amount of time and getting derailed from the bigger picture, which is the composition as a whole.
I wanted to use the wavestate at work because it contains a lot of sounds that expand the soundscape. I also liked the fact that Korg had come out with something that was different from what I had been using up to that point, so I wanted to try it out as soon as possible. So I contacted Mr. Sano (Denji) and was able to borrow a demo unit.
Q: Did the rhythmic sounds strike a chord with you?
In the past, I actually had a WAVESTATION A/D. I've loved wave sequencing ever since, but back then I had to control very fine details to recreate the patches I wanted. The first thing that attracted me to the wavestate was the fact that everything is on the panel, so I can just change the wave sequence and the preset sound keeps changing. This really made it easy to develop the sounds that I had in mind.
Q: Maybe it's the elements that can be changed that are so intriguing to creators.
One of the things I've always liked about Korg is their presets. Even on my Electribe, I mainly used the presets. But if I want to change something a little, I can. It's great that the knobs are right there on the front panel, and this KORG DNA has obviously been passed down to these synths.
Q: When you use wavestate, do you import it into your DAW?
For example, I make a beat with a sampler first, and then find something on the wavestate that fits the beat and record it tempo-synced via USB. When I want to add a little ambience to a track, I don't just add a typical synth pad; I try to see if I can add something chaotic that's signature to the wavestate. Ultimately, if I think it fits, I record it.
--- Thank you!
Improvised track making by Yuzo Koshiro
*Please turn on CC on YouTube.
Product Information
wavestate
WAVE SEQUENCING SYNTHESIZER