For David Fincher’s Mank, sound designer Ren Klyce was tasked with crafting a monaural soundtrack like those heard in films of the ’30s and ’40s, a laborious, experimental process that helped round out the world of one of the year’s most distinctive films.
Scripted by Fincher’s late father Jack, the director’s longtime passion project follows Herman J. Mankiewicz (Gary Oldman)—a washed-up, alcoholic screenwriter from Hollywood’s Golden Age—as he endeavors to finish the screenplay for the iconic Citizen Kane.
The goal with Mank was to immerse viewers in its period world through the creation of visual and sonic ‘patinas,’ each working in concert with the other. While cinematographer Erik Messerschmidt shot the black-and-white film digitally, at extremely high resolution—allowing Fincher to degrade the image in post—Klyce would tinker with sonic degradation, tapping into all of the characteristics that gave early 20th century soundtracks their unique feel.
One of Fincher’s closest collaborators—who has worked with him on 10 features and two television series since 1995—Klyce had experimented only briefly with mono sound in the past, on a handful of Fincher films. “But we never did it with the conviction of, ‘This is the purpose,’” the sound designer notes, “‘because we want it to feel like it was made using the technology of the time.’”
Below, the seven-time Oscar nominee recalls his earliest conversations with Fincher about Mank, and the multifaceted process of fashioning its vintage sonic palette.
DEADLINE: When did you first come to speak with David Fincher about Mank? And what did he tell you early on about the sonic approach he had in mind for the film?
REN KLYCE: I’ve known David for a long time and actually knew his father, Jack, who was a lovely man, so I remember David wanting to make this movie for years, and for whatever reason, it didn’t happen.
Then, when it came back in the last few years, he goes, “I want it to be mono. Maybe the music can be a little wider than mono, but I want it to sound old. I want it to sound like it was made back in the day—not something modern, but authentic.” And I thought, Oh, that’s interesting.
So, those are the very first conversations that we had, and then from there, we got into specifics, trying to see exactly what we were trying to do, by looking at older films.
DEADLINE: What other kinds of materials did you look at early on, from Mankiewicz’s era?
KLYCE: There were some old-time movie trailers, and what was interesting was, the trailer that he sent me was so old-looking and so old-sounding—as in, bad quality—that I was a little bit worried, initially. Because I thought, “Well, this is really bad. Are you thinking you want it to sound this lousy?” And he said, “Yes!”
So, there was a whole process that we went down, where I started doing versions and experimenting, and we came up with the name, ‘patina’—and referred to the ‘patina effect’ both in sound and picture. I said, “Well, what’s the patina effect going to look like visually? Do you want the audio patina to match, one to one, the visual patina?” And we embarked upon an experimental process, taking something that would sound good normally and distressing it, and making it sound old.
DEADLINE: Technically, a mono soundtrack is simply one in which dialogue, music and effects sit on one track. But what kinds of sonic attributes were you tapping into with Mank? And what kinds of questions did you have to ask yourself to figure out what was required?
KLYCE: I think one of the first things we wanted to figure out was, why do old movies sound old? What is it? Clearly, there’s not somebody back then going, “Let’s make this sound sh—y.” [Laughs] They were trying to make it sound as good as they could, and really, it was just the limitation of the technology.
So, we went down the road of, “Well, what was it about the technology?” And we learned about the optical soundtrack that’s on the side of the film, right next to the image. This tiny vertical line that squiggles back and forth was where the sound lived, and because of the real estate limitations of 35mm, you could only have one track, which is this monaural track. [That] squiggly line was modulating, and light would shine through it, and out would come a soundtrack.
The whole world of film at the time had to exist within this very limited technology of optical sound. So, once we got into that, we started to analyze, “Well, what are the actual limitations of this?” And we realized, “Oh, there’s limitations in fidelity.” There weren’t any low frequencies, there weren’t any high frequencies; it was all mid-range frequencies. So, when an orchestra would play, you would hear bass, but it wasn’t really deep—and if there were high strings or crickets, you would never hear those sounds, because they were higher than the frequency range of the optical track.
Then, we [asked], “Well, what else is there?” We started listening to these old films, and we started noticing that there’s distortion. Why is it distorted? Well, again, because the light is traveling through the celluloid, and now there’s an optical reader that’s re-modulating the sound, and it’s creating a distortion. So, that was part of the sound.
Then, the other part was, we sort of forget the name Dolby. Of course, you know Dolby, but really, Dolby’s claim to fame was this thing called noise reduction. Because everything was so gosh-darn noisy and hissy, Ray Dolby invented a noise reduction method. But that was much later in time, so that was one of the things we realized: “Oh, these old movies did not have any Dolby noise reduction, they were hissy.”
So, it was like, “Okay, well we’re going to have to add hiss, we’re going to have to add distortion, we’re going to have to modulate the frequencies,” and that’s what we did.
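The three-step recipe Klyce describes—band-limit to the mid-range, add distortion, add hiss—can be sketched as a simple digital signal chain. This is a minimal pure-Python illustration of the idea, not Klyce’s actual toolchain; the cutoff frequencies, drive amount, and hiss level are illustrative guesses, not figures from the film’s mix.

```python
import math
import random

def degrade_mono(samples, sr=48_000, hiss_level=0.005, drive=3.0, seed=1):
    """Crude 'optical patina' sketch: band-limit a mono signal to the
    mid-range, soft-clip it for distortion, then add tape-style hiss."""
    rng = random.Random(seed)
    # One-pole filters approximating an optical track's narrow bandwidth:
    # roll off below ~100 Hz and above ~6 kHz (illustrative values).
    hp_a = math.exp(-2 * math.pi * 100 / sr)
    lp_a = math.exp(-2 * math.pi * 6000 / sr)
    hp_state = lp_state = 0.0
    out = []
    for x in samples:
        hp_state = hp_a * hp_state + (1 - hp_a) * x    # running low-pass
        band = x - hp_state                            # high-passed signal
        lp_state = lp_a * lp_state + (1 - lp_a) * band # low-pass the result
        y = math.tanh(drive * lp_state) / math.tanh(drive)  # soft-clip
        y += rng.gauss(0.0, hiss_level)                # pre-Dolby hiss
        out.append(y)
    return out
```

The order matters: clipping before the hiss means the noise floor stays constant, the way an optical print’s hiss sits on top of whatever the track is playing.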
DEADLINE: You came to some of your conclusions about the sound you were pursuing by doing a spectral analysis of the Citizen Kane soundtrack. What did that entail?
KLYCE: It was actually a lot of fun. We got a copy of Citizen Kane, digitized it, and then ran the soundtrack through a spectrum analyzer, which looks at the frequencies of the sound and gives you a visual representation of what’s there. And it was amazing to see because it looked like that mountain in Close Encounters, Devils Tower. It had this weird shape; it wasn’t flat on the top. It had all these weird, pokey bits in the mid-range, and then it just dropped off, and dropped off again.
We realized, “Wow, that’s a very limited number of frequencies that Citizen Kane is playing.” So then, what we did was, we reverse-engineered that shape in a filter, put our mix through it, and the filter did all the similar types of shading. That was a big step.
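The reverse-engineering Klyce describes amounts to measuring a reference’s magnitude spectrum and deriving per-frequency gains that push your own mix toward the same shape. Below is a toy sketch of that idea using a naive DFT; it is a conceptual illustration only, and says nothing about the actual analyzer or filter the Mank team used.

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitude spectrum (fine for short illustrative signals;
    a real analyzer would use an FFT over windowed frames)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def match_filter_gains(reference_mag, mix_mag, floor=1e-9):
    """Per-bin gains that reshape the mix's spectrum toward the reference's,
    i.e. the 'reverse-engineered' filter curve."""
    return [r / max(m, floor) for r, m in zip(reference_mag, mix_mag)]
```

Multiplying the mix’s spectrum by these gains reproduces the reference’s “Devils Tower” contour: strong mid-range, steep drop-offs at both ends.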
DEADLINE: With Mank, Fincher didn’t just want to emulate the texture of a mono soundtrack. He also wanted to make the film sound as if it were being played for an audience in a vintage movie house. What inspired that idea, and how did you arrive at the extra layer of patina involved in pulling this off?
KLYCE: David had this memory, and I think we all have this memory of watching a film in a huge theater, something like the Mann’s Chinese. Those theaters, back when they built them, they didn’t think about acoustics, and those rooms have a pretty pronounced echo to them, but we sort of accepted that. Oftentimes, those theaters, too, were transitioning from theaters that were used for live performance—like Radio City Music Hall, for example. You could play a movie in Radio City Music Hall, but the Rockettes would also perform there, or an opera singer, or they would show the Grammy Awards there. So, having a nice, acoustic echo was sort of desirable, back in the day. Of course now, movie theaters are tiny and dead, and the walls are soft, but the walls were very hard back then.
So, David had this memory like, “You know, we used to watch movies in big rooms, and there’d be this big echo.” And he goes, “Once we’ve finished the mix, what if we took our mix and then played it back in the Grand Lake Theater in Oakland?” We considered that, but then in the end we thought, “Well, it’d be really hard to control it, because there’s cars and trucks and people.”
So, we realized that at Skywalker Sound, there’s this beautiful, enormous room that’s used to record orchestras. What’s beautiful about that room is that it literally has, on the far end of the wall, a screen that’s 60 feet wide, with speakers behind it. Then, that room has this beautiful echo to it, so we set up 12 microphones, and the first was close to the screen. The second was a little further away—the third, even further. The last was all the way towards the rear of the theater, and then we played the entire movie back in the scoring stage, rerecording the movie’s echo in that room. Then, once we did that, I curated the microphone selection and added that reverb back onto the soundtrack that you’re hearing.
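Klyce did this the physical way—replaying the whole film through the scoring stage and re-recording it—but the underlying idea is the same one convolution reverb implements digitally: a room’s echo is its impulse response, and convolving a dry signal with that response adds the room back in. A minimal sketch of that relationship, purely for illustration (this is not what was done on Mank, where the room itself did the convolving):

```python
def convolve(dry, impulse_response):
    """Direct convolution of a dry signal with a room impulse response."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

def add_room(dry, impulse_response, wet=0.3):
    """Blend the convolved 'room' signal back under the dry mix,
    like curating the near and far microphones into one balance."""
    wet_sig = convolve(dry, impulse_response)
    return [(1 - wet) * (dry[i] if i < len(dry) else 0.0) + wet * wet_sig[i]
            for i in range(len(wet_sig))]
```

The twelve microphones at increasing distances effectively captured twelve impulse responses of the same room, letting Klyce choose how much early reflection versus long tail to fold back into the final track.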
DEADLINE: What were the biggest challenges posed by your process on this film? Fincher has compared it to “splitting the atom,” noting that the process dragged on for weeks longer than he’d initially expected.
KLYCE: David was very frustrated, I’ll be honest with you, because what we realized was that in order for us to get to that last process I described, where we’re playing the movie back in the theater, we had to be at the very end [of the process]. But then to be at the very end, we had to be at the very end of the process that preceded that, which was the patina. But then to be done with the patina process, looking backwards from that, we had to be done with the full bandwidth mix for the movie.
David and I argued about [that]. He’s like, “I don’t understand why we have to mix the non-patina version of the movie first. Why don’t we just mix the s—y version all the way, as we go?” It’s a really good point, because why are we doing that? And I realized it was in our best interest to keep our soundtrack—the music, dialogue and effects—as high fidelity as possible, for as long as possible, while we were making the creative decisions about how loud the music was, what dialogue lines should be quieter, which line should we swap out, et cetera. That frustrated David, and I said to him, “You sit here with us as we mix, and if you want to change anything, make as many changes as you like now. But once we get into the patina, we can’t make changes to it. Because now, we’re patina-ing it, and after we patina, we’re going to add the reverb. [And at that point], you can’t go, ‘Hey, you know that music? Can you slide it 10 frames earlier?’ Because we’re at the end of a process.”
So, knowing that, David thought, “Okay, I’ll play along with your method.” He wanted to make sure that he was satisfied with the mix, and because of that, it took us an additional 10 days—because he wanted to roll through every reel over and over again. He wanted to really tweak all of it, so at the very end, he had touched every frame of the soundtrack.
In a way, just to parallel it to visuals, it’s sort of like asking [DP] Erik Messerschmidt, “Well, why don’t you just shoot on a 16mm camera and put Vaseline all over the lens? Because it’s going to look s—y.” You know what I mean? Erik and David needed to photograph the film at the best resolution possible, so that they had the data—and then, they could manipulate the data and downgrade it how they wanted.
DEADLINE: How did you aid composers Trent Reznor and Atticus Ross in bringing period texture to their score?
KLYCE: Well, Trent, Atticus and I have worked together for quite some time now, so we have a nice shorthand of communication that we’ve developed. But this was their first departure from their normal electronic-type soundtrack. It was also very much leaning upon a style of music of a period of time, and so they divided their music into three categories. The first was big band, the second was orchestral—with sort of Bernard Herrmann moments of the film—and last, there were the source cues, the music that was coming from radios and record players, and playing at parties.
Similarly to what David was saying to me about the soundtrack, he was saying the same to Trent and Atticus. Like, “I want to make sure that the music sounds like it was recorded with old, vintage microphones.” And there is a sound, especially, to the brass horns and saxophones of the ’30s and the ’40s, because they recorded them using older ribbon microphones, which had a little distortion and sizzle built in.
So, they went ahead and recorded a lot of their music, knowing they had to move towards this old-timey sound. They were very much on it from the get-go, with their engineering team, and then when we got in and mixed it, it was close. Then, they gave me more data, if you will, to sort of just ruin and distort and limit even further. [Laughs] So, that was our process.