Blinded by science?

There’s an interesting new piece of research about the perceptual process of watching films which I can’t get out of my head, so I’m writing a blog about it. This research is not just of theoretical interest, but touches on pedagogic concerns. Part of learning to watch films critically is to understand how editing works, and this means learning to see every cut. This is an acquired ability (especially well developed among film editors). Thanks to this research we now know for certain what the film teacher has always known—that the untutored film viewer, and any of us some of the time, simply doesn’t notice a lot of the cuts. Two experimental psychologists investigating the topic, Tim Smith and John Henderson, call this ‘edit blindness’.

My interest in the subject of visual perception goes back more than thirty years, to when I was writing about the invention of cinematography and wanted to answer the question of how what we see on the screen can be a moving image when it actually consists of a series of still images. (What’s more, between each projected frame the screen momentarily goes blank while the projector pulls the next frame down into position.)

My answer, drawing on psychologists of visual perception from Max Wertheimer to Richard Gregory and a dose of information theory, can be found in the chapter on ‘Theories of Perception’ in The Dream That Kicks, where I drew two main conclusions. First, the projector works at a speed that is faster than the brain is capable of detecting; the brain employs a process called ‘backward masking’ to merge the new image with the last one. At the same time, the reason we don’t notice the tiny gaps is because they carry no information. A gap cannot mask the image which precedes it. As long as each successive image arrives faster than the brain’s minimal perceptual delay, the gaps simply disappear.  (This is why early hand-cranked projectors, with their slow and uneven frame rate, caused a flicker.)

The second conclusion was that getting the invention to work was a trial-and-error process independent of scientific understanding, because the theories of perception current at the time, which attributed it to the persistence of vision, were mistaken. Persistence of vision doesn’t work that way. Although these new findings are fascinating, my general feeling about the research reported here is that science has yet to catch up with aesthetic practice, and probably never will. Or rather, it’s not really a question of catching up, because the underlying problem is that the discourses of science and aesthetics are incommensurate.

However, in the intervening years the paradigms of experimental psychology have been radically shifted by computerisation, including the power to detect the tiniest intervals, measured in milliseconds. Tim Smith, a psychologist at Birkbeck, uses computerised eye tracking equipment to record people’s eye movements while they watch something. The data is then graphically superimposed on the extract in question and we can see how rapidly their eyes move around and what they’re attending to. You can view the results with a clip from There Will Be Blood (dir. Paul Thomas Anderson, USA, 2007). The report explains: eleven viewers were shown the scene and their eye movements were recorded using an infrared camera-based eye-tracker. Each circle represents the centre of one viewer’s gaze. The size of each circle represents the length of time they have focused on that particular area.

There Will Be Blood with gaze locations of 11 viewers from TheDIEMProject on Vimeo.

* * *

Smith’s research (which is part of TheDIEMProject) was recently reported by The Guardian because it has aroused a certain interest in Hollywood—I’ll come back to this below. Meanwhile Smith has written a guest blog about his research on David Bordwell’s website on cinema, which follows up another post on the subject of visual perception from Bordwell himself. Both of them engage intelligently with the issues (with reservations I’ll come to later). There is also a paper from 2008 by Smith and John Henderson in the Journal of Eye Movement Research, ‘Edit Blindness: The relationship between attention and global change blindness in dynamic scenes.’  (link)

The results of this research are certainly fascinating, but what do they tell us? We learn that the eye responds with great rapidity to what it’s shown, is constantly shifting between different points of interest, and that this is not arbitrary. For example, lighting, colour, and focal depth can guide the viewer’s attention within the frame, giving different points within the scene priority over others. Many continuity errors escape us because our gaze is tightly focussed, but the viewer’s gaze is naturally attracted by faces, moving hands, bodies, things appearing suddenly (Méliès discovered that one). Gaze cues, says Smith, like someone looking or pointing, form the basis of editing conventions such as the match on action, shot/reverse-shot dialogue pairings, and point-of-view shots. However, even in a long take with a static wide view, like the opening shot in the example analysed, many of these cues—our sensitivity to faces, hands, and movement—still operate powerfully to produce what Smith calls ‘attentional synchrony’ (or more simply, synchronous attention): ‘Something about the dynamics of a moving scene leads to all viewers looking at the same place, at the same time.’ (At least, all eleven of them, allowing for a certain amount of variation.)

The thing is that hardly any of this is new. Not only are the rules of continuity well understood at a theoretical level, but this is exactly what every seasoned film-maker—director, cinematographer, designer, set-dresser, editor, etc.—has learned and knows and applies, sometimes instinctively, sometimes with deliberation. Much of it is taught every day in film schools and departments all over the world. I watch a student’s rough cut with her. A fiction scene shot on the street. She’s worried by a cut which doesn’t seem smooth enough—it doesn’t recede from attention—but she can’t work out why. We run through it slowly and I show her something in the background at the side that moves in the opposite direction to the gaze cue and momentarily distracts the eye. I watch her shave three or four frames off the cut to eliminate the offending movement, and she declares herself satisfied. I say, ‘Always look at the periphery of the frame when a cut like that doesn’t seem right’. Next time she’ll see it herself. (Credit where it’s due: I learnt that from watching my brother Noel editing one of my first films.)

What bothered me, however, was not the blogs by Smith and Bordwell but the interview with Smith in the Guardian’s podcast, and here the problem is about journalism and the popularisation of science. The interviewer has evidently read Bordwell’s blog because he borrows his questions from it, but is ignorant when it comes to scholarly and critical film studies. From this perspective his questions are ingenuous and skewed by a naïve concept of ‘the director’, for example, when he asks ‘what directors do to shift our gaze’ apart from ‘directing the actors on screen?’ Smith, perhaps flustered by the situation, starts to explain that ‘there are a lot of techniques for manipulating our attention and changing the way we view a scene’, but ends up with the false claim that until now ‘people didn’t understand a great deal how attention could be manipulated’. False because of course they did and do, not in scientific but in aesthetic terms, on the set and in the cutting room, and they get paid a lot of money to apply this understanding.

And not just directors and actors but everyone, because even the bosses know that film is a collective and collaborative art, in which the business of directing the attention of the audience is the labour of an entire film crew, working towards the ends which it is the director’s job to define. Or the producer.

There’s worse to come when the interviewer asks ‘What is it film makers can learn from this sort of stuff?’ to which Smith replies with something about quantifying the viewing experience, adding that until this point film-makers ‘have been using their own introspection and experience to be able to guess whether [a scene is] working, but they’ve had no way of actually testing that online…’ This is nonsense. The film-maker’s ‘introspection’ is their creative imagination, and they don’t guess, they use their artistry and aesthetic intuition, which is tested in every film.

Referring to the interest of Dreamworks, our interviewer then asks ‘what kinds of thing were they interested in?’ Here’s the rub, because Dreamworks is an animation studio, so instead of all that collaborative creative labour on the film set, everything has to be drawn and traced and coloured in. But the problem they have isn’t just ‘how to structure a scene so that you get a clear fluid shift of attention from one shot to another, how do you create this phenomenon known as continuity’, because of course they already know that, they’ve been doing it long enough. It turns out that the shift to 3-D is upsetting some of the established rules.

The popularisation of science befuddles things (although I freely admit that I read a lot of it). Our scientistically oriented media habitually suppress other non-scientific forms of knowledge, and scientists themselves are usually naïve about the mysteries of the imagination (except curiously for mathematicians, who tend to have an active sense of beauty). Computer science has come up with the concept of ‘expert knowledge’, a kind of practical knowledge not formulated discursively but learnt on the job (a curious echo of Marx’s idea of ‘learned non-guild handicraft knowledge’, like sixteenth-century clock-making, which comprised an elaborate oral lore passed down from master to apprentice). It is claimed that this can be codified and represented in the form of computer algorithms. But what if it can’t be coded? If it isn’t statistical, for example? If it depends on aesthetic judgement, which cannot be quantified?

Bordwell’s purview is broader than Smith’s. He knows, for example, that the viewer’s flow of attention is fundamentally affected by narrative and the expectations it sets up (this needs to be connected with a proper understanding of genre):

[W]hen we don’t have any narrative expectations, as when we’re confronted with a lyrical avant-garde film by Stan Brakhage or Nathaniel Dorsky, perhaps we will let our eyes roam around the images more freely. Confronted by a film that denies us a narrative, we attend to composition, colour, and other qualities that we may not notice in most storytelling cinema.

Well, but that depends. Perhaps what characterises the difference between the ordinary viewer and the cinephile is that the latter attends to the cinematographic qualities quite as much as to narrative. But even the cinephile often misses some of the cuts.

* * *

This brings us back to the question of edit blindness. Smith and Henderson assemble the data: a typical Hollywood movie contains between one and two thousand edits, at a rate of one every 2.7 to 5.4 seconds, but film editors assume the majority of these edits remain invisible. They devised a way of testing this by recording eye movements while subjects watched a series of clips and pressed a button every time they saw an edit. They found that the film editor’s ‘intuition’ about their techniques was broadly vindicated.

There’s more to it than that of course. They found that more cuts remained invisible the more a film conformed to orthodox continuity editing (like Blade Runner), and that orthodox continuity cuts, such as gaze cuts within a scene, are detected faster, fastest of all when they’re cuts on action. If this confirms what directors and editors effectively already know, it’s nevertheless interesting to learn that, on the other hand, with an unorthodox film like Koyaanisqatsi, cuts took longer to detect but fewer of them were missed. (This partly confirms a couple of previous studies, using different methodologies, which found that viewers seem more aware of discontinuity cuts.)

Remember this is all milliseconds, ranging from 353ms to 643ms, and this is the time it takes for the subject to push the button. Actual recognition is obviously rather faster. And these are laboratory conditions, where subjects are primed to perform the task. In normal viewing, the rate of edit blindness must be rather higher than what’s reported here. On the other hand, the intense concentration which film induces in the viewer is perfectly capable of heightened perception and distributed attention—watching, listening, making narrative links, associating all at the same time.

You can interpret the psychologists’ findings in different ways. One of the things it suggests to me is something else I already know: that Hollywood is good at controlling the spectator, while films that throw aside the orthodox language of continuity editing give more back to the viewer. (And as Jean Renoir once said, the camera can be used in two ways: to call attention to things, and to allow things to call attention to themselves.)

These findings, then, are quite suggestive but they lead on to questions that cannot be answered in the same way. For example, our psychologists say that if a cut between scenes is less likely to be missed but takes longer to detect, this is because ‘the viewer has no expectation of what will happen next’. This is vague and ambiguous, and not always true. In a sequence composed of parallel cutting between two scenes of action (or nowadays even more), there is plenty of expectation. But there are other questions: What happens, for example, when the end of a scene is expected but doesn’t come? This is like asking about the effect of the long take, when a scene doesn’t cut. How is it perceived by a viewer with a high quotient of edit blindness? What these questions point to is that part of the viewing experience which is not like scanning a surface but like listening to music, where you’re carried along by both melody and rhythm, which are not points but flows and surges. Film is duration. (Cue Deleuze.)

Expectations are created, of course, by cues carried in the film’s narrative and semantic content. What this requires, in Smith and Henderson’s vocabulary, is ‘a direct comparison of the influence of semantic relatedness and visual saliency on overt attention’. In short, these are questions that go beyond the limits of cognitive analysis. What we need is a new poetics—against edit blindness. A poetics of shock, that maps a territory originally staked out by Walter Benjamin:

The painting invites the spectator to contemplation; before it the spectator can abandon himself to his associations. Before the movie frame he cannot do so. No sooner has his eye grasped a scene than it is already changed. It cannot be arrested. Duhamel, who detests the film and knows nothing of its significance, though something of its structure, notes this circumstance as follows: “I can no longer think what I want to think. My thoughts have been replaced by moving images.” The spectator’s process of association in view of these images is indeed interrupted by their constant, sudden change. This constitutes the shock effect of the film, which, like all shocks, should be cushioned by heightened presence of mind.

‘The Work of Art in the Age of Mechanical Reproduction’

This entry was posted in Film matters.

One Response to Blinded by science?

  1. wjrcbrown says:

    Salve.

    A most interesting blog. I’d be tempted to guide the author to my article, ‘Resisting the Psycho-Logic of Intensified Continuity’ in Projections: The Journal for Movies and Mind, 5:1, pp. 69-87, published Summer 2011. It takes up many similar themes and also deals with Smith and Henderson’s work. It can be found here:

    http://www.ingentaconnect.com/content/berghahn/proj/2011/00000005/00000001/art00006.

    A couple of things come to mind:

    “First, the projector works at a speed that is faster than the brain is capable of detecting; the brain employs a process called ‘backward masking’ to merge the new image with the last one. At the same time, the reason we don’t notice the tiny gaps is because they carry no information. A gap cannot mask the image which precedes it. As long as each successive image arrives faster than the brain’s minimal perceptual delay, the gaps simply disappear.”

As Joseph Anderson points out, another example of this is the way wagon wheels appear to turn backwards when we watch classic westerns. We ‘bind’ into the continuous backward movement of the same wheel spokes what in fact are the separate forward movements of different wheel spokes.

    In other words, we do not notice the tiny gaps – even though logically they are there.

    This blog rightly connects this issue to duration, which is a core element that is missing from cognitive studies of film. Recent developments in neuropsychology recognise many of the unconscious processes that go into perception, but the unconscious perception of temporal duration that underpins the consciously perceived vision of constantly renewed fixed states has barely been addressed at all.

    However, I think also that the blog is a bit tough on Smith and Henderson.

    There may not be ‘anything new’ about this to cinephiles/academics/filmmakers. But, as per your student who only just learned this now, many people do NOT notice cuts. In fact, a lot of people probably would DENY that they notice cuts in a film – but here at least (and with no real claims to anything less modest than this), Smith and Henderson have proven it to be true.

    Besides which, having done some amateur ‘cinemetrics’ in my time (for the ‘Psycho-Logic’ article, no less), I can assure you that while watching a mainstream Hollywood film (Black Hawk Down), advancing frame-by-frame by hand, I STILL reckon I missed a couple of cuts. I am a cinephile, an academic, and a filmmaker of sorts. In other words, aside from less experience than the blog’s author, I have all the ‘right’ credentials to ‘know’ about edit blindness without Smith and Henderson needing to tell me. But it is still surprising how, even when TRYING to spot cuts – and when maximising everything in my favour to be able to spot cuts (controlling the progress of the film myself) – there were moments when I found it hard to do so – because of issues relating to working memory and being sucked into the visual narrative (there was no sound when I did this).

    I am intrigued that we apparently need a ‘new poetics’ – and then the blog reverts to Walter Benjamin.

    If Smith and Henderson are telling us nothing ‘new,’ then nor is Benjamin, since his essay is among the most quoted in film studies and has been around for 75 years already. Or is it that Benjamin does not get read by lay people (as lay people are not filmmakers or editors)…?

    What is interesting, though, is that one CAN – I think – head towards a ‘new poetics’ precisely THROUGH Smith and Henderson, particularly when their work is combined with the ‘neurocinematics’ of Uri Hasson et al, and the Finnish team led by Kauppi.

    Here we have Hitchcock as ‘controlling’ audience brain responses, while ‘boring’ films see different audience members’ brains firing in different areas.

    If combined with the eye-tracking stuff (something that has not yet been done – as far as I am aware), we might have relatively suggestive evidence that cinema can control brains.

    It – cinema, specifically mainstream cinema – may not control intensity of thought, it may not control content of thought, but it controls attention and it takes up much working memory.

    One thing that does become an issue, though, is the following: we are social beings. That our brains tick together and that our eyes follow the same pathways while looking at man-made objects suggests that we are good at communicating. THIS is no bad thing.

    The ‘bad thing’ remains the question of power: whose product gets most access/distribution because it has mass appeal, and because it has mass appeal it makes more money, etc…