BBC Radio 4 documentary “Out of the Ordinary”
BBC Radio 4 producer Jolyon Jenkins put together a new documentary about EVP and ghost-voice recording for the second episode of his “Out of the Ordinary” series (broadcast today, 25 March 2013). One slight oversight in Jolyon’s commentary was that it wasn’t me who heard psychoacoustician Diana Deutsch’s auditory projection demonstration as saying “take me, take me, take me”; it was Scientific American correspondent Shawn Carlson (see “Rorschach Audio” book page 30). Overly romantic mishearings notwithstanding, it’s an excellent programme, featuring an unexpected contribution from former Tory MP Gyles Brandreth, starting and ending with (quiet) extracts from the “Rorschach Audio” sound installation, and shedding quite a lot of light on the ghost-voice recording technology that was marketed under the name Spiricom (see earlier posts).
Gyles Brandreth seems to have misremembered extracts from Konstantin Raudive’s “Breakthrough” demonstration record as having been recorded in his presence, and as being the “voice” of Winston Churchill. Dr Stephen Rorke claims to have tapes which suggest that Spiricom inventor William O’Neill rehearsed the (allegedly spontaneous) conversations that feature in Spiricom demonstration recordings. These rehearsal tapes apparently feature good-quality audio, while the promotional tapes feature noisy audio. The practice of deliberately adding noise to make alleged ghost-voice recordings seem less implausible – a practice discussed at some length in the “Rorschach Audio” book – is memorably referred to by Rorke as “audio camouflage”. Stephen Rorke also says that William O’Neill was a professional ventriloquist, and he reproduces the distinctive Spiricom sound using an electro-larynx, or artificial voicebox, of the sort apparently owned by O’Neill.
The full programme is available to listen to here –
http://www.bbc.co.uk/programmes/b01rg1gh
Many thanks to Jolyon Jenkins
People tend to approach EVP by asking ‘is this sound a voice, is it words?’. Perhaps a better approach would be ‘what makes a sound comprehensible as human speech?’. If you look at it that way, you can see that if a non-speech sound contains simultaneous frequency peaks in simple harmonic ratios, like formants, it can switch the human brain into ‘speech mode’, so that the sound is interpreted as words. It is possible to produce such ‘formant noise’ from quite simple, commonly found sound sources. See http://www.assap.ac.uk/newsite/articles/Formant%20noise.html for a full account of this.
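Purely as an illustration of that idea (not taken from the ASSAP article above), here is a minimal Python/SciPy sketch that shapes white noise with a few band-pass filters centred on assumed vowel-formant frequencies. The specific centre frequencies, bandwidths and the 0.3-second duration are illustrative guesses chosen to fall in the typical speech range, not measured values.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 16000                  # sample rate (Hz) - assumed value
n = int(SR * 0.3)           # ~0.3 s burst, roughly syllable-length
noise = np.random.randn(n)  # broadband source, standing in for rustling paper etc.

# Rough centre frequencies / bandwidths for an "ah"-like vowel (assumed values)
formants = [(700, 120), (1200, 150), (2600, 200)]

shaped = np.zeros(n)
nyquist = SR / 2
for centre, bw in formants:
    # Band-pass the noise around each assumed formant and sum the results
    b, a = butter(2, [(centre - bw / 2) / nyquist, (centre + bw / 2) / nyquist],
                  btype="band")
    shaped += lfilter(b, a, noise)

shaped *= np.hanning(n)               # simple envelope gives a speech-like rhythm
shaped /= np.max(np.abs(shaped))      # normalise for playback

# Optionally save the burst for listening tests:
# from scipy.io import wavfile
# wavfile.write("formant_noise.wav", SR, (shaped * 32767).astype(np.int16))
```

Played back, a burst like this contains no speech at all, only noise with formant-like spectral peaks, which is exactly the kind of stimulus the comment suggests can flip a listener into ‘speech mode’.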
Interesting that you mention the brain having a specific “speech mode”, because it’s consistent with the idea that perception involves reducing a fantastically complex array of undifferentiated environmental sense-data to a much more manageable number of meaningfully constructed semantic “objects”. In the chat before the Resonance FM broadcast (see earlier posts), one of the participants mentioned evidence from brain-imaging research which apparently shows the brain consuming less energy when processing speech sounds than when processing environmental noise.
In other words, once, as adults, we’ve acquired linguistic fluency, it’s less work for the mind to process speech than to process some other noises, because speech sounds are by definition those stimuli that most readily fit into and elicit memorised definitions, whereas other sounds may require more concentrated analysis. As it happens, most EVP recordings are IMHO real, albeit misheard, speech sounds – stray communications interference. However, I’m sure you’re right, and what you say may explain why recordings of other noises can also be misinterpreted as speech (see book page 96 for instance).
If you imagine listening to someone talking in a language with which you are entirely unfamiliar, that is what your ear actually sends to your brain. With a language you know, your brain automatically starts to turn those sounds into words. You no longer hear the noise itself, only the words.
I’ve created ‘formant noise’ examples using all sorts of common sounds, like rustling paper or clothing. If the sound has the right frequency structure, duration and rhythm, you start to hear words rather than the sound itself, just as with real speech. Different people hear different words, though some just hear noise (I’m not sure why, but it would be interesting to research). Once you’ve decided what the ‘words’ are, you tend to hear them that way each time. If someone suggests an interpretation, you get the same effect of tending to hear what you expect.
I agree that actual speech, or fragments of it, makes the best formant noise because it contains all the right ingredients automatically. There is a ‘gallery’ illustrating how to make sounds that appear as speech at http://www.assap.ac.uk/newsite/articles/Analyzing%20EVP.html
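For anyone who wants to check a recording of their own, the following is a rough sketch of how one might look for formant-like spectral peaks; it is not part of the ASSAP gallery linked above, and the file name, frequency band and prominence threshold are illustrative assumptions rather than recommended settings.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch, find_peaks

SR, samples = wavfile.read("candidate_evp.wav")  # hypothetical input file
samples = samples.astype(float)
if samples.ndim > 1:
    samples = samples.mean(axis=1)               # mix to mono

# Estimate the power spectrum, then restrict to the band where vowel
# formants normally sit (roughly 200 Hz to 4 kHz - assumed range)
freqs, power = welch(samples, fs=SR, nperseg=1024)
band = (freqs > 200) & (freqs < 4000)

# Pick prominent peaks; the 10% prominence threshold is an arbitrary guess
peaks, _ = find_peaks(power[band], prominence=np.max(power[band]) * 0.1)
peak_freqs = freqs[band][peaks]

print("Prominent spectral peaks (Hz):", np.round(peak_freqs))
# Two or three stable peaks in this range are the sort of structure a
# listener's 'speech mode' can latch onto as if they were vowel formants.
```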
dude this is great!