
BBC Radio 4 documentary “Out of the Ordinary”

March 25, 2013

BBC Radio 4 producer Jolyon Jenkins has put together a new documentary about EVP and ghost-voice recording, as the 2nd episode of his “Out of the Ordinary” series (broadcast today, 25 March 2013). One slight oversight in Jolyon’s commentary: it wasn’t me who heard psychoacoustician Diana Deutsch’s auditory projection demonstration as saying “take me, take me, take me”; it was Scientific American correspondent Shawn Carlson (see “Rorschach Audio” book page 30). Overly romantic mishearings notwithstanding, it’s an excellent programme, featuring an unexpected contribution from former Tory MP Gyles Brandreth, starting and ending with (quiet) extracts from the “Rorschach Audio” sound installation, and shedding quite a lot of light on the ghost-voice recording technology that was marketed under the name Spiricom (see earlier posts).

Gyles Brandreth seems to have mis-remembered extracts from Konstantin Raudive’s “Breakthrough” demonstration record as having been recorded in his presence, and as being the “voice” of Winston Churchill. Dr Stephen Rorke claims to have tapes which suggest that Spiricom inventor William O’Neill rehearsed the (allegedly spontaneous) conversations that feature in Spiricom demonstration recordings. These rehearsal tapes apparently feature good quality audio, while the promotional tapes feature noisy audio, and the practice of deliberately adding noise to make alleged ghost voice recordings seem less implausible – a practice discussed at some length in the “Rorschach Audio” book – is memorably referred to by Rorke as “audio camouflage”. Stephen Rorke also says William O’Neill was a professional ventriloquist, and reproduces the distinctive Spiricom sound using an electro-larynx or artificial voicebox of the sort apparently owned by William O’Neill.

The full programme is available to listen to here –

Many thanks to Jolyon Jenkins


  1. M Townsend permalink

    People tend to approach EVP by asking ‘is this sound a voice? Is it words?’. Perhaps a better approach would be ‘what makes a sound comprehensible as human speech?’. Looked at that way, you can see that if a non-speech sound contains simultaneous frequency peaks in simple harmonic ratios, like the formants of speech, it can switch the human brain into ‘speech mode’, so that it interprets the sound as words. It is possible to produce such ‘formant noise’ from quite simple, commonly found sound sources. See for a full account of this.
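    The idea of concentrating a noise’s energy at formant-like frequency peaks can be sketched numerically. The following is a minimal, illustrative Python sketch (not the commenter’s actual method): it filters white noise through parallel two-pole resonators at rough, assumed vowel-formant frequencies and bandwidths, so the result has speech-like spectral peaks even though no voice is present.

    ```python
    import math
    import random

    def resonator_coeffs(freq_hz, bandwidth_hz, sample_rate):
        """Feedback coefficients for a two-pole resonator,
        a simple model of a single formant peak."""
        r = math.exp(-math.pi * bandwidth_hz / sample_rate)
        theta = 2.0 * math.pi * freq_hz / sample_rate
        a1 = -2.0 * r * math.cos(theta)
        a2 = r * r
        return a1, a2

    def formant_noise(duration_s=1.0, sample_rate=16000,
                      formants=((700, 130), (1200, 70), (2500, 160))):
        """Sum white noise filtered through resonators at the given
        (frequency, bandwidth) pairs; the defaults are rough,
        illustrative values for a vowel-like spectrum."""
        rng = random.Random(0)
        n = int(duration_s * sample_rate)
        noise = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        out = [0.0] * n
        for freq, bw in formants:
            a1, a2 = resonator_coeffs(freq, bw, sample_rate)
            y1 = y2 = 0.0
            for i in range(n):
                # y[i] = x[i] - a1*y[i-1] - a2*y[i-2]
                y = noise[i] - a1 * y1 - a2 * y2
                out[i] += y
                y2, y1 = y1, y
        peak = max(abs(s) for s in out) or 1.0
        return [s / peak for s in out]  # normalise to +/-1

    samples = formant_noise()
    ```

    Written out as WAV audio, noise shaped this way often gets heard as indistinct “words”, which is the effect the comment describes.
    
    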

  2. Rorschach Audio permalink

    Interesting that you mention the brain having a specific “speech mode”, because it’s consistent with the idea that perception involves reducing a fantastically complex array of undifferentiated environmental sense-data to a much more manageable number of meaningfully-constructed semantic “objects”. In the chat before the Resonance FM broadcast (see earlier posts), one of the participants mentioned evidence from brain-imaging research which apparently shows the brain consuming less energy when processing speech-sounds than when processing environmental noise.

    In other words, once, as adults, we’ve acquired linguistic fluency, it’s less work for the mind to process speech than to process other noises, because speech sounds are by definition those stimuli that most readily fit into and elicit memorised definitions, whereas other sounds may require more concentrated analysis. As it happens, most EVP recordings are IMHO real, albeit misheard, speech sounds – stray communications interference. However, I’m sure you’re right, and what you say may explain why recordings of other noises can also be misinterpreted as speech (see book page 96 for instance).

  3. M Townsend permalink

    If you imagine listening to someone talking in a language with which you are entirely unfamiliar, that is what your ear actually sends to your brain. With a language you know, your brain automatically starts to turn those sounds into words. You no longer hear the noise itself, only the words.

    I’ve created ‘formant noise’ examples using all sorts of common sounds, like rustling paper or clothing. If a sound has the right frequency structure, duration and rhythm, you start to hear words rather than the sound itself, just as with real speech. Different people hear different words, though some just hear noise (I’m not sure why, but it would be interesting to research). Once you’ve decided what the ‘words’ are, you tend to hear them that way each time. If someone suggests an interpretation, you get the same effect of tending to hear what you expect.

    I agree that actual speech, or fragments of it, makes the best formant noise because it contains all the right ingredients automatically. There is a ‘gallery’ illustrating how to make sounds that appear as speech at

  4. Romona Ullery permalink

    dude this is great!
