With regard to extracting useful information
from what appears to be background noise, there are
new speech processing algorithms, based on neural
networks, that outperform even the keenest
human ear in recognizing words and even identifying
the speaker.
I remember reading a report a few years ago about an
experiment in which a person's hand was strapped into
an X,Y potentiometer setup, like a joystick, and the
person signed their name, which was recorded as an
array of x,y voltage levels.
When those levels were played back into an amplifier
that drove a pen-type plotter to redraw the image, the
result was jerky, not smooth like the original.
They used a neural program that emulated the synapses
of a worm or frog (I forget which), and when the
recording was played back through the plotter via that
program, the image was very clean, just like the
original. I wish I could find the reference for the
EXACT details, since the resolution of the plotter's
motors is also a factor, but the point was that the
neural program duplicated the original image more
precisely... such neural algorithms could be applied
to decrypting a whole range of information, from EVP
to Reverse Speech, as well as to extracting useful
details from visual images and anything else that can
be placed in matrix arrays, 2D or 3D.
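
Since I can't locate the original report, what follows is only a
minimal sketch of the general idea, in Python with NumPy; the fake
signature data, the network size, and all the names are my own
assumptions, not details from the experiment. A tiny network of
smooth tanh "synapses" is fit to the recorded (x, y) voltage array,
and because the fitted function is smooth, replaying it gives a
cleaner trace than replaying the raw quantized samples:

  # Minimal sketch: fit a smooth neural mapping time -> (x, y) to a
  # jerky, quantized recording of a signature, then replay the fit.
  import numpy as np

  rng = np.random.default_rng(0)
  t = np.linspace(0.0, 1.0, 200)[:, None]        # normalized time
  # Hypothetical stand-in for the recorded signature: a smooth curve
  # coarsely quantized, mimicking crude potentiometer voltage steps.
  clean = np.hstack([np.sin(2 * np.pi * t), np.cos(3 * np.pi * t)])
  recorded = np.round(clean * 16) / 16           # "jerky" raw playback

  # One hidden layer of tanh units, trained by plain gradient descent.
  H = 32
  W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
  W2 = rng.normal(0.0, 0.1, (H, 2)); b2 = np.zeros(2)
  lr = 0.05
  for step in range(5000):                       # a rough fit is enough
      h = np.tanh(t @ W1 + b1)                   # hidden activations
      out = h @ W2 + b2                          # predicted (x, y)
      err = out - recorded
      # Backpropagate the mean-squared error through both layers.
      gW2 = h.T @ err / len(t); gb2 = err.mean(0)
      dh = (err @ W2.T) * (1.0 - h ** 2)
      gW1 = t.T @ dh / len(t); gb1 = dh.mean(0)
      W2 -= lr * gW2; b2 -= lr * gb2
      W1 -= lr * gW1; b1 -= lr * gb1

  smooth = np.tanh(t @ W1 + b1) @ W2 + b2        # smooth replay
  # The network's replay typically has smaller step-to-step jumps
  # than the quantized recording, i.e. a less jerky plotter trace.
  print("largest jump, raw recording:", np.abs(np.diff(recorded, axis=0)).max())
  print("largest jump, neural replay:", np.abs(np.diff(smooth, axis=0)).max())

The smoothing comes from the tanh units being continuous functions
of time, so the replayed pen path interpolates between the coarse
voltage steps instead of reproducing them.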
I envision one day an aura-viewing device: realtime TV
with various filters and the ability to sequence
photos to produce a 3D image... the same for ways to
view aether or gravity flows. Check out:
http://unisci.com/stories/19994/1001991.htm
....biomedical engineers have created the world's first
machine system that can recognize spoken words better
than humans can. A fundamental rethinking of a
long-underperforming computer architecture led to
their achievement.
The system might soon facilitate voice control of
computers and other machines, help the deaf, aid air
traffic controllers and others who must understand
speech in noisy environments, and instantly produce
clean transcripts of conversations, identifying each
of the speakers. The U.S. Navy, which listens for the
sounds of submarines in the hubbub of the open seas,
is another possible user.
Potentially, the system's novel underlying principles
could have applications in such medical areas as
patient monitoring and the reading of
electrocardiograms.
In benchmark testing using just a few spoken words,
USC's Berger-Liaw Neural Network Speaker Independent
Speech Recognition System not only bested all existing
computer speech recognition systems but outperformed
the keenest human ears.
The system can distinguish words in vast amounts of
random "white" noise -- noise with amplitude 1,000
times the strength of the target auditory signal.
Human listeners can deal with only a fraction as much.
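
For a sense of scale, here is a minimal sketch (assuming NumPy, and
assuming the article means an amplitude ratio; the tone standing in
for a spoken word is my own invention) of what burying a signal under
noise of 1,000 times its amplitude means in decibel terms:

  # Minimal sketch: mix a synthetic "speech" tone with white noise of
  # 1,000 times its amplitude and report the signal-to-noise ratio.
  import numpy as np

  rng = np.random.default_rng(0)
  fs = 8000                                  # sample rate, Hz
  t = np.arange(fs) / fs                     # one second of samples
  signal = np.sin(2 * np.pi * 440 * t)       # stand-in for a spoken word
  noise = 1000.0 * rng.standard_normal(fs)   # white noise, 1,000x amplitude
  mixture = signal + noise                   # what the recognizer receives

  snr_db = 20 * np.log10(signal.std() / noise.std())
  # ~ -63 dB on an RMS basis; a pure 1,000:1 amplitude ratio is -60 dB.
  print(f"SNR of the mixture: {snr_db:.1f} dB")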
And the system can pluck words from the background
clutter of other voices -- the hubbub heard in bus
stations, theater lobbies and cocktail parties, for
example.
Even the best existing systems fail completely when as
little as 10 percent of hubbub masks a speaker's
voice. At slightly higher noise levels, the likelihood
that a human listener can identify spoken test words
is mere chance. By contrast, Berger and Liaw's system
functions at 60 percent recognition with a hubbub
level 560 times the strength of the target stimulus.
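(For comparison, if that 560 figure is an amplitude
ratio like the 1,000x case above, it corresponds to an
SNR of 20 x log10(1/560), roughly -55 dB.)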
With just a minor adjustment, the system can identify
different speakers of the same word with superhuman
acuity.