An Investigation of Negative Gravitational Propulsion
By William F. Hamilton III
Edited by Çetin BAL
Abstract: Repeated observations of unconventional flying objects in Earth’s atmosphere, their apparent ability to nullify gravitational and inertial forces, the alleged recovery of one or more of these objects for study, and testimony about reverse engineering of their advanced flight technology all lead to speculation about the principles of gravitational, electro-gravitational, and magneto-gravitational propulsion, or in one phrase, negative gravitational propulsion, and how these principles might be rediscovered. This paper addresses the need for a new model of physics that explains the interaction and coupling of electric, magnetic, and gravitational fields: a unified field theory that could lead to the engineering of novel space-time craft.
Background: The discoveries of physicist Townsend T. Brown in electro-gravitic phenomena, along with the work of countless scientists, engineers, and inventors who have considered the problem of nullifying gravity and inertia in order to propel an object of any mass to unlimited velocities, have been studied and expounded upon in a growing list of articles and books. New discoveries in solid-state physics, fluid physics, and the ultra-cold domains of fluids and solids have led to a growing conviction that breakthrough discoveries are on the immediate horizon, and that collating this information toward repeatable experiment and discovery may lead to the development of ultra-fast, fuelless space flight.
Dark Energy: The recent cosmological discovery of an anti-gravitational force in the universe, one that has caused the expansion to accelerate since a specific point in cosmic history, has prompted a search for a new and mysterious form of energy that has been given the name dark energy.
Scientists have known since the 1920s that the universe is expanding, and recent observations indicate that the expansion is likely to go on forever. Evidence has since emerged to suggest that not only will the expansion continue, it will accelerate. The only way to account for such acceleration is a force that counteracts the gravitational forces that would otherwise stabilize or shrink the universe.
A news article refers to studies of dark energy by scientists at Princeton:
"The evidence is now getting stronger that there really is a force in the universe that competes with gravity and causes repulsion instead of attraction," says Ostriker.
To account for this force, referred to as cosmic dark energy, scientists recently have revived a concept called the cosmological constant. In their paper, the Princeton scientists describe this cosmic dark energy as "a vacuum energy assigned to empty space itself, a form of energy with negative pressure." Einstein first introduced the cosmological constant in 1917, but later withdrew it, calling it the worst mistake of his life. Understanding the source and nature of this force poses deep new problems for physicists. "It's of very profound physical significance," says Ostriker.
The work to explain the source of this force already has begun. Steinhardt, a co-author, recently introduced a possible new force called quintessence, which may account for the dark energy.
However, dark energy studies are not going to lead us from the unknown to the understanding of another unknown. We need to start from the known to find those principles of the unknown.
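The repulsive effect of a vacuum energy with "negative pressure," described in the news report above, can be illustrated with the standard Friedmann acceleration equation of textbook cosmology (this is the conventional account, not a result of this paper). A minimal numerical sketch, with an assumed density value chosen only for illustration:

```python
import math

# Friedmann acceleration equation (ignoring the explicit Lambda term):
#   a''/a = -(4*pi*G/3) * (rho + 3*p/c^2)
# A vacuum energy with equation of state p = -rho*c^2 makes (rho + 3p/c^2)
# negative, so its contribution accelerates rather than decelerates expansion.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def acceleration_term(rho, w):
    """Contribution of a fluid with density rho (kg/m^3) and equation of
    state p = w * rho * c^2 to a''/a."""
    p = w * rho * c**2
    return -(4 * math.pi * G / 3) * (rho + 3 * p / c**2)

rho = 1e-26  # roughly the critical density, kg/m^3 (illustrative)

matter = acceleration_term(rho, w=0.0)    # pressureless matter: decelerates
vacuum = acceleration_term(rho, w=-1.0)   # vacuum energy: accelerates

print(matter < 0, vacuum > 0)  # True True
```

The sign flip comes entirely from the pressure term: for w = -1, the combination rho + 3p/c^2 equals -2*rho, which reverses the sign of the gravitational source.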
Theory on Generating Dark Energy:
A New Model of Matter: My idea revolves around the introduction of a new model of matter. Matter is the seat of forces, and atomic structure and function should reveal to us the secrets of gravity, antigravity, electricity, and magnetism, and how they operate according to some natural principle. This model need not elaborate an entire theory of unification, but should serve as our principal model for gravity-control technology.
The author considers that essential elements of this model include: the nature of space, the nature of electric charge, the nature of magnetic moment and the interaction of elementary particles of matter in the space medium. It is now known that space is not an empty container and is filled with energetic activity that we now refer to as the ZPE (Zero-Point Energy) which is herein considered to be activity in the space-ether substance which serves as the substrate of space-time.
The remainder of this thesis consists of material I have already written and will integrate into the present work with certain transitions from one idea to the next.
The Aether: It is an ancient concept revived in modern physics in the 19th century, disposed of by Einstein in the 20th century with the publication of his seminal paper “On the Electrodynamics of Moving Bodies”, in which he first proposed what is now known as the Special Theory of Relativity, and now going through a second rebirth in the 21st century: the Aether returns as a fundamental ingredient in a Grand Unified Theory.
Aether in Greek mythology was the personification of the “upper sky”, space and heaven. He is the pure, upper air that the gods breathe, as opposed to “aer” which mortals breathed. He was the son of Erebus, the Greek God of darkness who dwelt in the underworld.
In physics and philosophy, aether was once believed to be a substance which filled all of space. Aristotle included it as a fifth element on the principle that nature abhorred a vacuum. Aether was also called “Quintessence”. The luminiferous aether of the better-known 19th-century invocation was a concept held by some physicists and was an attempt to reconcile electromagnetic theory with Newtonian physics.
Einstein could sometimes speak as though the aether was superfluous (Einstein 1905) and at other times say "space without aether is unthinkable" (Einstein 1922). This was due, of course, to not starting from physical terms: matter, its motion, and its interactions (force).
Since the wide acceptance of the Special Theory of Relativity, scientists have generally accepted the notion that the speed of light in vacuo is the upper limit of all material speeds. For this reason, space travel faster than the speed of light is usually considered unattainable except through some special contrivance that alters the properties of the space-time continuum. If the Special Theory of Relativity is correct, the speed of light in vacuo is the only universal absolute. Another way of stating this principle is that light, or more precisely, electromagnetic waves, have no preferred frame of reference. Often cited in support of this principle is the classic Michelson-Morley interferometer experiment, an attempt to measure the earth’s motion through a hypothetical ether at rest in space. The negative result of this experiment was taken to prove Einstein’s proposition that the speed of light is not altered by the addition of the velocities of light-emitting objects, and that no ether was necessary to explain the propagation of light across empty space.
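The size of the effect Michelson and Morley were looking for can be made concrete. In the classical static-ether analysis, rotating the interferometer 90 degrees swaps the two arms, giving an expected fringe shift of N = 2*L*v^2 / (lambda*c^2). A short sketch using the commonly quoted values for the 1887 apparatus:

```python
# Classical (static-ether) prediction for the Michelson-Morley fringe shift.
# Values are the commonly quoted ones for the 1887 experiment.

c = 2.998e8          # speed of light, m/s
v = 3.0e4            # Earth's orbital speed, m/s
L = 11.0             # effective arm length, m (folded by multiple reflections)
wavelength = 5.5e-7  # visible light, m

N = 2 * L * v**2 / (wavelength * c**2)
print(round(N, 2))  # about 0.4 fringe expected; the observed shift was < 0.01
```

The predicted shift of roughly 0.4 fringe against an observed upper bound near 0.01 is what made the "null result" so influential; the re-analyses quoted below dispute what that null result actually implies.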
Professor Laro Schatzer has made this cogent statement regarding an ether frame:
“There have been a variety of theories to describe electromagnetic waves (light) as excitations of some medium, quite in analogy to sonic waves which propagate in the medium air. This hypothetical medium was called the ether and it was supposed to be at rest in the absolute space-time frame. That is why this frame is also called the ether frame sometimes. Since the establishment of the theory of special relativity it has become extremely unpopular among scientists to speak about “ether”. However, we know today that electromagnetic waves are indeed excitations of some “medium”. However, this medium is not a solid or a liquid in the classical sense, but it is governed by the laws of quantum mechanics. Quantum field theorists found the name vacuum for it. Some people interpret the vacuum as space-time itself, but this does not cover the fact that its true nature still remains a mystery. Anyhow, the term quantum ether might be used to indicate a possible modern synthesis of both concepts.”
A number of scientists have now revived theories of the ether and a few have re-analyzed the Michelson and Morley experiment as well as pointing out positive results from other experiments. Physicist Paul Marmet has written:
“We show that Michelson and Morley used an over simplified description and failed to notice that their calculation is not compatible with their own hypothesis that light is traveling at a constant velocity in all frames. During the last century, the Michelson-Morley equations have been used without realizing that two essential fundamental phenomena are missing in the Michelson-Morley demonstration. We show that the velocity of the mirror must be taken into account to calculate the angle of reflection of light. Using the Huygens principle, we see that the angle of reflection of light on a moving mirror is a function of the velocity of the mirror. This has been ignored in the Michelson-Morley calculation. Also, due to the transverse direction of the moving frame, light does not enter in the instrument at 90 degrees as assumed in the Michelson-Morley experiment. We acknowledge that, the basic idea suggested by Michelson-Morley to test the variance of space-time, using a comparison between the times taken by light to travel in the parallel direction with respect to a transverse direction is very attractive. However, we show here that the usual predictions are not valid, because of those two classical secondary phenomena, which have not been taken into account. When these overlooked phenomena are taken into account, we see that a null result, in the Michelson-Morley experiment, is the natural consequence, resulting from the assumption of an absolute frame of reference and Galilean transformations. On the contrary, a shift of the interference fringes would be required in order to support Einstein’s relativity. Therefore, for the last century, the relativity theory has been based on a misleading calculation.”
Also, the ether drift experiments of Dayton Miller have received new attention and seem to indicate a positive result for the existence of an ether.
Dayton Miller's 1933 paper in Reviews of Modern Physics details the positive results from over 20 years of experimental research into the question of ether-drift, and remains the most definitive body of work on the subject of light-beam interferometry. Other positive ether-detection experiments have been undertaken, such as the work of Sagnac (1913) and Michelson and Gale (1925), documenting the existence of light-speed variations (c+v > c-v), but these were not adequately constructed for detection of a larger cosmological ether-drift, of the Earth and Solar System moving through the background of space. Dayton Miller's work on ether-drift was so constructed, however, and yielded consistently positive results.
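The Michelson-Gale result mentioned above can be sketched with the standard Sagnac-type formula: for a closed light loop of area A at latitude phi, the counter-propagating beams acquire a time difference dt = 4*A*Omega*sin(phi)/c^2, giving a fringe shift N = c*dt/lambda. The dimensions below are the approximate published ones for the 1925 loop at Clearing, Illinois (about 2010 ft by 1113 ft); treat them as illustrative rather than exact:

```python
import math

# Sagnac-type fringe shift for the 1925 Michelson-Gale experiment,
# which detected the Earth's rotation with a large rectangular light loop.

c = 2.998e8                             # speed of light, m/s
Omega = 7.292e-5                        # Earth's angular velocity, rad/s
A = (2010 * 0.3048) * (1113 * 0.3048)   # loop area, m^2 (approx. published size)
phi = math.radians(41.77)               # latitude of the site
wavelength = 5.7e-7                     # m

dt = 4 * A * Omega * math.sin(phi) / c**2  # time difference between beams
N = c * dt / wavelength                    # resulting fringe shift
print(round(N, 2))  # about 0.24; Michelson and Gale observed roughly 0.23
```

Note that this effect depends only on the Earth's rotation, not on any orbital or cosmological drift, which is why the paragraph above distinguishes it from Miller's program.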
There seems to be a growing preponderance of evidence that a space ether medium exists and that physical theories on gravitation, inertia, electromagnetism, and nuclear forces as well as cosmological theories will need to take account of these. It is even possible that an extensive revision of theoretical physics will be necessitated by these discoveries, both old and new.
If a velocity-dependent medium such as the ether could be established by experiment, then it could open the door to alternative explanations to SR and GR regarding physical phenomena. If this ether is quantized, then we could explore the theoretical nature of a quantum ether. Is gravity a result of some state of the quantum ether?
Does the quantum ether explain inertia? What does an electric or magnetic field do to the state of the quantum ether? Are material particles some wave-state of the quantum ether? Can we unify physical principles by considering a quantum ether?
Complete theories of the ether have been attempted by many, but these theories lack the sweep and power of modern mathematical theories.
A complete theory of the ether would account not only for the origin of forces, but for the origin of matter and mass. Past theorizing has postulated the existence of circulating flows in a hydrodynamic ether that form hollow or ring vortices that give rise to electromagnetic forces and constitute the elementary particles that make up the atomic nature of the world. Experiments conducted on the alternating gradient synchrotron with colliding protons seem to indicate that protons behave like composite vortices as described by Helmholtz and others in their excellent treatises on hydrodynamics.
In 1897, the English physicist J.J. Thomson discovered the electron and proposed a model for the structure of the atom. Thomson knew that electrons had a negative charge and thought that matter must have a positive charge. His model looked like raisins stuck on the surface of a lump of pudding. Rutherford thought that the negative electrons orbited a positive center in a manner like the solar system where the planets orbit the sun. Bohr came up with the first non-classical description of the electron in order to explain why electrons do not lose energy and spiral into the nucleus of the atom. Schrödinger pictured the electron as a standing wave. Physicist Max Born turned the electron into a cloud of probability. Modern quantum theory treats the electron as a point-particle with no specific structure or extension in space. The many versions of the new String theories treat the electron as an extended 1-dimensional string or loop, and some variations treat it as a 2-dimensional structure including a ring-like vortex structure. Lord Kelvin was the first to propose a vortex ring as a model for the electron. This seems to be undergoing a revival in new proposals in string theory, now known as M-Theory.
One mainstream physicist who is raising waves about ether drift experiments and the detection of absolute motion is Reginald T. Cahill of Flinders University in Adelaide, Australia. On this centenary anniversary of Einstein’s Special Theory of Relativity (1905-2005), he has written a critical review of Einstein’s postulates:
“The Einstein postulates assert an invariance of the propagation speed of light in vacuum for any observer, and which amounts to a presumed absence of any preferred frame. The postulates appear to be directly linked to relativistic effects which emerge from Einstein’s Special Theory of Relativity, which is based upon the concept of a flat space-time ontology, and which then lead to the General Theory of Relativity with its curved space-time model for gravity. While the relativistic effects are well established experimentally it is now known that numerous experiments, beginning with the Michelson-Morley experiment of 1887, have always shown that the postulates themselves are false, namely that there is a detectable local preferred frame of reference. This critique briefly reviews the experimental evidence regarding the failure of the postulates (of the Special Theory of Relativity), and the implications for understanding of fundamental physics, and in particular for our understanding of gravity…”
Nikola Tesla, the prodigal genius and inventor of the 19th and 20th centuries, made this statement:
“"There manifests itself in the fully developed being , Man, a desire mysterious, inscrutable and irresistible: to imitate nature, to create, to work himself the wonders he perceives.... Long ago he recognized that all perceptible matter comes from a primary substance, or tenuity beyond conception, filling all space, the Akasha or luminiferous ether, which is acted upon by the life giving Prana or creative force, calling into existence, in never ending cycles all things and phenomena. The primary substance, thrown into infinitesimal whirls of prodigious velocity, becomes gross matter; the force subsiding, the motion ceases and matter disappears, reverting to the primary substance."
Tesla opposed Einstein’s ideas and now he may be vindicated by new experiments, including one to be performed in the International Space Station in 2007-2008 to detect the absolute motion of the earth through the aether.
The new popular notion of the aether is embodied in the concepts of Zero Point Energy (ZPE) and the Zero Point Field (ZPF); however, I have written that I believe ZPE is the activity we detect in the Aether, not the Aether per se. My model of the Aether is of a superfluid substance that constitutes physical space itself.
According to Barry C. Mingst, General Relativity’s first postulate is that the source of the gravitational field is the stress-energy tensor of a perfect fluid, “T”. “T” contains four non-zero components. These four components are the density of the perfect fluid and the pressure of the perfect fluid in each of the three physical axes. A perfect fluid in general relativity is defined as a fluid that has no viscosity and no heat conduction. This basically describes a superfluid.
French physicist Mayeul Arminjon in his Ether Theory of Gravitation: Why and How? writes:
“The first point is that, in order that it does not brake the motion of material bodies, the physical vacuum or “micro-ether” must be some kind of a perfect fluid. A “truly perfect” fluid is free from any thermal effect that is necessarily bound to dissipation; hence, as noted by Romani, it must be perfectly continuous at any scale. It is then characterized by its pressure and its density, which are connected by the state equation, and by its velocity. It exerts only pressure forces. Therefore, if one attempts to introduce a perfectly fluid ether “filling empty space”, then any interaction forces “at a distance”, thus including gravity, have to be ultimately explained as pressure forces, and hence as contact actions. As far as gravitation is concerned, this is quite simple. I assume that elementary particles are extended objects. The resultant of the pressure forces exerted on a particle is Archimedes’ thrust, that is proportional to the volume δV occupied by a given particle. In order that this force be actually proportional to the mass δm of the particle, it is hence necessary and sufficient that the average density inside a given particle, thus ρp = δm/δV, be the same for all particles—at least at a given (macroscopic) place and at a given time. However, since the gravitational attraction is a field, the density ρp may also be a field, where the space-time variability has to come from that of the pressure in the fluid, pe. In fact, as is suggested by the observed transmutations of elementary particles into different ones, I assume that the particles themselves are made of that micro-ether: each of them should be some kind of organized flow in this imagined fluid—something like a vortex. (This is Romani’s idea of a “constitutive ether”.) In that case, the density ρp would be nothing else than the local density in the fluid, ρe = ρe(pe). Under these assumptions, the gravity acceleration is obtained as: g = −grad pe/ρe.”
I think that Arminjon is taking the first steps toward a real unified theory which must be based on the true properties of space. It is the density differentials of space and the pressure waves (forces) of the ether that constitute a foundation for a complete theory of matter and energy.
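Arminjon's relation g = −grad pe/ρe can be sketched numerically. In the toy example below, both the radial pressure profile and its constant K are hypothetical choices of mine, picked only so that the gradient reproduces an inward-pointing, inverse-square acceleration of the Newtonian form:

```python
# Illustrative sketch of the relation g = -grad(p_e) / rho_e.
# The pressure profile and the constant K are assumptions for illustration,
# not quantities derived from Arminjon's theory.

def pressure(r, K=1.0):
    # Assumed profile: p_e(r) = p_inf - K/r, so dp/dr = K/r^2.
    p_inf = 10.0
    return p_inf - K / r

def g_of_r(r, rho_e=1.0, h=1e-6):
    # g = -(dp/dr) / rho_e, with the derivative taken by central difference.
    dpdr = (pressure(r + h) - pressure(r - h)) / (2 * h)
    return -dpdr / rho_e

# The recovered acceleration points inward (negative sign) and falls off
# as 1/r^2: doubling r reduces |g| by a factor of 4.
g1, g2 = g_of_r(1.0), g_of_r(2.0)
print(g1 < 0 and abs(g1 / g2 - 4.0) < 1e-3)  # True
```

The point of the sketch is only that a static pressure gradient in a fluid does yield an acceleration field of the required shape; whether physical space behaves this way is exactly the open question of the thesis.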
Combining the stress-energy tensor of pressure with the stress-energy tensor of mass results in the stress-energy tensor for an ideal fluid:

T_{μν} = (ρ₀ + p/c²) U_μ U_ν − g_{μν} p     (1)

My suggestion is that the quantized vortices that form in a superfluid are those that, in the space-ether superfluid, compose the elementary particles. The assembly and aggregation of these vortices produce mass. Mass displaces the surrounding unaggregated fluid, which then exerts pressure against the mass. We can then combine the stress-energy tensors of mass and pressure to arrive at equation (1).
In "Ether and Relativity", 1920, Sidelights on Relativity, page 23, Einstein writes:
“Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without an ether is unthinkable; for in such a space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”
Einstein admits that space is endowed with physical properties, as it must be in order to conform to geometrical distortions and affirms that, in that sense, there is an ether, but does not ascribe any motion to this ether. Since further developments postulated the existence of gravitational waves, it is difficult to reconcile this early statement with modern thinking on the subject.
The empty space within atoms or the distant spaces that separate galaxies is referred to as the physical vacuum. The physical vacuum is considered far from empty. It is seething with activity. Physicists describe a vacuum constantly boiling with virtual particles that appear and disappear out of the depths of space. The “Casimir Effect” is cited as experimental evidence of this activity in the physical vacuum.
More recent theorists Carlo Rovelli (University of Pittsburgh) and Lee Smolin (Pennsylvania State University) completed their analysis of a quantum gravity model developed by Abhay Ashtekar at Syracuse University in 1985. Unlike string theory, Ashtekar's work applies only to gravity. However, it posits that at the Planck scale, space-time dissolves into a network of "loops" that are held together by knots. Somewhat like a chain-mail coat used by knights of yore, space-time resembles a fabric fashioned in four dimensions from these tiny one-dimensional loops and knots of energy.
These theories of the physical vacuum are based on theoretical work in quantum theory and string theory, but may not necessarily be correct. There is room for other models including a hydrodynamic model as postulated here.
Flowing Gravity is based on the general hypothesis that space has physical properties that are best described as superfluidic. By postulating the superfluid nature of space, the problems of controlling gravity and inertia can be approached clearly. New understandings of electromagnetic phenomena, nuclear and particle physics, cosmology, and the basis of quantum mechanics may be clarified by this shift of emphasis. What remains is to develop specific and general theories that make predictions in accord with measurement and observation, and to devise experiments that can test the nature of the space medium.
A New Model of the Atom:
In aether theory, elementary particles are considered vortices. The electron is considered to be either a toroidal vortex (a theory I favor) or a spherical vortex. The proton and neutron are considered to be compound vortices composed of electron and positron vortices. The vortex circulation creates a void core which reduces aetheric pressure to a null state and produces an inward-directed aetheric static pressure from ambient space which gives us gravitational and inertial force. The dynamic rotation of the toroid produces an outward-directed dynamic pressure which gives us the electric field while the circulation of the toroid produces an axial magnetic field. This is a simplified version of a model which we will use to visualize field propulsion in a saucer-shaped craft that utilizes this knowledge to produce a gravitationally-repulsive force for propulsion.
Electron Ring Vortex Model: As reviewed above, models of the electron have progressed from Thomson’s plum-pudding atom and Rutherford’s planetary atom, through Bohr’s quantized orbits, Schrödinger’s standing wave, and Born’s probability cloud, to the point-particle of modern quantum theory and the extended strings and ring-like vortex structures of the new string theories, with Lord Kelvin’s vortex ring model now undergoing a revival.
The purpose of this paper is to propose an ether-vortex model of the electron as a rotating toroidal ring vortex and a positron as a counter-rotating toroidal ring vortex that is based on a synthesis of various other proposed models of particle physics that may eventually be integrated into a unified theory.
The first attempt to construct a physical model of the atom was made by William Thomson (later elevated to Lord Kelvin) in 1867. The most striking property of the atom was its permanence.
Thomson wrote the following on vortex atoms:1
“After noticing Helmholtz's admirable discovery of the law of vortex motion in a perfect liquid -- that is, in a fluid perfectly destitute of viscosity (or fluid friction) -- the author said that this discovery inevitably suggests the idea that Helmholtz's rings are the only true atoms. For the only pretext seeming to justify the monstrous assumption of infinitely strong and infinitely rigid pieces of matter, the existence of which is asserted as a probable hypothesis by some of the greatest modern chemists in their rashly-worded introductory statements, is that urged by Lucretius and adopted by Newton -- that it seems necessary to account for the unalterable distinguishing qualities of different kinds of matter. But Helmholtz has provided an absolutely unalterable quality in the motion of any portion of a perfect liquid in which the peculiar motion which he calls "Wirbelbewegung" has been once created. Thus any portion of a perfect liquid which has "Wirbelbewegung" has one recommendation of Lucretius's atoms -- infinitely perennial specific quality. To generate or to destroy "Wirbelbewegung" in a perfect fluid can only be an act of creative power. Lucretius's atom does not explain any of the properties of matter without attributing them to the atom itself. Thus the "clash of atoms," as it has been well called, has been invoked by his modern followers to account for the elasticity of gases. Every other property of matter has similarly required an assumption of specific forces pertaining to the atom. It is easy (and as improbable -- not more so) to assume whatever specific forces may be required in any portion of matter which possesses the "Wirbelbewegung," as in a solid indivisible piece of matter; and hence the Lucretius atom has no prima facie advantage over the Helmholtz atom. 
A magnificent display of smoke-rings, which he recently had the pleasure of witnessing in Professor Tait's lecture-room, diminished by one the number of assumptions required to explain the properties of matter on the hypothesis that all bodies are composed of vortex atoms in a perfect homogeneous liquid. Two smoke-rings were frequently seen to bound obliquely from one another, shaking violently from the effects of the shock. The result was very similar to that observable in two large india-rubber rings striking one another in the air. The elasticity of each smoke-ring seemed no further from perfection than might be expected in a solid india-rubber ring of the same shape, from what we know of the viscosity of india-rubber. Of course this kinetic elasticity of form is perfect elasticity for vortex rings in a perfect liquid. It is at least as good a beginning as the "clash of atoms" to account for the elasticity of gases. Probably the beautiful investigations of D. Bernoulli, Herapath, Joule, Krönig, Clausius, and Maxwell, on the various thermodynamic properties of gases, may have all the positive assumptions they have been obliged to make, as to mutual forces between two atoms and kinetic energy acquired by individual atoms or molecules, satisfied by vortex rings, without requiring any other property in the matter whose motion composes them than inertia and incompressible occupation of space. A full mathematical investigation of the mutual action between two vortex rings of any given magnitudes and velocities passing one another in any two lines, so directed that they never come nearer one another than a large multiple of the diameter of either, is a perfectly solvable mathematical problem; and the novelty of the circumstances contemplated presents difficulties of an exciting character. Its solution will become the foundation of the proposed new kinetic theory of gases. 
The possibility of founding a theory of elastic solids and liquids on the dynamics of more closely-packed vortex atoms may be reasonably anticipated. It may be remarked in connexion with this anticipation, that the mere title of Rankine's paper on "Molecular Vortices," communicated to the Royal Society of Edinburgh in 1849 and 1850, was a most suggestive step in physical theory.”
Today, Thomson’s vortex atom seems like a quaint piece of physics history, and the standard model assumes the electron to be a point-like particle without extension in any dimension. However, the properties of the electron are not easily reconciled with a point-like particle, and for this reason string theory has proposed that particles such as the electron are extended objects called strings. Whether this will be proven in the long run remains to be seen, as there have been few tests of string theory to allow us to believe that it is a description of the real world. With the negation of the ether in modern physics, though the concept may be making a comeback, the electron as a standing wave or vortex in the ethereal medium has generally been rejected, but it now needs to be reconsidered.
The Electron Ring Model:
A table listing the properties of the electron, from Model of the Electron by Ph.M. Kanarev, is not reproduced here.
The ring model of the electron is derived from an ether vortex flow. This vortex creates a pressure normal to its spin that, it is conjectured, produces the electrostatic charge. The magnetic pressure gradient is normal to the electrostatic pressure gradient and acts along the central axis of spin. A vortex contains a low internal pressure and a high stream pressure. When the stream flows mesh, the particles attract one another; when they clash, they repel. The vortex field produces a pressure gradient that diminishes with radius from the core boundary. The force between electric charges is inversely proportional to the square of the radius and directly proportional to the kinetic energy (mv²) of one vortex times the kinetic energy of its paired vortex, with the sign determined by the relative circulation vectors.
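The force rule stated above can be written as a toy function: magnitude proportional to the product of the two vortices' kinetic energies and inversely proportional to the square of the separation, with the sign set by the circulation senses. The constant k and the sign convention below are illustrative assumptions of mine, not quantities derived from the model:

```python
# Toy sketch of the stated vortex force rule. All constants and the sign
# convention are hypothetical choices for illustration only.

def vortex_force(m1, v1, m2, v2, r, circ1, circ2, k=1.0):
    """Signed force between two ring vortices.
    circ1, circ2: circulation senses, +1 or -1. Like senses clash (repel),
    opposite senses mesh (attract). Positive return value = repulsion."""
    E1 = m1 * v1**2        # kinetic energy of the first vortex
    E2 = m2 * v2**2        # kinetic energy of the second vortex
    sign = circ1 * circ2   # +1 -> clash/repel, -1 -> mesh/attract
    return k * sign * E1 * E2 / r**2

print(vortex_force(1, 1, 1, 1, 2.0, +1, -1))  # -0.25: opposite senses attract
print(vortex_force(1, 1, 1, 1, 2.0, +1, +1))  # +0.25: like senses repel
```

Under this convention the rule mimics Coulomb's law, with circulation sense playing the role of charge sign and kinetic energy playing the role of charge magnitude.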
To quote Mayeul Arminjon again:
“I assume that the particles themselves are made of that microether: each of them should be some kind of organized flow in this imagined fluid—something like a vortex. (This is Romani’s idea of a “constitutive ether”).
The toroidal form of the electron vortex may be generated by the helical form of the photonic wave that produces the electron in pair creation. The antimatter counterpart of the electron, the positron, has a circulation in the opposite sense.
Barry Mingst has said,
“A long time ago, Lord Kelvin (W. Thomson), Lorentz, Maxwell, and Helmholtz recognized that the behavior of matter had characteristics similar to vortex ring structures in a fluid (the atomic vortex hypothesis). This concept was abandoned in the early 1900's. This abandonment was more philosophical than substantive, the real problem being that the math describing the model was, at the time, intractable. Much more success was being obtained by QM methods. This same model rears up again in modern physics in the form of the mathematical topology of string/superstring theory as well as in superconductivity and superfluidity. Penrose's twistor is a vortex ring, as is a magnetic field. It is interesting to note that vortex rings can sustain transverse vibrations (analogous to guitar string vibration); indeed Kelvin proved mathematically that linear disturbances in a saturated 3D vortex fluid (which he termed a vortex sponge) would produce propagation of pure transverse waves identical in their equations and properties to the propagation of light through space. It was this relationship, as well as many others, that caused this hypothesis to be considered seriously. It also is interesting to note that Maxwell used this conceptual model as the basis for his derivation of the EM relationships.”
There is little doubt that the Aether Theory of Space is experiencing a revival among scientists especially in the light of further experiments and discoveries. The idea of a universal substance-energy that lies at the root of all material manifestation is a magnificent conception that conveys to the mind a unifying principle behind all physical phenomena. Paralleling this revival is the concomitant reappraisal of particle vortex theories. It is possible that we will see many new developments in the physics of the 21st century.
1. Proceedings of the Royal Society of Edinburgh, Vol. VI, 1867, pp. 94-105; reprinted in Phil. Mag. Vol. XXXIV, 1867, pp. 15-24.
"Ether" or "Aether"? The term for the cosmological medium, used by those scientists of the 1800s and early 1900s most engaged with the question, was "ether" with an "e". Sometime in the 1950s, the spelling was changed by ether-critics to "aether" with an "a". This was done in part to remove confusion with the chemical fluid ether used for anesthesia, but mostly the replacement appears to have been undertaken to relegate the ether of space to ancient history, as an unproven speculation similar to the Aristotelian elements of fire, air, water, and earth. I have used the "Aether" spelling in the past myself, but now believe this form carries with it an assumed disproof: that the cosmological medium or energy in space does not exist. Since I fully accept the work of Dayton Miller as proof of the existence of the ether, use of the other term is no longer acceptable. Consequently, until some better evidence or argument is put forth, I use the term used by Crookes, Lodge, Faraday, Michelson, Morley, Miller, Tesla, Reich, and even by Einstein, spelled with an "e": ether.
One of the fundamental forces studied in aerodynamics is lift, the force that keeps an airplane in the air. Airplanes fly because their wings push air down; the leading edge of an airplane wing sits higher than the trailing edge. All conventional aircraft depend for lift on wings, lifting bodies, or rotating blades, whereas UFOs may have no wings at all and may even have an unaerodynamic configuration. Lift is often explained using Bernoulli’s principle, which relates an increase in the velocity of a flow of fluid (such as air) to a decrease in pressure and vice versa. The air pressure on the upper side of an airplane wing is lower than that on the lower side, giving a resultant net force upward.
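The Bernoulli relation described here can be turned into a toy lift estimate: the pressure difference between the faster upper flow and the slower lower flow, multiplied by the wing area. The speeds and wing area below are assumed, illustrative numbers, not data for any real aircraft:

```python
# Bernoulli's principle: p + 0.5*rho*v^2 is constant along a streamline, so
# faster flow over the upper surface means lower pressure there.
# Illustrative sketch only; the speeds and wing area are assumptions.
RHO_AIR = 1.225  # kg/m^3, sea-level air density

def lift_from_bernoulli(v_upper, v_lower, wing_area):
    """Net upward force from the pressure difference between wing surfaces."""
    delta_p = 0.5 * RHO_AIR * (v_upper**2 - v_lower**2)  # Pa
    return delta_p * wing_area  # N

# Example: air moves 10% faster over the top of a 16 m^2 wing at ~70 m/s,
# yielding on the order of 10 kN of lift.
print(round(lift_from_bernoulli(77.0, 70.0, 16.0)))
```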
Another important aspect of aerodynamics is the drag, or resistance, acting on solid bodies moving through air. The thrust force developed by either the jet engine or the propellers, for example, must overcome the drag forces exerted by the air flowing over the airplane. Streamlining the body can significantly reduce these drag forces. For bodies that are not fully streamlined, the drag force increases approximately with the square of the speed as they move rapidly through the air. The power required, for example, to drive an automobile steadily at medium or high speeds is primarily absorbed in overcoming air resistance.
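The quadratic growth of drag with speed, and the cubic growth of the power needed to overcome it, can be sketched directly from the standard drag equation. The drag coefficient and frontal area below are rough, assumed values for a passenger car:

```python
# Drag on a non-streamlined body grows roughly with the square of speed:
#   D = 0.5 * rho * v^2 * Cd * A,  and the power to overcome it is P = D * v.
# Cd and frontal area are rough, assumed values for a passenger car.
RHO_AIR = 1.225  # kg/m^3

def drag_force(v, cd, area):
    return 0.5 * RHO_AIR * v**2 * cd * area  # N

def drag_power(v, cd, area):
    return drag_force(v, cd, area) * v  # W

# Doubling speed quadruples drag and multiplies required power by eight.
v1, v2 = 15.0, 30.0   # m/s
cd, area = 0.3, 2.2   # assumed
assert abs(drag_force(v2, cd, area) / drag_force(v1, cd, area) - 4.0) < 1e-9
assert abs(drag_power(v2, cd, area) / drag_power(v1, cd, area) - 8.0) < 1e-9
```

This cubic power law is why air resistance dominates the power budget of a car at medium and high speeds.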
Supersonics, an important branch of aerodynamics, concerns phenomena that arise when the velocity of a solid body exceeds the speed of sound in the medium, usually air, in which it is traveling. The speed of sound in the atmosphere varies with humidity, temperature, and pressure. Because the speed of sound is thus variable yet is a critical factor in aerodynamic equations, speeds are represented by the so-called Mach number, named after the Austrian physicist and philosopher Ernst Mach, who pioneered the study of ballistics. The Mach number is the speed of the projectile or aircraft, with reference to the ambient atmosphere, divided by the speed of sound in the same medium and under the same conditions. Thus at sea level, under standard conditions of humidity and temperature, a speed of about 1220 km/h (about 760 mph) represents a Mach number of one, that is, M-1. The same speed in the stratosphere, because of differences in density, pressure, and temperature, would correspond to a Mach number of M-1.16. Designating speeds by Mach number, rather than by kilometers or miles per hour, gives a more accurate representation of the actual conditions encountered in flight.
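Since the Mach number is just the vehicle speed divided by the local speed of sound, and the speed of sound in air is set mainly by temperature, the sea-level and stratosphere figures above can be checked in a few lines. A minimal sketch, assuming standard-atmosphere temperatures and neglecting humidity:

```python
import math

# Speed of sound in air depends mainly on temperature: a = sqrt(gamma * R * T).
# Standard-atmosphere temperatures are used; humidity is neglected.
GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # J/(kg*K), specific gas constant for dry air

def speed_of_sound(temp_kelvin):
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)  # m/s

def mach_number(v_mps, temp_kelvin):
    return v_mps / speed_of_sound(temp_kelvin)

v = 1220 / 3.6  # the text's ~1220 km/h, converted to m/s
print(round(mach_number(v, 288.15), 2))  # sea level (15 C): about Mach 1
print(round(mach_number(v, 216.65), 2))  # stratosphere (-56.5 C): noticeably higher
```

The same ground speed comes out near M-1 at sea level and roughly M-1.15 in the cold stratosphere, matching the figures quoted above to within rounding.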
Another factor, long known to rocket designers, is the direct influence of ambient atmospheric pressures on the efficiency of the flight of planes in supersonic speed ranges. That is, the closer the surrounding medium is to a perfect vacuum, the more efficient is the power plant of the plane. Reducing the area, or cross section, displacing atmosphere, can also increase the range of the supersonic plane. Increasing the weight by increasing the length, but at the same time making the plane more slender and equipping it with a needle nose, are necessary features of design for planes operating in the supersonic range in the atmosphere.
Generally, UFOs seem to bend the rules when it comes to aerodynamics. The maneuverability of discs seen in flight is such that the UFO accelerates so quickly that it seems to overcome any forces of drag that would retard its movement. Discs have been seen to make 90-degree turns instantly, and in some rare cases, instantly reverse their direction of travel. When accelerating to speeds estimated to be supersonic, no shock wave seems to be generated and no sonic boom is heard. Some maneuvers accomplished by UFOs would place extraordinary stress on the airframe if flying like conventional aircraft. Coming in contact with the surrounding atmosphere at high rates of acceleration would challenge the structural integrity of the vehicle, would induce enormous drag and heat the skin of the craft to glowing temperatures, but perhaps the UFO does not come into direct contact with the atmosphere, but actually repels the atmospheric boundary layer surrounding its form. This would account for how they can move quickly without encountering air resistance and thermal stress.
Structural integrity is a major factor in aircraft design and construction. No production airplane leaves the ground before undergoing extensive analysis of how it will fly, the stresses it will tolerate and its maximum safe capability.
Every airplane is subject to structural stress. Stress acts on an airplane whether on the ground or in flight. Stress is defined as a load applied to a unit area of material. Stress produces a deflection or deformation in the material called strain. Stress is always accompanied by strain.
Current production general aviation aircraft are constructed of various materials, the primary being aluminum alloys. Rivets, bolts, screws and special bonding adhesives are used to hold the sheet metal in place. Regardless of the method of attachment of the material, every part of the fuselage must carry a load, or resist a stress placed on it. Design of interior supporting and forming pieces, and the outside metal skin all have a role to play in assuring an overall safe structure capable of withstanding expected loads and stresses.
Engineers carefully calculate the stress a particular part must withstand. Also, the material a part is made from is extremely important and is selected by designers based on its known properties. Aluminum alloy is the primary material for the exterior skin on modern aircraft. This material possesses a good strength to weight ratio, is easy to form, resists corrosion, and is relatively inexpensive.
Fittings must be made of carefully selected materials because of their importance of holding the aircraft together under expected stress and loading. The same holds true for important fasteners such as bolts and rivets. It is essential that these parts not fail under stress. It is also essential that these parts not weaken with exposure to stress and weather elements.
UFOS have been observed that seem to have seamless, rivetless hulls which could give such a craft high structural integrity.
Corrosion is also a consideration. A fitting made of one metal cannot be secured to the structure with a bolt or fastener made of another metal. This situation may result in "dissimilar metal corrosion" over a period of time and result in a weakening of the assembly to the extent that the assembly is rendered unsafe.
Types Of Structural Stress
There are five basic structural stresses: tension, compression, torsion, shear, and bending. While there are many other ways to describe the actual stresses an aircraft undergoes in normal (or abnormal) operation, they are special arrangements of these basic ones.
"Tension" is the stress acting against another force that is trying to pull something apart. For example, while in straight and level flight the engine power and propeller are pulling the airplane forward. The wings, tail section and fuselage, however, resist that movement because of the airflow around them. The result is a stretching effect on the airframe. Bracing wires in an aircraft are usually in tension.
"Compression" is a squeezing or crushing force that tries to make parts smaller. Anti-compression design resists an inward or crushing force applied to a piece or assembly. Aircraft wings are subjected to compression stresses. The ability of a material to meet compression requirements is measured in pounds per square inch (psi).
"Torsion" is a twisting force. Because aluminum is used almost exclusively for the outside, and, to a large extent, inside fabrication of parts and covering, its tensile strength (capability of being stretched) under torsion is very important. Tensile strength refers to the measure of strength in pounds per square inch (psi) of the metal. Torque (also a twisting force) works against torsion. The torsional strength of a material is its ability to resist torque. While in flight, the engine power and propeller twist the forward fuselage. The force, however, is resisted by the assemblies of the fuselage. The airframe is subjected to variable torsional stresses during turns and other maneuvers.
"Shear" stress tends to slide one piece of material over another. Consider the aircraft fuselage. The aluminum skin panels are riveted to one another. Shear forces try to make the rivets fail under flight loads; therefore, selection of rivets with adequate shear resistance is critical. Bolts and other fasteners are often loaded in shear, an example being bolts that fasten the wing to the spar or carry-through structure. Although other forces may also be present, shear forces try to rip the bolt in two. Generally, shear strength is less than tensile or compressive strength in a particular material.
"Bending" is a combination of two forces, compression and tension. During bending stress, the material on the inside of the bend is compressed and the outside material is stretched in tension. An example of this is the G-loading an airplane structure experiences during maneuvering. During an abrupt pull-up, the airplane's wing spars, wing skin and fuselage undergo positive loading and the upper surfaces are subject to compression, while the lower wing skin experiences tension loads. There are many other areas of the airframe structure that experience bending forces during normal flight.
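Since stress is defined above as load applied to a unit area, the shear sizing mentioned for riveted joints reduces to simple arithmetic: divide the panel load among enough rivets that no single rivet exceeds its allowable shear stress. A minimal sketch; the load, rivet diameter, and allowable stress are assumed round numbers, not values from any design handbook:

```python
import math

# Stress = load / area. For a riveted lap joint in shear, each of n rivets
# carries roughly load / n, so the shear stress on one rivet is
#   tau = F / (n * pi * d^2 / 4).
# All numbers below are illustrative assumptions.

def rivets_needed(load_n, rivet_diam_m, allowable_shear_pa):
    area = math.pi * rivet_diam_m**2 / 4       # cross-section of one rivet
    per_rivet = allowable_shear_pa * area      # max load one rivet may carry
    return math.ceil(load_n / per_rivet)

# 12 kN panel load, 4 mm rivets, 180 MPa allowable shear:
print(rivets_needed(12_000, 0.004, 180e6))
```

In practice the designer also applies safety factors and checks bearing and tear-out failure modes; this sketch covers only the pure-shear calculation.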
An airplane structure in flight is subjected to many and varying stresses due to the varying loads that may be imposed. The designer's problem is anticipating the possible stresses that the structure will have to endure, and building it sufficiently strong to withstand them. The problem is complicated by the fact that an airplane structure must be light as well as strong. The manufacturer states upon certification that the design meets or exceeds all FAR requirements for the category of aircraft being produced. However, hard landings, gust loads caused by extreme turbulence, performing aerobatic maneuvers in a non-aerobatic airplane, etc., can affect the airworthiness of one or more major airframe assemblies to the extent that the airplane is no longer airworthy. This reiterates the necessity of operating the aircraft within the limitations outlined by the manufacturer. Every flight imposes loads and stresses on the aircraft. How carefully it is flown, therefore, will have an effect on the service life of its assemblies.
It is the UFO’s ability to withstand or defy the normal loads and stresses of our conventional aircraft that allows them to fly in such erratic modes as zigzag flight, instantaneous decelerations, and instantaneous accelerations. The type of flight pattern makes a UFO stand out from the aerobatic performances of conventional aircraft.
This is only a small part of a subject that could fill a textbook. Capturing these details of UFO flight dynamics for the record adds weight to the evidence that unconventional flying objects have been cavorting around the earth for decades.
Magnetic Bubbles and Supercavitation:
If we consider space a medium and that medium is a particulate composition of space itself, similar to aether, then we can model space propulsion systems based on that space-aether concept.
The material of space is said to be filled with a froth of virtual particles. Perhaps, in addition, there are some lepton-neutrino space-filling particles that constitute a hydrodynamic energy exerting pressure on mass concentrations. If these particles, real or virtual, normally flow through matter but encounter increasing resistance with mass density, we could attribute the force of gravity to this particle pressure. Likewise, such a pressure may be responsible for inertia.
I quote from an article on supercavitation:
“Lately there has been a resurgence of interest in a technology that allows naval weapons and vessels to travel submerged at hundreds of miles per hour. The fastest traditional undersea technologies are limited to a maximum velocity of about 80 miles per hour. The technology that allows some undersea vessels to travel faster than the speed of sound in water is called supercavitation. First explored in the 1940s, supercavitation exploits a loophole that allows underwater travel with minimal drag. For many years naval experts studied its parent field, cavitation, because of the problems that it brings about. Only recently did researchers consider supercavitation as a way to build faster submarines and torpedoes.
To understand supercavitation, first cavitation must be understood. When a fluid moves rapidly around a body, the pressure in the flow drops. This pressure reduction over the surface of the body is the same effect that generates lift on airplane wings and gives sailboats the ability to move on the water's surface with only the wind to propel them. As the velocity increases and the pressure continues to drop, a point is reached at which the pressure in the flow equals the vapor pressure of water, whereupon the fluid undergoes a phase change and becomes a gas: water vapor.
Under certain circumstances, especially at sharp edges, the flow can include attached cavities of approximately constant pressure filled with water vapor and air trailing behind. This is called natural cavitation. Normally, cavitation is a condition to be avoided in fluid flow systems, because it can distort water flow to rob pumps, turbines, hydrofoils, and propellers of operational efficiency. It can also lead to violent shock waves (from rapid bubble collapse), which cause pitting and erosion of metal surfaces.
In supercavitation, the small gas bubbles produced by cavitation expand and combine to form one large, stable, and predictable bubble around the supercavitating object. The bubble is longer than the object, so only the leading edge of the object actually contacts liquid water. The rest of the object is surrounded by low-pressure water vapor, significantly lowering the drag on the supercavitating object.
A supercavity can also form around a specially designed projectile. The key is creating a zone of low pressure around the entire object by carefully shaping the nose and firing the projectile at a sufficiently high velocity. At high velocity water flows off the edge of the nose with a speed and angle that prevent it from wrapping around the surface of the projectile, producing a low-pressure bubble around the object. With an appropriate nose shape and a speed over 110 miles per hour, the entire projectile may reside in a vapor cavity.
Some estimates indicate that a supercavitating projectile, using rocket propulsion, could travel at speeds in excess of 230 miles per hour underwater. “
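The onset condition in the quoted passage, the local pressure falling to the vapor pressure of water, is conventionally expressed through the dimensionless cavitation number. A minimal numerical sketch, using water properties at about 20 °C; the speeds are illustrative:

```python
# Cavitation onset sketch: a fluid cavitates where local pressure falls to
# its vapor pressure. The cavitation number
#   sigma = (p_inf - p_vap) / (0.5 * rho * v^2)
# compares the available pressure margin with the dynamic pressure;
# low sigma (high speed) favors cavitation.
RHO_WATER = 998.0   # kg/m^3 at ~20 C
P_VAPOR = 2_339.0   # Pa, water vapor pressure at 20 C
P_ATM = 101_325.0   # Pa

def cavitation_number(v, depth_m=0.0):
    p_inf = P_ATM + RHO_WATER * 9.81 * depth_m  # ambient pressure at depth
    return (p_inf - P_VAPOR) / (0.5 * RHO_WATER * v**2)

# Supercavitating projectiles operate at sigma well below ~0.1.
for v in (10.0, 50.0, 100.0):   # m/s near the surface
    print(v, round(cavitation_number(v), 3))
```

Note how quickly the number falls with speed, and how depth (higher ambient pressure) works against cavitation, which is why supercavitating weapons favor shallow, very fast trajectories.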
Note the analogy of a spatial ocean or sea with an H2O ocean. Creating a supercavity around a submarine would permit rapid travel through the medium.
Creating a supercavity in space fluid around a spacecraft would also permit rapid travel through the vacuum of space. The supercavity envisioned might be created as an electro-magnetic bubble that repels space particles or drastically reduces the density of the space foam so as to enable the spacecraft to achieve hyperlight velocities, especially if by this action it has a measurable effect on reducing inertia.
The magnetospheric plasma propulsion envisioned by Robert Winglee may be one step on the path toward a true magneto-gravitic propulsion system.
The method makes use of the ambient energy of the solar wind by coupling to the solar wind through a large-scale (~ > 10 km) magnetic bubble or mini-magnetosphere. The magnetosphere is produced by the injection of plasma onto the magnetic field of a small (< 1 m) dipole coil tethered to the spacecraft. In this way, it is possible for a spacecraft to attain unprecedented speeds for minimal energy and mass requirements. Since the magnetic inflation is produced by electromagnetic processes, the material and deployment problems associated with mechanical sails are eliminated.
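The appeal of the mini-magnetosphere scheme can be seen with a back-of-the-envelope thrust estimate: solar-wind dynamic pressure times the bubble's cross-section. The plasma numbers below are typical 1-AU values, and the bubble radius is simply the ~10 km scale quoted above, so treat the result as an order-of-magnitude sketch:

```python
import math

# Rough thrust estimate for a mini-magnetosphere sail: the solar-wind
# dynamic pressure (rho * v^2 = m_p * n * v^2) acting on the bubble's
# cross-sectional area. Typical 1-AU plasma values; order of magnitude only.
PROTON_MASS = 1.67e-27   # kg
N_SW = 6e6               # protons per m^3, typical solar wind density at 1 AU
V_SW = 4.0e5             # m/s, typical solar wind speed

def magsail_thrust(bubble_radius_m):
    dynamic_pressure = PROTON_MASS * N_SW * V_SW**2        # ~1.6 nPa
    return dynamic_pressure * math.pi * bubble_radius_m**2  # N

print(round(magsail_thrust(5_000), 2))  # a 10-km-diameter bubble, in newtons
```

A fraction of a newton sounds small, but applied continuously to a light spacecraft for months it accumulates into large velocity changes, which is the entire point of propellantless sails.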
Perhaps a magnetic vortex of extreme power drives flying saucers across our skies and through space. If such a magnetic vortex could be focused to tunnel through the space medium, then I believe hyperlight speeds are possible.
Lifter technology may demonstrate the utility of an HV-powered craft, but I believe that eventually we need a dynamic generator that will produce the powerful fields necessary to propel an interstellar craft. Perhaps some experiments with rotating electric or magnetic rings could be tried to test levitation effects.
The alleged recovery of a flying disk near Roswell, New Mexico in July 1947 has sparked discussions, opinions, and reports that the U.S. Army and Air Force studied the remnants of the disk, especially the methods and modes of its propulsion with the intent of “reverse engineering” the advanced technology found in the alien craft.
New technology such as morphing airplanes and carbon nanotubes may be the result of alien technology back-engineering studies conducted by the military-industrial complex in top secret unacknowledged special access programs.
The Bi-Field Theory:
From a scientist: “The primary propulsion system is an electro-magnetic flux directional positive force generating system. The secondary propulsion system is an anti-gravity (using fluid plasma) directional negative force generating system. Remember, these are our terms. The entire craft can be a superconductor or a supercapacitor depending on how the propulsion system is configured. Like I said, the system is extremely complicated. Unless you understand the entire system, which we don't, you won't understand what I am saying. The electrical system works on a vacuum vacated energy principle. This system generates an unlimited amount of power. The Visitors have determined that hydrogen has many more isotopes than we thought. H5 is one isotope they harnessed and use as a catalyst inside the power device.”
What is a negative force generating system?
This was something proposed a long time ago by engineer Leonard G. Cramp in trying to explain flying saucer propulsion when he alluded to the bi-field theory: the G field and the R field. He says, “Of the G field-propelled craft discussed earlier (the use of an artificially created gravitational field), we could say that it was gravitationally moved towards its point source, or, due to a decreased gravitational field strength above it, it is repelled by the ‘denser’ space beneath, and either could be equally true… of the R-field (Repulsion Field) vehicle, we might say that it was repulsively moved away from its point source or, due to the increased gravitational field strength below it, it is attracted to the less ‘dense’ space above, and either could be true…” Cramp goes on to propose that both G-field and R-field, one convertible into the other, are in use in UFOs.
Paul Hill, a retired and now deceased NASA scientist, in his excellent technical analysis of UFOs in his book Unconventional Flying Objects, considers that the UFO generates a repulsive field. He says, “it is shown that the UFO field is not of the static-electric or static-magnetic type. Rather, it appears to be a quasi-static field of a negative-gravity type. This is concluded because the data shows that the UFO field repels all mass, not just electrically charged or magnetic materials.”
He also mentions, like Cramp before him, that the field must have some degree of focusing, going out predominantly in one direction in order to give control.
These ideas now invoke a new idea in the field of cosmology termed dark energy. Dark energy is thought to be smoothly distributed throughout the universe. Dark energy has a strong negative pressure of the same order as its energy density. Dark energy interacts only through gravity. Dark energy produces a repulsive force and drives the expansion of the universe.
The point to be made here is that the extraterrestrial starship engineers have harnessed both gravitation and repulsion as a means of traveling through interplanetary and interstellar space. Perhaps they have even been able to artificially create a wormhole tunnel to distant parts of the galaxy through using their control of gravitation and dark energy.
Townsend Brown’s Electrogravitic Capacitors:
Strange as it seems, it was during the 1950s that various aircraft companies started research projects on the control of gravity and electro-gravitational propulsion. It is possible that these projects constituted some of the first reverse engineering projects on extraterrestrial propulsion systems.
American physicist and inventor T. Townsend Brown discovered an effect of highly charged disk-shaped capacitors. When the capacitors were charged in excess of 50 kV, they would tend to accelerate in the direction of the positive pole. Suspending a number of these disk capacitors from a freely rotating carousel would cause the entire assembly, when charged, to rotate. These charged capacitors could also levitate.
According to the Air Force manual on electrogravitics from Wright-Patterson AFB, we have this description of Thomas Townsend Brown's discovery.
Electrogravitics might be described as a synthesis of electrostatic energy used for propulsion - either vertical propulsion or horizontal or both - and gravitics, or dynamic counterbary, in which energy is also used to set up a local gravitational force independent of the earth’s.
Electrostatic energy for propulsion has been predicted as a possible means of propulsion in space when the thrust from a neutron motor or ion motor would be sufficient in a dragless environment to produce astronomical velocities. But the ion motor is not strictly a part of the science of electrogravitics, since barycentric control in an electrogravitics system is envisaged for a vehicle operating within the earth’s environment and it is not seen initially for space application. Probably large scale space operations would have to await the full development of electrogravitics to enable large pieces of equipment to be moved out of the region of the earth’s strongest gravity effects. So, though electrostatic motors were thought of in 1925, electrogravitics had its birth after the War, when Townsend Brown sought to improve on the various proposals that then existed for electrostatic motors sufficiently to produce some visible manifestation of sustained motion. Whereas earlier electrostatic tests were essentially pure research, Brown’s rigs were aimed from the outset at producing a flying article. As a private venture he produced evidence of motion using condensers in a couple of saucers suspended by arms rotating round a central tower with input running down the arms. The massive-k situation was summarized subsequently in a report, Project Winterhaven, in 1952. Using the data some conclusions were arrived at that might be expected from ten or more years of intensive development - similar to that, for instance, applied to the turbine engine. Using a number of assumptions as to the nature of gravity, the report postulated a saucer as the basis of a possible interceptor with Mach 3 capability. Creation of a local gravitational system would confer upon the fighter the sharp-edged changes of direction typical of motion in space.
The essence of electrogravitics thrust is the use of a very strong positive charge on one side of the vehicle and a negative on the other. The core of the motor is a condenser and the ability of the condenser to hold its charge (the k-number) is the yardstick of performance. With air as 1, current dielectrical materials can yield 6 and use of barium aluminate can raise this considerably, barium titanium oxide (a baked ceramic) can offer 6,000 and there is promise of 30,000, which would be sufficient for supersonic speed.
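The role of the k-number can be sketched with the parallel-plate formulas: capacitance, and hence the energy stored at a given voltage, scales linearly with k. The plate area, gap, and voltage below are assumed illustrative values; the k figures themselves (air about 1, barium titanate ceramics about 6,000) are the ones quoted in the text:

```python
# Parallel-plate capacitor: C = k * eps0 * A / d, stored energy U = 0.5*C*V^2.
# Plate area, gap, and voltage are illustrative assumptions; the k values
# come from the text above (air ~1, barium titanate ceramic ~6000).
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def stored_energy(k, area_m2, gap_m, volts):
    c = k * EPS0 * area_m2 / gap_m  # farads
    return 0.5 * c * volts**2       # joules

u_air = stored_energy(1, 0.1, 0.001, 50_000)
u_ceramic = stored_energy(6_000, 0.1, 0.001, 50_000)
assert abs(u_ceramic / u_air - 6_000) < 1e-6  # energy rises in proportion to k
print(u_air, u_ceramic)
```

Whatever one thinks of the counterbary claims, this is why the report treats the k-number as the yardstick of performance: at a fixed voltage, every factor gained in k is a factor gained in stored energy.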
The original Brown rig produced 30 fps on a voltage of around 50,000 and a small amount of current in the milliamp range. There was no detailed explanation of gravity in Project Winterhaven, but it was assumed that particle dualism in the subatomic structure of gravity would coincide in its effect with the issuing stream of electrons from the electrostatic energy source to produce counterbary. The Brown work probably remains a realistic approach to the practical realization of electrostatic propulsion and sustentation. Whatever may be discovered by the Gravity Research Foundation of New Boston a complete understanding and synthetic reproduction of gravity is not essential for limited success. The electrogravitics saucer can perform the function of a classic lifting surface - it produces a pushing effect on the under surface and a suction effect on the upper, but, unlike the airfoil, it does not require a flow of air to produce the effect.
Many UFO contactees, those who have claimed to have been taken physically aboard a flying saucer, describe high-voltage electrostatic capacitors, a magnetic rotor, and a central column sometimes referred to as the magnetic pole of the ship. They also report that when the craft is in motion, there is no sensation of acceleration and yet, the apparent gravity in the cabin seems normal.
The description of the saucer seems as if the alien visitors had modeled their craft on the constituents of an atom with a central reversible magnetic pole, a positive core or nucleus in the craft, and capacitive electrons which aid in directional control. All report very high voltages are generated to produce a field around the craft allowing it to nullify gravitational and inertial forces.
Here is a brief description given by contactee George Adamski and reiterated by many other contactees since then:
“Within the craft there was not a single dark corner. I could not make out where the light was coming from. It seemed to permeate every cavity and corner with a soft pleasing glow. There is no way of describing that light exactly. It was not white, nor was it blue, nor was it exactly any other color that I could name. Instead, it seemed to consist of a mellow blend of all colors, though at times I fancied one or another seemed to predominate.
I was so engrossed in trying to solve this mystery, and at the same time to see and absorb every detail of this amazing little craft that I was quite unaware we had taken off, although I did suddenly register a slight feeling of movement. But there was no sensation of enormous acceleration, nor of changes in pressure and altitude as would be the case in one of our planes going at half the speed. Nor had we experienced any jerk as we broke contact with the ground. I had an impression of tremendous solidity and smoothness, with little more realization of movement than of the unnoticeable journey of the Earth itself as it revolves around the Sun at eighteen and one-half miles per second. Others who have been privileged to ride in these Saucers also have been struck by the same sensation of movement—or rather, the almost total lack of it. But the fact is, with so many wonders crowding my consciousness, it was only later, after I was back on Earth reviewing the night’s experiences in my own mind, that I could begin to sort them out.”
The Searl Disc:
New experiments attempting to replicate the Searl Disc and Searl Effect Generator (SEG) may yet vindicate its inventor, who has stuck by his story of development over the years. I believe the SEG models the atom in that it uses self-impelled rollers that behave like electrons in orbit around the central plates. The plates develop a positive charge while the rollers develop a negative charge. The outer runner moves through an electromagnetic field that tends to suppress dielectric breakdown and allows megavoltages to develop, surrounding the disc with a vacuum layer. As the runners reach their operating velocity, this produces a cooling effect around the disc; the whole becomes a superconductor that generates a powerful electric field, decoupling the disc from the earth’s gravity.
The SEG consists of a basic drive unit called the Gyro-Cell (GC) and, depending on the application, is either fitted with coils for generation of electricity or with a shaft for transfer of mechanical power. The GC can also be used as a high-voltage source. Another important quality of the GC is its ability to levitate.
The GC can be considered as an electric motor entirely consisting of permanent magnets in the shape of cylindrical bars and annular rings.
Figure 1 shows the basic GC in its simplest form, consisting of one stationary annular ring-shaped magnet, called the plate, and a number of moving cylinder-shaped rods called runners.
Figure 1 ~
During operation each runner is spinning about its axis and is simultaneously orbiting the plate in such a manner that a fixed point p on the curved runner surface traces out a whole number of cycloids during one revolution round the plate, as shown by the dotted lines in Figure 2.
Figure 2 ~
Measurements have revealed that an electric potential difference is produced in the radial direction between plate and runners; the plate being positively charged and the runners negatively charged, as shown in Figure 1.
In principle, no mechanical constraints are needed to keep the GC together since the runners are electromagnetically coupled to the plate. However, used as a torque producing device, shaft and casing must be fitted to transfer the power produced. Furthermore, in applications where the generator is mounted inside a framework, the runners should be made shorter than the height of the plate to prevent the runners from catching the frame or other parts.
When in operation, gaps are created by electromagnetic interaction and centrifugal forces, preventing mechanical and galvanic contact between plate and runners and thereby reducing friction to negligible values.
The experiments showed that the power output increases as the number of runners increases. To achieve smooth and even operation, the ratio between the external plate diameter Dp and the runner diameter Dr should be a positive integer greater than or equal to 12. Thus:
(1) Dp/Dr = N ≥ 12 (N = 12, 13, 14, &c)
The experiments also indicated that the gap between adjacent runners should be one runner diameter Dr, as shown in Figure 1.
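The diameter ratio and the one-diameter gap together fix how many runners fit around a given plate. A hypothetical back-of-the-envelope sketch (my own geometry, not from Searl's figures): runner centers sit on a circle of radius (Dp + Dr)/2, and a surface gap of one runner diameter makes the center-to-center chord 2·Dr.

```python
import math

def runner_count(Dp, Dr):
    """How many runners of diameter Dr fit around a plate of
    external diameter Dp when the surface gap between adjacent
    runners equals one runner diameter (center chord = 2*Dr).
    Illustrative geometry only."""
    Rc = (Dp + Dr) / 2.0                  # circle through runner centers
    # chord between adjacent centers: 2*Rc*sin(pi/n) = 2*Dr
    n = math.pi / math.asin(Dr / Rc)
    return math.floor(n)

print(runner_count(12.0, 1.0))  # 20 runners at the minimum ratio N = 12
```

On this reading, the minimum ratio N = 12 would accommodate about 20 runners, with more fitting as N grows.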
More complex Gyro-Cells can be formed by adding further plates and runners to the basic unit. Figure 3 illustrates a 3-plate GC consisting of three sections, A, B and C. Each section consists of one plate with corresponding runners.
Searl's original idea was that free electrons in spinning metal bodies may have a tendency to move in the radial direction due to inertial forces. If this hypothesis were correct, then an electric potential difference should develop between the center and periphery of a rotating shaft, and between the inner edge and the rim of a slip ring. He also held the view that the electromotive force induced in spinning bodies by the earth's magnetic field could be used for generating electric energy. Accordingly, Searl's first series of experiments consisted of careful measurements on fast-rotating steel shafts and slip rings made of brass, and he was indeed able to show the existence of a minute electric voltage in the radial direction. Whether this voltage was due to the inertial properties of the electrons or was induced by the magnetic field of the earth was never established. However, it soon became evident that this simple generator would only be practically useful if means could be found to increase the power output.
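Standard physics does give order-of-magnitude estimates for both candidate effects. The centrifugal redistribution of conduction electrons yields a radial potential V = m_e·ω²·r²/(2e), while classical homopolar induction in an axial field B yields V = B·ω·r²/2. A sketch with assumed (not reported) shaft dimensions and spin rate:

```python
M_E = 9.109e-31   # electron mass, kg (CODATA)
Q_E = 1.602e-19   # elementary charge, C
B_EARTH = 5e-5    # typical magnitude of Earth's field, T

def inertial_voltage(omega, r):
    """Radial potential from centrifugal force on conduction
    electrons in a spinning conductor: V = m_e * w^2 * r^2 / (2e)."""
    return M_E * omega**2 * r**2 / (2 * Q_E)

def homopolar_voltage(omega, r, B=B_EARTH):
    """Radial EMF of a conductor spinning in an axial field B:
    V = B * w * r^2 / 2 (classical homopolar induction)."""
    return B * omega * r**2 / 2

omega = 1000.0    # assumed spin rate, rad/s (~9550 rpm)
r = 0.01          # assumed shaft radius, 1 cm
print(f"inertial : {inertial_voltage(omega, r):.2e} V")   # ~3e-10 V
print(f"homopolar: {homopolar_voltage(omega, r):.2e} V")  # ~2.5e-6 V
```

Both effects are minute, consistent with what Searl reportedly measured, and the Earth-field induction term dominates the electron-inertia term by several orders of magnitude, which may explain why the two causes could not be separated.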
An aether-vortex model of the atom may be useful in devising means of producing a negative gravitational or repulsive force. The aether itself may be the source of dark energy, and the vortex core may be a gateway into other dimensions and time travel. It is one thing to replicate or reverse-engineer an alien machine, but there is no doubt that there are variations on gravity control, two main ones being supercapacitance and superconduction. I favor the idea of generating high-potential fields by rotating charges, as in the spin of elementary particles, and using these macroscopic electrostatic fields for the control of gravitation and inertia.