August 1, 2010


Perhaps the best way to examine the legacy of the dialogue between science and religion in the debate over the implications of quantum non-locality is to examine the source of Einstein’s objections to quantum epistemology in more personal terms. Einstein apparently lost faith in the God portrayed in biblical literature in early adolescence. But the ‘Autobiographical Notes’ suggest that there were aspects of this faith that carried over into his understanding of the foundation for scientific knowledge: ‘Thus I came - despite the fact that I was the son of entirely irreligious (Jewish) parents - to a deep religiosity, which, however, found an abrupt end at the age of 12. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic orgy of freethinking coupled with the impression that youth is intentionally being deceived by the state through lies; it was a crushing impression. Suspicion against every kind of authority grew out of this experience. . . . It is quite clear to me that the religious paradise of youth, which was thus lost, was a first attempt to free myself from the chains of the “merely personal”. The mental grasp of this extra-personal world within the frame of the given possibilities swam as highest aim half consciously and half unconsciously before the mind’s eye.’


What is more, it was, suggested Einstein, belief in the word of God as revealed in biblical literature that had allowed him to dwell in this ‘religious paradise of youth’ and to shield himself from the harsh realities of social and political life. In an effort to recover the inner sense of security that was lost after exposure to scientific knowledge, or to become free once again of the ‘merely personal’, he committed himself to understanding the ‘extra-personal world within the frame of given possibilities’ - that is, to the study of physics. Although the existence of God as described in the Bible may have been in doubt, the qualities of mind that the architects of classical physics had associated with this God were not. This is clear in Einstein’s comments on the uses of mathematics: ‘Nature is the realization of the simplest conceivable mathematical ideas. I am convinced that we can discover, by means of purely mathematical construction, those concepts and those lawful connections between them that furnish the key to the understanding of natural phenomena. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed.’

This article of faith, first articulated by Kepler, that ‘nature is the realization of the simplest conceivable mathematical ideas’ allowed Einstein to posit the first major law of modern physics much as it had allowed Galileo to posit the first major law of classical physics. During the period when the special and then the general theories of relativity had not yet been confirmed by experiment and many established physicists viewed them as at least minor heresies, Einstein remained entirely confident of their predictions. Ilse Rosenthal-Schneider, who visited Einstein shortly after Eddington’s eclipse expedition confirmed a prediction of the general theory (1919), described Einstein’s response to this news: ‘When I was giving expression to my joy that the results coincided with his calculations, he said quite unmoved, “But I knew the theory is correct,” and when I asked what if there had been no confirmation of his prediction, he countered: “Then I would have been sorry for the dear Lord - the theory is correct.”’

Einstein was not given to making sarcastic or sardonic comments, particularly on matters of religion. These unguarded responses testify to his profound conviction that the language of mathematics allows the human mind access to immaterial and immutable truths existing outside the mind that conceived them. Although Einstein’s belief was far more secular than Galileo’s, it retained the same essential ingredients.

What is less often mentioned is that much of this faith carried over into the twenty-three-year-long debate between Einstein and Bohr over the merits and limits of a physical theory. At the heart of this debate was the fundamental question, ‘What is the relationship between the mathematical forms in the human mind called physical theory and physical reality?’ Einstein did not believe in a God who spoke in tongues of flame from the mountaintop in ordinary language, and he could not sustain belief in the anthropomorphic God of the West. There is also no suggestion that he embraced ontological monism, or the conception of Being featured in Eastern religious systems like Taoism, Hinduism, and Buddhism. The closest that Einstein apparently came to affirming the existence of the ‘extra-personal’ in the universe was a ‘cosmic religious feeling’, which he closely associated with the classical view of scientific epistemology.

The doctrine that Einstein fought to preserve seemed the natural inheritance of physics until the advent of quantum mechanics. Although the mind that constructs reality might be evolving fictions that are not necessarily true or necessary in social and political life, there was, Einstein felt, a way of knowing purged of deceptions and lies. He was convinced that knowledge of physical reality in physical theory mirrors a preexistent and immutable realm of physical laws. And as Einstein consistently made clear, this knowledge mitigates loneliness and inculcates a sense of order and reason in a cosmos that might otherwise appear bereft of meaning and purpose.

What most disturbed Einstein about quantum mechanics was the fact that this physical theory might not, in experiment or even in principle, mirror precisely the structure of physical reality. The inherent uncertainty in the measurement of quantum mechanical processes suggested to him that quantum theory could not be a complete theory. Einstein feared that accepting the theory as complete would force us to recognize that this inherent uncertainty applied to all of physics, and, therefore, that the ontological bridge between mathematical theory and physical reality does not exist. And this would mean, as Bohr was among the first to realize, that we must profoundly revise the epistemological foundations of modern science.

The world view of classical physics allowed the physicist to assume that communion with the essences of physical reality via mathematical laws and associated theories was possible, but it made no allowance for the knowing mind. In our new situation, the status of the knowing mind seems quite different. Modern physics has contributed to a view of the universe as an unbroken, undissectable and undivided dynamic whole. ‘There can hardly be a sharper contrast,’ said Milič Čapek, ‘than that between the everlasting atoms of classical physics and the vanishing “particles” of modern physics.’ As Stapp put it: ‘Each atom turns out to be nothing but the potentialities in the behaviour of others. What we find, therefore, are not elementary space-time realities, but rather a web of relationships in which no part can stand alone; every part derives its meaning and existence only from its place within the whole.’

The characteristics of particles and quanta are not isolatable, given particle-wave dualism and the incessant exchange of quanta within matter-energy fields. Matter cannot be dissected from the omnipresent sea of energy, nor can we in theory or in fact observe matter from the outside. As Heisenberg put it decades ago, ‘the cosmos is a complicated tissue of events, in which connections of different kinds alternate or overlap or combine and thereby determine the texture of the whole.’ This means that a purely reductionist approach to understanding physical reality, which was the goal of classical physics, is no longer appropriate.

While the formalism of quantum physics predicts that correlations between particles over space-like separated regions are possible, it can say nothing outside that formalism about what this strange new relationship between parts (quanta) and whole (cosmos) actually is. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be ‘mutually adaptive and complementary to one another.’

Wholeness requires a complementary relationship between unity and differences and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts that make up the whole, although the whole is exemplified only in its parts. This principle of order, Harris continued, ‘is nothing really by itself. It is the way parts are organized and not another constituent addition to those that form the totality.’

In a genuine whole, the relationship between the constituent parts must be ‘internal or immanent’ in the parts, as opposed to a spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that allegedly make up the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.

Modern physics also reveals, claims Harris, a complementary relationship between the differences between parts that constitute content and the universal ordering principle that is immanent in each part. While the whole cannot be finally revealed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each of the parts. The part can never, however, be finally isolated from the web of relationships that discloses the interconnections with the whole, and any attempt to do so results in ambiguity.

Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. But order in complementary relationships between difference and sameness in any physical event is never external to that event; the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the inseparable whole, revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that displays a progressive principle of order in the complementary relations of its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute it. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated with appeals to scientific knowledge.

While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on this knowledge, there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss such speculative assumptions are, of course, free to do so. But there is also nothing in the scientific description of nature that supports belief in the radical Cartesian division between mind and world sanctioned by classical physics. It seems clear that this separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.

Thus the grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strictly scientific terms. After all, the completeness of all previous physical theories was measured against this criterion with enormous success. Since it was this success that gave physics the reputation of being able to disclose physical reality with magnificent exactitude, one might hope that a more comprehensive quantum theory will emerge to meet these requirements.

All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or unavoidable reality requires a very different criterion for determining the completeness of physical theory. The new measure of a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.

If a theory does so and continues to do so, which is certainly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between ‘theory’ and ‘physical reality’.

In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the amplitude of the sum. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron will end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, we would simply add the probabilities of the two alternative ways and let it go at that. The classical procedure does not work here, because we are not dealing with classical atoms. In quantum physics, additional terms arise when the wave functions are added, and the probability is computed by a process known as the ‘superposition principle’.

The superposition principle can be illustrated with an analogy from simple mathematics: add two numbers and then take the square of their sum, as opposed to squaring the two numbers first and then adding. Obviously, (2 + 3)² is not equal to 2² + 3²: the former is 25, and the latter is 13. In the language of quantum probability theory:


|ψ1 + ψ2|² ≠ |ψ1|² + |ψ2|²

where ψ1 and ψ2 are the individual wave functions. On the left-hand side, the superposition principle produces extra terms that cannot be found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities; the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the relation would not be there, and the peculiar wave-like interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. When we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
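The two recipes can be sketched numerically. The amplitudes below are arbitrary, hypothetical values chosen for illustration; the point is only that the quantum and classical recipes differ by exactly the interference cross term.

```python
# Sketch of the superposition principle with two hypothetical
# single-slit amplitudes; the values are illustrative, not physical.
psi1 = 0.6 + 0.3j   # amplitude for "electron went through slit 1"
psi2 = 0.2 - 0.5j   # amplitude for "electron went through slit 2"

# Quantum recipe: add the amplitudes first, then square the magnitude.
p_quantum = abs(psi1 + psi2) ** 2

# Classical recipe: square each magnitude, then add.
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2

# The difference is the interference (cross) term, which vanishes
# once which-slit information destroys the superposition.
cross_term = 2 * (psi1 * psi2.conjugate()).real

print(p_quantum, p_classical, cross_term)
```

With which-slit detectors in place, only `p_classical` survives, which is why the interference pattern disappears.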

In order to give a full account of quantum recipes for computing probabilities, one has to examine what happens in events that are compound. Compound events are ‘events that can be broken down into a series of steps, or events that consist of a number of things happening independently.’ The recipe here calls for multiplying the individual wave functions and then following the usual quantum recipe of taking the square of the amplitude.

The quantum recipe is |ψ1 · ψ2|², and, in this case, the result is the same as if we had multiplied the individual probabilities, as one would in classical theory. Thus the recipes for computing results in quantum theory and classical physics can be totally different. The quantum superposition effects are completely nonclassical, and there is no mathematical justification per se for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us in countless experiments to extend our ability to co-ordinate experience with nature.
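For compound events the two recipes agree exactly, since |ψ1 · ψ2|² = |ψ1|² · |ψ2|². A quick check with the same hypothetical amplitudes as before:

```python
# Compound event: two independent steps with hypothetical amplitudes.
psi1 = 0.6 + 0.3j
psi2 = 0.2 - 0.5j

# Quantum recipe: multiply the amplitudes, then square the magnitude.
p_compound = abs(psi1 * psi2) ** 2

# Classical recipe: multiply the individual probabilities.
p_classical = (abs(psi1) ** 2) * (abs(psi2) ** 2)

# For independent compound events the two recipes coincide; no cross
# terms arise from multiplication, unlike from addition.
print(p_compound, p_classical)
```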

Quantum theory is a departure from the classical mechanics of Newton involving the principle that certain physical quantities can only assume discrete values. In the theory, introduced by Planck (1900), certain conditions are imposed on these quantities to restrict their values; the quantities are then said to be ‘quantized’.

Up to 1900, physics was based on Newtonian mechanics. Large-scale systems are usually adequately described by it; however, several problems could not be solved, in particular the explanation of the curves of energy against wavelength for ‘black-body radiation’, with their characteristic maximum. Attempts to explain these curves were based on the idea that the enclosure producing the radiation contained a number of ‘standing waves’ and that the energy of an oscillator is kT, where k is the ‘Boltzmann Constant’ and T the thermodynamic temperature. It is a consequence of classical theory that the energy does not depend on the frequency of the oscillator. This inability to explain the phenomenon has been called the ‘ultraviolet catastrophe’.

Planck tackled the problem by discarding the idea that an oscillator can gain or lose energy continuously, suggesting that it could only change by some discrete amount, which he called a ‘quantum’. This unit of energy is given by hν, where ν is the frequency and h is the ‘Planck Constant’; h has the dimensions of energy × time and was called the ‘quantum of action’. According to Planck, an oscillator could only change its energy by an integral number of quanta, i.e., by hν, 2hν, 3hν, etc. This meant that the radiation in an enclosure has certain discrete energies, and by considering the statistical distribution of oscillators with respect to their energies, he was able to derive the Planck radiation formula. The formula, devised by Planck to express the distribution of energy in the normal spectrum of ‘black-body’ radiation, in its usual form is:

8πch dλ / λ⁵ ( exp[ch/kλT] ‒ 1 )

which represents the amount of energy per unit volume in the range of wavelengths between λ and λ + dλ, where c is the speed of light, h the Planck constant, k the Boltzmann constant, and T the thermodynamic temperature.
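The behaviour of this formula can be checked numerically. The temperature below (roughly that of the solar surface) and the rounded constants are my illustrative choices, not values from the text; scanning wavelengths for the maximum recovers the characteristic peak of the black-body curve.

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    """Energy per unit volume per unit wavelength:
    8*pi*h*c / (lam^5 * (exp(h*c/(lam*k*T)) - 1))."""
    return 8 * math.pi * h * c / (lam ** 5 * math.expm1(h * c / (lam * k * T)))

T = 5800.0  # illustrative temperature, K (roughly the solar surface)

# Scan wavelengths from 100 nm to 3000 nm to locate the peak.
lams = [i * 1e-9 for i in range(100, 3000)]
peak = max(lams, key=lambda lam: planck_energy_density(lam, T))
print(f"peak wavelength at {T} K: {peak * 1e9:.0f} nm")
```

The peak lands near 500 nm, consistent with Wien's displacement law, and moves to longer wavelengths as the temperature is lowered.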

The idea of quanta of energy was applied to other problems in physics. In 1905 Einstein explained features of the ‘Photoelectric Effect’ by assuming that light was absorbed in quanta (photons). A further advance was made by Bohr (1913) in his theory of atomic spectra, in which he assumed that the atom can only exist in certain energy states and that light is emitted or absorbed as a result of a change from one state to another. He used the idea that the angular momentum of an orbiting electron could only assume discrete values, i.e., was quantized. A refinement of Bohr’s theory was introduced by Sommerfeld in an attempt to account for fine structure in spectra. Other successes of quantum theory were its explanations of the ‘Compton Effect’ and ‘Stark Effect’. Later developments involved the formulation of a new system of mechanics known as ‘Quantum Mechanics’.

What is more, Compton scattering is an interaction between a photon of electromagnetic radiation and a free electron, or other charged particle, in which some of the energy of the photon is transferred to the particle. As a result, the wavelength of the photon is increased by an amount Δλ, where:

Δλ = ( 2h / m0c ) sin² ( ½φ ).

This is the Compton equation, where h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photons. The quantity h/m0c is known as the ‘Compton Wavelength’, symbol λC, which for an electron is equal to 0.002 43 nm.

The outer electrons in all elements and the inner ones in those of low atomic number have ‘binding energies’ negligible compared with the quantum energies of all except very soft X- and gamma rays. Thus most electrons in matter are effectively free and at rest and so cause Compton scattering. In the range of quantum energies 10⁵ to 10⁷ electronvolts, this effect is commonly the most important process of attenuation of radiation. The scattered electron is ejected from the atom with large kinetic energy, and the ionization that it causes plays an important part in the operation of detectors of radiation.
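The Compton equation above is easy to verify numerically. The constants below are rounded SI values; at φ = 90°, sin²(½φ) = ½, so the shift equals exactly one Compton wavelength.

```python
import math

h = 6.626e-34    # Planck constant, J s
m0 = 9.109e-31   # electron rest mass, kg
c = 2.998e8      # speed of light, m/s

# The Compton wavelength h/(m0*c) is about 2.43e-12 m, i.e. 0.00243 nm.
compton_wavelength = h / (m0 * c)

def compton_shift(phi):
    """Wavelength increase for a photon scattered through angle phi (radians)."""
    return (2 * h / (m0 * c)) * math.sin(phi / 2) ** 2

# At 90 degrees the shift is exactly one Compton wavelength.
shift_90 = compton_shift(math.pi / 2)
print(compton_wavelength, shift_90)
```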

In the ‘Inverse Compton Effect’ there is a gain in energy by low-energy photons as a result of being scattered by free electrons of much higher energy; as a consequence, the electrons lose energy. In the ‘Stark Effect’, by contrast, the wavelength of light emitted by atoms is altered by the application of a strong transverse electric field to the source, the spectrum lines being split up into a number of sharply defined components. The displacements are symmetrical about the position of the undisplaced line and are proportional to the field strength up to about 100 000 volts per cm.

Quantum mechanics, the system of mathematical physical theory that grew out of Planck’s quantum theory, deals with the mechanics of atomic and related systems in terms of quantities that can be measured. The subject developed in several mathematical forms, including ‘Wave Mechanics’ (Schrödinger) and ‘Matrix Mechanics’ (Born and Heisenberg), all of which are equivalent.

In quantum mechanics, it is often found that the properties of a physical system, such as its angular momentum and energy, can only take discrete values. Where this occurs the property is said to be ‘quantized’, and its various possible values are labelled by a set of numbers called quantum numbers. For example, according to Bohr’s theory of the atom, an electron moving in a circular orbit could not occupy any orbit at any distance from the nucleus but only an orbit for which its angular momentum (mvr) was equal to nh/2π, where n is an integer (1, 2, 3, etc.) and h is the Planck constant. Thus the property of angular momentum is quantized, and n is a quantum number that gives its possible values. The Bohr theory has now been superseded by a more sophisticated theory in which the idea of orbits is replaced by regions in which the electron may move, characterized by quantum numbers n, l, and m.
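The Bohr quantization rule can be sketched in a few lines (rounded SI constants; purely illustrative):

```python
import math

h = 6.626e-34  # Planck constant, J s

def bohr_angular_momentum(n):
    """Allowed angular momentum m*v*r = n*h/(2*pi), for n = 1, 2, 3, ..."""
    return n * h / (2 * math.pi)

# The allowed values climb in equal steps of h/(2*pi); intermediate
# values are simply not available to the electron.
steps = [bohr_angular_momentum(n) for n in (1, 2, 3)]
gaps = [b - a for a, b in zip(steps, steps[1:])]
print(steps, gaps)
```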

Properties of elementary particles are also described by quantum numbers. For example, an electron has the property known as ‘spin’ and can exist in two possible energy states depending on whether this spin is set parallel or antiparallel to a certain direction. The two states are conveniently characterized by the quantum numbers + ½ and ‒ ½. Similarly, properties such as charge, isospin, strangeness, parity and hypercharge are characterized by quantum numbers. In interactions between particles, a particular quantum number may be conserved, i.e., the sum of the quantum numbers of the particles before and after the interaction remains the same. It is the type of interaction - strong, electromagnetic, or weak - that determines whether the quantum number is conserved.

An energy level is the energy associated with a quantum state of an atom or other system that is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is strictly incorrect because: (i) the energy of a given state may be changed by externally applied fields, and (ii) there may be a number of states of equal energy in the system.

The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy of a given state is exactly determinate except for the effects of the ‘uncertainty principle’. The ground state, with lowest energy, has an infinite lifetime; hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies by calculation.

Wave mechanics, due to de Broglie and extended by Schrödinger, Dirac and many others, originated in the suggestion that light consists of corpuscles as well as of waves and the consequent suggestion that all elementary particles are associated with waves. Wave mechanics is based on the Schrödinger wave equation, which describes the wave properties of matter and relates the energy of a system to a wave function. Usually it is found that a system, such as an atom or molecule, can only have certain allowed wave functions (eigenfunctions) and certain allowed energies (eigenvalues); in wave mechanics the quantum conditions arise in a natural way from the basic postulates as solutions of the wave equation. The energies of unbound states of positive energy form a continuum. This gives rise to the continuum background of an atomic spectrum as electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark Effect’ or the ‘Zeeman Effect’.
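As a concrete example of eigenvalues arising from a wave equation, here is the standard particle-in-a-box model. This particular system and the 1 nm box width are my illustrative assumptions, not details from the text; the point is that the boundary conditions admit only discrete energies.

```python
h = 6.626e-34   # Planck constant, J s
m = 9.109e-31   # electron mass, kg
L = 1e-9        # box width, m (arbitrary illustrative value)

def box_energy(n):
    """Allowed energies E_n = n^2 h^2 / (8 m L^2), n = 1, 2, 3, ... (joules).

    Only these discrete eigenvalues solve the wave equation with the
    boundary conditions psi(0) = psi(L) = 0; they scale as n^2."""
    return n ** 2 * h ** 2 / (8 * m * L ** 2)

levels = [box_energy(n) for n in (1, 2, 3)]
print(levels)
```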

The vibrational energies of a molecule also have discrete values. For example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer together and attract when further apart. The restoring force is nearly proportional to the displacement; hence the oscillations are simple harmonic. Solution of the Schrödinger wave equation gives the energies of a harmonic oscillator as:



En = ( n + ½ ) h.



where h is the Planck constant, ν is the frequency, and n is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hν. This is the so-called zero-point energy. The potential energy of interaction of atoms is described more exactly by the ‘Morse Equation’, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.
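These levels can be sketched directly. The frequency below, roughly that of carbon monoxide, is my illustrative choice; note the nonzero ground level and the uniform spacing hν.

```python
h = 6.626e-34   # Planck constant, J s
nu = 6.4e13     # vibrational frequency, Hz (illustrative; roughly CO)

def vibrational_energy(n):
    """Harmonic-oscillator levels E_n = (n + 1/2) * h * nu, n = 0, 1, 2, ..."""
    return (n + 0.5) * h * nu

zero_point = vibrational_energy(0)                       # = h*nu/2, not zero
spacing = vibrational_energy(1) - vibrational_energy(0)  # uniform spacing h*nu
print(zero_point, spacing)
```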

The rotational energy of a molecule is also quantized. According to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:

EJ = h²J ( J + 1 ) / 8π²I

where J is the rotational quantum number, which can be zero or a positive integer. Rotational energies are determined from band spectra.
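The rotational formula can likewise be checked numerically. The moment of inertia below, roughly that of CO, is an illustrative assumption; the spacings between successive levels grow linearly with J, which is what produces the nearly evenly spaced lines of a rotational band spectrum.

```python
import math

h = 6.626e-34   # Planck constant, J s
I = 1.46e-46    # moment of inertia, kg m^2 (illustrative; roughly CO)

def rotational_energy(J):
    """Rotational levels E_J = h^2 J(J+1) / (8 pi^2 I), J = 0, 1, 2, ..."""
    return h ** 2 * J * (J + 1) / (8 * math.pi ** 2 * I)

# Levels go as 0, 2B, 6B, 12B, ... so spacings go as 2B, 4B, 6B, ...
levels = [rotational_energy(J) for J in range(4)]
spacings = [b - a for a, b in zip(levels, levels[1:])]
print(levels, spacings)
```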

The energies of the states of the nucleus are determined from the gamma-ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect’ has permitted the observation of some minute changes.

Quantum theory, introduced by Max Planck (1858-1947) in 1900, was the first serious scientific departure from Newtonian mechanics. It involved supposing that certain physical quantities can only assume discrete values. In the following two decades it was applied successfully by Einstein and the Danish physicist Niels Bohr (1885-1962). It was superseded by quantum mechanics in the years following 1924, when the French physicist Louis de Broglie (1892-1987) introduced the idea that a particle may also be regarded as a wave. The Schrödinger wave equation relates the energy of a system to a wave function; the square of the amplitude of the wave is proportional to the probability of a particle being found in a specific position. The wave function expresses the impossibility of defining both the position and momentum of a particle exactly; this limitation is called the ‘uncertainty principle’. The allowed wave functions describe stationary states of a system.

Part of the difficulty with the notions involved is that a system may be in an indeterminate state at a given time, characterized only by the probability of some result for an observation, but then ‘become’ determinate (the collapse of the wave packet) when an observation is made - if, that is, this is taken to apply to reality itself rather than to mere indeterminacies of measurement. It is as if there is nothing but a potential for observation, or a probability wave, before an observation is made, but when an observation is made the wave becomes a particle. The wave-particle duality seems to block any simple way of conceiving of physical reality in quantum terms. In the famous two-slit experiment, an electron is fired at a screen with two slits, like a tennis ball thrown at a wall with two doors in it. If one puts detectors at each slit, every electron passing the screen is observed to go through exactly one slit. But when the detectors are taken away, the electron acts like a wave process going through both slits and interfering with itself. A particle such as an electron is usually thought of as always having an exact position, but its wave is not absolutely zero anywhere; there is therefore a finite probability of it ‘tunnelling through’ from one position to emerge at another.

The unquestionable success of quantum mechanics has generated a large philosophical debate about its ultimate intelligibility and its metaphysical implications. The wave-particle duality is already a departure from ordinary ways of conceiving of things in space, and its difficulty is compounded by the probabilistic nature of the fundamental states of a system as they are conceived in quantum mechanics. Philosophical options for interpreting quantum mechanics have included variations of the belief that it is at best an incomplete description of a better-behaved classical underlying reality (Einstein); the Copenhagen interpretation, according to which there are no objective unobserved events in the micro-world (Bohr and W. K. Heisenberg, 1901-76); an ‘acausal’ view of the collapse of the wave packet (J. von Neumann, 1903-57); and a ‘many-worlds’ interpretation, in which time forks perpetually toward innumerable futures, so that different states of the same system exist in different parallel universes (H. Everett).

In recent years the proliferation of subatomic particles (there are 36 kinds of quarks alone, in six flavours) has led physicists to look in various directions for unification. One avenue of approach is superstring theory, in which the four-dimensional world is thought of as the upshot of the collapse of a ten-dimensional world, with the four primary physical forces - gravity, electromagnetism, and the strong and weak nuclear forces - seen as the result of the fracture of one primary force. While the scientific acceptability of such theories is a matter for physics, their ultimate intelligibility plainly requires some philosophical reflection.

Quantum gravity - a theory of gravitation consistent with quantum mechanics - is still in its infancy and has no completely satisfactory formulation. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle called the ‘graviton’. The internal degrees of freedom of the graviton require fields hij(x) that represent the deviations from the metric tensor of flat space. This formulation reduces general relativity to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities: it has been shown that renormalization fails for theories, such as quantum gravity, in which the coupling constant has the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length,

Lp = (Gℏ/c3)½ ≈ 10‒35 m.
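As a numerical check of this definition, the Planck length can be evaluated directly; the constants below are rounded CODATA-style values assumed for illustration:

```python
# Planck length L_p = (G * hbar / c^3)^(1/2), with rounded constants (assumed)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light in vacuum, m/s

L_p = (G * hbar / c**3) ** 0.5
print(f"Planck length = {L_p:.3e} m")  # of the order of 10^-35 m
```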

Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective superstring field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is recovered only as a low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstring theory) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The normal modes of superstrings represent an infinite set of ‘normal’ elementary particles whose masses and spins are related in a special way. Thus the graviton is only one of the string modes: when string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal’ particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions being tightly ‘curled up’, presumably on circles of Planck-length size.

In quantum theory, a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers; it is one of the various states that the atom can assume. The conceptual representation of the atom was first introduced by the ancient Greeks, as a tiny indivisible component of matter; it was developed by Dalton, as the smallest part of an element that can take part in a chemical reaction; and it was made very much more precise by theory and experiment in the late-19th and 20th centuries.

Following the discovery of the electron (1897), it was recognized that atoms had structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of an atom is concentrated at its centre in a region of positive charge, the nucleus, with a radius of the order of 10‒15 metre. The electrons occupy the surrounding space to a radius of 10‒11 to 10‒10 m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously, and consequently no permanent atom would be possible. This problem was solved by the development of the quantum theory.

The ‘Bohr Theory of the Atom’ (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy, or ground state, in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited - that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes, typically nanoseconds, and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld, 1915) and electron spin (Pauli, 1925), but a satisfactory theory only became possible upon the development of ‘Wave Mechanics’ after 1925.
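On the Bohr theory the hydrogen levels are En = ‒13.6 eV/n2, and a transition between levels emits a quantum whose wavelength follows from the energy difference. A minimal sketch, assuming rounded values of the Rydberg energy and of hc:

```python
RY_EV = 13.6057     # Rydberg energy, eV (rounded, assumed)
HC_EV_NM = 1239.84  # h*c in eV nm (rounded, assumed)

def bohr_energy(n):
    """Bohr-model energy of hydrogen level n, in eV (negative for bound states)."""
    return -RY_EV / n ** 2

# De-excitation from n = 3 to n = 2 emits the Balmer H-alpha quantum (~656 nm):
photon_ev = bohr_energy(3) - bohr_energy(2)
print(f"{photon_ev:.3f} eV -> {HC_EV_NM / photon_ev:.0f} nm")
```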

According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.

The Pauli exclusion principle states that no two identical ‘fermions’ in any system can be in the same quantum state, that is, have the same set of quantum numbers. The principle was first proposed (1925) in the form that not more than two electrons in an atom could have the same set of quantum numbers. This hypothesis accounted for the main features of the structure of the atom and for the periodic table. An electron in an atom is characterized by four quantum numbers, n, l, m, and s. A particular atomic orbital, which has fixed values of n, l, and m, can thus contain a maximum of two electrons, since the spin quantum number s can only be +½ or ‒½. In 1928 Sommerfeld applied the principle to the free electrons in solids, and his theory has been greatly developed by later associates.

Additionally, the Zeeman effect occurs when atoms emit or absorb radiation in the presence of a moderately strong magnetic field. Each spectral line is split into closely spaced polarized components. When the source is viewed at right angles to the field there are three components, the middle one having the same frequency as the unmodified line; when the source is viewed parallel to the field there are two components, the undisplaced line being absent. This is the ‘normal’ Zeeman effect. With most spectral lines, however, the anomalous Zeeman effect occurs, in which there is a greater number of symmetrically arranged polarized components. In both effects the displacement of the components is a measure of the magnetic field strength. In some cases the components cannot be resolved and the spectral line appears broadened.

The Zeeman effect occurs because the energies of individual electron states depend on their inclination to the direction of the magnetic field, and because quantum energy requirements impose conditions such that the plane of an electron orbit can only set itself at certain definite angles to the applied field. These angles are such that the projection of the total angular momentum on the field direction is an integral multiple of h/2π (h is the Planck constant). The Zeeman effect is observed with moderately strong fields, where the precession of the orbital angular momentum and the spin angular momentum of the electrons about each other is much faster than the total precession around the field direction. The normal Zeeman effect is observed when the conditions are such that the Landé factor is unity; otherwise the anomalous effect is found. This anomaly was one of the factors contributing to the discovery of electron spin.

Quantum statistics are concerned with the equilibrium distribution of elementary particles of a particular type among the various quantized energy states, on the assumption that the particles are indistinguishable. In Fermi-Dirac statistics the ‘Pauli Exclusion Principle’ is obeyed, so that no two identical ‘fermions’ can be in the same quantum-mechanical state. The exchange of two identical fermions, e.g., two electrons, does not affect the probability distribution, but it does involve a change in the sign of the wave function. The ‘Fermi-Dirac Distribution Law’ gives the average number n̄(E) of identical fermions in a state of energy E:



n̄(E) = 1/[e^(α + E/kT) + 1],



where k is the Boltzmann constant, T is the thermodynamic temperature, and α is a quantity depending on temperature and the concentration of particles. For the valence electrons in a solid, α takes the form ‒EF/kT, where EF is the Fermi level. At the Fermi level (or Fermi energy) EF the value of n̄(E) is exactly one half: for a system in equilibrium, one half of the states with energy very nearly equal to EF (if any) will be occupied. The value of EF varies very slowly with temperature, tending to E0 as T tends to absolute zero.

In Bose-Einstein statistics, the Pauli exclusion principle is not obeyed, so that any number of identical ‘bosons’ can be in the same state. The exchange of two bosons of the same type affects neither the probability distribution nor the sign of the wave function. The ‘Bose-Einstein Distribution Law’ gives the average number n̄(E) of identical bosons in a state of energy E:



n̄(E) = 1/[e^(α + E/kT) ‒ 1].



The formula can be applied to photons, considered as quasi-particles, provided that the quantity α, which governs the conservation of particle number, is set to zero. Planck’s formula for the energy distribution of ‘Black-Body Radiation’ was derived from this law by Bose. At high temperatures and low concentrations both quantum distribution laws tend to the classical distribution:



n̄(E) = Ae^(‒E/kT).
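The convergence of the two quantum distribution laws to the classical one can be checked numerically. Writing x = α + E/kT as a single dimensionless argument (a simplifying assumption for illustration), the three average occupation numbers are:

```python
import math

def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)   # Pauli principle: occupation never exceeds 1

def bose_einstein(x):
    return 1.0 / (math.exp(x) - 1.0)   # diverges as x approaches 0

def classical(x):
    return math.exp(-x)                # Maxwell-Boltzmann limit (A absorbed into x)

# At the Fermi level (x = 0) the Fermi-Dirac occupation is exactly one half:
print(fermi_dirac(0.0))  # 0.5

# For large x (high temperature, low concentration) all three agree closely:
for x in (5.0, 10.0):
    print(x, fermi_dirac(x), bose_einstein(x), classical(x))
```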



Additionally, paramagnetism is the property of substances that have a small positive magnetic ‘susceptibility’ - the quantity μr ‒ 1, where μr is the ‘relative permeability’ (the corresponding electric quantity is Єr ‒ 1, where Єr is the ‘relative permittivity’). It is caused by the ‘spins’ of electrons: paramagnetic substances have molecules or atoms in which there are unpaired electrons, and thus a resultant ‘magnetic moment’. There is also a contribution to the magnetic properties from the orbital motion of the electrons. The relative permeability of a paramagnetic substance is thus greater than that of a vacuum, i.e., it is slightly greater than unity.

A paramagnetic substance is regarded as an assembly of magnetic dipoles with random orientation. In the presence of a field the magnetization is determined by competition between the effect of the field, which tends to align the magnetic dipoles, and random thermal agitation. In small fields and at high temperatures, the magnetization produced is proportional to the field strength, whereas at low temperatures or high field strengths a state of saturation is approached. As the temperature rises, the susceptibility falls according to Curie’s Law or the Curie-Weiss Law.

By Curie’s Law, the susceptibility (χ) of a paramagnetic substance is inversely proportional to the ‘thermodynamic temperature’ (T): χ = C/T. The constant C is called the ‘Curie constant’ and is characteristic of the material. The law is explained by assuming that each molecule has an independent magnetic ‘dipole’ moment, and that the tendency of the applied field to align these moments is opposed by random thermal motion. A modification of Curie’s Law, followed by many paramagnetic substances, is the Curie-Weiss law, which takes the form

χ = C/(T ‒ θ).

The law shows that the susceptibility is proportional to the excess of temperature over a fixed temperature θ, known as the Weiss constant, a temperature characteristic of the material. Certain metals, such as sodium and potassium, also exhibit a type of paramagnetism resulting from the magnetic moments of free, or nearly free, electrons in their conduction bands. This is characterized by a very small positive susceptibility and a very slight temperature dependence, and is known as ‘free-electron paramagnetism’ or ‘Pauli paramagnetism’.
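The two susceptibility laws can be compared directly; the Curie constant and Weiss constant below are illustrative assumed values, not data for any particular material:

```python
def curie(T, C=1.0):
    """Curie's law: chi = C / T."""
    return C / T

def curie_weiss(T, C=1.0, theta=100.0):
    """Curie-Weiss law: chi = C / (T - theta), valid only above theta."""
    if T <= theta:
        raise ValueError("Curie-Weiss law applies only for T > theta")
    return C / (T - theta)

# Susceptibility falls as temperature rises under both laws, with the
# Curie-Weiss value enhanced relative to the plain Curie value:
for T in (200.0, 400.0, 800.0):
    print(T, curie(T), curie_weiss(T))
```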

Ferromagnetism is a property of certain solid substances that have a large positive magnetic susceptibility and are capable of being magnetized by weak magnetic fields. The chief ferromagnetic elements are iron, cobalt, and nickel, and many ferromagnetic alloys based on these metals also exist. Ferromagnetic materials exhibit magnetic ‘hysteresis’ - a lag in the change of an observed effect in response to a change in the mechanism producing the effect. In the magnetic case, this is a phenomenon shown by ferromagnetic substances whereby the magnetic flux through the medium depends not only on the existing magnetizing field but also on the previous state or states of the substance. The existence of the phenomenon necessitates a dissipation of energy when the substance is subjected to a cycle of magnetic changes; this is known as the magnetic hysteresis loss. A magnetic hysteresis loop is the curve obtained by plotting the magnetic flux density B of a ferromagnetic material against the corresponding value of the magnetizing field H; the area of the loop gives the hysteresis loss per unit volume in taking the specimen through the prescribed magnetizing cycle. The general form of the hysteresis loop for a symmetrical cycle between H and ‒H shows the characteristic lag that gives rise to hysteresis.

The magnetic hysteresis loss is the dissipation of energy due to magnetic hysteresis when the magnetic material is subjected to changes, particularly cyclic changes, of magnetization. Ferromagnetics are able to retain a certain degree of magnetization when the magnetizing field is removed. Those materials that retain a high percentage of their magnetization are said to be hard, and those that lose most of their magnetization are said to be soft; typical examples of hard ferromagnetics are cobalt steel and various alloys of nickel, aluminium, and cobalt, while typical soft magnetic materials are silicon steel and soft iron. The coercive force is the reversed magnetic field required to reduce the magnetic ‘flux density’ in a substance from its remanent value to zero; it is characteristic of ferromagnetism and is explained by the presence of domains. A ferromagnetic domain is a region of crystalline matter, whose volume may be 10‒12 to 10‒8 m3, which contains atoms whose magnetic moments are aligned in the same direction. The domain is thus magnetically saturated and behaves like a magnet with its own magnetic axis and moment. The magnetic moment of the ferromagnetic atom results from the spin of the electron in an unfilled inner shell of the atom. The formation of a domain depends upon the strong interaction forces (exchange forces) that are effective in a crystal lattice containing ferromagnetic atoms.

In an unmagnetized volume of a specimen, the domains are arranged in a random fashion with their magnetic axes pointing in all directions, so that the specimen has no resultant magnetic moment. Under the influence of a weak magnetic field, those domains whose magnetic axes have directions near to that of the field grow at the expense of their neighbours. In this process the atoms of neighbouring domains tend to align in the direction of the field, but the strong influence of the growing domain causes their axes to align parallel to its magnetic axis. The growth of these domains leads to a resultant magnetic moment, and hence magnetization of the specimen in the direction of the field. With increasing field strength, the growth of domains proceeds until there is, effectively, only one domain, whose magnetic axis approximates to the field direction. The specimen now exhibits strong magnetization. Further increases in field strength cause the final alignment and magnetic saturation in the field direction. This explains the characteristic variation of magnetization with applied field strength. The presence of domains in ferromagnetic materials can be demonstrated by the use of ‘Bitter Patterns’ or by the ‘Barkhausen Effect’.

Ferromagnetic solids show a change from ferromagnetic to paramagnetic behaviour above a particular temperature, and the paramagnetic material then obeys the Curie-Weiss Law above this temperature; this is the ‘Curie temperature’ for the material. Below this temperature the law is not obeyed. Some paramagnetic substances obey the law above a temperature θ and do not obey it below, yet are not ferromagnetic below this temperature. The value θ in the Curie-Weiss law can be thought of as a correction to Curie’s law reflecting the extent to which the magnetic dipoles interact with each other. In materials exhibiting ‘antiferromagnetism’ the temperature θ corresponds to the ‘Néel temperature’.

Antiferromagnetism is the property of certain materials that have a low positive magnetic susceptibility, as in paramagnetism, but exhibit a temperature dependence similar to that encountered in ferromagnetism. The susceptibility increases with temperature up to a certain point, called the ‘Néel Temperature’, and then falls with increasing temperature in accordance with the Curie-Weiss law. The material thus becomes paramagnetic above the Néel temperature, which is analogous to the Curie temperature in the transition from ferromagnetism to paramagnetism. Antiferromagnetism is a property of certain inorganic compounds such as MnO, FeO, FeF2 and MnS. It results from interactions between neighbouring atoms leading to an antiparallel arrangement of adjacent magnetic dipole moments. (A dipole, it may be noted, is a system of two equal and opposite charges placed a very short distance apart; the product of either of the charges and the distance between them is known as the ‘electric dipole moment’. A small loop carrying a current I behaves as a magnetic dipole with moment IA, where A is the area of the loop.)

An energy level is the energy associated with a quantum state of an atom or other system that is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is strictly incorrect, for two reasons: (1) the energy of a given state may be changed by externally applied fields, and (2) there may be a number of states of equal energy in the system.

The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy of a given state is exactly determinate, except for the effects of the ‘uncertainty principle’. The ground state, with lowest energy, has an infinite lifetime, and hence an energy that is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies. An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results - notably perturbation theory, an approximate method of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. For example, the orbit of a single planet round the sun is an ellipse; the perturbing effect of other planets modifies the orbit slightly, in a way calculable by this method. The technique finds considerable application in ‘wave mechanics’ and in ‘quantum electrodynamics’. Phenomena that are not amenable to solution by perturbation theory are said to be non-perturbative.

The energies of unbound states, of positive total energy, form a continuum. This gives rise to the continuous background to an atomic spectrum when electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark Effect’ or the ‘Zeeman Effect.’

The vibrational energies of molecules also have discrete values. For example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer and attract when further apart. The restoring force is very nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the ‘Schrödinger wave equation’ gives the energies of a harmonic oscillator as:

En = (n + ½)hƒ,

where h is the Planck constant, ƒ is the frequency, and n is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ; this is called the zero-point energy. The potential energy of interaction of atoms is described more exactly by the Morse equation, which shows that the oscillations are slightly anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.
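The level scheme can be sketched numerically; the vibrational frequency used below is an assumed, purely illustrative figure of the order found in diatomic molecules:

```python
H = 6.626e-34  # Planck constant, J s (rounded)
f = 6.4e13     # vibrational frequency, Hz (assumed, illustrative)

def vib_energy(n):
    """Harmonic-oscillator level: E_n = (n + 1/2) h f."""
    return (n + 0.5) * H * f

# The lowest level is hf/2, not zero (the zero-point energy):
print(f"zero-point energy = {vib_energy(0):.3e} J")

# Adjacent levels are equally spaced by hf:
print(f"level spacing = {vib_energy(1) - vib_energy(0):.3e} J")
```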

The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:

EJ = h2J(J + 1)/8π2I,

where J is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra’.
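A short sketch of these rigid-rotor levels; the moment of inertia below is an assumed value of the order found in light diatomic molecules:

```python
import math

H = 6.626e-34  # Planck constant, J s (rounded)
I = 1.4e-46    # moment of inertia, kg m^2 (assumed, illustrative)

def rot_energy(J):
    """Rigid-rotor level: E_J = h^2 J(J+1) / (8 pi^2 I)."""
    return H ** 2 * J * (J + 1) / (8 * math.pi ** 2 * I)

# Unlike the harmonic oscillator, the spacing between adjacent levels
# grows linearly with J:
gaps = [rot_energy(J + 1) - rot_energy(J) for J in range(4)]
print([f"{g:.3e}" for g in gaps])
```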

The energies of the states of the ‘nucleus’ can be determined from the gamma ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons in atoms because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect’ has permitted the observation of some minute changes.

When X-rays are scattered by atomic centres arranged at regular intervals, interference phenomena occur, crystals providing gratings of a suitably small interval. The interference effects may be used to provide a spectrum of the beam of X-rays, since, according to ‘Bragg’s Law,’ the angle of reflection of X-rays from a crystal depends on the wavelength of the rays. For lower-energy X-rays mechanically ruled gratings can be used. Each chemical element emits characteristic X-rays in sharply defined groups in widely separated regions, known as the K, L, M, N, etc. series; the lines of any series move toward shorter wavelengths as the atomic number of the element concerned increases. If a parallel beam of X-rays of wavelength λ strikes a set of crystal planes, it is reflected from the different planes, interference occurring between X-rays reflected from adjacent planes. Bragg’s Law states that constructive interference takes place when the difference in path-lengths is equal to an integral number of wavelengths:

2d sin θ = nλ,

in which n is an integer, d is the interplanar distance, and θ is the angle between the incident X-ray and the crystal plane. This angle is called the ‘Bragg Angle,’ and a bright spot will be obtained on an interference pattern at this angle. A dark spot will be obtained, however, if 2d sin θ = mλ, where m is half-integral. The structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces.
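Bragg’s law can be turned around to find the angle at which a bright spot appears. The wavelength and spacing below are assumed illustrative values (roughly a Cu Kα X-ray wavelength and a rock-salt plane spacing):

```python
import math

wavelength = 1.54e-10  # X-ray wavelength, m (assumed, ~Cu K-alpha)
d = 2.82e-10           # interplanar distance, m (assumed, illustrative)

def bragg_angle_deg(n=1):
    """Angle satisfying 2 d sin(theta) = n lambda, or None if no reflection."""
    s = n * wavelength / (2 * d)
    return math.degrees(math.asin(s)) if s <= 1.0 else None

# Successive orders of reflection appear at increasing angles:
for n in (1, 2, 3):
    print(n, bragg_angle_deg(n))
```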


An exact calculation of the energies and other properties of the quantum states is possible only for the simplest atoms, but various approximate methods give useful results, e.g., perturbation theory, in which the equations to be solved depart only slightly from those of some problem already solved. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet; certain details have been studied using microwaves. One such detail is the Lamb shift, a small difference in energy between the 2S½ and 2P½ levels of hydrogen; these levels would have the same energy according to the wave mechanics of Dirac. The actual shift can be explained by a correction to the energies based on the theory of the interaction of electromagnetic fields with matter, in which the fields themselves are quantized. Yet other information may be obtained from magnetism and other chemical properties.

The appearance potential is defined as: (1) the potential difference through which an electron must be accelerated from rest to produce a given ion from its parent atom or molecule; or (2) this potential difference multiplied by the electron charge, giving the least energy required to produce the ion. A simple ionizing process gives the ‘ionization potential’ of the substance, for example:

Ar + e ➝ Ar+ + 2e.

Higher appearance potentials may be found for multiply charged ions:

Ar + e ➝ Ar++ + 3e.

The atomic number is the number of protons in the nucleus of an atom, which equals the number of electrons revolving around the nucleus in the neutral atom. The atomic number determines the chemical properties of an element and the element’s position in the periodic table - the classification of the chemical elements, in tabular form, in the order of their atomic numbers. The elements show a periodicity of properties, chemically similar elements recurring in a definite order. The sequence of elements is thus broken into horizontal ‘periods’ and vertical ‘groups,’ the elements in each group showing close chemical analogies, i.e., in valency, chemical properties, etc. All the isotopes of an element have the same atomic number, although different isotopes have different mass numbers.

An atomic orbital is an allowed ‘wave function’ of an electron in an atom, obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e2/r, where e is the electron charge and r its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead, the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|2dτ is the probability of finding the electron in the element of volume dτ.

Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|2 varies with position; each also has an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers, similar to those characterizing the allowed orbits in the quantum theory of the atom. n, the ‘principal quantum number’, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called ‘shells’ and designated the K, L, M shells, etc. l, the ‘azimuthal quantum number’, for a given value of n can have values of 0, 1, 2, . . . , (n ‒ 1). Thus, for example, the M shell (n = 3) has three subshells, with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron; the orbital angular momentum of an electron is given by:

√[l(l + 1)] (h/2π)

'm', the 'magnetic quantum number', for a given value of 'l' can have values of ‒l, ‒(l ‒ 1), . . . , 0, . . . , (l ‒ 1), l. Thus for a 'p' orbital, for which l = 1, there are in fact three different orbitals, with m = ‒1, 0, and 1. These orbitals, with the same values of 'n' and 'l' but different 'm' values, have the same energy. The significance of this quantum number is that it shows the number of different levels that would be produced if the atom were subjected to an external magnetic field.

According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of about 5 x 10⁻¹¹ metre. Indeed, the maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron. Notably, although 's' orbitals are spherical (l = 0), orbitals with l > 0 have an angular dependence. Finally, the electron in an atom can have a fourth quantum number, 'ms', characterizing its spin direction. This can be + ½ or ‒ ½, and according to the Pauli exclusion principle each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
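
The shell structure described above can be checked by direct counting. A minimal sketch (not from the source) that enumerates the orbitals allowed by the rules n ≥ 1; l = 0 . . . (n ‒ 1); m = ‒l . . . +l; with two spin states per orbital:

```python
# Enumerate allowed orbitals from the quantum-number rules described above:
# n = 1, 2, 3, ...;  l = 0 .. n-1;  m = -l .. +l;  two spin states per orbital.

def orbitals(n):
    """Return the (l, m) pairs allowed in shell n."""
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

def shell_capacity(n):
    """Maximum number of electrons in shell n (Pauli: 2 per orbital)."""
    return 2 * len(orbitals(n))

for n, name in [(1, "K"), (2, "L"), (3, "M")]:
    print(name, shell_capacity(n))   # 2, 8, 18
```

The capacities 2, 8, 18 (in general 2n²) match the lengths of the early periods of the periodic table.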

The wavelength is the least distance in a progressive wave between two surfaces with the same phase. If 'v' is the phase speed and 'ν' the frequency, the wavelength is given by v = νλ. For electromagnetic radiation the phase speed and wavelength in a material medium are equal to their values in free space divided by the 'refractive index'. The wavelengths of spectral lines are normally specified for free space.

Optical wavelengths are measured absolutely using interferometers or diffraction gratings, or comparatively using a prism spectrometer. The wavelength can only have an exact value for an infinite wave train. If an atomic body emits a quantum in the form of a train of waves of duration τ, the fractional uncertainty of the wavelength, Δλ/λ, is approximately λ/2πcτ, where 'c' is the speed in free space. This is associated with the indeterminacy of the energy given by the uncertainty principle.
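
As a rough numerical illustration (the wavelength and train duration below are assumed values, not from the source), the relation Δλ/λ ≈ λ/(2πcτ) gives, for a visible line:

```python
import math

# Fractional wavelength uncertainty of a finite wave train,
# using d_lambda/lambda ~ lambda / (2*pi*c*tau).
c = 2.998e8        # speed of light in free space, m/s
lam = 500e-9       # assumed wavelength, 500 nm (visible light)
tau = 1e-8         # assumed duration of the emitted wave train, s

frac = lam / (2 * math.pi * c * tau)
print(frac)        # a few parts in 10^8 -- very sharp, but not exact
```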

The wave function is a mathematical quantity analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is within the volume element dV. Matter waves are a set of waves that represent the behaviour, under appropriate conditions, of a particle, e.g., its diffraction by a crystal. The wavelength is given by the 'de Broglie equation'. They are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. These waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. Note that 'Ψ' is often a complex quantity.

The analogy between ‘Ψ’ and the amplitude of a wave is purely formal. There is no macroscopic physical quantity with which ‘Ψ’ can be identified, in contrast with, for example, the amplitude of an electromagnetic wave, which is expressed in terms of electric and magnetic field intensities.

Overall, there are an infinite number of functions satisfying a wave equation, but only some of these will satisfy the boundary conditions: 'Ψ' must be finite and single-valued at every point, and the spatial derivatives must be continuous at an interface. For a particle subject to a law of conservation of numbers, the integral of |Ψ|²dV over all space must remain equal to 1, since this is the probability that the particle exists somewhere. To satisfy this condition the wave equation must be of the first order in (dΨ/dt). Wave functions obtained when these conditions are applied form a set of characteristic functions of the Schrödinger wave equation. These are often called eigenfunctions and correspond to a set of fixed energy values, called eigenvalues, in which the system may exist; energy eigenfunctions describe stationary states of the system. For certain bound states of a system the eigenfunctions do not change sign on reversing the co-ordinate axes. These states are said to have even parity. For other states the sign changes on space reversal and the parity is said to be odd.

Eigenvalue problems in physics take the form:

ΩΨ = λΨ

Where Ω is some mathematical operation (multiplication by a number, differentiation, etc.) on a function Ψ, which is called the 'eigenfunction'. λ is called the 'eigenvalue', which in a physical system will be identified with an observable quantity. Analogously, a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers: it is one of the various quantum states that can be assumed by an atom.

Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations, each of which describes the motion of one of the oscillators in terms of the positions of all the others. A 'harmonic' solution may be sought, in which each displacement is assumed to undergo simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where 'N' is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a 'matrix' equation of the form:

Mχ = ω²χ,

Where 'M' is a 3N x 3N matrix called the dynamical matrix, χ is a 3N x 1 column matrix, and ω² is the squared angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions χ, which are the normal modes of the system, with corresponding eigenvalues ω². As χ can be expressed as a column vector, χ is a vector in a finite-dimensional vector space. For this reason, χ is also often called an eigenvector.
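
A minimal sketch of this recasting, assuming the standard textbook case of two identical unit masses coupled to fixed walls and to each other by unit springs (an example of my own, not from the source):

```python
import numpy as np

# Two unit masses joined to walls and to each other by unit springs:
# the coupled equations x1'' = -2*x1 + x2 and x2'' = x1 - 2*x2 reduce
# to the matrix eigenvalue problem  M @ chi = omega^2 * chi.
M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # dynamical matrix

omega_sq, modes = np.linalg.eigh(M)  # eigenvalues and normal-mode vectors
print(omega_sq)       # [1. 3.] -- in-phase and out-of-phase modes
print(modes[:, 0])    # first eigenvector: both masses move together
```

The two eigenvalues ω² = 1 and ω² = 3 are the squared frequencies of the in-phase and out-of-phase normal modes.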

When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. By the symmetry principles of group theory, the symmetry operations of any physical system must possess the properties of a mathematical group. The groups of rotations, both finite and infinite, are important in the analysis of the symmetry of atoms and molecules, and underlie the quantum theory of angular momentum. Eigenvalue problems arising in the quantum mechanics of atomic or molecular systems yield stationary states corresponding to the normal mode oscillations of either electrons in an atom or atoms within a molecule. Angular momentum quantum numbers correspond to a labelling system used to classify these normal modes, and analysing the transitions between them leads to theoretical predictions of atomic or molecular spectra. The symmetry principles of group theory can then be applied to classify the modes accordingly. This kind of analysis requires an appreciation of the symmetry properties of the molecule: the operations (rotations, inversions, etc.) that leave the molecule invariant make up the point group of that molecule. Normal modes sharing the same ω eigenvalues are said to correspond to the irreducible representations of the molecule's point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.

Eigenvalue problems play a particularly important role in quantum mechanics, where physical observables such as location, momentum, energy, etc., are represented by operators (differentiations with respect to a variable, multiplications by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude at a location χ represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is perceptible only after many location detection events have occurred. A measurement of position of a quantum particle may be written symbolically as:

XΨ(χ) = χΨ(χ),

Where Ψ(χ) is said to be an eigenvector of the location operator and 'χ' is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location 'χ', and |Ψ(χ)|² is the probability that the particle will be found in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. Such superposition principles hold generally in physics wherever linear phenomena occur. In elasticity, the principle states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this holds so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion the principle asserts that one set of waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction, so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.

The superposition of two vibrations, y1 and y2, both of frequency ν, produces a resultant vibration of the same frequency, its amplitude and phase being functions of the component amplitudes and phases. Thus if:

y1 = a1 sin(2πνt + δ1)

y2 = a2 sin(2πνt + δ2)

Then the resultant vibration, y, is given by:

y = y1 + y2 = A sin(2πνt + Δ),

Where the amplitude A and phase Δ are both functions of a1, a2, δ1, and δ2.
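
The amplitude A and phase Δ can be computed by adding the component vibrations as complex phasors, a standard equivalent of the trigonometric algebra above. A sketch with assumed component values:

```python
import cmath, math

# Superpose y1 = a1*sin(2*pi*f*t + d1) and y2 = a2*sin(2*pi*f*t + d2)
# by adding them as complex phasors a*exp(i*delta); the resultant has
# amplitude A = |z1 + z2| and phase Delta = arg(z1 + z2).
a1, d1 = 1.0, 0.0            # assumed component amplitude and phase
a2, d2 = 1.0, math.pi / 2    # assumed: equal amplitude, 90 degrees ahead

z = a1 * cmath.exp(1j * d1) + a2 * cmath.exp(1j * d2)
A, Delta = abs(z), cmath.phase(z)
print(A)       # sqrt(2), about 1.414
print(Delta)   # pi/4, about 0.785
```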

The eigenvalue problems of quantum mechanics therefore represent observables by the possible states (position, in the case of χ) that the quantum system can have. The uncertainty principle states that the product of the uncertainty in the resulting value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. An arbitrarily accurate measurement of position is possible, but as a result of the uncertainty principle, subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.

Quantum mechanics, like classical mechanics, may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called wave mechanics (Schrödinger), in which the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states. The matrix form of quantum mechanics is called matrix mechanics (Born and Heisenberg); the operators are represented by matrices acting on eigenvectors.

The relationship between matrix and wave mechanics is similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which has a matrix representation.

Pauli, in 1925, suggested that each electron could exist in two states with the same orbital motion. Uhlenbeck and Goudsmit interpreted these states as due to the spin of the electron about an axis. The electron is assumed to have an intrinsic angular momentum in addition to any angular momentum due to its orbital motion. This intrinsic angular momentum is called 'spin'. It is quantized in values of

√[s(s + 1)] (h/2π),

Where 's' is the 'spin quantum number' and 'h' the Planck constant. For an electron the component of spin in a given direction can have values of + ½ and ‒ ½, leading to the two possible states. An electron with spin behaves like a small magnet and has an intrinsic magnetic moment. The magneton is the fundamental constant in which such moments are expressed: the circulating current created by the angular momentum 'p' of an electron moving in its orbit produces a magnetic moment μ = ep/2m, where 'e' and 'm' are the charge and mass of the electron. Substituting the quantized relation p = jh/2π (h = the Planck constant; j = magnetic quantum number) gives μ = jeh/4πm. When j is taken as unity, the quantity eh/4πm is called the Bohr magneton; its value is:

9.274 078 x 10⁻²⁴ A m².
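
The value quoted above can be reproduced directly from the defining relation μB = eh/4πm; the sketch below assumes CODATA-style values for e, h, and the electron mass:

```python
import math

# Bohr magneton mu_B = e*h / (4*pi*m), as derived above with j = 1.
e = 1.602177e-19     # electron charge, C
h = 6.626076e-34     # Planck constant, J s
m = 9.109390e-31     # electron mass, kg

mu_B = e * h / (4 * math.pi * m)
print(mu_B)          # about 9.274e-24 A m^2, matching the quoted value
```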

According to the wave mechanics of Dirac, the magnetic moment associated with the spin of the electron should be exactly one Bohr magneton, although quantum electrodynamics shows that a small difference can be expected. The nuclear magneton, 'μN', is equal to (me/mp)μB, where mp is the mass of the proton. The value of μN is:

5.050 824 x 10⁻²⁷ A m².

The magnetic moment of a proton is, in fact, 2.792 85 nuclear magnetons. Two states of different energy result from interactions between the magnetic field due to the electron's spin and that caused by its orbital motion. These are two closely spaced states resulting from the two possible spin directions, and they lead to the two lines in the doublet.

In an external magnetic field the angular momentum vector of the electron precesses. As an explicative example, if a body spins about its axis of symmetry OC (where O is a fixed point) and C rotates round an axis OZ fixed outside the body, the body is said to be precessing round OZ. OZ is the precession axis. A gyroscope precesses due to an applied torque called the precessional torque. If the moment of inertia of the body about OC is I and its angular velocity is ω, a torque 'K', whose axis is perpendicular to the axis of rotation, will produce an angular velocity of precession Ω about an axis perpendicular to both ω and the torque axis, where:

Ω = K/Iω.

Not all orientations of the angular momentum vector to the field direction are allowed: there is a quantization, so that the component of the angular momentum along the field direction is restricted to certain values of h/2π. The angular momentum vector has allowed directions such that the component is mS(h/2π), where mS is the magnetic spin quantum number. For a given value of s, mS has the values s, (s ‒ 1), . . . , ‒s. For example, if s = 1, mS is 1, 0, and ‒1. The electron has a spin of ½ and thus mS is + ½ and ‒ ½. Thus, the components of its spin angular momentum along the field direction are ± ½(h/2π). This phenomenon is called 'space quantization'.

The resultant spin of a number of particles is the vector sum of the spins (s) of the individual particles and is given the symbol S. For example, in an atom two electrons with spins of ½ could combine to give a resultant spin of S = ½ + ½ = 1 or a resultant of S = ½ ‒ ½ = 0.

Alternative symbols used for spin are J (for elementary particles) and I (for a nucleus). Most elementary particles have a non-zero spin, which may be either integral or half-integral. The spin of a nucleus is the resultant of the spins of its constituent nucleons.


Angular momentum is the moment of momentum about an axis. Symbol: L. It is the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a 'pseudo-vector quantity' and is conserved in an isolated system. The moment of inertia of a body about an axis is the sum of the products of the mass of each particle of the body and the square of its perpendicular distance from the axis; this addition is replaced by an integration in the case of a continuous body. For a rigid body moving about a fixed axis, the laws of motion have the same form as those of rectilinear motion, with moment of inertia replacing mass, angular velocity replacing linear velocity, angular momentum replacing linear momentum, etc. Hence the energy of a body rotating about a fixed axis with angular velocity ω is ½Iω², which corresponds to ½mv² for the kinetic energy of a body of mass 'm' translated with velocity 'v'.

The linear momentum of a particle, 'p', is the product of the mass and the velocity of the particle. It is a 'vector' quantity directed through the particle. The linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass 'M' is translated (the movement of a body or system in such a way that all points are moved in parallel directions through equal distances) with a velocity 'V', its momentum is 'MV', which is the momentum of a particle of mass 'M' at the centre of gravity of the body. The corresponding rotational quantity, the angular momentum, is the product of the moment of inertia and the angular velocity, the angular velocity being equal to the linear velocity divided by the radius.

If the moment of inertia of a body of mass 'M' about an axis through the centre of mass is I, the moment of inertia about a parallel axis distance 'h' from the first axis is I + Mh². If the radius of gyration is 'k' about the first axis, it is √(k² + h²) about the second. The moment of inertia of a uniform solid body about an axis of symmetry is given by the product of the mass and the sum of the squares of the other semi-axes, divided by 3, 4, or 5 according to whether the body is rectangular, elliptical, or ellipsoidal (Routh's rule).
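
As a check on the parallel-axis rule I + Mh², consider a uniform rod (an assumed example, not from the source): its moment about the centre is ML²/12, and shifting the axis to one end (h = L/2) should give ML²/3:

```python
# Parallel-axis theorem: I_parallel = I_cm + M*h^2, illustrated for a
# uniform rod of mass M and length L (an assumed textbook example).
M, L = 2.0, 3.0                 # assumed mass (kg) and length (m)

I_cm = M * L**2 / 12            # about the centre of mass
I_end = I_cm + M * (L / 2)**2   # axis shifted to one end, h = L/2

print(I_end)                    # equals M*L^2/3
print(I_end == M * L**2 / 3)
```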

The circle is a special case of the ellipse. Routh's rule works for a circular or elliptical cylinder or an elliptical disc about all three axes of symmetry. For example, for a circular disc of radius 'a' and mass 'M', the moment of inertia about an axis through the centre of the disc and lying (a) perpendicular to the disc, (b) in the plane of the disc is:

(a) ¼M(a² + a²) = ½Ma²

(b) ¼Ma².

A formula for calculating moments of inertia is:

I = mass x [a²/(3 + n) + b²/(3 + nʹ)],

Where n and nʹ are the numbers of principal curvatures of the surface that terminate the semiaxes in question and 'a' and 'b' are the lengths of the semiaxes. Thus, if the body is a rectangular parallelepiped, n = nʹ = 0, and:

I = mass x (a²/3 + b²/3).

If the body is a cylinder then, for an axis through its centre, perpendicular to the cylinder axis, n = 0 and nʹ = 1, so that:

I = mass x (a²/3 + b²/4).

If 'I' is desired about the axis of the cylinder, then n = nʹ = 1 and a = b = r (the cylinder radius), and I = mass x (r²/2).
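
The general formula and the cases worked above can be bundled into one small function; a sketch (the function and variable names are my own):

```python
# Routh's rule: I = mass * (a^2/(3 + n) + b^2/(3 + n_prime)), where n and
# n_prime count the principal curvatures terminating the semiaxes a and b.
def routh(mass, a, b, n, n_prime):
    return mass * (a**2 / (3 + n) + b**2 / (3 + n_prime))

M, r = 1.0, 2.0
# Circular disc, axis perpendicular to the disc: both semiaxes curved
# (n = n' = 1), giving the 1/2*M*r^2 quoted above.
print(routh(M, r, r, 1, 1))     # 2.0, i.e. 0.5*M*r^2
# Cylinder of half-length a = 3 and radius b = r about a central axis
# perpendicular to the cylinder axis: n = 0, n' = 1 -> M*(a^2/3 + b^2/4).
print(routh(M, 3.0, r, 0, 1))   # 3.0 + 1.0 = 4.0
```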

A matrix is an array of mathematical quantities that is similar to a determinant but differs from it in not having a numerical value in the ordinary sense of the term; it obeys rules of multiplication, addition, etc. An array of mn numbers set out in 'm' rows and 'n' columns is a matrix of order m x n. The separate numbers are usually called elements. Such arrays of numbers, treated as single entities and manipulated by the rules of matrix algebra, are of use whenever simultaneous equations are found, e.g., in changing from one set of Cartesian axes to another set inclined to the first, in quantum theory, and in electrical networks. Matrices are very prominent in the mathematical expression of quantum mechanics.

Matrix mechanics is a mathematical form of quantum mechanics that was developed by Born and Heisenberg simultaneously with, but independently of, wave mechanics. It is equivalent to wave mechanics, but in it the wave function of wave mechanics is replaced by 'vectors' in an abstract space (Hilbert space), and observable quantities of the physical world, such as energy, momentum, co-ordinates, etc., are represented by 'matrices'.

The theory involves the idea that a measurement on a system disturbs, to some extent, the system itself. With large systems this is of no consequence, and such systems can be treated by classical mechanics. On the atomic scale, however, the results depend on the order in which the observations are made. Thus if 'p' denotes an observation of a component of momentum and 'q' an observation of the corresponding co-ordinate, then pq ≠ qp. Here 'p' and 'q' are not physical quantities but operators. In matrix mechanics they obey the relationship (where 'h' is the Planck constant, equal to 6.626 076 x 10⁻³⁴ J s):

pq ‒ qp = ih/2π

The matrix elements are connected with the transition probability between various states of the system.
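
The commutation relation can be illustrated with finite matrix truncations of the harmonic-oscillator position and momentum matrices, a standard construction (sketched here in units with ħ = m = ω = 1, so the expected commutator is simply i):

```python
import numpy as np

# Truncated harmonic-oscillator matrices (hbar = m = omega = 1).
# With the ladder matrix a and its transpose a^T (a-dagger), position and
# momentum are q = (a + a^T)/sqrt(2) and p = i*(a^T - a)/sqrt(2), and the
# commutator q@p - p@q equals i on the diagonal except the last entry,
# an artifact of truncating the infinite matrices.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation matrix

q = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

comm = q @ p - p @ q
print(np.diag(comm).round(10))   # i on the first N-1 diagonal entries
```

The failure of the last diagonal element shows why the exact relation pq ‒ qp = ih/2π needs infinite matrices: no finite pair of matrices can satisfy it, since the trace of a commutator of finite matrices is always zero.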

A vector is a quantity with magnitude and direction. It can be represented by a line whose length is proportional to the magnitude and whose direction is that of the vector, or by three components in a rectangular co-ordinate system. For unit vectors, the scalar product is zero when the angle between them is 90° and one when they are parallel.

A true vector, or polar vector, involves a displacement or virtual displacement. Polar vectors include velocity, acceleration, force, and electric and magnetic field strength. The signs of their components are reversed on reversing the co-ordinate axes. Their dimensions include length to an odd power.

A pseudo-vector, or axial vector, involves the orientation of an axis in space. The direction is conventionally obtained in a right-handed system by sighting along the axis so that the rotation appears clockwise. Pseudo-vectors include angular velocity, vector area, and magnetic flux density. The signs of their components are unchanged on reversing the co-ordinate axes. Their dimensions include length to an even power.

Polar vectors and axial vectors obey the same laws of vector analysis. (a) Vector addition: If two vectors 'A' and 'B' are represented in magnitude and direction by the adjacent sides of a parallelogram, the diagonal represents the vector sum (A + B) in magnitude and direction; forces, velocities, etc., combine in this way. (b) Vector multiplication: There are two ways of multiplying vectors. (i) The 'scalar product' of two vectors equals the product of their magnitudes and the cosine of the angle between them, and is a scalar quantity. It is usually written:

A • B ( reads as A dot B )

(ii) The vector product of two vectors A and B is defined as a pseudo-vector of magnitude AB sin θ, having a direction perpendicular to the plane containing them. The sense of the product along this perpendicular is defined by the rule: if 'A' is turned toward 'B' through the smaller angle, this rotation appears clockwise when sighted along the direction of the vector product. A vector product is usually written:

A x B ( reads as A cross B ).

Vectors should be distinguished from scalars by printing the symbols in bold italic letters.
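
Both products, and the perpendicularity of the vector product, can be illustrated numerically (the example vectors are assumed):

```python
import numpy as np

# Scalar (dot) and vector (cross) products for two example vectors.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])

print(np.dot(A, B))      # 0.0: the vectors are perpendicular (cos 90 = 0)
C = np.cross(A, B)
print(C)                 # [0. 0. 1.]: perpendicular to both, right-handed
print(np.dot(C, A), np.dot(C, B))   # both 0 -- C is normal to the plane of A and B
```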

A unified field theory seeks to unite the properties of gravitational, electromagnetic, weak, and strong interactions to predict all their characteristics. At present it is not known whether such a theory can be developed, or whether the physical universe is amenable to a single analysis in terms of the current concepts of physics. There are unsolved problems in using the framework of a relativistic quantum field theory to encompass the four fundamental interactions. It may be that theories using extended objects, such as superstring and supersymmetric theories, will enable such a future synthesis to be achieved.

A Grand Unified Theory is a unified quantum field theory of the electromagnetic, weak, and strong interactions. In most models, the known interactions are viewed as a low-energy manifestation of a single unified interaction, the unification taking place at energies (typically 10¹⁵ GeV) very much higher than those currently accessible in particle accelerators. One feature of Grand Unified Theories is that 'baryon' number and 'lepton' number would no longer be absolutely conserved quantum numbers, with the consequence that such processes as 'proton decay' (for example, the decay of a proton into a positron and a π0, p → e+π0) would be expected to be observed. Predicted lifetimes for proton decay are very long, typically 10³⁵ years. Searches for proton decay are being undertaken by many groups, using large underground detectors, so far without success.

Gravitation is one of the mutual attractions binding the universe into a totality, independent of electromagnetism and of the strong and weak nuclear interactions. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetric, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law is consistent with Kepler's laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances, but in the past few years evidence has been found against both.

The size of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance 'χ' from a point mass 'm' is therefore Gm/χ², and acts toward 'm'. Gravitational field strength is measured in newtons per kilogram. The gravitational potential 'V' at a point is the work done in moving a unit mass from infinity to the point against the field. Due to a point mass:

V = Gm ∫ (from ∞ to χ) dχ/χ² = ‒Gm/χ.

V is a scalar, measured in joules per kilogram. The following special cases are also important: (a) The potential at a point distance χ from the centre of a hollow homogeneous spherical shell of mass 'm' and outside the shell is:

V = ‒Gm/χ.

The potential is the same as if the mass of the shell were assumed concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:

V = ‒Gm/r

Where 'r' is the radius of the shell. Thus, there is no resultant force acting at any point inside the shell, since no potential difference acts between any two points. (c) The potential at a point distance 'χ' from the centre of a homogeneous solid sphere and outside the sphere is the same as that for a shell:

V = ‒Gm/χ

(d) At a point inside the sphere of radius 'r':

V = ‒Gm(3r² ‒ χ²)/2r³.
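
Cases (c) and (d) join smoothly at the surface χ = r, and the potential at the centre is 3/2 times its surface value, as a quick numerical sketch shows (the mass and radius below are assumed, roughly Earth-like values):

```python
# Gravitational potential of a homogeneous solid sphere of mass m, radius r:
# outside (chi >= r): V = -G*m/chi;  inside: V = -G*m*(3*r^2 - chi^2)/(2*r^3).
G = 6.674e-11         # gravitational constant, N m^2 kg^-2
m, r = 5.0e24, 6.0e6  # assumed mass (kg) and radius (m)

def V(chi):
    if chi >= r:
        return -G * m / chi
    return -G * m * (3 * r**2 - chi**2) / (2 * r**3)

print(V(r))            # surface value, -G*m/r (both branches agree here)
print(V(0) / V(r))     # 1.5: the central potential is 3/2 the surface value
```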

The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth's gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space-time, causing it to become curved. It is this curvature of space-time, produced in the vicinity of mass, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation only appearing when the gravitational fields become very strong, as with 'black holes' and 'neutron stars', or when very accurate measurements can be made.

Another universally binding characteristic is the electromagnetic interaction between elementary particles, arising as a consequence of their associated electric and magnetic fields. The electrostatic force between charged particles is an example. This force may be described in terms of the exchange of virtual photons: because of the uncertainty principle, it is possible for the law of conservation of mass and energy to be broken by an amount ΔE, providing this only occurs for a time Δt such that:

ΔEΔt ≤ h/4π.

This makes it possible for particles to be created for short periods of time where their creation would normally violate the conservation of energy. These particles are called 'virtual particles'. For example, in a complete vacuum, in which no 'real' particles exist, pairs of virtual electrons and positrons are continuously forming and rapidly disappearing (in less than 10⁻²³ seconds). Other conservation laws, such as those applying to angular momentum, isospin, etc., cannot be violated even for short periods of time.

Because its strength lies between those of the strong and weak nuclear interactions, particles decaying by the electromagnetic interaction do so with lifetimes shorter than those decaying by the weak interaction but longer than those decaying under the influence of the strong interaction. An example of electromagnetic decay is:

π0 → γ + γ.

This decay process, with a mean lifetime of 8.4 x 10⁻¹⁷ seconds, may be understood as the annihilation of the quark and the antiquark making up the π0 into a pair of photons. The quantum numbers that have to be conserved in electromagnetic interactions are angular momentum, charge, baryon number, isospin quantum number I3, strangeness, charm, parity, and charge conjugation parity.

Quantum electrodynamic descriptions of photon-mediated electromagnetic interactions have been verified over a great range of distances and have led to highly accurate predictions. Quantum electrodynamics is a 'gauge theory': the electromagnetic force can be derived by requiring that the equation describing the motion of a charged particle remain unchanged in the course of local symmetry operations. Specifically, if the phase of the wave function by which a charged particle is described is altered independently at each point in space, quantum electrodynamics requires that the electromagnetic interaction and its mediating photon exist in order to maintain symmetry.

A kind of interaction between elementary particles that is weaker than the strong interaction by a factor of about 10¹². When strong interactions can occur in reactions involving elementary particles, the weak interactions usually go unobserved. However, sometimes strong and electromagnetic interactions are prevented because they would violate the conservation of some quantum number, e.g., strangeness, that has to be conserved in such reactions. When this happens, weak interactions may still occur.

The weak interaction operates over an extremely short range (about 2 × 10⁻¹⁸ m); it is mediated by the exchange of a very heavy particle (a gauge boson), which may be the charged W+ or W‒ particle (mass about 80 GeV/c²) or the neutral Z0 particle (mass about 91 GeV/c²). The gauge bosons that mediate the weak interaction are analogous to the photon that mediates the electromagnetic interaction. Weak interactions mediated by W particles involve a change in the charge, and hence the identity, of the reacting particle. The neutral Z0 does not lead to such a change in identity. Both sorts of weak interaction can violate parity.
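The quoted range follows from treating it as the reduced Compton wavelength of the exchanged boson, R ≈ ħc/(Mc²). This is a standard estimate, assumed here as a sketch rather than taken from the text, using the boson masses given above.

```python
# The range of a force mediated by a massive boson can be estimated
# as the reduced Compton wavelength R ~ hbar*c / (M*c^2).
HBAR_C_MEV_FM = 197.326_98   # hbar*c in MeV fm
FM_TO_M = 1e-15              # femtometres to metres

def force_range_m(boson_mass_gev):
    """Approximate range (in metres) of a force whose mediator has
    the given rest mass in GeV/c^2."""
    mass_mev = boson_mass_gev * 1000.0
    return (HBAR_C_MEV_FM / mass_mev) * FM_TO_M

print(f"W boson (80 GeV): range ~ {force_range_m(80.0):.1e} m")
print(f"Z boson (91 GeV): range ~ {force_range_m(91.0):.1e} m")
```

The W mass gives roughly 2.5 × 10⁻¹⁸ m, consistent with the figure quoted above.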

Most of the long-lived elementary particles decay as a result of weak interactions. For example, the kaon decay K+ ➝ μ+ vμ may be thought of as being due to the annihilation of the u quark and the strange antiquark in the K+ to produce a virtual W+ boson, which then converts into a positive muon and a neutrino. This decay cannot proceed by the strong or electromagnetic interaction because strangeness is not conserved. Beta decay is the most common example of weak interaction decay. Because the interaction is so weak, particles that can only decay by weak interactions do so relatively slowly, i.e., they have relatively long lifetimes. Other examples of weak interactions include the scattering of neutrinos by other particles and certain very small effects on electrons within the atom.

Understanding of weak interactions is based on the electroweak theory, in which it is proposed that the weak and electromagnetic interactions are different manifestations of a single underlying force, known as the electroweak force. Many of the predictions of the theory have been confirmed experimentally.

A gauge theory, also called quantum flavour dynamics, provides a unified description of both the electromagnetic and weak interactions. In the Glashow-Weinberg-Salam theory, also known as the standard model, electroweak interactions arise from the exchange of photons and of massive charged W+ or W‒ and neutral Z0 bosons of spin 1 between quarks and leptons. The extremely massive charged particle, symbol W+ or W‒, mediates certain types of weak interaction; the neutral Z-particle, or Z boson, symbol Z0, mediates the other types. Both are gauge bosons. The W- and Z-particles were first detected at CERN (1983) by studying collisions between protons and antiprotons with a total energy of 540 GeV in centre-of-mass co-ordinates. The rest masses were determined as about 80 GeV/c² and 91 GeV/c² for the W- and Z-particles respectively, as had been predicted by the electroweak theory.

The interaction strengths of the gauge bosons to quarks and leptons, and the masses of the W and Z bosons themselves, are predicted by the theory in terms of a single parameter, the Weinberg angle θW, which must be determined by experiment. The Glashow-Weinberg-Salam theory successfully describes all existing data from a wide variety of electroweak processes, such as neutrino-nucleon, neutrino-electron, and electron-nucleon scattering. A major success of the model was the direct observation in 1983-84 of the W± and Z0 bosons, with the predicted masses of 80 and 91 GeV/c², in high-energy proton-antiproton interactions. The decay modes of the W± and Z0 bosons have been studied in very high-energy proton-antiproton and e+e‒ interactions and found to be in good agreement with the standard model. The six known types (or flavours) of quarks and the six known leptons are grouped into three separate generations of particles as follows:

1st generation: e‒ νe u d

2nd generation: μ‒ νμ c s

3rd generation: τ‒ ντ t b

The second and third generations are essentially copies of the first generation, which contains the electron and the ‘up’ and ‘down’ quarks making up the proton and neutron, but involve particles of higher mass. Communication between the different generations occurs only in the quark sector, and only for interactions involving W± bosons. Studies of Z0 boson production in very high-energy electron-positron interactions have shown that no further generations of quarks and leptons can exist in nature (an arbitrary number of generations is a priori possible within the standard model), provided only that any new neutrinos are approximately massless.
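As an aside, the W and Z masses quoted above fix the Weinberg angle through the tree-level relation m_W = m_Z cos θW, a standard electroweak result assumed here rather than stated in the text. A minimal sketch:

```python
import math

# Tree-level electroweak relation m_W = m_Z * cos(theta_W): the
# Weinberg angle can be extracted from the measured boson masses.
M_W_GEV = 80.0   # W boson mass quoted in the text, GeV/c^2
M_Z_GEV = 91.0   # Z boson mass quoted in the text, GeV/c^2

sin2_theta_w = 1.0 - (M_W_GEV / M_Z_GEV) ** 2
theta_w_deg = math.degrees(math.asin(math.sqrt(sin2_theta_w)))
print(f"sin^2(theta_W) ~ {sin2_theta_w:.3f}, theta_W ~ {theta_w_deg:.1f} deg")
```

This gives sin²θW ≈ 0.23, in line with the value obtained from the scattering experiments mentioned above.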

The Glashow-Weinberg-Salam model also predicts the existence of a heavy spin-0 particle, not yet observed experimentally, known as the Higgs boson. The spontaneous symmetry-breaking mechanism used to generate non-zero masses for the W± and Z bosons in the electroweak theory postulates the existence of two new complex fields, φ(xμ) = φ1 + iφ2 and Ψ(xμ) = Ψ1 + iΨ2, which are functions of the space-time coordinates xμ = x, y, z, t and which form a doublet (φ, Ψ). This doublet of complex fields transforms in the same way as leptons and quarks under electroweak gauge transformations. Such gauge transformations rotate φ1, φ2, Ψ1, Ψ2 into each other without changing the physics.

The vacuum does not share the symmetry of the fields (φ, Ψ), and a spontaneous breaking of the vacuum symmetry occurs via the Higgs mechanism. Consequently, the fields φ and Ψ have non-zero values in the vacuum. A particular orientation of φ1, φ2, Ψ1, Ψ2 may be chosen so that all the components vanish in the vacuum except φ1. This component responds to electroweak fields in a way that is analogous to the response of a plasma to electromagnetic fields. Plasmas oscillate in the presence of electromagnetic waves; however, electromagnetic waves can only propagate at frequencies above the plasma frequency ωp, given by the expression:

ωp² = ne² / mε

where ‘n’ is the electron number density, ‘e’ the electron charge, ‘m’ the electron mass, and ‘ε’ the permittivity of the plasma. In quantum field theory, this minimum frequency for electromagnetic waves may be thought of as a minimum energy for the existence of a quantum of the electromagnetic field (a photon) within the plasma. This minimum energy is equivalent to a mass for the photon, which thereby becomes the field quantum of a finite-range force. Thus, in a plasma, photons acquire a mass and the electromagnetic interaction has a finite range.
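The plasma-frequency formula is easy to evaluate numerically. The sketch below uses SI constants, takes ε to be the vacuum permittivity ε0, and assumes an illustrative electron density of 10¹⁸ per cubic metre.

```python
import math

# Plasma frequency omega_p = sqrt(n e^2 / (m eps0)) for electrons.
# Below omega_p, electromagnetic waves cannot propagate in the plasma.
E_CHARGE = 1.602_176_6e-19    # electron charge, C
M_ELECTRON = 9.109_383_7e-31  # electron mass, kg
EPS0 = 8.854_187_8e-12        # vacuum permittivity, F/m

def plasma_frequency(n_per_m3):
    """Angular plasma frequency (rad/s) for electron density n."""
    return math.sqrt(n_per_m3 * E_CHARGE**2 / (M_ELECTRON * EPS0))

# Assumed illustrative density: a laboratory plasma, 1e18 m^-3.
omega_p = plasma_frequency(1e18)
print(f"omega_p ~ {omega_p:.2e} rad/s (f_p ~ {omega_p/(2*math.pi):.2e} Hz)")
```

For this density the cutoff falls in the microwave region, around 9 GHz; waves below that frequency are reflected by the plasma.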

The vacuum field φ1 responds to weak fields by giving a mass and a finite range to the W± and Z bosons; the electromagnetic field, however, is unaffected by the presence of φ1, so the photon remains massless. The mass acquired by the weak interaction bosons is proportional to the vacuum value of φ1 and to the weak charge strength. A quantum of the field φ1 is an electrically neutral particle called the Higgs boson. It interacts with all massive particles with a coupling that is proportional to their mass. The standard model does not predict the mass of the Higgs boson, but it is known that it cannot be too heavy (not much more than about 1000 proton masses), since this would lead to complicated self-interactions. Such self-interactions are not believed to be present, because the theory does not account for them and yet successfully predicts the masses of the W± and Z bosons. The mass of the Higgs particle results from the same spontaneous symmetry-breaking mechanism used to generate non-zero masses for the W± and Z0 bosons; the Higgs is presumably too massive to have been produced in existing particle accelerators.

We now turn our attention to the third binding force: the strong interaction, whose very name reflects its strength. Interactions between elementary particles involving the strong interaction force are about one hundred times stronger than the electromagnetic force between charged elementary particles. However, the strong force is a short-range force - it is only important for particles separated by a distance of less than about 10⁻¹⁵ m - and is the force that holds protons and neutrons together in atomic nuclei. For ‘soft’ interactions between hadrons, where relatively small transfers of momentum are involved, the strong interactions may be described in terms of the exchange of virtual hadrons, just as electromagnetic interactions between charged particles may be described in terms of the exchange of virtual photons. At a more fundamental level, the strong interaction arises as the result of the exchange of gluons between quarks and/or antiquarks, as described by quantum chromodynamics.

In the hadron exchange picture, any hadron can act as the exchanged particle provided certain quantum numbers are conserved. These quantum numbers are the total angular momentum, charge, baryon number, isospin (both I and I3), strangeness, parity, charge conjugation parity, and G-parity. Strong interactions are investigated experimentally by observing how beams of high-energy hadrons are scattered when they collide with other hadrons. Two hadrons colliding at high energy will only remain near to each other for a very short time. However, during the collision they may come sufficiently close to each other for a strong interaction to occur by the exchange of a virtual particle. As a result of this interaction, the two colliding particles will be deflected (scattered) from their original paths. If the virtual hadron exchanged during the interaction carries some quantum numbers from one particle to the other, the particles found after the collision may differ from those before it. Sometimes the number of particles is increased in a collision.

In hadron-hadron interactions, the number of hadrons produced increases approximately logarithmically with the total centre-of-mass energy, reaching about 50 particles for proton-antiproton collisions at 900 GeV. In some of these collisions, two oppositely directed, collimated ‘jets’ of hadrons are produced, which are interpreted as due to an underlying interaction involving the exchange of an energetic gluon between, for example, a quark from the proton and an antiquark from the antiproton. The scattered quark and antiquark cannot exist as free particles; instead, each ‘fragments’ into a large number of hadrons (mostly pions and kaons) travelling approximately along the original quark or antiquark direction. This results in collimated jets of hadrons that can be detected experimentally. Studies of this and other similar processes are in good agreement with the predictions of quantum chromodynamics.

A particle that, as far as is known, is not composed of other simpler particles. Elementary particles represent the most basic constituents of matter and are also the carriers of the fundamental forces between particles, namely the electromagnetic, weak, strong, and gravitational forces. The known elementary particles can be grouped into three classes: leptons, quarks, and gauge bosons. Hadrons, such strongly interacting particles as the proton and neutron, which are bound states of quarks and/or antiquarks, are also sometimes called elementary particles.

Leptons undergo electromagnetic and weak interactions, but not strong interactions. Six leptons are known: the negatively charged electron, muon, and tau lepton, plus three associated neutrinos: νe, νμ, and ντ. The electron is a stable particle, but the muon and tau leptons decay through the weak interactions with lifetimes of about 10⁻⁶ and 10⁻¹³ seconds respectively. Neutrinos are stable neutral leptons, which interact only through the weak interaction.

Corresponding to the leptons are six quarks, namely the up (u), charm (c), and top (t) quarks, with electric charge equal to +⅔ that of the proton, and the down (d), strange (s), and bottom (b) quarks, of charge -⅓ the proton charge. Quarks have not been observed experimentally as free particles, but reveal their existence only indirectly in high-energy scattering experiments and through patterns observed in the properties of hadrons. They are believed to be permanently confined within hadrons, either in baryons, half-integer spin hadrons containing three quarks, or in mesons, integer spin hadrons containing a quark and an antiquark. The proton, for example, is a baryon containing two up (u) quarks and a down (d) quark, while the π+ is a positively charged meson containing an up quark and an anti-down (d) antiquark. The only hadron that is stable as a free particle is the proton. The neutron is unstable when free. Within a nucleus, protons and neutrons are generally both stable, but either particle may transform into the other by beta decay or capture.

Interactions between quarks and leptons are mediated by the exchange of particles known as ‘gauge bosons’: specifically the photon for electromagnetic interactions, W± and Z0 bosons for the weak interaction, and eight massless gluons in the case of the strong interaction.

A class of eigenvalue problems in physics takes the form ΩΨ = λΨ, where ‘Ω’ is some mathematical operation (multiplication by a number, differentiation, etc.) on a function ‘Ψ’, which is called the ‘eigenfunction’. ‘λ’ is called the eigenvalue, which in a physical system will be identified with an observable quantity. The eigenfunction is analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation; in the most generally accepted interpretation, |Ψ|²dV represents the probability that a particle is located within the volume element dV. A particle of mass ‘m’ moving with a velocity ‘v’ will, under suitable experimental conditions, exhibit the characteristics of a wave of wavelength λ, given by the equation λ = h/mv, where ‘h’ is the Planck constant (6.626 076 × 10⁻³⁴ J s). This equation is the basis of wave mechanics. A set of such waves represents the behaviour, under appropriate conditions, of a particle, e.g., its diffraction by a crystal lattice, with the wavelength given by the de Broglie equation. These waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. They were predicted by de Broglie in 1924 and confirmed in 1927 in the Davisson-Germer experiment.
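The de Broglie relation is easily evaluated. The sketch below, for an electron at an assumed illustrative speed of 10⁶ m/s, gives a wavelength of atomic dimensions, which is why crystal lattices can diffract electrons as in the Davisson-Germer experiment.

```python
# Non-relativistic de Broglie wavelength lambda = h / (m v).
H_PLANCK = 6.626_070_15e-34   # Planck constant, J s
M_ELECTRON = 9.109_383_7e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength of the matter wave associated with a moving particle."""
    return H_PLANCK / (mass_kg * speed_m_s)

# Assumed illustrative speed for the electron: 1e6 m/s.
lam = de_broglie_wavelength(M_ELECTRON, 1e6)
print(f"electron de Broglie wavelength ~ {lam:.2e} m")
```

The result, a few ångströms, is comparable to the spacing between atoms in a crystal.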

Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations. Each differential equation describes the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to undergo simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where ‘N’ is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a matrix equation of the form:

Mχ = ω²χ

where ‘M’ is an N × N matrix called the ‘dynamical matrix’, χ is an N × 1 column matrix, and ω² is the square of an angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions χ, which are the normal modes of the system, and corresponding eigenvalues ω². As χ can be expressed as a column vector, χ is a vector in some N-dimensional vector space. For this reason, χ is often called an eigenvector.
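For a concrete instance of Mχ = ω²χ, consider two equal masses joined by three identical springs between fixed walls - an illustrative system assumed here, not one discussed in the text. The 2 × 2 dynamical matrix can be diagonalized by hand from its characteristic polynomial:

```python
import math

# Normal modes of two equal masses coupled by three identical springs
# (wall-mass-spring-mass-wall): the matrix equation M x = omega^2 x
# with M = (k/m) * [[2, -1], [-1, 2]].
# For a 2x2 symmetric matrix the eigenvalues follow from the
# characteristic polynomial: lambda^2 - tr(M) lambda + det(M) = 0.
k_over_m = 1.0                            # spring constant / mass (chosen as 1)
a, b = 2.0 * k_over_m, -1.0 * k_over_m    # diagonal and off-diagonal entries

trace, det = 2 * a, a * a - b * b
disc = math.sqrt(trace**2 - 4 * det)
omega_sq = sorted([(trace - disc) / 2, (trace + disc) / 2])

print("omega^2 eigenvalues:", omega_sq)
print("normal-mode frequencies:", [math.sqrt(w2) for w2 in omega_sq])
```

The slow mode (ω² = k/m) is the two masses swinging in phase; the fast mode (ω² = 3k/m) is the anti-phase motion that also stretches the middle spring.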

When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. The symmetry principles of ‘group theory’ can then be applied, which classify normal modes according to their ‘ω’ eigenvalues (frequencies). This kind of analysis requires an appreciation of the symmetry properties of the molecule. The sets of operations (Rotations, inversions, etc.) that leave the molecule invariant make up the ‘point group’ of that molecule. Normal modes sharing the same ‘ω’ eigenvalues are said to correspond to the ‘irreducible representations’ of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.

Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physical observables (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude (at a location χ) represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle, a distribution discernible only after many location detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:

XΨ(χ) = χΨ(χ)

where Ψ(χ) is said to be an eigenvector of the location operator and ‘χ’ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location χ, and |Ψ(χ)|² is the probability that the particle will be located in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations of the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞ that occur. The superposition principle, in its mechanical form, states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this holds so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion, the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction, so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.

The eigenvalue problem in quantum mechanics therefore represents the act of measurement. Eigenvectors of an observable represent the possible states (position, in the case of X) that the quantum system can have. Attributes of a quantum system such as position and momentum are related by the Heisenberg uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. Attributes related in this way are called ‘conjugate’ attributes. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.

The eigenvalues are the values that observables take on within these quantum states. As in classical mechanics, eigenvalue problems in quantum mechanics may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called ‘wave mechanics’ (Schrödinger), in which the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states that satisfy some set of boundary conditions. The matrix form of quantum mechanics is often called matrix mechanics (Heisenberg and Born); here the operators are represented by matrices acting on eigenvectors.

The relationship between matrix and wave mechanics is very similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which have a matrix representation.

Once again, consider the Heisenberg uncertainty relation, or indeterminacy principle, of quantum mechanics, which pairs the physical properties of particles in such a way that both members of a pair cannot together be measured to more than a certain degree of accuracy. If ‘A’ and ‘V’ form such a pair, called a conjugate pair, then ΔAΔV > k, where ‘k’ is a constant and ΔA and ΔV are the variances in the experimental values of the attributes ‘A’ and ‘V’. The best-known instance of the equation relates the position and momentum of an electron: ΔpΔχ > h, where ‘h’ is the Planck constant. This is the Heisenberg uncertainty principle. The usual value given for Planck’s constant is 6.6 × 10⁻²⁷ erg s. Since Planck’s constant is not zero, mathematical analysis reveals the following: the ‘spread’, or uncertainty, in position times the ‘spread’, or uncertainty, in momentum is greater than or equal to the value of the constant, or, more accurately, Planck’s constant divided by 2π. If we choose to know momentum exactly, then we know nothing about position, and vice versa.
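Numerically, the principle bites hard at atomic scales. The sketch below uses the modern form Δχ·Δp ≥ ħ/2 (a tighter statement than the order-of-magnitude ΔpΔχ > h quoted above, assumed here) and an assumed illustrative confinement scale of 10⁻¹⁰ m, roughly the size of an atom.

```python
# Minimum momentum spread forced by confining a particle, from the
# uncertainty relation dx * dp >= hbar / 2.
HBAR = 1.054_571_8e-34        # reduced Planck constant, J s
M_ELECTRON = 9.109_383_7e-31  # electron mass, kg

def min_momentum_spread(delta_x_m):
    """Smallest possible momentum uncertainty for position spread dx."""
    return HBAR / (2.0 * delta_x_m)

# Assumed illustrative confinement: an atom-sized region, 1e-10 m.
dp = min_momentum_spread(1e-10)
dv = dp / M_ELECTRON   # corresponding velocity spread for an electron
print(f"dp ~ {dp:.1e} kg m/s, dv ~ {dv:.1e} m/s")
```

Confining an electron to an atom forces a velocity spread of hundreds of kilometres per second, which is why electrons in atoms cannot be pictured as slow, well-localized particles.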

The presence of Planck’s constant means that we confront in quantum physics a situation in which the mathematical theory does not allow precise prediction of, or exist in exact correspondence with, the physical reality. If nature did not insist on making changes or transitions in precise chunks of Planck’s quantum of action, or in multiples of these chunks, there would be no crisis. But whether we regard this indeterminacy as a cancerous growth in the body of an otherwise perfect knowledge of the physical world or as grounds for believing, in principle at least, in human freedom, one thing appears certain: it is an indelible feature of our understanding of nature.

In order to explain further how fundamental the quantum of action is to our present understanding of the life of nature, let us attempt to do what quantum physics says we cannot do and visualize its role in the simplest of all atoms - the hydrogen atom. Imagine standing at the centre of the Sky Dome, at roughly the spot where the pitcher’s mound is. Place a grain of salt on the mound, and picture a speck of dust moving furiously around the outermost reaches of the Sky Dome, with the grain of salt as the centre of its orbit. This represents, roughly, the relative size of the nucleus and the distance between electron and nucleus inside the hydrogen atom when imagined in its particle aspect.
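The analogy can be checked arithmetically. The sketch below takes the Bohr radius and the proton charge radius as the two scales, and assumes a rough 100 m radius for the Sky Dome (an illustrative figure, not a measurement).

```python
# Scaling the hydrogen atom up to stadium size, to check the Sky Dome
# analogy. The atomic lengths are measured values; the stadium radius
# is an assumed illustrative figure.
BOHR_RADIUS = 5.29e-11    # m, scale of the electron-proton separation
PROTON_RADIUS = 0.88e-15  # m, proton charge radius
STADIUM_RADIUS = 100.0    # m, assumed rough Sky Dome scale

ratio = BOHR_RADIUS / PROTON_RADIUS
scaled_nucleus = STADIUM_RADIUS / ratio
print(f"electron orbit / nucleus size ~ {ratio:.0f} to 1")
print(f"nucleus scaled to a {STADIUM_RADIUS:.0f} m stadium: "
      f"{scaled_nucleus * 1000:.1f} mm across")
```

The separation is some sixty thousand times the nuclear size, so the scaled-up nucleus comes out at about a millimetre or two: grain-of-salt sized, as the analogy suggests.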

In quantum physics, however, the hydrogen atom cannot be visualized with such macro-level analogies. The orbit of the electron is not a circle in which a planet-like object moves, and each orbit is described in terms of a probability distribution for finding the electron in an average position corresponding to each orbit, as opposed to an actual position. Without observation or measurement, the electron could in some sense be anywhere or everywhere within the probability distribution. Also, the space between probability distributions is not empty; it is infused with energetic vibrations capable of manifesting themselves as quanta.

The energy levels manifest at certain distances because transitions between orbits occur in terms of precise units of Planck’s constant. If we attempt to observe or measure where the particle-like aspect of the electron is, the existence of Planck’s constant will always prevent us from knowing precisely all the properties of that electron that we might presume to be there in the absence of measurement. As in the two-slit experiment, our presence as observers and what we choose to measure or observe are inextricably linked to the results obtained. Since all complex molecules are built from simpler atoms, what applies to the hydrogen atom applies generally to all material substances.

The grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strict scientific terms. After all, the completeness of all previous physical theories was measured against that criterion with enormous success. Since it was this success that gave physicists the reputation of being able to disclose physical reality with magnificent exactitude, perhaps a more complete quantum theory will emerge by continuing to insist on this requirement.

All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or an unavoidable reality requires a very different criterion for determining the completeness of physical theory. The new measure for a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.

If a theory does so and continues to do so, which is certainly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between physical theory and physical reality. Another measure of success in physical theory is also met by quantum physics - elegance and simplicity. The quantum recipe for computing probabilities given by the wave function is straightforward and can be successfully employed by any undergraduate physics student: take the square of the wave amplitude and compute the probability that what can be measured or observed will have a certain value. Yet there is a profound difference between the recipe for calculating quantum probabilities and the recipe for calculating probabilities in classical physics.

In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the amplitude. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron is going to end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, one would simply add the probabilities of the two alternative ways and let it go at that. That classical procedure does not work here because we are not dealing with classical atoms. In quantum physics, additional terms arise when the wave functions are added and the probability is computed, in a process known as the ‘superposition principle’. The superposition principle can be illustrated with an analogy from simple mathematics: add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3²; the former is 25, and the latter is 13. In the language of quantum probability theory:


|Ψ1 + Ψ2|² ≠ |Ψ1|² + |Ψ2|²

where Ψ1 and Ψ2 are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that are not found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wave-like interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. When we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
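The two recipes can be compared directly with complex amplitudes. The numbers below are illustrative amplitudes chosen for the sketch, not derived from any particular apparatus.

```python
# Quantum vs classical recipes for an event that can happen two ways
# (e.g., an electron passing through either of two slits). Amplitudes
# are complex; probabilities come from squaring the summed amplitude.
psi1 = complex(0.5, 0.3)    # amplitude for path via slit 1 (illustrative)
psi2 = complex(0.4, -0.2)   # amplitude for path via slit 2 (illustrative)

quantum = abs(psi1 + psi2) ** 2               # |psi1 + psi2|^2
classical = abs(psi1) ** 2 + abs(psi2) ** 2   # add probabilities instead
interference = quantum - classical            # the extra cross terms

print(f"quantum recipe:   {quantum:.3f}")
print(f"classical recipe: {classical:.3f}")
print(f"interference:     {interference:.3f}")
```

The difference between the two results is exactly the cross term responsible for the interference pattern; it vanishes when which-path information destroys the superposition.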

In order to give a full account of quantum recipes for computing probabilities, one has to examine what happens with events that are compounded. Compound events are events that can be broken down into a series of steps, or events that consist of a number of things happening independently. The recipe here calls for multiplying the individual wave functions, and then following the usual quantum recipe of taking the square of the amplitude.

The quantum recipe is |Ψ1 · Ψ2|², and in this case the result would be the same if we simply multiplied the individual probabilities, as one would in classical theory. Thus the recipes for computing results in quantum theory and classical physics can be totally different. Quantum superposition effects are completely non-classical, and there is no mathematical justification for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us in countless experiments to extend vastly our ability to co-ordinate experience with nature.
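For compound (independent) events the two recipes agree exactly, since |Ψ1 · Ψ2|² factorizes into |Ψ1|² · |Ψ2|². A minimal check with illustrative amplitudes:

```python
# For independent (compound) events the amplitudes multiply, and
# |psi1 * psi2|^2 equals |psi1|^2 * |psi2|^2 exactly, so the quantum
# and classical recipes agree: no interference terms arise.
psi1 = complex(0.6, 0.2)    # illustrative amplitude for step 1
psi2 = complex(0.3, -0.5)   # illustrative amplitude for step 2

quantum = abs(psi1 * psi2) ** 2                   # multiply, then square
classical = (abs(psi1) ** 2) * (abs(psi2) ** 2)   # multiply probabilities

print(f"quantum:   {quantum:.4f}")
print(f"classical: {classical:.4f}")
```

The agreement here, contrasted with the disagreement in the two-slit case, is the whole difference between compound events and alternative ways of realizing one event.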

The view of probability in the nineteenth century was greatly conditioned and reinforced by classical assumptions about the relationship between physical theory and physical reality. In that century, physicists developed sophisticated statistics to deal with large ensembles of particles before the actual character of those particles was understood. Classical statistics, developed primarily by James C. Maxwell and Ludwig Boltzmann, was used to account for the behaviour of molecules in a gas and to predict the average speed of a gas molecule in terms of the temperature of the gas.

The presumption was that the statistical averages were workable approximations that subsequent physical theories, or better experimental techniques, would disclose with precision and certainty. Since nothing was known about quantum systems, and since quantum indeterminacy is negligible when dealing with macro-level effects, this presumption was quite reasonable. We know, however, that quantum mechanical effects are present in the behaviour of gases and that the choice to ignore them is merely a matter of convenience in getting workable or practical results. It is, therefore, no longer possible to assume that the statistical averages are merely higher-level approximations for a more exact description.

Perhaps the best-known defence of the classical conception of the relationship between physical theory and physical reality is the celebrated animal introduced by the Austrian physicist Erwin Schrödinger (1887-1961) in 1935, in a ‘thought experiment’ showing the strange nature of the world of quantum mechanics. The cat is imagined locked in a box with a capsule of cyanide, which will break if a Geiger counter triggers. This will happen if an atom in a radioactive substance in the box decays, and there is a 50% chance of such an event within an hour; otherwise, the cat remains alive. The problem is that the system is in an indeterminate state. The wave function of the entire system is a ‘superposition’ of states, fully described by the probabilities of events occurring when it is eventually measured, and therefore ‘contains equal parts of the living and dead cat’. When we look, we will find either a breathing cat or a dead cat, but if it is only as we look that the wave packet collapses, quantum mechanics forces us to say that before we looked it was not true that the cat was dead and not true that it was alive. The thought experiment makes vivid the difficulty of conceiving of quantum indeterminacies when these are translated to the familiar world of everyday objects.
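The ‘50% within an hour’ condition fixes the decay law of the atom in the box. A minimal sketch, assuming the standard exponential decay law P(t) = 1 − e^(−λt) (the time values are illustrative):

```python
import math

# The thought experiment specifies a 50% chance of a decay within one hour,
# which for exponential decay fixes the decay constant lam.
half_life = 3600.0                 # seconds (one hour, per the text)
lam = math.log(2) / half_life      # decay constant, 1/s

def p_decay(t):
    """Probability that the atom has decayed (and the capsule broken) by time t."""
    return 1.0 - math.exp(-lam * t)

print(p_decay(3600.0))   # 0.5 by construction, after one hour
print(p_decay(1800.0))   # about 0.293 after half an hour
```

Until the box is opened, quantum mechanics assigns only these probabilities; the superposition itself carries no fact of the matter about which outcome holds.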

The ‘electron’ is a stable elementary particle having a negative charge, e, equal to:

1.602 189 25 × 10⁻¹⁹ C

and a rest mass, m0, equal to:

9.109 389 7 × 10⁻³¹ kg

equivalent to 0.511 0034 MeV/c².

It has a spin of ½ and obeys Fermi-Dirac Statistics. As it does not have strong interactions, it is classified as a ‘lepton’.

The discovery of the electron was reported in 1897 by Sir J.J. Thomson, following his work on the rays from the cold cathode of a gas-discharge tube. It was soon established that particles with the same charge and mass were obtained from numerous substances by the ‘photoelectric effect’, ‘thermionic emission’ and ‘beta decay’. Thus, the electron was found to be part of all atoms, molecules, and crystals.

Free electrons are studied in a vacuum or a gas at low pressure, where beams are emitted from hot filaments or cold cathodes and are subject to ‘focussing’, so that the particles form an electron beam, as in, for example, a cathode-ray tube. The principal methods are: (i) Electrostatic focussing, in which the beam is made to converge by the action of electrostatic fields between two or more electrodes at different potentials. The electrodes are commonly cylinders coaxial with the electron tube, and the whole assembly forms an electrostatic electron lens. The focussing effect is usually controlled by varying the potential of one of the electrodes, called the focussing electrode. (ii) Electromagnetic focussing, in which the beam is made to converge by the action of a magnetic field that is produced by the passage of direct current through a focussing coil. The latter is commonly a coil of short axial length mounted so as to surround the electron tube and to be coaxial with it.

The force FE on an electron in an electric field of strength E is given by FE = Ee and is in the direction of the field. On moving through a potential difference V, the electron acquires a kinetic energy eV; hence it is possible to obtain beams of electrons of accurately known kinetic energy. In a magnetic field of magnetic flux density ‘B’, an electron with speed ‘v’ is subject to a force, FB = Bev sin θ, where θ is the angle between ‘B’ and ‘v’. This force acts at right angles to the plane containing ‘B’ and ‘v’.
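These three relations (FE = Ee, kinetic energy eV, FB = Bev sin θ) can be evaluated directly. In the sketch below the field strengths E_field, V and B are made-up inputs chosen only for illustration; the constants are standard SI values.

```python
import math

e = 1.602e-19          # electron charge, C
m = 9.109e-31          # electron rest mass, kg

E_field = 1.0e4        # electric field strength, V/m (assumed value)
F_E = e * E_field      # force F_E = Ee, newtons, along the field

V = 100.0              # accelerating potential difference, volts (assumed)
KE = e * V             # kinetic energy gained, joules
v = math.sqrt(2 * KE / m)   # non-relativistic speed after acceleration

B = 0.01               # magnetic flux density, tesla (assumed)
theta = math.pi / 2    # angle between B and v
F_B = B * e * v * math.sin(theta)   # force F_B = Bev sin(theta)

print(F_E, KE, v, F_B)
```

A 100 V potential difference already gives a speed near 6 × 10⁶ m/s, which is why the relativistic correction discussed next matters at quite moderate energies.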

The mass of any particle increases with speed according to the theory of relativity. If an electron is accelerated from rest through 5 kV, its mass is 1% greater than it is at rest. Thus, account must be taken of relativity in calculations on electrons with quite moderate energies.
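The 1% figure can be checked from the rest energy quoted earlier (m0c² ≈ 511 keV): the relativistic mass factor is γ = 1 + eV/(m0c²).

```python
# Check of the 1% figure: an electron accelerated through 5 kV gains
# kinetic energy eV = 5 keV, so its mass grows by the factor
# gamma = 1 + eV / (m0 c^2), with the rest energy m0 c^2 = 511 keV.
rest_energy_keV = 511.0
kinetic_keV = 5.0
gamma = 1.0 + kinetic_keV / rest_energy_keV
print(gamma)  # about 1.0098, i.e. roughly 1% heavier than at rest
```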

According to ‘wave mechanics’, a particle with momentum ‘mv’ exhibits diffraction and interference phenomena, similar to a wave with wavelength λ = h/mv, where ‘h’ is the Planck constant. For electrons accelerated through a few hundred volts, this gives wavelengths rather less than typical interatomic spacings in crystals. Hence, a crystal can act as a diffraction grating for electron beams.
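The claim about ‘a few hundred volts’ can be verified with λ = h/mv, taking the non-relativistic momentum mv = √(2meV) for an electron accelerated through V volts:

```python
import math

# de Broglie wavelength lambda = h / (m v) for an electron accelerated
# through V volts; non-relativistically, the momentum is sqrt(2 m e V).
h = 6.626e-34      # Planck constant, J s
m = 9.109e-31      # electron mass, kg
e = 1.602e-19      # electron charge, C

def de_broglie_wavelength(V):
    momentum = math.sqrt(2 * m * e * V)
    return h / momentum

# At 150 V the wavelength is about 1.0e-10 m (0.1 nm), below typical
# interatomic spacings, so a crystal can diffract the beam.
print(de_broglie_wavelength(150.0))
```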

Because electrons are associated with a wavelength λ given by λ = h/mv, where ‘h’ is the Planck constant and mv the momentum of the electron, a beam of electrons suffers diffraction in its passage through crystalline material, similar to that experienced by a beam of X-rays. The diffraction pattern depends on the spacing of the crystal planes, and the phenomenon can be employed to investigate the structure of surface and other films. The relation λ = h/mv, the ‘de Broglie equation’, is the basis of wave mechanics: a set of waves represents the behaviour, under appropriate conditions, of a particle, e.g., its diffraction by a crystal lattice. These are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point.

The first experiment to demonstrate ‘electron diffraction’, and hence the wavelike nature of particles, was performed by Davisson and Germer in 1927. A narrow pencil of electrons from a hot filament cathode was projected in vacuo onto a nickel crystal. The experiment showed the existence of a definite diffracted beam at one particular angle, which depended on the velocity of the electrons. Assuming this to be the Bragg angle (the angle from which the structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces), the wavelength of the electrons was calculated and found to be in agreement with the ‘de Broglie equation’.

At kinetic energies of less than a few electronvolts, electrons undergo elastic collisions with atoms and molecules: because of the large ratio of the masses and the conservation of momentum, only an extremely small transfer of kinetic energy occurs. Thus, the electrons are deflected but not slowed appreciably. At slightly higher energies collisions are inelastic. Molecules may be dissociated, and atoms and molecules may be excited or ionized. The ionization potential is the least energy that causes an ionization:

A ➝ A+ + e‒

where the ion and the electron are far enough apart for their electrostatic interaction to be negligible and no extra kinetic energy is removed. The electron removed is that in the outermost orbit, i.e., the least strongly bound electron. It is also possible to consider removal of electrons from inner orbits, in which their binding energy is greater. Excited particles and recombining ions emit electromagnetic radiation, mostly in the visible or ultraviolet.

For electron energies of the order of several keV upwards, X-rays are generated. Electrons of high kinetic energy travel considerable distances through matter, leaving a trail of positive ions and free electrons. The energy is mostly lost in small increments (about 30 eV), with only an occasional major interaction causing X-ray emission. The range increases at higher energies. The positron is the antiparticle of the electron, i.e., an elementary particle with electron mass and positive charge equal to that of the electron. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy can be given, an electron may be raised into a state of positive energy and become observable. The vacant state of negative energy behaves as a positive particle of positive energy, which is observed as a positron.

The simultaneous formation of a positron and an electron from a photon is called ‘pair production’, and occurs when a gamma-ray photon with an energy of at least 1.02 MeV passes close to an atomic nucleus. In the inverse process, annihilation, the interaction between a particle and its antiparticle makes both disappear, and photons or other elementary particles and antiparticles are created, in accordance with energy and momentum conservation.

At low energies, an electron and a positron annihilate to produce electromagnetic radiation. Usually the particles have little kinetic energy or momentum in the laboratory system before interaction; hence the total energy of the radiation is nearly 2m0c², where m0 is the rest mass of an electron. In nearly all cases two photons are generated, each of 0.511 MeV, in almost exactly opposite directions to conserve momentum. Occasionally, three photons are emitted, all in the same plane. Electron-positron annihilation at high energies has been extensively studied in particle accelerators. Generally, the annihilation results in the production of a quark and an antiquark, or of a charged lepton plus an antilepton (e+e‒ ➝ μ+μ‒). The quarks and antiquarks do not appear as free particles but convert into several hadrons, which can be detected experimentally. As the energy available in the electron-positron interaction increases, quarks and leptons of progressively larger rest mass can be produced. In addition, striking resonances are present, which appear as large increases in the rate at which annihilations occur at particular energies. The J/ψ particle and similar resonances containing a charmed quark and antiquark are produced at an energy of about 3 GeV, for example, giving rise to abundant production of charmed hadrons. Bottom (b) quark production occurs at energies greater than about 10 GeV. A resonance at an energy of about 90 GeV, due to the production of the Z0 gauge boson involved in the weak interaction, is currently under intensive study at the LEP and SLC e+e‒ colliders. Accelerators are machines for increasing the kinetic energy of charged particles or ions, such as protons or electrons, by accelerating them in an electric field. A magnetic field is used to maintain the particles in the desired direction. The particles can travel in straight, spiral, or circular paths. At present, the highest energies are obtained in the proton synchrotron.
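The 0.511 MeV photon energy quoted above is just the electron rest energy m0c² expressed in MeV, which is easy to confirm:

```python
# Each photon from low-energy e+ e- annihilation carries the electron rest
# energy m0 c^2; the total radiated energy is close to 2 m0 c^2.
m0 = 9.109e-31       # electron rest mass, kg
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # joules per electronvolt

rest_energy_MeV = m0 * c ** 2 / e / 1.0e6
print(rest_energy_MeV)        # about 0.511 MeV per photon
print(2 * rest_energy_MeV)    # about 1.022 MeV total, matching the
                              # pair-production threshold quoted earlier
```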

The Super Proton Synchrotron at CERN (Geneva) accelerates protons to 450 GeV. It can also cause proton-antiproton collisions with total kinetic energy, in centre-of-mass co-ordinates, of 620 GeV. In the USA the Fermi National Accelerator Laboratory proton synchrotron gives protons and antiprotons of 800 GeV, permitting collisions with total kinetic energy of 1600 GeV. The Large Electron Positron (LEP) system at CERN accelerates particles to 60 GeV.

All the aforementioned devices are designed to produce collisions between particles travelling in opposite directions. This gives effectively very much higher energies available for interaction than are possible with stationary targets. High-energy nuclear reactions occur when the particles collide. The particles created in these reactions are detected by sensitive equipment close to the collision site. New particles, including the tauon and the W and Z particles, requiring enormous energies for their creation, have been detected and their properties determined.

Still, a ‘nucleon’ and an ‘anti-nucleon’ annihilating at low energy produce about half a dozen pions, which may be neutral or charged. By definition, mesons are both hadrons and bosons, just as the pion and kaon are mesons. Mesons have a substructure composed of a quark and an antiquark bound together by the exchange of particles known as gluons.

The conjugate particle, or antiparticle, corresponds to another particle of identical mass and spin, but has quantum numbers such as charge (Q), baryon number (B), strangeness (S), charm, and isospin (I3) of equal magnitude but opposite sign. Examples of a particle and its antiparticle include the electron and positron, the proton and antiproton, the positively and negatively charged pions, and the ‘up’ quark and ‘up’ antiquark. The antiparticle corresponding to a particle with the symbol a is usually denoted ā. When a particle and its antiparticle are identical, as with the photon and neutral pion, it is called a ‘self-conjugate particle’.

The critical potential, or excitation energy, required to change an atom or molecule from one quantum state to another of higher energy is equal to the difference in energy of the states, and is usually the difference in energy between the ground state of the atom and a specified excited state: the state of a system, such as an atom or molecule, when it has a higher energy than its ground state.

The ground state is the state of a system with the lowest energy. An isolated body will remain in it indefinitely, though it is possible for a system to possess two or more ground states of equal energy but with different sets of quantum numbers. In the case of atomic hydrogen there are two such states, for which the quantum numbers n, l, and m are 1, 0, and 0 respectively, while the spin may be +½ or ‒½ with respect to a defined direction. An allowed wave function of an electron in an atom is obtained by a solution of the ‘Schrödinger wave equation’. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/r, where ‘e’ is the electron charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead, the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|² dτ is the probability of locating the electron in the element of volume dτ.

Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position, and each has an associated value of the energy ‘E’. These allowed wave functions, or orbitals, are characterized by three quantum numbers, similar to those characterizing the allowed orbits in the earlier quantum theory of the atom: ‘n’, the principal quantum number, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called ‘shells’ and designated the K, L, M shells, etc. ‘l’, the azimuthal quantum number, for a given value of ‘n’ can have values of 0, 1, 2, . . . (n‒1). An electron in the L shell of an atom, with n = 2, can thus occupy two sub-shells of different energy, corresponding to l = 0 and l = 1. Orbitals with l = 0, 1, 2 and 3 are called s, p, d, and ƒ orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:

√[l(l + 1)] (h/2π).

‘m’, the magnetic quantum number, which for a given value of ‘l’ can have values ‒l, . . ., 0, . . ., +l; a p orbital (l = 1), for example, comprises orbitals with m = ‒1, 0, and +1. These orbitals, with the same values of ‘n’ and ‘l’ but different ‘m’ values, have the same energy. The significance of this quantum number is that it indicates the number of different levels that would be produced if the atom were subjected to an external magnetic field.

According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of ~5 × 10⁻¹¹ metre. Indeed, the maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron.

Finally, the electron in an atom can have a fourth quantum number, ms, characterizing its spin direction. This can be +½ or ‒½, and, according to the ‘Pauli Exclusion Principle’, each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
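The counting rules above (l = 0 . . . n‒1; m = ‒l . . . +l; two spin values per orbital) can be enumerated directly, and doing so reproduces the familiar 2n² capacity of each shell:

```python
# Enumerate the allowed (n, l, m, ms) quantum-number combinations described
# in the text; the count per shell reproduces the 2 n^2 capacity rule.
def shell_states(n):
    states = []
    for l in range(n):                     # l = 0 .. n-1
        for m in range(-l, l + 1):         # m = -l .. +l
            for ms in (+0.5, -0.5):        # Pauli: two spin states per orbital
                states.append((n, l, m, ms))
    return states

for n, name in [(1, "K"), (2, "L"), (3, "M")]:
    print(name, len(shell_states(n)))      # 2, 8, 18 electrons respectively
```

These capacities (2, 8, 18, . . .) are precisely the shell occupancies underlying the periodic table.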

Earlier mentions of the ‘moment’ have referred to such quantities as the moment of inertia and the moment of momentum. The moment of a force about an axis is the product of the perpendicular distance of the axis from the line of action of the force and the component of the force in the plane perpendicular to the axis. The moment of a system of coplanar forces about an axis perpendicular to the plane containing them is the algebraic sum of the moments of the separate forces about that axis, anticlockwise moments conventionally being taken as positive and clockwise ones as negative. The moment of momentum about an axis, symbol L, is the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a pseudo-vector quantity and is conserved in an isolated system; about an axis it is a scalar, and is given a positive or negative sign as in the moment of force. When dealing with systems in which forces and motions do not all lie in one plane, the concept of the moment about a point is needed. The moment of a vector P, e.g., force or momentum, about a point ‘A’ is a pseudo-vector M equal to the vector product of r and P, where r is any line joining ‘A’ to any point ‘B’ on the line of action of P. The vector product M = r × P is independent of the position of ‘B’, and the relation between the scalar moment about an axis and the vector moment about a point on the axis is that the scalar is the component of the vector in the direction of the axis.

The linear momentum of a particle ‘p’ is the product of the mass and the velocity of the particle. It is a vector quantity directed through the particle in the direction of motion. The linear momentum of a body, or of a system of particles, is the vector sum of the linear momenta of the individual particles. If a body of mass ‘M’ is translated with a velocity ‘V’, its momentum is MV, which is the momentum of a particle of mass ‘M’ at the centre of gravity of the body. (1) In any system of mutually interacting or impinging particles, the linear momentum in any fixed direction remains unaltered unless there is an external force acting in that direction. (2) Similarly, the angular momentum is constant in the case of a system rotating about a fixed axis, provided that no external torque is applied.
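Conservation law (1) can be illustrated numerically for two impinging particles. The masses and initial speeds below are made up; the final velocities come from the standard one-dimensional elastic-collision formulas.

```python
# Numerical check of linear-momentum conservation for a one-dimensional
# elastic collision between two particles (masses and speeds are assumed).
m1, u1 = 2.0, 3.0     # kg, m/s
m2, u2 = 1.0, -1.0    # second particle moving the opposite way

# Standard elastic-collision formulas for the final velocities.
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

p_before = m1 * u1 + m2 * u2
p_after = m1 * v1 + m2 * v2
print(p_before, p_after)   # equal: no external force acts on the pair
```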

Subatomic particles fall into two major groups: the elementary particles and the hadrons. An elementary particle is not composed of any smaller particles and therefore represents the most fundamental form of matter. A hadron is composed of smaller particles, including the particles called quarks. The most common subatomic particles include the major constituents of the atom: the electron (an elementary particle) and the proton and the neutron (hadrons). The neutron is a particle with zero charge and a rest mass equal to:

1.674 9542 × 10⁻²⁷ kg,

i.e., 939.5729 MeV/c².

It is a constituent of every atomic nucleus except that of ordinary hydrogen. Free neutrons decay by ‘beta decay’ with a mean life of 914 s. The neutron has spin ½, isospin ½, and positive parity. It is a ‘fermion’ and is classified as a ‘hadron’ because it has strong interactions.

Neutrons can be ejected from nuclei by high-energy particles or photons; the energy required is usually about 8 MeV, although sometimes it is less. Fission is the most productive source. Neutrons are detected using all normal detectors of ionizing radiation, because of the production of secondary particles in nuclear reactions. The discovery of the neutron (Chadwick, 1932) involved the detection of the tracks of protons ejected by neutrons in elastic collisions in hydrogenous materials.

Unlike other nuclear particles, neutrons are not repelled by the electric charge of a nucleus, so they are very effective in causing nuclear reactions. When there is no ‘threshold energy’, the interaction ‘cross sections’ become very large at low neutron energies, and the thermal neutrons produced in great numbers by nuclear reactors cause nuclear reactions on a large scale. The capture of neutrons by the (n, γ) process produces large quantities of radioactive materials, both useful nuclides such as 60Co for cancer therapy and undesirable by-products. The threshold energy is the least energy required to cause a certain process, in particular a reaction in nuclear or particle physics; it is often important to distinguish between the energies required in the laboratory and in centre-of-mass co-ordinates. ‘Fission’ is the splitting of a heavy nucleus of an atom into two or more fragments of comparable size, usually as the result of the impact of a neutron on the nucleus. It is normally accompanied by the emission of neutrons or gamma rays. Plutonium, uranium, and thorium are the principal fissionable elements.

In a nuclear reaction, a reaction between an atomic nucleus and a bombarding particle or photon leads to the creation of a new nucleus and the possible ejection of one or more particles. Nuclear reactions are often represented by enclosing in brackets the symbols for the incoming and outgoing light particles, with the initial and final nuclides shown outside the brackets. For example: 14N (α, p)17O.

Energy from nuclear fission arises because, on the whole, the nuclei of atoms of moderate size are more tightly held together than the largest nuclei, so that if the nucleus of a heavy atom can be induced to split into two nuclei of moderate mass, there will be a considerable release of energy. By Einstein’s law of the equivalence of mass and energy, the mass difference between a nucleus and its separated nucleons is equivalent to the energy released when the nucleons bind together. This energy is the binding energy. The graph of binding energy per nucleon, EB/A, increases rapidly up to a mass number of about 50-60 (iron, nickel, etc.) and then decreases slowly. There are therefore two ways in which energy can be released from a nucleus, both of which entail a rearrangement of nuclei from the less tightly bound ends of the curve towards its more tightly bound middle. Fission is the splitting of heavy atoms, such as uranium, into lighter atoms, accompanied by an enormous release of energy. Fusion of light nuclei, such as deuterium and tritium, releases an even greater quantity of energy per unit mass.

The electron affinity is the work that must be done to detach a single electron from a negative ion, formed when a free electron attaches to an atom or molecule. The process of attachment is sometimes called ‘electron capture’, but that term is more usually applied to nuclear processes. Many atoms, molecules and free radicals form stable negative ions by capturing electrons; the electron affinity is the least amount of work that must be done to separate the electron from the ion again. It is usually expressed in electronvolts.

The uranium isotope 235U will readily accept a neutron, but one-seventh of the resulting nuclei are stabilized by gamma emission while six-sevenths split into two parts. Most of the energy released, amounting to about 170 MeV, is in the form of the kinetic energy of these fission fragments. In addition, an average of 2.5 neutrons of average energy 2 MeV and some gamma radiation are produced. Further energy is released later by radioactivity of the fission fragments. The total energy released is about 3 × 10⁻¹¹ joule per atom fissioned, i.e., 6.5 × 10¹³ joule per kg consumed.
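The per-kilogram figure can be sanity-checked from the per-atom energy and Avogadro's number. The sketch below assumes roughly 200 MeV released per fission (the ~170 MeV of fragment kinetic energy plus neutrons, gammas, and fragment radioactivity) and complete fissioning of pure 235U, so it gives an upper-end, order-of-magnitude estimate rather than the text's exact 6.5 × 10¹³ figure.

```python
# Rough check of the energy-per-kilogram figure for 235U fission.
avogadro = 6.022e23           # atoms per mole
molar_mass = 0.235            # kg per mole of 235U
e = 1.602e-19                 # joules per electronvolt

energy_per_fission = 200e6 * e            # ~3.2e-11 J per atom (assumed 200 MeV)
atoms_per_kg = avogadro / molar_mass
energy_per_kg = energy_per_fission * atoms_per_kg
print(energy_per_kg)          # of order 1e14 J/kg, consistent in magnitude
                              # with the ~6.5e13 J/kg quoted in the text
```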

To extract energy in a controlled manner from fissionable nuclei, arrangements must be made for a sufficient proportion of the neutrons released in the fissions to cause further fissions in their turn, so that the process is continuous; the minimum mass of a fissile material that will sustain a chain reaction is called the critical mass. A reactor with a large proportion of 235U or plutonium 239Pu in the fuel uses the fast neutrons as they are liberated from the fission; such a reactor is called a ‘fast reactor’. Natural uranium contains 0.7% of 235U, and if the liberated neutrons can be slowed before they have much chance of meeting the more common 238U atoms, they can cause further fissions of 235U. To slow the neutrons, a moderator is used, containing light atoms to which the neutrons give kinetic energy by collision. As the neutrons eventually acquire energies appropriate to gas molecules at the temperature of the moderator, they are then said to be thermal neutrons and the reactor is a thermal reactor.

In typical thermal reactors, the fuel elements are rods embedded as a regular array in the bulk of the moderator, so that a typical neutron from a fission process has a good chance of escaping from the relatively thin fuel rod and making many collisions with nuclei in the moderator before again entering a fuel element. Suitable moderators are pure graphite, heavy water (D2O), and ordinary water (H2O); the water moderators are sometimes also used as coolants. Very pure materials are essential, as some unwanted nuclei capture neutrons readily. The reactor core is surrounded by a reflector made of suitable material to reduce the escape of neutrons from the surface. Each fuel element is encased, e.g., in magnesium alloy or stainless steel, to prevent escape of radioactive fission products. The coolant, which may be gaseous or liquid, flows along the channels over the canned fuel elements. There is an emission of gamma rays inherent in the fission process, and many of the fission products are intensely radioactive. To protect personnel, the assembly is surrounded by a massive biological shield of concrete, with an inner iron thermal shield to protect the concrete from high temperatures caused by absorption of radiation.

To keep the power production steady, control rods are moved in or out of the assembly. These contain material that captures neutrons readily, e.g., cadmium or boron. The power production can be held steady by allowing the currents in suitably placed ionization chambers automatically to modify the settings of the rods. Further absorbent rods, the shut-down rods, are driven into the core to stop the reaction, as in an emergency if the control mechanism fails. To attain high thermodynamic efficiency, so that a large proportion of the liberated energy can be used, the heat should be extracted from the reactor core at a high temperature.

In fast reactors no moderator is used, the frequency of collisions between neutrons and fissile atoms being increased by enriching the natural uranium fuel with 239Pu or additional 235U atoms, which are fissioned by fast neutrons. The fast neutrons thus build up a self-sustaining chain reaction. In these reactors the core is usually surrounded by a blanket of natural uranium into which some of the neutrons are allowed to escape. Under suitable conditions some of these neutrons will be captured by 238U atoms, forming 239U atoms, which are converted to 239Pu. As more plutonium can be produced than is required to enrich the fuel in the core, these are called ‘fast breeder reactors’.

The neutrino is a neutral elementary particle with spin ½ that takes part only in weak interactions. The neutrino is a lepton and exists in three types corresponding to the three types of charged leptons: the electron neutrino (νe), the muon neutrino (νμ), and the tauon neutrino (ντ). The antiparticle of the neutrino is the antineutrino.

Neutrinos were originally thought to have zero mass, but recently there has been indirect experimental evidence to the contrary. In 1985 a Soviet team reported a measurement, for the first time, of a non-zero neutrino mass. The mass measured was extremely small, some 10 000 times smaller than the mass of the electron. However, subsequent attempts to reproduce the Soviet measurement were unsuccessful. More recently (1998-99), the Super-Kamiokande experiment in Japan has provided indirect evidence for massive neutrinos. The new evidence is based upon studies of neutrinos created when highly energetic cosmic rays bombard the earth’s upper atmosphere. By classifying the interactions of these neutrinos according to the type of neutrino involved (an electron neutrino or a muon neutrino), and counting their relative numbers as a function of the distance they have travelled, an oscillatory behaviour may be shown to occur. Oscillation in this sense is the changing back and forth of the neutrino’s type as it travels through space or matter. The Super-Kamiokande result indicates that muon neutrinos are changing into another type of neutrino, e.g., sterile neutrinos. The experiment does not, however, determine the masses directly, though the oscillations suggest very small differences in mass between the oscillating types.
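The distance dependence described above is usually modelled with the standard two-flavour oscillation formula P = sin²(2θ) · sin²(1.27 Δm² L/E). The mixing angle and mass-squared difference below are illustrative round numbers, not fitted Super-Kamiokande values.

```python
import math

# Two-flavour oscillation sketch: probability that a neutrino of energy
# E (GeV) has changed type after travelling L (km), with dm2 in eV^2.
# theta and dm2 are illustrative parameters, not experimental results.
def oscillation_probability(L_km, E_GeV, theta=math.pi / 4, dm2=2.5e-3):
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2)

# Downward-going atmospheric neutrinos travel only ~15 km, while
# upward-going ones cross the earth (~12 800 km), so the two samples
# sit at very different points on the oscillation curve.
print(oscillation_probability(15.0, 1.0))       # short path: near zero
print(oscillation_probability(12800.0, 1.0))    # long path through the earth
```

Comparing counts from the short and long baselines is exactly the kind of relative-number measurement that revealed the oscillatory behaviour.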

The neutrino was first postulated (Pauli, 1930) to explain the continuous spectrum of beta rays. It is assumed that there is the same amount of energy available for each beta decay of a particular nuclide and that this energy is shared according to a statistical law between the electron and a light neutral particle, now classified as the anti-neutrino, ν̄e. Later it was shown that the postulated particle would also conserve angular momentum and linear momentum in beta decays.

In addition to beta decay, the electron neutrino is also associated with, for example, positron decay and electron capture:

22Na → 22Ne + e+ + ve

55Fe + e‒ → 55Mn + ve

The absorption of anti-neutrinos in matter by the process

1H + ν̄e ➝ n + e+

was first demonstrated by Reines and Cowan. The muon neutrino is generated in such processes as:

π+ → μ+ + vμ

Although the interactions of neutrinos are extremely weak, the cross sections increase with energy, and reactions can be studied at the enormous energies available with modern accelerators. In some forms of ‘grand unified theories’, neutrinos are predicted to have a non-zero mass; nonetheless, no direct evidence has been found to support this prediction.

String theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings), or closed loops formed by strings. The original idea was that an elementary particle was the result of a standing wave in a string. A considerable amount of theoretical effort has been put into developing string theories. In particular, combining the idea of strings with that of supersymmetry has led to the idea of superstrings. This theory may be a more useful route to a unified theory of fundamental interactions than quantum field theory, because it probably avoids the infinities that arise when gravitational interactions are introduced into field theories. Thus, superstring theory inevitably leads to particles of spin 2, identified as gravitons. String theory also shows why particles violate parity conservation in weak interactions.

Superstring theories involve the idea of higher-dimensional spaces: 10 dimensions for fermions and 26 dimensions for bosons. It has been suggested that there are the normal 4 space-time dimensions, with the extra dimensions being tightly ‘curled up’. Still, there is no direct experimental evidence for superstrings. They are thought to have a length of about 10⁻³⁵ m and energies of 10¹⁴ GeV, which is well above the energy of any accelerator. An extension of the theory postulates that the fundamental entities are not one-dimensional but two-dimensional, i.e., they are supermembranes.

A symmetry operation on a system is an operation that does not change the system; if a system is invariant under such an operation, the operation is a symmetry of the system. Symmetry is studied mathematically using group theory. Some symmetries are directly physical, for instance the reflections and rotations of molecules and the translations in crystal lattices. More abstract symmetries involve changing properties of particles, as in the CPT theorem and the symmetries associated with gauge theory. Gauge theories are now thought to provide the basis for a description of all elementary particle interactions. The electromagnetic interactions are described by quantum electrodynamics, which is an Abelian gauge theory.

A gauge theory is a quantum field theory for which measurable quantities remain unchanged under a ‘group transformation’ of the fields. In quantum field theory, particles are represented by fields whose normal modes of oscillation are quantized; elementary particle interactions are described by relativistically invariant theories of quantized fields, i.e., by relativistic quantum field theories. Gauge transformations can take the form of a simple multiplication by a constant phase; such transformations are called ‘global gauge transformations’. In local gauge transformations, the phase of the fields is altered by amounts that vary with space and time, i.e., Ψ → e^{iθ(x)}Ψ, where θ(x) is a function of space-time. In Abelian gauge theories, consecutive field transformations commute, i.e.,

Ψ → e^{iθ(x)} e^{iφ(x)} Ψ = e^{iφ(x)} e^{iθ(x)} Ψ,

where φ(x) is another function of space and time. Quantum chromodynamics (the theory of the strong interaction) and the electroweak and grand unified theories are all non-Abelian; in these theories consecutive field transformations do not commute. All non-Abelian gauge theories are based on work proposed by Yang and Mills in 1954, which describes the interaction between two quantum fields of fermions. Einstein’s theory of general relativity can also be formulated as a local gauge theory.
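The distinction between Abelian and non-Abelian transformations can be illustrated numerically. The sketch below (an illustration, not part of the original text) checks that U(1) phase factors commute, while SU(2) transformations, built here from two Pauli generators, do not:

```python
import numpy as np

# Abelian case: U(1) phases are complex numbers, so consecutive gauge
# transformations commute: e^{i theta} e^{i phi} = e^{i phi} e^{i theta}.
theta, phi = 0.7, 1.3  # arbitrary phase values at one space-time point
assert np.isclose(np.exp(1j * theta) * np.exp(1j * phi),
                  np.exp(1j * phi) * np.exp(1j * theta))

# Non-Abelian case: SU(2) transformations are 2x2 matrices built from
# the Pauli generators, and matrices need not commute.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def u(angle, gen):
    # exp(i*angle*gen) = cos(angle) I + i sin(angle) gen, valid since gen^2 = I
    return np.cos(angle) * I2 + 1j * np.sin(angle) * gen

AB = u(theta, sx) @ u(phi, sy)
BA = u(phi, sy) @ u(theta, sx)
print(np.allclose(AB, BA))  # False: consecutive transformations do not commute
```

The commutator of the two matrix transformations is proportional to sin θ sin φ [σx, σy], which is non-zero for generic angles; this is exactly the sense in which a non-Abelian theory differs from an Abelian one.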

Supersymmetry is a symmetry relating bosons and fermions: in theories based on supersymmetry, every boson has a corresponding fermion partner and every fermion a corresponding boson partner. The boson partners of existing fermions have names formed by prefixing the name of the fermion with an ‘s’ (e.g., selectron, squark, slepton). The names of the fermion partners of existing bosons are obtained by changing the terminal -on of the boson to -ino (e.g., photino, gluino, and zino). Although supersymmetric partners have not been observed experimentally, supersymmetry may prove important in the search for a unified field theory of the fundamental interactions.

The quark is a fundamental constituent of hadrons, i.e., of particles that take part in strong interactions. Quarks are never seen as free particles, which is substantiated by the lack of experimental evidence for isolated quarks. The explanation given for this phenomenon in quantum chromodynamics, the gauge theory by which quarks are described, is that quark interactions become weaker as the quarks come closer together and fall to zero when the distance between them is zero. The converse of this proposition is that the attractive forces between quarks become stronger as they move apart; since this process has no limit, quarks can never separate from each other. In some theories, it is postulated that at very high temperatures, as might have prevailed in the early universe, quarks can separate; the temperature at which this occurs is called the ‘deconfinement temperature’. Nevertheless, the existence of quarks has been demonstrated in high-energy scattering experiments and by symmetries in the properties of observed hadrons. They are regarded as elementary fermions, with spin ½, baryon number ⅓, strangeness 0 or −1, and charm 0 or +1. They are classified in six flavours [up (u), charm (c) and top (t), each with charge ⅔ the proton charge; down (d), strange (s) and bottom (b), each with −⅓ the proton charge]. Each type has an antiquark with reversed signs of charge, baryon number, strangeness, and charm. The top quark has not been observed experimentally, but there are strong theoretical arguments for its existence. The top quark mass is known to be greater than about 90 GeV/c².

The fractional charges of quarks are never observed in hadrons, since the quarks form combinations in which the sum of their charges is zero or integral. Hadrons can be either baryons or mesons; essentially, baryons are composed of three quarks while mesons are composed of a quark-antiquark pair. These components are bound together within the hadron by the exchange of particles known as gluons. Gluons are neutral massless gauge bosons; in quantum chromodynamics, the quantum field theory of strong interactions, the gluon is the analogue of the photon of quantum electrodynamics, with a quantum number known as ‘colour’ replacing that of electric charge. Each quark type (or flavour) comes in three colours (red, blue and green, say), where colour is simply a convenient label and has no connection with ordinary colour. Unlike the photon in quantum electrodynamics, which is electrically neutral, gluons in quantum chromodynamics carry colour and can therefore interact with themselves. Particles that carry colour are believed not to be able to exist as free particles. Instead, quarks and gluons are permanently confined inside hadrons (strongly interacting particles, such as the proton and the neutron).

The gluon self-interaction leads to the property known as ‘asymptotic freedom’, in which the interaction strength for the strong interaction decreases as the momentum transfer involved in an interaction increases. This allows perturbation theory to be used and quantitative comparisons to be made with experiment, similar to, but less precise than, those possible in quantum electrodynamics. Quantum chromodynamics is being tested successfully in high-energy muon-nucleon scattering experiments and in proton-antiproton and electron-positron collisions at high energies. Strong evidence for the existence of colour comes from measurements of the interaction rates for e⁺e⁻ → hadrons and e⁺e⁻ → μ⁺μ⁻. The relative rate for these two processes is a factor of three larger than would be expected without colour; this factor measures directly the number of colours, i.e., three for each quark flavour.
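The factor-of-three colour argument amounts to simple arithmetic: the ratio of the two rates equals the number of colours times the sum of the squared quark charges over the accessible flavours. A minimal sketch (treating the five flavours lighter than the top as accessible is an illustrative assumption):

```python
from fractions import Fraction

# Quark electric charges in units of the proton charge, from the text.
charges = {'u': Fraction(2, 3), 'd': Fraction(-1, 3),
           'c': Fraction(2, 3), 's': Fraction(-1, 3),
           'b': Fraction(-1, 3)}  # flavours accessible below the top mass

def r_ratio(n_colours):
    # R = rate(e+e- -> hadrons) / rate(e+e- -> mu+mu-)
    #   = (number of colours) * sum of Q^2 over accessible flavours
    return n_colours * sum(q ** 2 for q in charges.values())

print(r_ratio(1))               # prediction without colour: 11/9
print(r_ratio(3))               # with three colours: 11/3
print(r_ratio(3) / r_ratio(1))  # the measured enhancement factor: 3
```

Whatever set of flavours is accessible at a given energy, the ratio of the coloured to the colourless prediction is always the number of colours, which is why the measurement counts colours directly.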

The quarks and antiquarks with zero strangeness and zero charm are the u, d, ū and d̄. They form the combinations:

protons (uud), antiprotons (ūūd̄)

neutrons (udd), antineutrons (ūd̄d̄)

pions: π⁺ (ud̄), π⁻ (ūd), π⁰ (dd̄, uū).

The charge and spin of these particles are the sums of the charges and spins of the component quarks and/or antiquarks.
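The summation rule for hadron charges can be checked directly against the combinations listed above. A small sketch using the quark charges given in the text (the dictionary layout and the 'bar' naming convention are illustrative):

```python
from fractions import Fraction

# Quark charges from the text: u has +2/3, d has -1/3 of the proton charge;
# antiquarks carry the opposite sign.
Q = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}
Q.update({q + 'bar': -c for q, c in list(Q.items())})

def hadron_charge(quarks):
    # A hadron's charge is the sum of its constituent quark charges.
    return sum(Q[q] for q in quarks)

print(hadron_charge(['u', 'u', 'd']))  # proton (uud): 1
print(hadron_charge(['u', 'd', 'd']))  # neutron (udd): 0
print(hadron_charge(['u', 'dbar']))    # pi+ (u dbar): 1
print(hadron_charge(['ubar', 'd']))    # pi- (ubar d): -1
```

Every combination comes out zero or integral, illustrating why the fractional quark charges are never observed in hadrons.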

In the strange baryons, e.g., the Λ and Σ, one of the quarks is strange; similarly, in the K mesons either the quark or the antiquark is strange. The presence of one or more c quarks leads to the charmed baryons, and a c or c̄ to the charmed mesons. It has been found useful to introduce a further subdivision of quarks, each flavour coming in three colours (red, green, blue). Colour as used here serves simply as a convenient label and is unconnected with ordinary colour. A baryon comprises a red, a green, and a blue quark, and a meson comprises a red and anti-red, a blue and anti-blue, or a green and anti-green quark-antiquark pair. In analogy with combinations of the three primary colours of light, hadrons carry no net colour, i.e., they are ‘colourless’ or ‘white’. Only colourless objects can exist as free particles. The characteristics of the six quark flavours are shown in the table.

The central feature of quantum field theory is that the essential reality is a set of fields subject to the rules of special relativity and quantum mechanics; all else is derived as a consequence of the quantum dynamics of those fields. The quantization of fields is essentially an exercise in which we use complex mathematical models to analyse the field in terms of its associated quanta, and material reality as we know it in quantum field theory is constituted by the transformation and organization of fields and their associated quanta. Hence, this reality reveals a fundamental complementarity between particles, which are localized in space-time, and fields, which are not. In modern quantum field theory, all matter is composed of six strongly interacting quarks and six weakly interacting leptons. The six quarks are called up, down, charmed, strange, top, and bottom, and have different rest masses and fractional charges. The up and down quarks combine through the exchange of gluons to form protons and neutrons.

The lepton belongs to the class of elementary particles that do not take part in strong interactions. Leptons have no substructure of quarks and are considered indivisible. They are all fermions, and are categorized into six distinct types: the electron, muon, and tau, which are all identically charged but differ in mass, and the three neutrinos, which are all neutral and thought to be massless or nearly so. In their interactions the leptons appear to observe boundaries that define three families, each composed of a charged lepton and its neutrino. The families are distinguished by three quantum numbers, Le, Lμ, and Lτ, called lepton numbers. In weak interactions the lepton numbers of the individual families are conserved.
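The family-by-family conservation of lepton numbers can be sketched as a simple bookkeeping exercise; the particle labels below are illustrative, and only the electron and muon families are included:

```python
# Lepton numbers per family: +1 for a lepton and its neutrino,
# -1 for their antiparticles, 0 for everything else.
L = {
    'e-':      {'Le': 1},  'e+':       {'Le': -1},
    'nu_e':    {'Le': 1},  'nubar_e':  {'Le': -1},
    'mu-':     {'Lmu': 1}, 'mu+':      {'Lmu': -1},
    'nu_mu':   {'Lmu': 1}, 'nubar_mu': {'Lmu': -1},
}

def lepton_numbers(particles):
    # Total Le and Lmu carried by a set of particles (photons etc. count 0).
    totals = {'Le': 0, 'Lmu': 0}
    for p in particles:
        for family, n in L.get(p, {}).items():
            totals[family] += n
    return totals

# Muon decay mu- -> e- + nubar_e + nu_mu conserves both family numbers.
print(lepton_numbers(['mu-']) == lepton_numbers(['e-', 'nubar_e', 'nu_mu']))  # True

# mu- -> e- (plus a photon) would violate both numbers and is not observed.
print(lepton_numbers(['mu-']) == lepton_numbers(['e-']))  # False
```

The same bookkeeping explains why the anti-neutrino absorption reaction quoted earlier produces a positron: the incoming ν̄e carries Le = −1, which the outgoing e⁺ balances.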

In quantum field theory, potential vibrations at each point in the four fields are capable of manifesting themselves, in their complementary expression, as individual particles, and the interactions of the fields result from the exchange of quanta that are the carriers of the fields. The carriers of the fields, known as messenger quanta, are the ‘coloured’ gluons for the strong binding force, the photon for electromagnetism, the intermediate bosons for the weak force, and the graviton for gravitation. If we could re-create the energies present in the first trillionths of trillionths of a second in the life of the universe, these four fields would, according to quantum field theory, become one fundamental field.

The movement toward a unified theory has evolved progressively from supersymmetry to supergravity to string theory. In string theory, the one-dimensional trajectories of particles illustrated in Feynman diagrams are replaced by the two-dimensional orbits of a string. In addition to introducing an extra dimension, represented by the small diameter of the string, string theory features another small but non-zero constant, which is analogous to Planck’s quantum of action. Since the value of the constant is quite small, it can generally be ignored except at extremely small dimensions. But since the constant, like Planck’s constant, is not zero, this results in departures from ordinary quantum field theory at very small dimensions.

Part of what makes string theory attractive is that it eliminates, or ‘transforms away’, the inherent infinities found in the quantum theory of gravity. And if the predictions of this theory are proven valid in repeatable experiments under controlled conditions, it could allow gravity to be unified with the other three fundamental interactions. But even if string theory leads to this grand unification, it will not alter our understanding of wave-particle duality. While the success of the theory would reinforce our view of the universe as a unified dynamic process, it applies to very small dimensions and, therefore, does not alter our view of wave-particle duality.

While the formalism of quantum physics predicts that correlations between particles over space-like separations are possible, it can say nothing about what this strange new relationship between parts (quanta) and the whole (cosmos) means outside this formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be ‘mutually adaptive and complementary to one another.’

Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, ‘is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.’

In a genuine whole, the relationship between the constituent parts must be ‘internal or immanent’ in the parts, as opposed to a more spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.

Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute a whole and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insight into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that discloses the interconnections with the whole, and any attempt to do so results in ambiguity.

Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet order in complementary relationships between difference and sameness in any physical event is never external to that event; the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic constitution of wholeness is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the indivisible whole, disclosed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.

Nevertheless, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.

While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on it, let us be quite clear on one point - there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation are obviously free to do so. However, there is another conclusion to be drawn that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.

Nonetheless, the philosophical implications provide a motive for considering how our new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will serve to demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the cultures of humanists-social scientists and scientists-engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.

As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate, and distinct.

Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics - figures like Adam Smith, David Ricardo, and Thomas Malthus - conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free-market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and generally legislates the behaviour of parts in the best interests of the whole. (The resemblance between the invisible hand and Newton’s universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be transparent.)

After roughly 1830, economists shifted their focus to the properties of the invisible hand in the interactions between parts using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.

These models later became one of the foundations for microeconomics. Microeconomics seeks to describe interactions between parts in exact quantifiable measures - such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, the quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with marginal analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics - the model for understanding economic reality that is most widely used today.
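The marginal quantities named above are simply derivatives of total quantities with respect to output, which is why differential calculus became the working tool of the field. A toy sketch (the cost and revenue functions are hypothetical, not from the text) using a finite-difference derivative:

```python
# Hypothetical total-cost and total-revenue functions of output q.
def total_cost(q):
    return 100 + 4 * q + 0.02 * q ** 2   # fixed cost plus rising unit cost

def total_revenue(q):
    return 10 * q - 0.01 * q ** 2        # price falls as output grows

def marginal(f, q, dq=1e-6):
    # Central finite-difference approximation to the derivative df/dq,
    # i.e., the change produced by one additional (infinitesimal) unit.
    return (f(q + dq) - f(q - dq)) / (2 * dq)

q = 50.0
mc = marginal(total_cost, q)      # analytically 4 + 0.04q = 6.0 at q = 50
mr = marginal(total_revenue, q)   # analytically 10 - 0.02q = 9.0 at q = 50
print(round(mc, 4), round(mr, 4))
```

Since marginal revenue here still exceeds marginal cost at q = 50, the model would prescribe expanding output, exactly the kind of deterministic, initial-condition reasoning the text attributes to the classical paradigm.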

Beginning in the 1930s, the challenge became to subsume the understanding of the interactions between parts in closed economic systems within more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in previous neoclassical economic theory - with one exception. They also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibria, and they seek to describe the state of the closed economic system in these terms.

One could argue that the fact that our economic models rest on assumptions from classical mechanics is not a problem by appealing to the two-domain distinction between micro-level and macro-level processes expatiated upon earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena in situations where the speed of light is so large and the quantum of action so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should serve us well in dealing with the macro-level behaviour of economic systems.

The obvious problem is that nature refuses to operate in accordance with these assumptions. In the biosphere, the interaction between parts is intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the danger of confusing the abstract virtual world of neoclassical economic theory with the real economy. The real economy comprises all human activities associated with the production, distribution, and exchange of tangible goods and commodities and the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, the description of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: ‘Short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards . . . An economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference, free trade; and sound market institutions such as the rule of law and protection of property rights.’

The prescription for medium-term growth of economies in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. But the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly results in a healthy growth economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil and China are seeking to implement them in various ways.

In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and to the maintenance of a relative abundance of atmospheric gases that regulate Earth’s temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Not to forget, the virtual economic system that the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.

In ‘Consilience,’ Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. But his view of the basis for this revolution is quite different from our own. Wilson claimed that since the foundations for moral reasoning evolved in what he termed ‘gene-culture’ evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic Era, he drew on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.

Wilson argued that these instincts evolved in our hunter-gatherer ancestors through genetic mutation, and that the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claimed that since these genes were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the ‘innate epigenetic rules of moral reasoning,’ he suggested, the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.

Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics in all of its myriad and often contradictory formulations is admirable, and Wilson’s attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for any number of reasons. While the probability is high that we will discover some linkages between genes and behaviour, the range of human ethical behaviour is far too complex, not to mention internally inconsistent, to be reduced to any given set of ‘epigenetic rules of moral reasoning.’

Also, while moral codes may derive in part from instincts that confer a survival advantage, when we examine these codes it seems clear that they are primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide and terrorism. As Cardinal Newman cryptically put it, ‘Oh how we hate one another for the love of God.’

According to Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’; yet the gods in his view are merely human constructs, and, therefore, there is no basis for dialogue between the world views of science and religion. ‘Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The result of the competition between the two world views, I believe, will be the secularization of the human epic and of religion itself.’

Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological realities. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege that there is a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can uncover the lawful essences that govern the physical aspects of reality, including those associated with the alleged ‘epigenetic rules of moral reasoning.’

In Wilson’s view, there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence science will uncover the ‘bedrock of moral and religious sentiment,’ and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson’s attempt to posit a more universal basis for the human condition, but to demonstrate that any attempt to understand or improve upon human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution - and if there were, in fact, innate mechanisms in mind that are both lawful and benevolent - Wilson’s program for uncovering these mechanisms could have merit. But for all the reasons that have been posited, classical determinism cannot explain the human condition, and Darwinian evolution must be modified to accommodate the complementary relationships between cultural and biological evolution that actually govern the development of human nature.

Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures is now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are also remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, they will probably serve only to frustrate the search for solutions to real-world problems.

There is, however, a renewed basis for dialogue between the two cultures, though it is quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and in both physics and biology it displays emergent properties that serve to perpetuate the existence of the whole and that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movement and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is a shared awareness that, as Einstein put it, a human being is a ‘part of the whole’. It is this shared awareness that allows for the freedom, or existential choice, to free ourselves of the ‘optical illusion’ of our present conception of self as a ‘part limited in space and time’, and to widen ‘our circle of compassion to embrace all living creatures and the whole of nature in its beauty’. One cannot, of course, merely reason oneself into an acceptance of this view; what is required is the capacity for what Einstein termed ‘cosmic religious feeling’. Perhaps the capacity to experience this self-realization, this elemental sense of participation in a universal consciousness, is what makes an essential difference in how we experience the existence of the universe.

Those who have this capacity will hopefully be able to communicate their enhanced scientific understanding of the relations between part and whole, between the self and the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: ‘Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect “reality”. By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing “reality” as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide comprehensible guides to living. In this way, man’s imagination and intellect play vital roles in his survival and further evolution.’

It is time, the evidence suggests, for the religious imagination and the religious experience to engage the complementary truths of science and to fill that silence with meaning. This does not mean, least of all, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and is in no way diminished by the lack of one. One is free to recognize a basis for a dialogue between science and religion for the same reason that one is free to deny that this basis exists - there is nothing in our current scientific world view that can prove the existence of God or Being and nothing that legitimates any anthropomorphic conception of the nature of God or Being. The question of belief in some ontology remains what it has always been - a question - and the physical universe on the most basic level remains what it has always been - a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, matters of personal choice and conviction.

The present time is clearly a time of major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. That shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet it was probably necessary for the Western mind to go through the acceptance of such a paradigm.

The overwhelming success of Newtonian physics led most scientists and most philosophers of the Enlightenment to rely on it exclusively. As far as the quest for knowledge about reality was concerned, they regarded all other modes of expressing human experience, such as accounts of numinous experiences, poetry, art, and so on, as irrelevant. This reliance on science as the only way to the truth about the universe is clearly obsolete. Science has to give up the illusion of its own self-sufficiency and of the self-sufficiency of human reason. It needs to unite with other modes of knowing, in particular with contemplation, and help each of us move to higher levels of being and toward the Experience of Oneness.

If this is indeed the direction of the emerging world view, then the paradigm shift we are presently going through will prove to be nourishing to the human spirit and in correspondence with its deepest conscious or unconscious yearning - the yearning to emerge out of Plato’s shadows and into the light of luminosity.

The big bang theory seeks to explain what happened at or soon after the beginning of the universe. Scientists can now model the universe back to 10^-43 seconds after the big bang. For the time before that moment, the classical theory of gravity is no longer adequate. Scientists are searching for a theory that merges gravity (as explained by Einstein’s general theory of relativity) and quantum mechanics but have not found one yet. Many scientists hope that string theory, also known as M-theory, will tie together gravity and quantum mechanics and help scientists explore further back in time.

Because scientists cannot look back in time beyond that early epoch, the actual big bang is hidden from them. There is no way at present to detect the origin of the universe. Further, the big bang theory does not explain what existed before the big bang. It may be that time itself began at the big bang, so that it makes no sense to discuss what happened ‘before’ the big bang. According to the big bang theory, the universe expanded rapidly in its first microseconds. A single force existed at the beginning of the universe, and as the universe expanded and cooled, this force separated into those we know today: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. A theory called the electroweak theory now provides a unified explanation of electromagnetism and the weak nuclear force. Physicists are now searching for a grand unification theory that also incorporates the strong nuclear force. String theory seeks to incorporate the force of gravity with the other three forces, providing a theory of everything (TOE).

One widely accepted version of big bang theory includes the idea of inflation. In this model, the universe expanded much more rapidly at first, to about 10^50 times its original size in the first 10^-32 second, then slowed its expansion. The theory was advanced in the 1980s by American cosmologist Alan Guth and elaborated upon by American astronomer Paul Steinhardt, Russian American scientist Andrei Linde, and British astronomer Andreas Albrecht. The inflationary universe theory solves a number of problems of cosmology. For example, it shows why the universe now appears close to the type of flat space described by the laws of Euclid’s geometry: We see only a tiny region of the original universe, similar to the way we do not notice the curvature of the earth because we see only a small part of it. The inflationary universe theory also shows why the universe appears so homogeneous. If the universe we observe was inflated from some small, original region, it is not surprising that it appears uniform.
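The quoted inflation figures imply a staggering exponential growth. A quick calculation, taking the factor of 10^50 and the duration of 10^-32 second at face value, gives the number of doublings and e-foldings involved:

```python
import math

GROWTH_FACTOR = 1e50   # quoted expansion factor of the inflationary era
DURATION_S = 1e-32     # quoted duration of the era, in seconds

e_folds = math.log(GROWTH_FACTOR)     # number of e-foldings, ln(10^50)
doublings = math.log2(GROWTH_FACTOR)  # number of size doublings
rate = e_folds / DURATION_S           # average exponential growth rate, 1/s

print(f"e-folds:   {e_folds:.1f}")    # about 115
print(f"doublings: {doublings:.1f}")  # about 166
```

Roughly 166 doublings of size in a hundred-billionth of a trillionth of a trillionth of a second - which is why inflation so thoroughly smooths and flattens the observable universe.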

Once the expansion of the initial inflationary era ended, the universe continued to expand more slowly. The inflationary model predicts that the universe is on the boundary between being open and closed. If the universe is open, it will keep expanding forever. If the universe is closed, the expansion of the universe will eventually stop and the universe will begin contracting until it collapses. Whether the universe is open or closed depends on the density, or concentration of mass, in the universe. If the universe is dense enough, it is closed.
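The dividing line between an open and a closed universe can be made quantitative through the critical density. A minimal sketch, assuming the standard formula rho_c = 3 H0^2 / (8 pi G) and an illustrative Hubble constant of 71 km/s/Mpc (neither number is given at this point in the text):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.086e22    # metres in one megaparsec
H0 = 71e3 / MPC_M   # Hubble constant, 71 km/s/Mpc converted to 1/s

# Critical density: the mass density at which the universe is exactly flat
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density = {rho_c:.2e} kg/m^3")  # about 9.5e-27
```

That comes to roughly 9.5 x 10^-27 kg per cubic metre - only a few hydrogen atoms per cubic metre. Denser than this and the universe is closed; less dense and it is open.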

The theory is based on the mathematical equations, known as the field equations, of the general theory of relativity set forth in 1915 by Albert Einstein. In 1922 Russian physicist Alexander Friedmann provided a set of solutions to the field equations. These solutions have served as the framework for much of the current theoretical work on the big bang theory. American astronomer Edwin Hubble provided some of the greatest supporting evidence for the theory with his 1929 discovery that the light of distant galaxies was universally shifted toward the red end of the spectrum. Once ‘tired light’ theories - that light slowly loses energy naturally, becoming more red over time - were dismissed, this shift proved that the galaxies were moving away from each other. Hubble found that galaxies farther away were moving away proportionally faster, showing that the universe is expanding uniformly. However, the universe’s initial state was still unknown.
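Hubble’s proportionality - recession velocity growing linearly with distance - can be sketched in a few lines. The numerical value of the constant here is a modern figure, used only for illustration:

```python
H0 = 71.0  # Hubble constant in km/s per megaparsec (illustrative modern value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Hubble's law: recession velocity is proportional to distance."""
    return H0 * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:>5} Mpc -> {recession_velocity_km_s(d):>8.0f} km/s")
# Doubling the distance doubles the recession velocity - uniform expansion.
```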

In the 1940's Russian-American physicist George Gamow worked out a theory that fit with Friedmann’s solutions in which the universe expanded from a hot, dense state. In 1950 British astronomer Fred Hoyle, in support of his own opposing steady-state theory, referred to Gamow’s theory as a mere ‘big bang,’ but the name stuck.

The overall framework of the big bang theory came out of solutions to Einstein’s general relativity field equations and remains unchanged, but various details of the theory are still being modified today. Einstein himself initially believed that the universe was static. When his equations seemed to imply that the universe was expanding or contracting, Einstein added a constant term to cancel out the expansion or contraction of the universe. When the expansion of the universe was later discovered, Einstein stated that introducing this ‘cosmological constant’ had been a mistake.

After Einstein’s work of 1917, several scientists, including the Abbé Georges Lemaître in Belgium, Willem de Sitter in Holland, and Alexander Friedmann in Russia, succeeded in finding solutions to Einstein’s field equations. The universes described by the different solutions varied. De Sitter’s model had no matter in it. This model is actually not a bad approximation since the average density of the universe is extremely low. Lemaître’s universe expanded from a ‘primeval atom.’ Friedmann’s universe also expanded from a very dense clump of matter, but did not involve the cosmological constant. These models explained how the universe behaved shortly after its creation, but there was still no satisfactory explanation for the beginning of the universe.

In the 1940's George Gamow was joined by his students Ralph Alpher and Robert Herman in working out details of Friedmann’s solutions to Einstein’s theory. They expanded on Gamow’s idea that the universe expanded from a primordial state of matter called ylem, consisting of protons, neutrons, and electrons in a sea of radiation. They theorized the universe was very hot at the time of the big bang (the point at which the universe explosively expanded from its primordial state), since elements heavier than hydrogen can be formed only at a high temperature. Alpher and Herman predicted that radiation from the big bang should still exist. Cosmic background radiation roughly corresponding to the temperature predicted by Gamow’s team was detected in the 1960s, further supporting the big bang theory, though by then the work of Alpher, Herman, and Gamow had been largely forgotten.

The universe cooled as it expanded. After about one second, protons formed. In the following few minutes-often referred to as the ‘first three minutes’ - combinations of protons and neutrons formed the isotope of hydrogen known as deuterium and some of the other light elements, principally helium, and some lithium, beryllium, and boron. The study of the distribution of deuterium, helium, and the other light elements is now a major field of research. The uniformity of the helium abundance around the universe supports the big bang theory and the abundance of deuterium can be used to estimate the density of matter in the universe.

From about 380,000 to about one million years after the big bang, the universe cooled to about 3000°C (about 5000°F) and protons and electrons combined to form hydrogen atoms. Hydrogen atoms can only absorb and emit specific colours, or wavelengths, of light. The formation of atoms allowed many other wavelengths of light, wavelengths that had been interfering with the free electrons, to travel much farther than before. This change set free radiation that we can detect today. After billions of years of cooling, this cosmic background radiation is at about 3 K (-270°C/-454°F). The cosmic background radiation was first detected and identified in 1965 by American astrophysicists Arno Penzias and Robert Wilson.
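The cooling itself measures the expansion: radiation temperature falls in inverse proportion to the stretching of space. Comparing the roughly 3000 K at which the radiation was set free with today’s roughly 2.7 K gives the factor by which the universe has stretched since then - a standard back-of-envelope estimate, not stated explicitly above:

```python
T_RECOMBINATION_K = 3000.0  # approximate temperature when atoms formed, in kelvin
T_TODAY_K = 2.7             # present temperature of the cosmic background radiation

# Radiation temperature scales as 1/(1+z), so the ratio is the stretch factor.
stretch = T_RECOMBINATION_K / T_TODAY_K
print(f"space has stretched about {stretch:.0f}-fold since the radiation was set free")
```

The answer, a factor of roughly 1100, is the redshift cosmologists assign to the cosmic background radiation.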

The Cosmic Background Explorer (COBE) spacecraft, a project of the National Aeronautics and Space Administration (NASA), mapped the cosmic background radiation between 1989 and 1993. It verified that the distribution of the intensity of the background radiation precisely matched that of matter that emits radiation because of its temperature, as predicted by the big bang theory. It also showed that the cosmic background radiation is not uniform but varies slightly. These variations are thought to be the seeds from which galaxies and other structures in the universe grew.

Evidence suggests that the matter that scientists detect in the universe may be only a small fraction of all the matter that exists. For example, observations of the speeds at which individual galaxies move within clusters of galaxies show that a great deal of unseen matter must exist to exert sufficient gravitational force to keep the clusters from flying apart. Cosmologists now think that much of the universe is dark matter - matter that has gravity but does not give off radiation that we can see or otherwise detect. One kind of dark matter theorized by scientists is cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have made up fanciful names for them, such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be nonradiating stars or planets, which are known as MACHOs (Massive Compact Halo Objects).

An alternative to the cold-dark-matter model involves hot dark matter, where ‘hot’ implies that the particles are moving very fast. Neutrinos, fundamental particles that travel at nearly the speed of light, are the prime example of hot dark matter. However, scientists think that the mass of a neutrino is so low that neutrinos can account for only a small portion of dark matter. If the inflationary version of big bang theory is correct, then the amount of dark matter and of whatever else might exist is just enough to bring the universe to the boundary between open and closed.

Scientists develop theoretical models to show how the universe’s structures, such as clusters of galaxies, have formed. Their models invoke hot dark matter, cold dark matter, or a mixture of the two. This unseen matter would have provided the gravitational force needed to bring large structures such as clusters of galaxies together. The theories that include dark matter match the observations, although there is no consensus on the type or types of dark matter that must be included. Supercomputers are important for making such models.

Astronomers continue to make new observations that are also interpreted within the framework of the big bang theory. No major problems with the big bang theory have been found, but scientists constantly adjust the theory to match the observed universe. In particular, a ‘standard model’ of the big bang has been established by results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001. The probe studied the anisotropies, or ripples, in the temperature of cosmic background radiation at a higher resolution than COBE was capable of. These ripples suggest that regions of the young universe were very slightly hotter or cooler, by a factor of about 1/1000, than adjacent regions. WMAP’s observations suggest that the rate of expansion of the universe, called Hubble’s constant, is about 71 km/s/Mpc (kilometres per second per megaparsec, where a parsec is about 3.26 light-years). In other words, the distance between any two objects in space that are separated by a megaparsec increases by about 71 km every second in addition to any other motion they may have relative to one another. In combination with previously existing observations, this rate of expansion tells cosmologists that the universe is ‘flat,’ though flatness here does not refer to the actual shape of the universe but rather means that the geometric laws that apply to the universe match those of a flat plane.
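The quoted value of Hubble’s constant can be converted to SI units and inverted; the resulting ‘Hubble time’ gives a rough characteristic age for the universe (the exact age depends on the cosmological model):

```python
MPC_M = 3.086e22       # metres in one megaparsec (about 3.26 million light-years)
SEC_PER_YEAR = 3.156e7 # seconds in one year

H0_SI = 71e3 / MPC_M   # 71 km/s/Mpc expressed in 1/s
hubble_time_yr = 1.0 / H0_SI / SEC_PER_YEAR  # rough expansion age in years

print(f"H0 = {H0_SI:.2e} per second")
print(f"Hubble time = {hubble_time_yr / 1e9:.1f} billion years")
```

The result, close to 14 billion years, is why this single measured number anchors modern estimates of the universe’s age.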

To be flat, the universe must contain a certain amount of matter and energy, known as the critical density. The distribution of sizes of ripples detected by WMAP shows that ordinary matter - like that making up objects and living things on Earth - accounts for only 4.4 percent of the critical density. Dark matter makes up an additional 23 percent. Astoundingly, the remaining 73 percent of the universe is composed of something else - a substance so mysterious that nobody knows much about it. Called ‘dark energy,’ this substance provides the antigravity-like negative pressure that causes the universe’s expansion to accelerate rather than slow. This ‘accelerating universe’ was detected independently by two competing groups of astronomers in the last years of the 20th century. The ideas of an accelerating universe and the existence of dark energy have caused astronomers to modify previous ideas of the big bang universe substantially.
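The quoted WMAP budget can be tallied directly; flatness requires the components to add up to the full critical density (the fractions below are the percentages given above):

```python
budget = {  # WMAP fractions of the critical density
    "ordinary matter": 0.044,
    "dark matter": 0.23,
    "dark energy": 0.73,
}

total = sum(budget.values())
for name, frac in budget.items():
    print(f"{name:>15}: {frac:6.1%}")
print(f"{'total':>15}: {total:6.1%}")  # close to 100%, as flatness requires
```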

WMAP's results also show that the cosmic background radiation was set free about 380,000 years after the big bang, later than was previously thought, and that the first stars formed only about 200 million years after the big bang, earlier than anticipated. Further refinements to the big bang theory are expected from WMAP, which continues to collect data. An even more precise mission to study the beginnings of the universe, the European Space Agency’s Planck spacecraft, was launched in 2009.

In the 1950's cosmologists (scientists who study the evolution of the universe) were considering two theories for the origin of the universe. The first, the currently accepted big bang theory, held that the universe was created from one enormous explosion. The second, known as the steady state theory, suggested that the universe had always existed. Russian-American theoretical physicist George Gamow advanced the big bang theory and its underpinnings in a 1956 Scientific American article. Gamow’s estimate of a five-billion-year-old universe is no longer considered accurate; the universe is now thought to be much older.

Most cosmologists believe that the universe began as a dense kernel of matter and radiant energy that started to expand about five billion years ago and later coalesced into galaxies.

Cosmology is the study of the general nature of the universe in space and in time - what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as embodied in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.

The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of that century two inquisitive mathematical minds - a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai - discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.

To illustrate the differences between Euclidean geometry and their non-Euclidean system it is simplest to consider just two dimensions - that is, the geometry of surfaces. In our schoolbooks this is known as ‘plane geometry,’ because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative,’ which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we shall define a ‘straight line,’ equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic.’

Considering a surface curved like a saddle, we find that, given a ‘straight’ line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel.

As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. But on a saddle surface both the circumference and the area of a circle increase at faster rates than on a flat surface with increasing diameters.

After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature - that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
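The angle-sum behaviour on Riemann’s positively curved surface can be made concrete with Girard’s theorem, which says the excess over 180 degrees equals the triangle’s area divided by the square of the sphere’s radius. A small sketch (the radius value is only illustrative):

```python
import math

def spherical_angle_sum_deg(area: float, radius: float) -> float:
    """Girard's theorem: angle sum = 180 degrees plus area/R^2 (in radians of excess)."""
    excess_rad = area / radius**2
    return 180.0 + math.degrees(excess_rad)

R = 6371.0  # sphere radius; Earth's radius in km, purely for illustration
# An octant of the sphere (bounded by three quarter great circles) has
# area (1/8) * 4*pi*R^2 = pi*R^2/2 and three right angles at its corners.
octant_area = 0.5 * math.pi * R**2
print(spherical_angle_sum_deg(octant_area, R))  # 270.0 - three right angles
```

A tiny triangle has negligible area relative to R^2, so its angles sum to nearly 180 degrees - which is why curvature goes unnoticed on small scales.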

Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat,’ as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (per Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space, we should be able, by studying the geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of the 20th century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he applied Riemann's formulas to test his idea.

Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum,’ by means of the speed of light as a link between the time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √-1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
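The role of the imaginary unit can be made explicit. Writing the time coordinate as x₄ = ict, the four-dimensional ‘distance’ takes a formally Euclidean shape - a standard reconstruction of the step this trick accomplishes:

```latex
% With the time coordinate x_4 = ict, the space-time interval
% looks formally like an ordinary Euclidean distance:
s^2 = x^2 + y^2 + z^2 + x_4^2
    = x^2 + y^2 + z^2 + (ict)^2
    = x^2 + y^2 + z^2 - c^2 t^2
```

The factor i² = -1 supplies the minus sign that distinguishes time from space while letting the four coordinates be treated on an equal mathematical footing.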

In his special theory of relativity Einstein made the geometry of the space-time continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of force that impels the earth to deviate from straight-line motion and to move in a circle around the sun,’ Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood.’

The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line.’ . . . Einstein declared, in effect: ‘The world line of the earth is a geodesic in the curved four-dimensional space around the sun.’ In other words, the . . . [earth’s ‘world line’] . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October. . . . Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by the discovery of perturbations in the motion of Mercury at its closest approach to the sun and of the deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread fairly uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out’ their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.

But if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? And, since we cannot consider space alone, how is this space curvature related to time?

Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun!) and to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).

Einstein's new force, called ‘cosmic repulsion,’ allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as ‘Einstein's spherical universe,’ gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.

The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time.

Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.

In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.

Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: ‘The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations.’ Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he had made in his entire life.

Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies, whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the redshift as the Doppler effect - the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' redshift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart.

Thus, Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom.’ (Modern physicists would prefer the term ‘primeval nucleus.’) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we know it today.

Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks! Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.

Friedman's solution of Einstein's cosmological equation, as I mentioned, permits two kinds of universes. We can call one the ‘pulsating’ universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million million times denser than water; that it will then begin to expand again - and so on through the cycle ad infinitum. The other model is a ‘hyperbolic’ one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.

The question whether our universe is actually ‘pulsating’ or ‘hyperbolic’ should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second - the ‘escape velocity’ - the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, and so on.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear into space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth), we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
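The seven-miles-per-second figure in the rocket analogy can be checked with the standard escape-velocity formula v = √(2GM/R). This sketch is an illustration added here, not part of the original article; the constants are ordinary textbook values.

```python
import math

# Escape velocity from a body of mass M and radius R: v = sqrt(2*G*M/R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)      # metres per second
print(f"{v_escape / 1609.34:.1f} miles per second")  # about 7, as the text says
```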

Thus we may conclude that our universe corresponds to the ‘hyperbolic’ model, so that its present expansion will never stop. We must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.

Is the universe finite or infinite? This resolves itself into the question: Is the curvature of space positive or negative - closed like that of a sphere, or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius. We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface, while on a positively curved surface the relative rate of increase is slower. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in the radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature - closed and finite.
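The galaxy-counting test just described can be sketched in a few lines. The function and the galaxy counts below are hypothetical illustrations (not real survey data): the sketch fits the exponent p in N ∝ r^p and compares it with the flat-space value of 3.

```python
import math

def curvature_hint(radii, counts):
    """Estimate the exponent p in N ~ r**p from the first and last data
    points, then compare it with the Euclidean value 3."""
    p = math.log(counts[-1] / counts[0]) / math.log(radii[-1] / radii[0])
    if p > 3.05:
        return "negative"   # counts grow faster than r^3: open, infinite space
    if p < 2.95:
        return "positive"   # counts grow slower than r^3: closed, finite space
    return "flat"

# Made-up counts growing exactly as r^3, i.e. the Euclidean case:
print(curvature_hint([1, 2, 4], [100, 800, 6400]))   # flat
```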

Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble’s calculations produced the conclusion that the universe is a closed system - a small universe only a few billion light-years in radius!

We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. But there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past - 500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.

When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. But if we find that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.

Actually there is another line of reasoning which supports the side of infinity. Our universe seems to be hyperbolic and ever-expanding. Mathematical solutions of fundamental cosmological equations show that such a universe is open and infinite.

We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: Is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. But in 1951 a group at the University of Cambridge, whose chief representative has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. But the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it. What follows is an attempt to sum up the evolutionary theory.

We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E = mc², which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Conversely, energy can be translated into mass by dividing the energy quantity by c². Thus we can speak of the ‘mass density’ of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. But in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the distance of expansion: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, or an eightfold decrease in density.
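The two scaling laws in the last paragraph are easy to tabulate. This small sketch (added for illustration, with an arbitrary starting density) simply divides by R³ and R⁴ as the radius R doubles:

```python
# Matter density falls as the cube of the expansion radius R;
# the mass density of radiant energy falls as the fourth power.
def matter_density(rho0, R):
    return rho0 / R**3

def radiation_density(rho0, R):
    return rho0 / R**4

# Doubling the radius: matter thins out 8-fold, radiation 16-fold.
print(matter_density(1.0, 2))      # 0.125
print(radiation_density(1.0, 2))   # 0.0625
```

This is why radiation, however dominant at the start, must eventually hand over the reins to matter.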

Assuming that the universe at the beginning was under absolute rule by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.

This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to the will of radiant energy (i.e., light), it must have been spread uniformly through space in the form of thin gas. But as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls,’ the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans’ mathematical formula for the process to the gas filling the universe at that time, I have found that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘protogalaxies’ - cold, dark and chaotic. But their gas soon condensed into stars and formed the galaxies as we see them now.
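Jeans' criterion referred to above can be sketched numerically. The formula used here is the standard Jeans length λ_J = c_s·√(π/(Gρ)), with an isothermal sound speed c_s = √(kT/μm_H); the temperature and density plugged in are arbitrary illustrative values, not Gamow's actual figures.

```python
import math

K_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.673e-27    # mass of a hydrogen atom, kg

def jeans_length(T, rho, mu=1.0):
    """Size above which a gas cloud of temperature T (kelvin) and density
    rho (kg/m^3) collapses under its own gravity into a 'gas ball'."""
    c_s = math.sqrt(K_B * T / (mu * M_H))        # isothermal sound speed
    return c_s * math.sqrt(math.pi / (G * rho))  # Jeans length in metres

# Hypothetical cold, thin hydrogen gas: hotter gas resists collapse
# (larger Jeans length), denser gas fragments into smaller balls.
print(f"{jeans_length(100, 1e-20):.2e} m")
```

The qualitative point matches the text: the size of the fragments is set jointly by the density and the temperature of the gas.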

A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start, matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about 30 minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.

To many a reader the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. But consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million-million microseconds later, the site is still ‘hot’ with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
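The time-ratio claim in this paragraph can be verified with round numbers; this arithmetic check is added for illustration and is not the author's.

```python
SECONDS_PER_YEAR = 3.156e7

# Bomb site: three years elapsed, measured in microseconds.
ratio_bomb = (3 * SECONDS_PER_YEAR) / 1e-6            # ~1e14
# Universe: five billion years elapsed, measured in half-hours.
ratio_universe = (5e9 * SECONDS_PER_YEAR) / 1800.0    # ~1e14

print(f"{ratio_bomb:.1e} vs {ratio_universe:.1e}")  # same order of magnitude
```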

The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. I hold to the opinion that some of them were built by capture of neutrons. However, since the absence of any stable nucleus of atomic weight five makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, I would agree that the lion's share of the heavy elements might have been formed later in the hot interiors of stars.

All the theories - of the origin, age, extent, composition and nature of the universe - are becoming ever more subject to test by new instruments and new techniques . . . But we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies actually diminish in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.

After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

Physicists had known since the early 19th century that light is propagated as a transverse wave (a wave in which the vibrations move in a direction perpendicular to the direction of the advancing wave front). They assumed, however, that the wave required some material medium for its transmission, so they postulated an extremely diffuse substance, called ether, as the unobservable medium. Maxwell's theory made such an assumption unnecessary, but the ether concept was not abandoned immediately, because it fit in with the Newtonian concept of an absolute space-time frame for the universe. A famous experiment conducted by the American physicist Albert Abraham Michelson and the American chemist Edward Williams Morley in the late 19th century served to dispel the ether concept and was important in the development of the theory of relativity. This work led to the realization that the speed of electromagnetic radiation in a vacuum is an invariant.

At the beginning of the 20th century, however, physicists found that the wave theory did not account for all the properties of radiation. In 1900 the German physicist Max Planck demonstrated that the emission and absorption of radiation occur in finite units of energy, known as quanta. In 1905, German-born American physicist Albert Einstein was able to explain some puzzling experimental results on the external photoelectric effect by postulating that electromagnetic radiation can behave like a particle.

Other phenomena, which occur in the interaction between radiation and matter, can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can sometimes behave like a particle, and sometimes behave like a wave. The parallel concept - that matter also exhibits the same duality of having particle-like and wavelike characteristics - was developed in 1923 by the French physicist Louis Victor, Prince de Broglie.

Planck’s constant is a fundamental physical constant, symbol h. It was first introduced (1900) by the German physicist Max Planck. Until that year, light in all forms had been thought to consist of waves. Planck noticed certain deviations from the wave theory of light on the part of radiations emitted by so-called ‘black bodies’, or perfect absorbers and emitters of radiation. He came to the conclusion that these radiations were emitted in discrete units of energy, called quanta. This conclusion was the first enunciation of the quantum theory. According to Planck, the energy of a quantum of light is equal to the frequency of the light multiplied by a constant. His original theory has since had abundant experimental verification, and the growth of the quantum theory has brought about a fundamental change in the physicist's concept of light and matter, both of which are now thought to combine the properties of waves and particles. Thus, Planck's constant has become as important to the investigation of particles of matter as to quanta of light, now called photons. The first successful measurement (1916) of Planck's constant was made by the American physicist Robert Millikan. The present accepted value of the constant is h = 6.626 × 10⁻³⁴ joule-second in the metre-kilogram-second system.

A photon is a particle of light energy, or of energy that is generated by moving electric charges. Energy generated by moving charges is called electromagnetic radiation. Visible light is one kind of electromagnetic radiation. Other kinds of radiation include radio waves, infrared waves, and X rays. All such radiation sometimes behaves like a wave and sometimes behaves like a particle. Scientists use the concept of a photon to describe the effects of radiation when it behaves like a particle.

Most photons are invisible to humans. Humans only see photons with energy levels that fall within a certain range. We describe these visible photons as visible light. Invisible photons include radio and television signals, photons that heat food in microwave ovens, the ultraviolet light that causes sunburn, and the X rays doctors use to view a person’s bones.

The photon is an elementary particle, or a particle that cannot be split into anything smaller. It carries the electromagnetic force, one of the four fundamental forces of nature, between particles. The electromagnetic force occurs between charged particles or between magnetic materials and charged particles. Electrically charged particles attract or repel each other by exchanging photons back and forth.

Photons are particles with no electrical charge and no mass, but they do have energy and momentum, a property that allows photons to affect other particles when they collide with them. Photons travel at the speed of light, which is about 300,000 km/sec (about 186,000 mi/sec). Only objects without mass can travel at the speed of light. Objects with mass must travel at slower speeds, and nothing can travel at speeds faster than the speed of light.

The energy of a photon is equal to the product of a constant number called Planck’s constant multiplied by the frequency, or number of vibrations per second, of the photon. Scientists write the equation for a photon’s energy as E = hν, where h is Planck’s constant and ν is the frequency. Photons with high frequencies, such as X rays, carry more energy than do photons with low frequencies, such as radio waves. Photons that are visible to the human eye have energy levels around one electron volt (eV) and frequencies from 10¹⁴ to 10¹⁵ Hz (hertz or cycles per second). The number 10¹⁴ is a 1 followed by 14 zeros. The frequency of visible photons corresponds to the colour of their light. Photons of violet light have the highest frequencies of visible light, while photons of red light have the lowest frequencies. Gamma rays, the highest-energy photons of all, have energies in the 1 GeV range (10⁹ eV) and frequencies higher than 10¹⁸ Hz. Gamma rays are only produced in special experimental devices called particle accelerators and in outer space.
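Planck's relation E = hν from this paragraph can be checked directly. The frequency chosen below, roughly that of green light, is an illustrative value added here:

```python
H = 6.626e-34   # Planck's constant, joule-seconds
EV = 1.602e-19  # one electron volt in joules

def photon_energy_ev(frequency_hz):
    """Energy of a photon of the given frequency, converted to electron volts."""
    return H * frequency_hz / EV

# Green light at ~5.5e14 Hz: a couple of eV, within the visible range the text cites.
print(f"{photon_energy_ev(5.5e14):.2f} eV")
```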

Although momentum is usually considered a property of objects with mass, photons also have momentum. Momentum determines the amount of force, or pressure, that an object exerts when it hits a surface. In classical physics, or physics that deals with the behaviour of objects we encounter in everyday life, momentum is equal to the product of the mass of an object multiplied by its velocity (the combination of its speed and direction). While photons do not have mass, scientists have found that they exert extremely small amounts of pressure when they strike surfaces. Scientists have redefined momentum to include the force exerted by photons, called light pressure or radiation pressure.

Philosophers from as far back in history as the Greeks of the 5th century BC have thought about the nature of light. In the 1600's, scientists began to argue over whether light is made of particles or waves. In the 1860's, British physicist James Clerk Maxwell discovered electromagnetic waves, waves of electromagnetic energy that travel at the speed of light. He determined that light is made of these waves, and his theory seemed to settle the wave versus particle issue. His conclusion that light is made of waves is still valid. However, in 1900 German physicist Max Planck renewed the argument that light could also act like particles, and these particles became known as photons. He developed the idea of photons to explain why substances, when heated to higher and higher temperatures, would glow with light of different colours. The wave theory could not explain why the colours changed with temperature changes.

Most scientists did not pay attention to Planck’s theory until 1905, when German-born American physicist Albert Einstein used the idea of photons to explain an interaction he had studied called the photoelectric effect. In this interaction, light shining on the surface of a metal causes the metal to emit electrons. Electrons escape the metal by absorbing energy from the light. Einstein showed that light behaves as particles in this situation. If the light behaved like waves, each electron could absorb many light waves and gain ever more energy. He found, however, that a more intense beam of light, with more light waves, did not give each electron more energy. Instead, more light caused the metal to release more electrons, each of which had the same amount of energy. Each electron had to be absorbing a small piece of the light beam, or a particle of light, and all these pieces had the same amount of energy. A beam of light with a higher frequency contained pieces of light with more energy, so when electrons absorbed these particles, they too had more energy. This could only be explained using the photon view of radiation, in which each electron absorbs a single photon and gains enough energy to escape the metal.

Today scientists believe that light behaves both as a wave and as a particle. Scientists detect photons as discrete particles, and photons interact with matter as particles. However, light travels in the form of waves. Some experiments reveal the wave properties of light; for example, in diffraction, light spreads out from a small opening in waves, much like waves of water would behave. Other experiments, such as Einstein’s study of the photoelectric effect, reveal light’s particle properties.

Photon particles of light energy, or energy that is generated by moving electric charges. Energy generated by moving charges is called electromagnetic radiation. Visible light is one kind of electromagnetic radiation. Other kinds of radiation include radio waves, infrared waves, and X rays. All such radiation sometimes behaves like a wave and sometimes behaves like a particle. Scientists use the concept of a photon to describe the effects of radiation when it behaves like a particle.

Most photons are invisible to humans. Humans only see photons with energy levels that fall within a certain range. We describe these visible photons as visible light. Invisible photons include radio and television signals, photons that heat food in microwave ovens, the ultraviolet light that causes sunburn, and the X rays doctors use to view a person’s bones.

The photon is an elementary particle, or a particle that cannot be split into anything smaller. It carries the electromagnetic force. One of the four fundamental forces of nature, between particles. The electromagnetic force occurs between charged particles or between magnetic materials and charged particles. Electrically charged particles attract or repel each other by exchanging photons back and forth.

Photons are particles with no electrical charge and no mass, but they do have energy and momentum, a property that allows photons to affect other particles when they collide with them. Photons travel at the speed of light, which is about 300,000 km/sec (about 186,000 mi/sec). Only objects without mass can travel at the speed of light. Objects with mass must travel at slower speeds, and nothing can travel at speeds faster than the speed of light.

The energy of a photon is equal to the product of a constant number called Planck’s constant multiplied by the frequency, or number of vibrations per second, of the photon. Scientists write the equation for a photon’s energy as E = hv, where h is Planck’s constant and v is the frequency. Photons with high frequencies, such as X rays, carry more energy than do photons with low frequencies, such as radio waves. Photons that are visible to the human eye have energy levels around one electron volt (eV) and frequencies from 10^14 to 10^15 Hz (hertz, or cycles per second). The number 10^14 is a 1 followed by 14 zeros. The frequency of visible photons corresponds to the colour of their light. Photons of violet light have the highest frequencies of visible light, while photons of red light have the lowest frequencies. Gamma rays, the highest-energy photons of all, have energies in the 1 GeV range (10^9 eV) and frequencies higher than 10^18 Hz. Gamma rays are produced only in special experimental devices called particle accelerators and in outer space.
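As a rough numerical illustration of E = hv, the short Python sketch below converts a photon's frequency into electron volts. The constants are standard physical values; the green-light frequency of about 5.5 × 10^14 Hz is an assumed example, not a figure from the text.

```python
# Photon energy from Planck's relation E = h * v.
# Constants are standard values; the example frequency (green light,
# roughly 5.5e14 Hz) is an illustrative assumption.

H = 6.626e-34             # Planck's constant, joule-seconds
JOULES_PER_EV = 1.602e-19 # joules in one electron volt

def photon_energy_ev(frequency_hz):
    """Energy of a photon of the given frequency, in electron volts."""
    return H * frequency_hz / JOULES_PER_EV

print(photon_energy_ev(5.5e14))   # visible photon: a little over 2 eV
print(photon_energy_ev(1.0e18))   # gamma-ray range: thousands of eV
```

The two calls mirror the text's claim that higher-frequency photons carry more energy: the gamma-ray frequency yields an energy thousands of times larger than that of visible light.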

Although momentum is usually considered a property of objects with mass, photons also have momentum. Momentum determines the amount of force, or pressure, that an object exerts when it hits a surface. In classical physics, or physics that deals with the behaviour of objects we encounter in everyday life, momentum is equal to the product of the mass of an object multiplied by its velocity (the combination of its speed and direction). While photons do not have mass, scientists have found that they exert extremely small amounts of pressure when they strike surfaces. Scientists have redefined momentum to include the force exerted by photons, called light pressure or radiation pressure.
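Because each photon carries momentum p = E/c, a fully absorbed beam of light of power P pushes on a surface with force F = P/c, a standard result that is consistent with, though not stated in, the paragraph above. The sketch below, with an assumed 1 kW beam, shows how tiny that radiation pressure is.

```python
# Radiation pressure: a photon's momentum is p = E / c, so a fully
# absorbed beam of power P exerts force F = P / c on a surface.
# The 1 kW beam power is an illustrative assumption.

C = 2.998e8   # speed of light, metres per second

def radiation_force_newtons(beam_power_watts):
    """Force on a surface that absorbs the whole beam."""
    return beam_power_watts / C

print(radiation_force_newtons(1000.0))   # about 3e-6 N: far too small to feel
```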

Philosophers from as far back in history as the Greeks of the 5th century BC have thought about the nature of light. In the 1600's, scientists began to argue over whether light is made of particles or waves. In the 1860's, British physicist James Clerk Maxwell discovered electromagnetic waves, waves of electromagnetic energy that travel at the speed of light. He determined that light is made of these waves, and his theory seemed to settle the wave versus particle issue. His conclusion that light is made of waves is still valid. However, in 1900 German physicist Max Planck renewed the argument that light could also act like particles, and these particles became known as photons. He developed the idea of photons to explain why substances, when heated to higher and higher temperatures, would glow with light of different colours. The wave theory could not explain why the colours changed with temperature changes.

Most scientists did not pay attention to Planck’s theory until 1905, when German-born American physicist Albert Einstein used the idea of photons to explain an interaction he had studied called the photoelectric effect. In this interaction, light shining on the surface of a metal causes the metal to emit electrons. Electrons escape the metal by absorbing energy from the light. Einstein showed that light behaves as particles in this situation. If the light behaved like waves, each electron could absorb many light waves and gain ever more energy. He found, however, that a more intense beam of light, with more light waves, did not give each electron more energy. Instead, more light caused the metal to release more electrons, each of which had the same amount of energy. Each electron had to be absorbing a small piece of the light beam, or a particle of light, and all these pieces had the same amount of energy. A beam of light with a higher frequency contained pieces of light with more energy, so when electrons absorbed these particles, they too had more energy. This could only be explained using the photon view of radiation, in which each electron absorbs a single photon and gains enough energy to escape the metal.
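Einstein's reasoning can be sketched numerically: each electron absorbs one photon of energy hv, and escapes only if that energy exceeds the metal's binding energy (the work function). The work function value below, roughly that of sodium, is an illustrative assumption.

```python
# Photoelectric effect sketch: an electron absorbs a single photon and
# leaves the metal with kinetic energy h*v - W, where W is the metal's
# work function. The value W = 2.28 eV (about that of sodium) is an
# illustrative assumption.

H_EV = 4.136e-15   # Planck's constant in electron-volt seconds

def ejected_electron_energy(frequency_hz, work_function_ev=2.28):
    """Kinetic energy (eV) of an ejected electron, or None if a single
    photon is too weak to free it -- extra intensity never helps."""
    energy = H_EV * frequency_hz - work_function_ev
    return energy if energy > 0 else None

print(ejected_electron_energy(7.5e14))  # violet light frees electrons
print(ejected_electron_energy(4.5e14))  # red light does not: None
```

Note that frequency, not beam intensity, is the only input: a brighter red beam would eject more of nothing, exactly the behaviour that ruled out the pure wave picture.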

Today scientists believe that light behaves both as a wave and as a particle. Scientists detect photons as discrete particles, and photons interact with matter as particles. However, light travels in the form of waves. Some experiments reveal the wave properties of light; for example, in diffraction, light spreads out from a small opening in waves, much like waves of water would behave. Other experiments, such as Einstein’s study of the photoelectric effect, reveal light’s particle properties.

Most closely identified with quantum theory is the uncertainty principle, which states that it is impossible to specify simultaneously, with precision, both the position and the momentum of a particle such as an electron. Also called the indeterminacy principle, it further states that a more accurate determination of one quantity results in a less precise measurement of the other, and that the product of the two uncertainties is never less than Planck's constant, named after the German physicist Max Planck. Although of very small magnitude, the uncertainty results from the fundamental nature of the particles being observed. In quantum mechanics, probability calculations therefore replace the exact calculations of classical mechanics.
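In its precise modern form the bound is written Δx · Δp ≥ h / 4π. The sketch below uses that form to show why confining a particle tightly forces a large spread in its momentum; the atom-sized confinement length is an assumed example.

```python
# Heisenberg bound: dx * dp >= h / (4 * pi). Given a position
# uncertainty dx, return the smallest possible momentum uncertainty.
import math

H = 6.626e-34   # Planck's constant, joule-seconds

def min_momentum_uncertainty(dx_metres):
    """Minimum momentum uncertainty (kg*m/s) for a given position spread."""
    return H / (4 * math.pi * dx_metres)

# Confining an electron to an atom-sized region (~1e-10 m, an assumed
# example) forces a momentum spread of order 5e-25 kg*m/s:
print(min_momentum_uncertainty(1e-10))
```

Halving the confinement region doubles the minimum momentum spread, which is why the uncertainty only becomes noticeable at atomic scales.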

Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle was of great significance in the development of quantum mechanics. The philosophical implications of its indeterminacy created a strong trend of mysticism among scientists who interpreted the concept as a violation of the fundamental law of cause and effect. Other scientists, including Albert Einstein, believed that the uncertainty involved in observation in no way contradicted the existence of laws governing the behaviour of the particles or the ability of scientists to discover these laws.

By way of final summation, science is a systematic study of anything that can be examined, tested, and verified. The word science is derived from the Latin word scire, meaning ‘to know.’ From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.

Science develops through objective analysis, instead of through personal belief. Knowledge gained in science accumulates as time goes by, building on work performed earlier. Some of this knowledge—such as our understanding of numbers—stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge - such as our understanding of genes that cause cancer or of quarks (the smallest known building block of matter) - dates back less than 50 years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.

During scientific investigations, scientists put together and compare new discoveries and existing knowledge. In most cases, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances that have been made in physics since 1676, this simple law still holds true.
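Hooke's proportionality can be stated in a single line of code; the spring constant used below is an assumed value for illustration, not one from the text.

```python
# Hooke's law: extension is proportional to the applied force, x = F / k.
# The spring constant k = 200 N/m is an illustrative assumption.

def extension_metres(force_newtons, k=200.0):
    """Extension of a spring of stiffness k under the given force."""
    return force_newtons / k

# Doubling the force doubles the stretch -- the proportionality Hooke found.
print(extension_metres(5.0), extension_metres(10.0))
```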

Scientists utilize existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further by describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when the elements were discovered several years later, his predictions proved to be correct.

In science, important advances can also be made when current ideas are shown to be wrong. A classic case of this occurred early in the 20th century, when the German geologist Alfred Wegener suggested that the continents were at one time connected, a theory known as continental drift. At the time, most geologists discounted Wegener's ideas, because the Earth's crust seemed to be fixed. But following the discovery of plate tectonics in the 1960s, in which scientists found that the Earth’s crust is actually made of moving plates, continental drift became an important part of geology.

Through advances like these, scientific knowledge is constantly added to and refined. As a result, science gives us an ever more detailed insight into the way the world around us works.

For a large part of recorded history, science had little bearing on people's everyday lives. Scientific knowledge was gathered for its own sake, and it had few practical applications. However, with the dawn of the Industrial Revolution in the 18th century, this rapidly changed. Today, science has a profound effect on the way we live, largely through technology - the use of scientific knowledge for practical purposes.

Some forms of technology have become so well established that it is easy to forget the great scientific achievements that they represent. The refrigerator, for example, owes its existence to a discovery that liquids take in energy when they evaporate, a phenomenon known as latent heat. The principle of latent heat was first exploited in a practical way in 1876, and the refrigerator has played a major role in maintaining public health ever since. The first automobile, dating from the 1880's, made use of many advances in physics and engineering, including reliable ways of generating high-voltage sparks, while the first computers emerged in the 1940's from simultaneous advances in electronics and mathematics.

Other fields of science also play an important role in the things we use or consume every day. Research in food technology has created new ways of preserving and flavouring what we eat. Research in industrial chemistry has created a vast range of plastics and other synthetic materials, which have thousands of uses in the home and in industry. Synthetic materials are easily formed into complex shapes and can be used to make machine, electrical, and automotive parts, scientific and industrial instruments, decorative objects, containers, and many other items.

Alongside these achievements, science has also brought about technology that helps save human life. The kidney dialysis machine enables many people to survive kidney diseases that would once have proved fatal, and artificial valves allow sufferers of coronary heart disease to return to active living. Biochemical research is responsible for the antibiotics and vaccinations that protect us from infectious diseases, and for a wide range of other drugs used to combat specific health problems. As a result, the majority of people on the planet now live longer and healthier lives than ever before.

However, scientific discoveries can also have a negative impact on human affairs. Over the last hundred years, some of the technological advances that make life easier or more enjoyable have proved to have unwanted and often unexpected long-term effects. Industrial and agricultural chemicals pollute the global environment, even in places as remote as Antarctica, and city air is contaminated by toxic gases from vehicle exhausts. The increasing pace of innovation means that products become rapidly obsolete, adding to a rising tide of waste. Most significantly of all, the burning of fossil fuels such as coal, oil, and natural gas releases into the atmosphere carbon dioxide and other substances known as greenhouse gases. These gases have altered the composition of the entire atmosphere, producing global warming and the prospect of major climate change in years to come.

Science has also been used to develop technology that raises complex ethical questions. This is particularly true in the fields of biology and medicine. Research involving genetic engineering, cloning, and in vitro fertilization gives scientists the unprecedented power to bring about new life, or to devise new forms of living things. At the other extreme, science can also generate technology that is deliberately designed to harm or to kill. The fruits of this research include chemical and biological warfare, and nuclear weapons, by far the most destructive weapons that the world has ever known.

Scientific research can be divided into basic science, also known as pure science, and applied science. In basic science, scientists working primarily at academic institutions pursue research simply to satisfy the thirst for knowledge. In applied science, scientists at industrial corporations conduct research to achieve some kind of practical or profitable gain.

In practice, the division between basic and applied science is not always clear-cut. This is because discoveries that initially seem to have no practical use often develop one as time goes by. For example, superconductivity, the ability to conduct electricity with no resistance, was little more than a laboratory curiosity when Dutch physicist Heike Kamerlingh Onnes discovered it in 1911. Today superconducting electromagnets are used in an ever-increasing number of important applications, from diagnostic medical equipment to powerful particle accelerators.

Scientists study the origin of the solar system by analysing meteorites and collecting data from satellites and space probes. They search for the secrets of life processes by observing the activity of individual molecules in living cells. They observe the patterns of human relationships in the customs of aboriginal tribes. In each of these varied investigations the questions asked and the means employed to find answers are different. All the inquiries, however, share a common approach to problem solving known as the scientific method. Scientists may work alone or they may collaborate with other scientists. In all cases, a scientist’s work must measure up to the standards of the scientific community. Scientists submit their findings to science forums, such as science journals and conferences, in order to subject the findings to the scrutiny of their peers.

Whatever the aim of their work, scientists use the same underlying steps to organize their research: (1) they make detailed observations about objects or processes, either as they occur in nature or as they take place during experiments; (2) they collect and analyse the information observed; and (3) they formulate a hypothesis that explains the behaviour of the phenomena observed.

A scientist begins an investigation by observing an object or an activity. Observation typically involves one or more of the human senses - hearing, sight, smell, taste, and touch. Scientists typically use tools to aid in their observations. For example, a microscope helps view objects too small to be seen with the unaided human eye, while a telescope helps view objects too far away to be seen by the unaided eye.

Scientists typically apply their observation skills to an experiment. An experiment is any kind of trial that enables scientists to control and change at will the conditions under which events occur. It can be something extremely simple, such as heating a solid to see when it melts, or something highly complex, such as bouncing a radio signal off the surface of a distant planet. Scientists typically repeat experiments, sometimes many times, in order to be sure that the results were not affected by unforeseen factors.

Most experiments involve real objects in the physical world, such as electric circuits, chemical compounds, or living organisms. However, with the rapid progress in electronics, computer simulations can now carry out some experiments instead. If they are carefully constructed, these simulations or models can accurately predict how real objects will behave.

One advantage of a simulation is that it allows experiments to be conducted without any risks. Another is that it can alter the apparent passage of time, speeding up or slowing natural processes. This enables scientists to investigate things that happen very gradually, such as evolution in simple organisms, or ones that happen almost instantaneously, such as collisions or explosions.

During an experiment, scientists typically make measurements and collect results as they work. This information, known as data, can take many forms. Data may be a set of numbers, such as daily measurements of the temperature in a particular location, or a description of side effects in an animal that has been given an experimental drug. Scientists typically use computers to arrange data in ways that make the information easier to understand and analyse. Data may be arranged into a diagram such as a graph that shows how one quantity (body temperature, for instance) varies in relation to another quantity (days since starting a drug treatment). A scientist flying in a helicopter may collect information about the location of a migrating herd of elephants in Africa during different seasons of a year. The data collected may be in the form of geographic coordinates that can be plotted on a map to provide the position of the elephant herd at any given time during a year.

Scientists use mathematics to analyse the data and help them interpret their results. The types of mathematics used include statistics, which is the analysis of numerical data, and probability, which calculates the likelihood that any particular event will occur.
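A minimal sketch of the two tools just named: statistics summarizes numerical data, and probability estimates how likely an outcome is. The temperature readings and the dice question are illustrative assumptions, not data from the text.

```python
# Statistics summarizes data; probability quantifies how likely an
# event is. Both examples below use assumed, illustrative numbers.
import statistics

readings = [21.2, 20.8, 21.5, 22.1, 20.9]   # daily temperatures, deg C
print(statistics.mean(readings))             # the average reading
print(statistics.stdev(readings))            # how spread out they are

# Probability: the chance of at least one six in four rolls of a fair
# die is one minus the chance of no sixes at all.
p = 1 - (5 / 6) ** 4
print(round(p, 3))   # slightly better than even odds
```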

Once an experiment has been carried out and data collected and analysed, scientists look for whatever pattern their results produce and try to formulate a hypothesis that explains all the facts observed in an experiment. In developing a hypothesis, scientists employ methods of induction to generalize from the experiment’s results to predict future outcomes, and deduction to infer new facts from experimental results.

Formulating a hypothesis may be difficult for scientists because there may not be enough information provided by a single experiment, or the experiment’s conclusion may not fit old theories. Sometimes scientists do not have any prior idea of a hypothesis before they start their investigations, but often scientists start out with a working hypothesis that will be proved or disproved by the results of the experiment. Scientific hypotheses can be useful, just as hunches and intuition can be useful in everyday life. But they can also be problematic because they tempt scientists, either deliberately or unconsciously, to favour data that support their ideas. Scientists generally take great care to avoid bias, but it remains an ever-present threat. Throughout the history of science, numerous researchers have fallen into this trap, either in the hope of self-advancement or because they firmly believe their ideas to be true.

If a hypothesis is borne out by repeated experiments, it becomes a theory—an explanation that seems to fit with the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler’s theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. But studies performed by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930's repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.

If other scientists do not have access to scientific results, the research may as well not have been performed at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results with other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.

In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.

Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations - bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.

Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected for publication.

The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other’s work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.

Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.

When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940's. McClintock discovered a new phenomenon in corn cells known as transposable genes, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960's when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in physiology or medicine for her work in transposable genes, more than 35 years after performing the research.

In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.

The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and statesman Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.

In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups. Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs, and what effects climate change is likely to have on humans and their environment.

Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research. For additional information on individual sciences, refer to separate articles highlighted in the text.

The mathematical sciences investigate the relationships between things that can be measured or quantified in either a real or abstract form. Pure mathematics differs from other sciences because it deals solely with logic, rather than with nature's underlying laws. However, because it can be used to solve so many scientific problems, mathematics is usually considered to be a science itself.

Central to mathematics is arithmetic, the use of numbers for calculation. In arithmetic, mathematicians combine specific numbers to produce a result. A separate branch of mathematics, called algebra, works in a similar way, but uses general expressions that apply to numbers as a whole. For example, if there are three separate items on a restaurant bill, simple arithmetic produces the total amount to be paid. But the total can also be calculated by using an algebraic formula. A powerful and flexible tool, algebra enables mathematicians to solve highly complex problems in every branch of science.
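The restaurant-bill example above can be made concrete in a few lines: arithmetic sums specific numbers, while algebra captures the same total as a general formula that works for any items. The prices used are illustrative assumptions.

```python
# Arithmetic: combine specific numbers to get one result.
total = 12.50 + 8.00 + 4.75

# Algebra: the same total as a general expression, total = a + b + c,
# valid for ANY three item prices, not just these.
def bill_total(a, b, c):
    return a + b + c

print(total)
print(bill_total(12.50, 8.00, 4.75))   # same answer, by the general rule
```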

Geometry investigates objects and the spaces around them. In its simplest form, it deals with objects in two or three dimensions, such as lines, circles, cubes, and spheres. Geometry can be extended to cover abstractions, including objects in many dimensions. Although we cannot perceive these extra dimensions ourselves, the logic of geometry still holds.

In geometry, it is easy to work out the exact area of a rectangle or the gradient (slope) of a line, but there are some problems that geometry cannot solve by conventional means. For example, geometry cannot calculate the exact gradient at a point on a curve, or the area that the curve bounds. Scientists find that calculating quantities like this helps them understand physical events, such as the speed of a rocket at any particular moment during its acceleration.

To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus helped develop were the determination of Newton’s laws of motion and the theory of electromagnetism.
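A minimal numerical sketch of the idea: the gradient at a point on a curve, which the passage above notes geometry alone cannot deliver, emerges from the calculus limit of a difference quotient as the step shrinks. The curve f(x) = x² is an assumed example; its exact derivative at x is 2x.

```python
# Calculus sketch: estimate the gradient at a point on a curve with a
# central difference, (f(x + h) - f(x - h)) / (2h), for a small step h.
# The curve x**2 is an illustrative choice with known derivative 2x.

def gradient_at(f, x, h=1e-6):
    """Central-difference estimate of the derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

slope = gradient_at(lambda x: x ** 2, 3.0)
print(slope)   # close to the exact value 2 * 3 = 6
```

The same estimator, applied to a rocket's position as a function of time, would yield its speed at any instant, the kind of quantity the text says such methods unlock.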

The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.

Other branches of physics focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy carried in electromagnetic waves. These include radio waves, light rays, and X rays - forms of energy that are closely related and that all obey the same set of rules.

Chemistry is the study of the composition of matter and the way different substances interact—subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to synthesize from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with less harmful side effects.

The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions they use to build themselves up. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology - one of the fastest-growing sciences today.

Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens in general, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.

The earth sciences examine the structure and composition of our planet, and the physical processes that have helped to shape it. Geology focuses on the structure of Earth, while geography is the study of everything on the planet's surface, including the physical changes that humans have brought about from, for example, farming, mining, or deforestation. Scientists in the field of geomorphology study Earth's present landforms, while mineralogists investigate the minerals in Earth's crust and the way they formed.

Water dominates Earth's surface, making it an important subject for scientific research. Oceanographers carry out research in the oceans, while scientists working in the field of hydrology investigate water resources on land, a subject of vital interest in areas prone to drought. Glaciologists study Earth's icecaps and mountain glaciers, and the effects that ice has when it forms, melts, or moves. In atmospheric science, meteorology deals with day-to-day changes in weather, but climatology investigates changes in weather patterns over the longer term.

When living things die their remains are sometimes preserved, creating a rich store of scientific information. Palaeontology is the study of plant and animal remains that have been preserved in sedimentary rock, often millions of years ago. Palaeontologists study things long dead, and their findings shed light on the history of evolution and on the origin and development of humans. A related science, called palynology, is the study of fossilized spores and pollen grains. Scientists study these tiny structures to learn the types of plants that grew in certain areas during Earth’s history, which also helps identify what Earth’s climates were like in the past.

The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of the microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.

Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.

While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.

Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life - the fact that most living things maintain a steady internal state even when the environment around them constantly changes.

Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.

As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.

The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.

Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share and those that are the products of local culture, learned and handed on from generation to generation.

The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.

In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. In general, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.

Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.

In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine.

Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.

During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. But with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.

The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.

Because clay is durable, many of these ancient tablets still survive. They show that when writing first appeared the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.

Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus, or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians - a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.

For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. But in ancient Greece, often recognized as the birthplace of Western science, a new kind of scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.

Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.

Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit - not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter is made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.

As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens - the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle - students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.

In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within 1 percent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.
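Eratosthenes' method can be sketched in a few lines of code. The figures used here are the commonly cited ones (a 7.2-degree difference in the Sun's noon altitude between Syene and Alexandria, about 5,000 stadia apart); they are given for illustration and do not appear in the text above.

```python
# Eratosthenes' geometry: if two cities lie on roughly the same meridian,
# the difference in the Sun's noon altitude between them equals the arc
# of Earth's circumference that separates them.
def circumference_from_arc(angle_deg, arc_length):
    """Scale an arc spanning `angle_deg` degrees up to a full 360-degree circle."""
    return 360.0 / angle_deg * arc_length

# Commonly cited (illustrative) figures: 7.2 degrees, 5,000 stadia.
print(circumference_from_arc(7.2, 5000))  # about 250,000 stadia
```

Since 7.2 degrees is one-fiftieth of a full circle, the distance between the two cities is simply multiplied by fifty.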

By the 1st century BC, Roman power was growing and Greek influence had begun to wane. Under Roman rule, in the 2nd century AD, the Alexandrian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science advanced little in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.

For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300s, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.

Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of pi to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about 940.

The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwārizmī introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in 30. Al-Khwārizmī also wrote on algebra (the word itself derives from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
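The place-value idea described above can be made concrete with a short illustration (the function name is ours, chosen for this sketch):

```python
def digit_values(n):
    """Value contributed by each digit of a Hindu-Arabic numeral.

    Each position is worth ten times the position to its right - the
    place-value principle, with zero acting as a placeholder.
    """
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - 1 - i) for i, d in enumerate(digits)]

print(digit_values(300))  # [300, 0, 0]: here the numeral 3 is worth 300
print(digit_values(30))   # [30, 0]: the same numeral is worth ten times less
```

Roman numerals, by contrast, assign each symbol a fixed value regardless of position, which is what made written arithmetic with them so cumbersome.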

In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used - alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous physicists of the Islamic world, Alhazen, who worked in Egypt, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.

In Europe, historians often attribute the rebirth of science to a political event—the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. With the invention of the movable-type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.

The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. But in 1543 two books were published that had a profound impact on scientific progress. One was De Humani Corporis Fabrica (On the Structure of the Human Body, 7 volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.

The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as proposed by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600s, the church placed the book on its list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.

In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.

These observations of Venus helped to convince Galileo that Copernicus’s Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.

In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.
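Galileo's observation that a pendulum's period depends only on its length was later given exact form in the small-angle formula T = 2π√(L/g). The formula is a later result (due to Christiaan Huygens, not mentioned in the text above) and is shown here only to illustrate why the pendulum suited timekeeping:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A pendulum about one metre long beats with a period of roughly two
# seconds - steady enough, as Galileo saw, to regulate a clock.
print(round(pendulum_period(1.0), 2))
```

Note that the mass of the bob does not appear in the formula at all: only the length (and local gravity) sets the beat.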

Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.

But the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth’s tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.
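Newton's "precisely defined and predictable force" is his law of universal gravitation, expressed here in modern notation (the symbols are the standard ones, not drawn from the text above):

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

Here F is the attractive force between two bodies of masses m_1 and m_2, r is the distance between their centres, and G is the gravitational constant, a quantity not measured until long after Newton's time. Because the same law governs a falling apple and the orbiting Moon, it unified terrestrial and celestial mechanics.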

Newton’s work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began actively to apply rational thought, careful observation, and experimentation to solve a variety of problems.

Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.

By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.

With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not mean that discoveries became narrower in scope: from the 19th century onward, research began to uncover principles that unite the universe as a whole.

In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions - a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.

Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.

In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing, organic compounds from inorganic ingredients, a breakthrough that opened up an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.

In physics, the 19th century is remembered chiefly for research into electricity and magnetism, pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1821 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment and others he performed led to the development of electric motors and generators. While Faraday’s genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's famous equations, devised in 1864, use mathematics to explain the interactions between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well. With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X-rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.
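In the compact vector notation used today (a later reformulation, due largely to Oliver Heaviside, rather than Maxwell's original 1864 form), Maxwell's equations read:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Combining the two curl equations in empty space yields a wave equation whose speed, 1/√(μ₀ε₀), matched the measured speed of light; this is the calculation that led Maxwell to identify light as an electromagnetic wave.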

As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.

In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own solar system. He did this with the Leviathan, a 183-cm (72-in) reflecting telescope, built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.

In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880s Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.

Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. But the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that still has not subsided. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.

In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.

At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940s American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. From these experiments, it became clear that DNA is the chemical that makes up genes and thus the key to heredity.

After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.

At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine's chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970s, and in the United States the number of polio cases dropped from 38,000 in the 1950s to fewer than 10 a year by the 21st century.

By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. But by the 1980s the medical community's confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause hemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.

In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of normal or genetically altered genes into a patient's cells replaces nonfunctional or missing genes.

Improved drugs and new tools have made routine many surgical operations that were once considered impossible. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibreoptic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as telemedicine, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.

In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind.’ In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.

The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.

In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In the mid-1920s Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920s and 1930s American electronic engineer Vladimir Kosma Zworykin significantly improved the television's picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.

In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller, far less expensive, and considerably more reliable than triodes, and they require less power to operate. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.

During the 1950s and early 1960s minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. But the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.

Further miniaturization led in 1971 to the first microprocessor - a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today's personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950s. Once used only by large businesses, computers are now used by professionals, small retailers, and students to perform a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to interface with worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.

During the early 1950s public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth's near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.

When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960s NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960s and 1970s, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth's solar system.

In the 1970s through the 1990s, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.

In 1900 the German physicist Max Planck proposed the then sensational idea that energy is not infinitely divisible but is always given off in set amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light. This, together with Einstein's special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
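Einstein's photoelectric relation can be put into a few lines of arithmetic: a photon of frequency f carries energy hf, and an ejected electron keeps at most hf minus the metal's work function. The sketch below is illustrative; the 4.3 eV work function (roughly that of zinc) and the ultraviolet frequency are assumed values, not figures from the text.

```python
# Photoelectric effect: E = h*f per photon; K_max = h*f - phi,
# where phi is the metal's work function. Below threshold, no electrons.
H = 6.626e-34        # Planck's constant, J*s
EV = 1.602e-19       # joules per electron volt

def photon_energy_ev(frequency_hz):
    """Energy of one quantum of light, in electron volts."""
    return H * frequency_hz / EV

def max_electron_energy_ev(frequency_hz, work_function_ev):
    """Maximum kinetic energy of a photoelectron; zero below threshold."""
    return max(0.0, photon_energy_ev(frequency_hz) - work_function_ev)

# Ultraviolet light at 1.5e15 Hz on a 4.3 eV metal ejects ~1.9 eV
# electrons; dimmer light of the same frequency ejects fewer electrons,
# but never more energetic ones - the quantum puzzle Einstein solved.
print(round(max_electron_energy_ev(1.5e15, 4.3), 2))   # 1.9
```

Note that raising the light's intensity changes only the number of photons, not the energy carried by each one, which is why the classical wave picture failed here.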

Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known - an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. But while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world - that is, the one in which we live.
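Heisenberg's principle can be stated quantitatively: the product of the uncertainties in position and momentum can never fall below half the reduced Planck constant. A minimal sketch, with the confinement distances chosen purely for illustration:

```python
# Heisenberg's relation: delta_x * delta_p >= hbar / 2. Confining a
# particle to a smaller region forces a larger spread in its momentum.
HBAR = 1.055e-34     # reduced Planck constant, J*s

def min_momentum_spread(delta_x_m):
    """Smallest possible momentum uncertainty (kg*m/s) for a particle
    localized to within delta_x_m metres."""
    return HBAR / (2.0 * delta_x_m)

# An electron confined to an atom-sized region (1e-10 m) must carry a
# momentum spread of at least ~5e-25 kg*m/s; for a dust grain localized
# to a millimetre the bound is ten million times smaller still, which
# is why the uncertainty principle is invisible in everyday life.
print(min_momentum_spread(1e-10))
print(min_momentum_spread(1e-3))
```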

In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission as both an energy source and a weapon.

These fission studies, coupled with the development of particle accelerators in the 1950s, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, scientists now know that atoms are made up of 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.

Advances in particle physics have been closely linked to progress in cosmology. From the 1920s onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
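Hubble's finding is usually summarized as a simple proportionality, v = H0 × d: the farther away a galaxy lies, the faster it recedes. A small sketch, assuming the commonly quoted round value of 70 km/s per megaparsec for the Hubble constant (an illustrative figure, not one from the text):

```python
# Hubble's law: recession velocity grows in proportion to distance,
# v = H0 * d. H0 is assumed here to be 70 km/s per megaparsec.
H0 = 70.0  # km/s per megaparsec (illustrative round value)

def recession_velocity_km_s(distance_mpc):
    """Recession velocity in km/s for a galaxy distance_mpc away."""
    return H0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at about 7,000 km/s. Running
# the expansion backward, 1/H0 gives a rough age for the universe,
# which is how estimates like the '10 to 20 billion years' arise.
print(recession_velocity_km_s(100))   # 7000.0
```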

Particle accelerators are devices used in physics to accelerate charged elementary particles or ions to high energies. Particle accelerators today are some of the largest and most expensive instruments used by physicists. They all have the same three basic parts: a source of elementary particles or ions, a tube pumped to a partial vacuum in which the particles can travel freely, and some means of speeding up the particles.

Charged particles can be accelerated by an electrostatic field. For example, by placing electrodes with a large potential difference at each end of an evacuated tube, British scientists John D. Cockcroft and Ernest Thomas Sinton Walton were able to accelerate protons to 250,000 eV. Another electrostatic accelerator is the Van de Graaff accelerator, which was developed in the early 1930s by the American physicist Robert Jemison Van de Graaff. This accelerator uses the same principles as the Van de Graaff generator. The Van de Graaff accelerator builds up a potential between two electrodes by transporting charges on a moving belt. Modern Van de Graaff accelerators can accelerate particles to energies as high as 15 MeV (15 million electron volts).
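The energy scale used throughout this article follows from one rule: a particle of charge q crossing a potential difference V gains kinetic energy qV, and in electron volts the arithmetic is deliberately simple. A minimal illustration:

```python
# One elementary charge falling through one volt gains exactly one eV
# (1 eV = 1.602e-19 joules); charge q through V volts gains q*V eV.
EV_IN_JOULES = 1.602e-19

def energy_gain_ev(charge_in_elementary_units, volts):
    """Kinetic energy gained, in eV."""
    return charge_in_elementary_units * volts

def energy_gain_joules(charge_in_elementary_units, volts):
    return energy_gain_ev(charge_in_elementary_units, volts) * EV_IN_JOULES

# Cockcroft and Walton's 250,000-volt column gives a proton 250,000 eV;
# an alpha particle (charge 2) through the same gap would gain double.
print(energy_gain_ev(1, 250_000))   # 250000
print(energy_gain_ev(2, 250_000))   # 500000
```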

Another machine, first conceived in the late 1920s, is the linear accelerator, or linac, which uses alternating voltages of high magnitude to push particles along in a straight line. Particles pass through a line of hollow metal tubes enclosed in an evacuated cylinder. An alternating voltage is timed so that a particle is pushed forward each time it goes through a gap between two of the metal tubes. Theoretically, a linac of any energy can be built. The largest linac in the world, at Stanford University, is 3.2 km (2 mi) long. It is capable of accelerating electrons to an energy of 50 GeV (50 billion, or giga, electron volts). Stanford's linac is designed to collide two beams of particles accelerated on different tracks of the accelerator.
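The drift-tube timing described above can be sketched with non-relativistic mechanics: the field alternates at a fixed frequency f, so tube n must be long enough for the particle to hide inside it for half a cycle, roughly L_n = v_n/(2f), where v_n is the speed after n accelerating gaps. The 1 MV per gap and 200 MHz figures below are illustrative assumptions, not the Stanford machine's actual parameters.

```python
import math

# Classical drift-tube sketch for protons: speed after n gaps from
# kinetic energy n*q*V, then tube length L_n = v_n / (2*f).
PROTON_MASS = 1.673e-27   # kg
EV = 1.602e-19            # J per eV

def drift_tube_length_m(n_gaps, ev_per_gap, frequency_hz):
    """Length of the n-th drift tube (non-relativistic approximation)."""
    kinetic_j = n_gaps * ev_per_gap * EV
    speed = math.sqrt(2.0 * kinetic_j / PROTON_MASS)
    return speed / (2.0 * frequency_hz)

# The tubes lengthen as the square root of n: tube 4 must be exactly
# twice as long as tube 1 to stay in step with the quickening protons.
print(drift_tube_length_m(1, 1e6, 200e6))
print(drift_tube_length_m(4, 1e6, 200e6))
```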

The American physicist Ernest O. Lawrence won the 1939 Nobel Prize in physics for a breakthrough in accelerator design in the early 1930s. He developed the cyclotron, the first circular accelerator. A cyclotron is somewhat like a linac wrapped into a tight spiral. Instead of many tubes, the machine has only two hollow vacuum chambers, called dees, that are shaped like capital letter Ds back to back. A magnetic field, produced by a powerful electromagnet, keeps the particles moving in a circle. Each time the charged particles pass through the gap between the dees, they are accelerated. As the particles gain energy, they spiral out toward the edge of the accelerator until they gain enough energy to exit the accelerator. The world's most powerful cyclotron, the K1200, began operating in 1988 at the National Superconducting Cyclotron Laboratory at Michigan State University. The machine is capable of accelerating nuclei to an energy approaching 8 GeV.
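The cyclotron works because, as long as the particles stay non-relativistic, the revolution frequency f = qB/(2πm) does not depend on their speed: faster particles travel on proportionally larger circles, so a fixed-frequency voltage across the gap stays in step with them. A sketch with an assumed 1.5-tesla field:

```python
import math

# Cyclotron frequency f = q*B / (2*pi*m): independent of speed in the
# non-relativistic regime, which is what makes the cyclotron possible.
E_CHARGE = 1.602e-19      # C
PROTON_MASS = 1.673e-27   # kg

def cyclotron_frequency_hz(b_tesla, mass_kg=PROTON_MASS, charge_c=E_CHARGE):
    return charge_c * b_tesla / (2.0 * math.pi * mass_kg)

# Protons in a 1.5 T magnet circle about 23 million times per second,
# and doubling the field doubles the frequency.
print(cyclotron_frequency_hz(1.5))
```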

When nuclear particles in a cyclotron gain an energy of 20 MeV or more, they become appreciably more massive, as predicted by the theory of relativity. This tends to slow them and throws the acceleration pulses at the gaps between the dees out of phase. A solution to this problem was suggested in 1945 by the Soviet physicist Vladimir I. Veksler and the American physicist Edwin M. McMillan. The solution, the synchrocyclotron, is sometimes called the frequency-modulated cyclotron. In this instrument, the oscillator (radio-frequency generator) that accelerates the particles around the dees is automatically adjusted to stay in step with the accelerated particles; as the particles gain mass, the frequency of accelerations is lowered slightly to keep in step with them. As the maximum energy of a synchrocyclotron increases, so must its size, for the particles must have more space in which to spiral. The largest synchrocyclotron is the 600-cm (236-in) phasotron at the Dubna Joint Institute for Nuclear Research in Russia; it accelerates protons to more than 700 MeV and has magnets weighing 6984 metric tons (7200 tons).

When electrons are accelerated, they undergo a large increase in mass at a relatively low energy. At 1 MeV of kinetic energy, an electron weighs nearly three times as much as an electron at rest. Synchrocyclotrons cannot be adapted to make allowance for such large increases in mass. Therefore, another type of cyclic accelerator, the betatron, is employed to accelerate electrons. The betatron consists of a doughnut-shaped evacuated chamber placed between the poles of an electromagnet. The electrons are kept in a circular path by a magnetic field called a guide field. By applying an alternating current to the electromagnet, the electromotive force induced by the changing magnetic flux through the circular orbit accelerates the electrons. During operation, both the guide field and the magnetic flux are varied to keep the radius of the orbit of the electrons constant.
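The relativistic mass growth quoted above follows from the factor γ = 1 + T/(mc²), where T is the particle's kinetic energy and mc² its rest energy. A sketch using the standard rest energies in MeV, contrasting the light electron with the much heavier proton:

```python
# Relativistic mass growth: a particle with kinetic energy T has
# gamma = 1 + T / (m*c^2); its effective mass is gamma times its rest
# mass. Rest energies below are the standard values in MeV.
ELECTRON_REST_MEV = 0.511
PROTON_REST_MEV = 938.3

def gamma(kinetic_mev, rest_mev):
    """Ratio of a moving particle's mass-energy to its rest mass."""
    return 1.0 + kinetic_mev / rest_mev

# A 1 MeV electron is already close to three times heavier than at
# rest, while a 1 MeV proton has barely changed - which is why
# electrons outgrow the synchrocyclotron's correction scheme so fast.
print(round(gamma(1.0, ELECTRON_REST_MEV), 2))
print(round(gamma(1.0, PROTON_REST_MEV), 4))
```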

The synchrotron is the most recent and most powerful member of the accelerator family. A synchrotron consists of a tube in the shape of a large ring through which the particles travel; the tube is surrounded by magnets that keep the particles moving through the centre of the tube. The particles enter the tube after having already been accelerated to several million electron volts. Particles are accelerated at one or more points on the ring each time the particles make a complete circle around the accelerator. To keep the particles in a rigid orbit, the strengths of the magnets in the ring are increased as the particles gain energy. In a few seconds, the particles reach energies greater than 1 GeV and are ejected, either directly into experiments or toward targets that produce a variety of elementary particles when struck by the accelerated particles. The synchrotron principle can be applied to either protons or electrons, although most of the large machines are proton-synchrotrons.
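The reason the ring magnets must strengthen as the beam gains energy is the bending relation B = p/(qρ): for a fixed orbit radius ρ, the field must track the particle's momentum p. A sketch for a highly relativistic proton (where pc is very nearly the total energy), with the 1,000 m radius chosen purely for illustration:

```python
# Dipole field needed to hold momentum p on radius rho: B = p/(q*rho).
# For an ultrarelativistic proton, p is well approximated by E/c.
E_CHARGE = 1.602e-19   # C
C = 2.998e8            # m/s

def bending_field_tesla(energy_gev, radius_m):
    """Approximate bending field for an ultrarelativistic proton."""
    momentum_si = energy_gev * 1e9 * E_CHARGE / C   # p = E/c, kg*m/s
    return momentum_si / (E_CHARGE * radius_m)

# A 500 GeV beam on a 1,000 m radius needs a field of about 1.7 T;
# doubling the energy at fixed radius doubles the required field,
# which is why the magnets are ramped up as the particles accelerate.
print(round(bending_field_tesla(500, 1000), 2))   # 1.67
```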

The first accelerator to exceed the 1 GeV mark was the cosmotron, a proton-synchrotron at Brookhaven National Laboratory, in Brookhaven, New York. The cosmotron was operated at 2.3 GeV in 1952 and later increased to 3 GeV. In the mid-1960s, two operating synchrotrons were regularly accelerating protons to energies of about 30 GeV. These were the Alternating Gradient Synchrotron at Brookhaven National Laboratory, and a similar machine near Geneva, Switzerland, operated by CERN (also known as the European Organization for Nuclear Research). By the early 1980s, the two largest proton-synchrotrons were a 500-GeV device at CERN and a similar one at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois. The capacity of the latter, called the Tevatron, was increased to a potential 1 TeV (trillion, or tera, eV) in 1983 by installing superconducting magnets, making it the most powerful accelerator in the world. In 1989, CERN began operating the Large Electron-Positron Collider (LEP), a 27-km (16.7-mi) ring that can accelerate electrons and positrons to an energy of 50 GeV.

A storage ring collider accelerator is a synchrotron that produces more energetic collisions between particles than a conventional synchrotron, which slams accelerated particles into a stationary target. A storage ring collider accelerates two sets of particles that rotate in opposite directions in the ring, then collides the two sets of particles. CERN's Large Electron-Positron Collider is a storage ring collider. In 1987, Fermilab converted the Tevatron into a storage ring collider and installed a three-story-high detector that observed and measured the products of the head-on particle collisions.
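The advantage of colliding beams can be made concrete. Two head-on beams of energy E deliver a centre-of-mass energy of 2E, while a single beam of energy E striking a stationary proton delivers only about the square root of 2E times the proton's rest energy, far less when E is large. A sketch:

```python
import math

# Centre-of-mass energy: 2*E for equal head-on beams, versus
# sqrt(2*E*m + 2*m^2) for a beam of energy E on a stationary target
# of rest energy m (all in GeV; proton rest energy m = 0.9383 GeV).
PROTON_REST_GEV = 0.9383

def cm_energy_collider_gev(beam_gev):
    return 2.0 * beam_gev

def cm_energy_fixed_target_gev(beam_gev, target_rest_gev=PROTON_REST_GEV):
    return math.sqrt(2.0 * beam_gev * target_rest_gev
                     + 2.0 * target_rest_gev ** 2)

# Two 500 GeV beams give 1000 GeV of collision energy; a single
# 500 GeV beam on a stationary proton gives only about 31 GeV,
# because most of the beam's energy goes into recoil, not collision.
print(cm_energy_collider_gev(500))
print(round(cm_energy_fixed_target_gev(500)))
```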

As powerful as today's storage ring colliders are, physicists need even more powerful devices to test today's theories. Unfortunately, building larger rings is extremely expensive. CERN is considering building the Large Hadron Collider (LHC) in the existing 27-km (16.7-mi) tunnel that currently houses the Large Electron-Positron Collider. In 1988, the United States began planning for the construction of the Superconducting Super Collider (SSC) near Waxahachie, Texas. The SSC was to be an enormous storage ring collider accelerator 87 km (54 mi) long. However, after about one-fifth of the tunnel had been completed, the Congress of the United States voted to cancel the project in October 1993, as a result of the accelerator's projected cost of more than $10 billion.

Accelerators are used to explore atomic nuclei, thereby allowing nuclear scientists to identify new elements and to explain phenomena that affect the entire nucleus. Machines exceeding 1 GeV are used to study the fundamental particles that compose the nucleus. Several hundred of these particles have been identified. High-energy physicists hope to discover rules or principles that will permit an orderly arrangement of the profusion of subnuclear particles. Such an arrangement would be as useful to nuclear science as the periodic table of the chemical elements is to chemistry. Fermilab's accelerator and collider detector permit scientists to study violent particle collisions that mimic the state of the universe when it was just microseconds old. Continued study of their findings should increase scientific understanding of the makeup of the universe.

Particle detectors are instruments used to detect and study fundamental nuclear particles. They range in complexity from the well-known portable Geiger counter to room-sized spark and bubble chambers.

One of the first detectors to be used in nuclear physics was the ionization chamber, which consists essentially of a closed vessel containing a gas and equipped with two electrodes at different electrical potentials. The electrodes, depending on the type of instrument, may consist of parallel plates or coaxial cylinders, or the walls of the chamber may act as one electrode and a wire or rod inside the chamber act as the other. When ionizing particles of radiation enter the chamber they ionize the gas between the electrodes. The ions that are thus produced migrate to the electrodes of opposite sign (negatively charged ions move toward the positive electrode, and vice versa), creating a current that may be amplified and measured directly with an electrometer - an electroscope equipped with a scale - or amplified and recorded by means of electronic circuits.
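The chamber current described above is simply charge per unit time: the number of particles arriving per second, times the ion pairs each one liberates, times the elementary charge. The rates below are illustrative, but they show why the signal must be amplified:

```python
# Steady ionization-chamber current:
#   I = (particles per second) * (ion pairs per particle) * e
E_CHARGE = 1.602e-19   # C

def chamber_current_amps(particles_per_s, ion_pairs_per_particle):
    return particles_per_s * ion_pairs_per_particle * E_CHARGE

# A million particles per second, each creating ~30,000 ion pairs,
# yields a current of only about 5 nanoamps - hence the electrometer
# or electronic amplification the text describes.
print(chamber_current_amps(1e6, 3e4))
```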

Ionization chambers adapted to detect individual ionizing particles of radiation are called counters. The Geiger-Müller counter is one of the most versatile and widely used instruments of this type. It was developed by the German physicist Hans Geiger from an instrument first devised by Geiger and the British physicist Ernest Rutherford; it was improved in 1928 by Geiger and by the German American physicist Walther Müller. The counting tube is filled with a gas or a mixture of gases at low pressure, the electrodes being the thin metal wall of the tube and a fine wire, usually made of tungsten, stretched lengthwise along the axis of the tube. A strong electric field maintained between the electrodes accelerates the ions; these then collide with atoms of the gas, detaching electrons and thus producing more ions. When the voltage is raised sufficiently, the rapidly increasing current produced by a single particle sets off a discharge throughout the counter. The pulse caused by each particle is amplified electronically and then actuates a loudspeaker or a mechanical or electronic counting device.

Detectors that enable researchers to observe the tracks that particles leave behind are called track detectors. Spark and bubble chambers are track detectors, as are the cloud chamber and nuclear emulsions. Nuclear emulsions resemble photographic emulsions but are thicker and not as sensitive to light. A charged particle passing through the emulsion ionizes silver grains along its track. These grains become black when the emulsion is developed and can be studied with a microscope.

The fundamental principle of the cloud chamber was discovered by the British physicist C. T. R. Wilson in 1896, although an actual instrument was not constructed until 1911. The cloud chamber consists of a vessel several centimetres or more in diameter, with a glass window on one side and a movable piston on the other. The piston can be dropped rapidly to expand the volume of the chamber. The chamber is usually filled with dust-free air saturated with water vapour. Dropping the piston causes the gas to expand rapidly and causes its temperature to fall. The air is now supersaturated with water vapour, but the excess vapour cannot condense unless ions are present. Charged nuclear or atomic particles produce such ions, and any such particles passing through the chamber leave behind them a trail of ionized particles upon which the excess water vapour will condense, thus making visible the course of the charged particle. These tracks can be photographed and the photographs then analysed to provide information on the characteristics of the particles.

Because the paths of electrically charged particles are bent or deflected by a magnetic field, and the amount of deflection depends on the energy of the particle, a cloud chamber is often operated within a magnetic field. The tracks of negatively and positively charged particles will curve in opposite directions. By measuring the radius of curvature of each track, the particle's momentum can be determined. Heavy nuclei such as alpha particles form thick and dense tracks, protons form tracks of medium thickness, and electrons form thin and irregular tracks. In a later refinement of Wilson's design, called a diffusion cloud chamber, a permanent layer of supersaturated vapour is formed between warm and cold regions. The layer of supersaturated vapour is continuously sensitive to the passage of particles, and the diffusion cloud chamber does not require the expansion of a piston for its operation. Although the cloud chamber has now been supplanted almost entirely by the bubble chamber and the spark chamber, it was used in making many important discoveries in nuclear physics.
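The curvature measurement rests on the relation p = qBr for a charged particle moving perpendicular to a magnetic field: the straighter the track, the higher the momentum. A sketch, with the 0.5 T chamber field chosen for illustration:

```python
# Momentum from track curvature: p = q * B * r. Converted to MeV/c,
# this reduces to the handy rule p[MeV/c] ~ 300 * B[T] * r[m] for a
# singly charged particle.
E_CHARGE = 1.602e-19   # C
C = 2.998e8            # m/s

def momentum_mev_per_c(radius_m, b_tesla, charge_c=E_CHARGE):
    """Momentum, in MeV/c, of a charged particle on radius_m."""
    p_si = charge_c * b_tesla * radius_m      # kg*m/s
    return p_si * C / E_CHARGE / 1e6          # convert to MeV/c

# A track curving with a 10 cm radius in a 0.5 T field corresponds to
# a momentum of about 15 MeV/c, whatever the particle's mass - the
# mass must be inferred separately, e.g. from track thickness.
print(round(momentum_mev_per_c(0.10, 0.5), 1))   # 15.0
```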

The bubble chamber, invented in 1952 by the American physicist Donald Glaser, is similar in operation to the cloud chamber. In a bubble chamber a liquid is momentarily superheated to a temperature just above its boiling point. For an instant the liquid will not boil unless some impurity or disturbance is introduced. High - energy particles provide such a disturbance. Tiny bubbles form along the tracks as these particles pass through the liquid. If a photograph is taken just after the particles have crossed the chamber, these bubbles will make visible the paths of the particles. As with the cloud chamber, a bubble chamber placed between the poles of a magnet can be used to measure the energies of the particles. Many bubble chambers are equipped with superconducting magnets instead of conventional magnets. Bubble chambers filled with liquid hydrogen allow the study of interactions between the accelerated particles and the hydrogen nuclei.

In a spark chamber, incoming high-energy particles ionize the air or a gas between plates and wire grids that are kept alternately positively and negatively charged. Sparks jump along the paths of ionization and can be photographed to show particle tracks. In some spark-chamber installations, information on particle tracks is fed directly into electronic computer circuits without the necessity of photography. A spark chamber can be operated quickly and selectively. The instrument can be set to record particle tracks only when a particle of the type that the researchers want to study is produced in a nuclear reaction. This advantage is important in studies of the rarer particles; spark-chamber pictures, however, lack the resolution and detail of bubble-chamber pictures.

The scintillation counter exploits the fact that charged particles moving at high speed through certain transparent solids and liquids, known as scintillating materials, produce ionization that causes flashes of visible light. The gases argon, krypton, and xenon produce ultraviolet light and hence are used in scintillation counters. A primitive scintillation device, known as the spinthariscope, was invented in the early 1900s and was of considerable importance in the development of nuclear physics. The spinthariscope required, however, the counting of the scintillations by eye. Because of the uncertainties of this method, physicists turned to other detectors, including the Geiger-Müller counter. The scintillation method was revived in 1947 by placing the scintillating material in front of a photomultiplier tube, a type of photoelectric cell. The light flashes are converted into electrical pulses that can be amplified and recorded electronically.

Various organic and inorganic substances such as plastic, zinc sulfide, sodium iodide, and anthracene are used as scintillating materials. Certain substances react more favourably to specific types of radiation than others, making possible highly diversified instruments. The scintillation counter is superior to all other radiation-detecting devices in a number of fields of current research. It has replaced the Geiger-Müller counter in the detection of biological tracers and as a surveying instrument in prospecting for radioactive ores. It is also used in nuclear research, notably in the investigation of such particles as the antiproton, the meson, and the neutrino. One such counter, the Crystal Ball, has been in use since 1979 for advanced particle research, first at the Stanford Linear Accelerator Centre and, since 1982, at the German Electron Synchrotron Laboratory (DESY) in Hamburg, Germany. The Crystal Ball is a hollow crystal sphere, about 2.1 m (7 ft) wide, that is surrounded by 730 sodium iodide crystals.

Many other types of interactions between matter and elementary particles are used in detectors. Thus in semiconductor detectors, electron-hole pairs that elementary particles produce in a semiconductor junction momentarily increase the electric conduction across the junction. The Cherenkov detector, on the other hand, makes use of the effect discovered by the Russian physicist Pavel Alekseyevich Cherenkov in 1934: a particle emits light when it passes through a nonconducting medium at a velocity higher than the velocity of light in that medium (the velocity of light in glass, for example, is lower than the velocity of light in vacuum). In Cherenkov detectors, materials such as glass, plastic, water, or carbon dioxide serve as the medium in which the light flashes are produced. As in scintillation counters, the light flashes are detected with photomultiplier tubes.
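The Cherenkov condition v > c/n can be restated in terms of the relativistic factor γ: light is emitted only above a threshold γ that depends on the medium's refractive index, so the choice of medium sets the energies a counter responds to. A sketch with typical refractive indices (illustrative values for water and a gas):

```python
import math

# Cherenkov threshold: light appears when beta = v/c exceeds 1/n,
# i.e. when gamma exceeds gamma_th = 1 / sqrt(1 - 1/n^2).
def threshold_gamma(refractive_index):
    beta_threshold = 1.0 / refractive_index
    return 1.0 / math.sqrt(1.0 - beta_threshold ** 2)

# Water (n ~ 1.33) fires at gamma ~ 1.5, so it responds to quite slow
# particles; a dilute gas (n ~ 1.00045) fires only above gamma ~ 33,
# which makes gas Cherenkov counters useful as velocity selectors.
print(round(threshold_gamma(1.33), 2))
print(round(threshold_gamma(1.00045)))
```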

Neutral particles such as neutrons or neutrinos can be detected by nuclear reactions that occur when they collide with nuclei of certain atoms. Slow neutrons produce easily detectable alpha particles when they collide with boron nuclei in boron trifluoride. Neutrinos, which barely interact with matter, are detected in huge tanks containing perchloroethylene (C2Cl4, a dry-cleaning fluid). The neutrinos that collide with chlorine nuclei produce radioactive argon nuclei. The perchloroethylene tank is flushed at regular intervals, and the newly formed argon atoms, present in minute amounts, are counted. This type of neutrino detector, placed deep underground to shield against cosmic radiation, is currently used to measure the neutrino flux from the sun. Neutrino detectors may also take the form of scintillation counters, the tank in this case being filled with an organic liquid that emits light flashes when traversed by electrically charged particles produced by the interaction of neutrinos with the liquid's molecules.

The detectors now being developed for use with the storage rings and colliding particle beams of the most recent generation of accelerators include devices known as time-projection chambers. They can measure three-dimensionally the tracks produced by particles from colliding beams, with supplementary detectors to record other particles resulting from the high-power collisions. The Fermi National Accelerator Laboratory's CDF (Collision Detector Fermilab) is used with its colliding-beam accelerator to study head-on particle collisions. CDF's three different systems can capture or account for nearly all of the subnuclear fragments released in such violent collisions.

High-energy particle physicists are using particle accelerators measuring 8 km (5 mi) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe!

These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself. Read on to follow their progress.

The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging slightly more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.

The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element - that is, how it interacts with other atoms to make the things around us. Electrons also are what move through wires to make light, heat, and video games.

In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.

Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best - known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.

Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. And so is everything else.

It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is actually the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed.

What differentiates radio waves, visible light, and X rays is their energy. This energy is directly related to the wave’s length. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
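The inverse relation between wavelength and photon energy can be made concrete with a short calculation. A minimal sketch, using the standard formula E = h·c / wavelength; the sample wavelengths are illustrative round numbers, not measured values:

```python
# Photon energy from wavelength: E = h * c / wavelength.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, in electron volts."""
    return H * C / wavelength_m / EV

# Shorter wavelength -> higher energy:
radio = photon_energy_ev(1.0)       # 1 m radio wave
visible = photon_energy_ev(500e-9)  # 500 nm green light, about 2.5 eV
xray = photon_energy_ev(1e-10)      # 0.1 nm X ray, about 12,400 eV

assert radio < visible < xray
```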

How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.

To really understand what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.

While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.

Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is rather ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated.

The proposal and acceptance of quarks were a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.

Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which provides electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.

Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultrareliable lasers and transistors.

The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. But they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. And physicists cannot yet provide a satisfactory description of gravity.

The basic behaviour of gravity was taught to us by English physicist Sir Isaac Newton. Building on his 1905 theory of special relativity, Albert Einstein in 1915 clarified and extended Newton’s explanation with his own description of gravity, known as general relativity. Not even Einstein, however, could bring general relativity and quantum physics together into a single unified theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity? No one has yet proposed a satisfactory answer to this question. Physicists have been trying to find one for a long time.

At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull corresponds to mass, and Earth has a huge amount of mass and hence a big gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn’t we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over so many years without possessing a theory of quantum gravity.

There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that any other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology - for example, how did the universe get to be the way it is, and why did galaxies form?

Gravitation has been shown to spread in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity it is about 10⁻³¹ m, many million billion times smaller than a proton.

We can't build an accelerator to probe that distance using today’s technology, because the proportions of size and energy show that it would stretch from here to the stars! But we know that the universe began with the big bang, when all matter and force originated. Everything we know about today follows from the period after the big bang, when the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At some earliest time, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity!
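Why an accelerator for that scale is out of reach can be seen from a back-of-envelope estimate: the energy needed to probe a distance d is roughly ħ·c / d. A sketch under that assumption, using the quantum-length figure quoted above; the results are order-of-magnitude only:

```python
# Rough probe energy: E ~ hbar*c / d, with hbar*c = 197.3 MeV*fm.
HBAR_C_MEV_FM = 197.3  # hbar*c in MeV * femtometres
FM_PER_M = 1e15        # femtometres per metre

def probe_energy_gev(distance_m):
    """Approximate energy (GeV) needed to resolve a given distance."""
    d_fm = distance_m * FM_PER_M
    return HBAR_C_MEV_FM / d_fm / 1000.0  # MeV -> GeV

proton_scale = probe_energy_gev(1e-15)   # ~0.2 GeV probes a proton
gravity_scale = probe_energy_gev(1e-31)  # ~2e15 GeV -- far beyond any machine
```

Today's largest accelerators reach thousands of GeV, so the gravity scale sits more than ten orders of magnitude beyond them, which is why the text turns to the big bang instead.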

These questions may seem almost metaphysical. Physicists now suspect that research in this direction will answer many other questions about the standard model - such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists’ questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics has shaped the 20th.

Among the most promising new theories is the idea that everything is made of fundamental ‘strings,’ rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle - at quantum-length distances - we would see a tiny, vibrating loop!

In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its evident nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.

This is a very clever idea, since it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, the superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.

Recent discoveries have shown that the five leading versions of superstring theory are all contained within a powerful complex known as M-Theory. M-Theory says that entities mathematically resembling membranes and other extended objects may also be important. The end of the story has not yet been written, however. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of ultimate truth.

Elementary Particles, in physics, are particles that cannot be broken down into any other particles. The term elementary particle is also used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken down further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.

Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analysing information about the world, they showed that these materials were not fundamental but were made of other substances.

In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible’). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.

Scientists now believe that quarks and three other types of particles - leptons, force-carrying bosons, and the Higgs boson - are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.

Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.

In 1925 Austrian-born American physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have slightly different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe electron behaviour, providing mathematical proof of the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that comprise them are all examples of fermions.

Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the same characteristics. In 1925 German-born American physicist Albert Einstein and Indian mathematician Satyendra Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.

Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, combining an even number of fermions yields a boson, while combining an odd number of fermions results in a fermion. Combining any number of bosons yields a boson.

For example, a hydrogen atom contains two fermions: an electron and a proton. But the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.

A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, or deuterium, is a hydrogen atom with a neutron added to the nucleus. It contains three fermions: one proton, one electron, and one neutron. Since it contains an odd number of fermions, the deuterium atom is itself a fermion. Just like its constituent particles, it must obey the exclusion principle: it cannot have the same properties as another deuterium atom.
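The odd/even counting rule above reduces to a one-line test. A minimal sketch of that rule in Python, using the hydrogen examples from the text:

```python
# A composite of an odd number of fermions is a fermion;
# an even number of fermions gives a boson.
def classify(n_fermions):
    return "fermion" if n_fermions % 2 == 1 else "boson"

# Ordinary hydrogen: one proton + one electron = 2 fermions
assert classify(2) == "boson"
# Heavy hydrogen: proton + neutron + electron = 3 fermions
assert classify(3) == "fermion"
```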

The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows the photons to form a coherent beam of identical particles called a laser.

The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (the force that holds the nuclei of atoms together), and the weak (the force that causes atoms to decay radioactively). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.

Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10⁻¹⁹ coulombs. Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +2/3 or -1/3. Antiquarks have electric charges of either -2/3 or +1/3. Leptons interact rather weakly with one another and with other particles, while quarks interact strongly with one another.

Leptons and quarks each come in 6 varieties. Scientists divided these 12 basic types into 3 groups, called generations. Each generation consists of 2 leptons and 2 quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generation tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.
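The three-generation layout described above is easy to hold in a small data structure. A minimal sketch, arranging the twelve fundamental fermions exactly as the text groups them:

```python
# Three generations of fundamental fermions: each generation
# holds two leptons and two quarks, as described in the text.
GENERATIONS = [
    {"leptons": ["electron", "electron neutrino"], "quarks": ["up", "down"]},
    {"leptons": ["muon", "muon neutrino"], "quarks": ["charm", "strange"]},
    {"leptons": ["tau", "tau neutrino"], "quarks": ["top", "bottom"]},
]

# Flatten to confirm the count: 6 leptons + 6 quarks = 12 fermions.
all_fermions = [p for gen in GENERATIONS for group in gen.values() for p in group]
assert len(all_fermions) == 12

# Ordinary matter uses only the first generation:
first_generation = GENERATIONS[0]
assert "up" in first_generation["quarks"] and "electron" in first_generation["leptons"]
```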

Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e⁻) and the electron neutrino (νe); the second generation, the muon (µ) and the muon neutrino (νµ); and the third generation, the tau (τ) and the tau neutrino (ντ).

The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of –1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms radioactively decay, they sometimes emit an electron in a process called beta decay.

Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.

Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth’s atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and an average lifetime of 2.2 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.

The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short - only 0.3 picoseconds (a picosecond is one-trillionth of a second). Scientists believe the tau has an electrically neutral partner called the tau neutrino. While scientists have never detected a tau neutrino directly, they believe they have seen the effects of tau neutrinos during experiments. Like the other neutrinos, the tau neutrino has a very small mass or no mass at all.
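An "average lifetime" implies exponential decay: the fraction of a particle population surviving to time t is e^(−t/τ). A minimal sketch, using the muon's mean lifetime of about 2.2 microseconds as the worked example:

```python
import math

def surviving_fraction(t, mean_lifetime):
    """Fraction of an unstable-particle population remaining at time t."""
    return math.exp(-t / mean_lifetime)

TAU_MUON = 2.2e-6  # muon mean lifetime in seconds

# After one mean lifetime, about 37% (1/e) of the muons remain:
assert abs(surviving_fraction(TAU_MUON, TAU_MUON) - 1 / math.e) < 1e-12

# Half the population is gone after tau * ln(2), about 1.5 microseconds:
half_life = TAU_MUON * math.log(2)
```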

The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavours,’ divided into three generations. Unlike leptons, however, quarks never exist alone - they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.

Quarks are unique among all elementary particles in that they have fractional electric charges - either +2/3 or -1/3. In an observable particle, the fractional charges of quarks in the particle add up to an integer charge for the combination.

The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.

The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +2/3, and the down quark has a charge of -1/3. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +2/3, and the strange quark has a charge of -1/3. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +2/3, and the bottom quark has a charge of -1/3. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.

Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).

Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π-), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +2/3 and the down antiquark has charge +1/3, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is -2/3 plus -1/3, or -1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of 9 femtoseconds (a femtosecond is one-quadrillionth of a second).

Many other mesons exist. All six quarks play a part in the formation of mesons, although mesons containing heavier quarks have very short lifetimes. Other mesons include the kaons (pronounced KAY-ons) and the D particles. Kaons (K) and Ds come in several different varieties, just as pions do. All varieties of kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.

Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +2/3 plus +2/3 plus -1/3, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.

The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +2/3 plus -1/3 plus -1/3, for a net charge of 0, making the neutron electrically neutral. Neutrons have a slightly greater mass than protons and an average lifetime of 930 seconds.
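The charge sums worked out above follow a single rule: a hadron's charge is the sum of its quarks' fractional charges, with each antiquark contributing the opposite of its quark's charge. A minimal sketch using exact fractions to mirror that arithmetic:

```python
from fractions import Fraction

# Fractional electric charges of the six quark flavours.
CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def hadron_charge(quarks, antiquarks=()):
    """Net electric charge of a hadron from its quark content.
    An antiquark carries the opposite charge of its quark."""
    return sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)

assert hadron_charge(["up", "up", "down"]) == 1         # proton: +2/3 +2/3 -1/3
assert hadron_charge(["up", "down", "down"]) == 0       # neutron: +2/3 -1/3 -1/3
assert hadron_charge(["up"], antiquarks=["down"]) == 1  # positive pion
```

The `Fraction` type keeps the thirds exact, so every observable combination lands on an integer, as the text requires.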

Many other baryons exist, and many contain quarks other than the up and down flavours. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.

British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles interact with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles interact with one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.

All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton. The antiproton consists of three antiquarks: two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.

When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.
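The energy released in annihilation comes from converting the pair's rest mass via E = mc². A minimal sketch for the simplest case, an electron meeting a positron; the constants are standard values:

```python
# Annihilation energy of an electron-positron pair: E = 2 * m * c^2.
M_ELECTRON = 9.109e-31  # electron (and positron) mass, kg
C = 2.998e8             # speed of light, m/s
EV = 1.602e-19          # joules per electron volt

energy_j = 2 * M_ELECTRON * C**2
energy_mev = energy_j / EV / 1e6
# Roughly 1.02 MeV, carried away as gamma-ray photons.
assert abs(energy_mev - 1.022) < 0.01
```

The same arithmetic scaled up to proton-antiproton pairs (nearly 2,000 times the mass) is what makes annihilation such an efficient energy source in principle.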

Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned, magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.

All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.

Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.

Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent’s hands never come into contact with the atoms of the child. The electrons in the parent’s atoms repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.

Theories about elementary particles, however, require forces to be local - that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.

Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle - any number of force carriers can have the same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.

For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are actually different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born, and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.

Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
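
The charge bookkeeping described above can be sketched in a few lines of Python. The quark charge values (+2/3 and -1/3 of the elementary charge) are the standard ones; the function and variable names are ours, for illustration only.

```python
from fractions import Fraction

# Standard quark electric charges, in units of the elementary charge e.
QUARK_CHARGE = {
    'up': Fraction(2, 3), 'charm': Fraction(2, 3), 'top': Fraction(2, 3),
    'down': Fraction(-1, 3), 'strange': Fraction(-1, 3), 'bottom': Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Sum the charges of the constituent quarks ('anti-' flips the sign)."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith('anti-'):
            total -= QUARK_CHARGE[q[len('anti-'):]]
        else:
            total += QUARK_CHARGE[q]
    return total

proton = ['up', 'up', 'down']     # charge +1: feels the electromagnetic force
neutron = ['up', 'down', 'down']  # charge 0: the hadron as a whole does not
print(hadron_charge(proton))   # 1
print(hadron_charge(neutron))  # 0
```

As the paragraph notes, a zero total does not mean the electromagnetic force is absent inside the hadron: the individual quarks still carry charge and still feel it.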

Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers - they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.

In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.

Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei of atoms. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.

A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle’s electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of antired (also called cyan), antiblue (also called yellow), or antigreen (also called magenta). Quark types and colours are not linked - up quarks, for example, may be red, green, or blue.

All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark’s anticolour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.
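
The cancellation rule can be illustrated with a toy model (a deliberate simplification of the real SU(3) colour algebra, with names of our choosing): treat each colour as a count, an anticolour as a negative count, and call a combination colourless when every colour is equally represented.

```python
# Toy bookkeeping for colour neutrality - not real group theory.
COLOUR = {'red': (1, 0, 0), 'green': (0, 1, 0), 'blue': (0, 0, 1)}

def colour_vector(particles):
    """Sum colour counts; an 'anti' prefix contributes the negative."""
    total = [0, 0, 0]
    for p in particles:
        sign, name = (-1, p[4:]) if p.startswith('anti') else (1, p)
        for i, x in enumerate(COLOUR[name]):
            total[i] += sign * x
    return tuple(total)

def is_colourless(particles):
    # Colour neutral: a colour with its anticolour (meson), or all
    # three colours equally represented (baryon).
    a, b, c = colour_vector(particles)
    return a == b == c

print(is_colourless(['red', 'antired']))        # meson: True
print(is_colourless(['red', 'green', 'blue']))  # baryon: True
print(is_colourless(['red', 'blue']))           # not observed: False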

The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.

Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.

The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.

In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.

While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.

All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. But the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.

The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.

One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.

The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other’s antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as the massless force carriers of the three long-range forces, so the weak force acts over shorter distances than the other three forces.
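
The link between a carrier's mass and the range of its force can be estimated with the standard uncertainty-principle argument, range ≈ ħ/(mc) = ħc/(mc²). The numerical masses below are modern measured values supplied for illustration; they are not figures from the text.

```python
# Rough range of a force from its carrier's mass:
#   range ~ hbar / (m c) = (hbar c) / (m c^2),  with hbar*c ≈ 197.327 MeV·fm.
HBAR_C_MEV_FM = 197.327  # MeV * femtometre

def force_range_fm(carrier_mass_mev):
    """Approximate range, in femtometres, for a carrier of the given rest energy."""
    return HBAR_C_MEV_FM / carrier_mass_mev

# W boson rest energy ≈ 80,400 MeV (measured value, for illustration):
print(force_range_fm(80_400))  # ≈ 0.0025 fm, far smaller than a proton
# Pion rest energy ≈ 140 MeV gives ≈ 1.4 fm, the scale of the residual strong force:
print(force_range_fm(140))
```

A massless carrier (zero in the denominator) corresponds to an unlimited range, which is consistent with electromagnetism and gravitation acting over astronomical distances.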

When the weak force affects a particle, the particle emits one of the three weak vector bosons -W+, W-, or Z0 -and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
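
The charge accounting in this decay chain can be checked directly. A small sketch, using the standard tabulated charges (the code structure and names are ours):

```python
from fractions import Fraction

CHARGE = {
    'up': Fraction(2, 3), 'down': Fraction(-1, 3),
    'W-': Fraction(-1), 'electron': Fraction(-1), 'electron antineutrino': Fraction(0),
}

def total_charge(particles):
    return sum(CHARGE[p] for p in particles)

# Step 1: a down quark in the neutron becomes an up quark and emits a W-.
assert total_charge(['down']) == total_charge(['up', 'W-'])

# Net effect on the baryon: neutron (udd, charge 0) -> proton (uud, charge +1),
# with the W- (charge -1) carrying away the difference.
neutron = ['up', 'down', 'down']
proton = ['up', 'up', 'down']
assert total_charge(neutron) == total_charge(proton) + CHARGE['W-']

# Step 2: the W- decays into an electron and an electron antineutrino.
assert CHARGE['W-'] == total_charge(['electron', 'electron antineutrino'])
print('electric charge is conserved at every step of beta decay')
```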

A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.

The gravitational force is probably the most familiar force, yet it is the only force not described by the standard model of particle physics. In 1915 German-born American physicist Albert Einstein developed a significant new approach to the concept of gravity: the general theory of relativity. While general relativity successfully described many phenomena, the theory was framed differently than were theories of particle physics, making relativity difficult to reconcile with particle physics. Through the end of the 20th century, all efforts to develop a theory of gravitation entirely consistent with particle physics failed.

Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.

Gravitation is the weakest of the four forces on the atomic scale, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects, because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.

Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (see Gravitational Lens).

The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.

Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.
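
The conversion at work here is E = mc². Since the text notes that the Higgs mass had not yet been measured, the sketch below uses the proton instead; the constants are standard values we supply, not figures from the text.

```python
# Minimum energy a collision must supply to create a given mass, from E = m c^2.
C = 2.998e8       # speed of light, m/s
EV = 1.602e-19    # one electron volt, in joules

def rest_energy_joules(mass_kg):
    return mass_kg * C**2

PROTON_MASS_KG = 1.673e-27
energy = rest_energy_joules(PROTON_MASS_KG)
print(energy)             # ≈ 1.5e-10 J
print(energy / EV / 1e6)  # ≈ 938 MeV, the proton's rest energy
```

The same arithmetic run in reverse explains antimatter fuel: annihilating a kilogram of matter with a kilogram of antimatter would release about 1.8 × 10¹⁷ joules.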

Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.

The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.

Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.

Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.

American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with their electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.

The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.

One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles, and therefore require too much energy to create with current particle accelerators.

Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time - some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry with string theory results in theories of superstrings. Superstring theories are one of the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because no one has detected the additional dimensions required by string theory.

Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles - leptons, quarks, force-carrying bosons, and the Higgs boson - appear to be ‘point particles.’ A point particle is infinitely small, and it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.

In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.

Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.

Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.

When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
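
The choice between the two arrangements matters because it is the centre-of-mass energy, not the beam energy alone, that is available to create new particles. A sketch of the standard kinematics, with illustrative numbers of our choosing (energies in GeV, natural units):

```python
from math import sqrt

M_PROTON = 0.938  # proton rest energy, GeV

def cm_energy_collider(beam_energy_gev):
    """Two equal beams colliding head-on: essentially all the energy is available."""
    return 2 * beam_energy_gev

def cm_energy_fixed_target(beam_energy_gev, target_mass_gev=M_PROTON):
    """Invariant mass of a beam particle plus a stationary target particle."""
    return sqrt(2 * beam_energy_gev * target_mass_gev + 2 * target_mass_gev**2)

# A 1,000 GeV proton beam:
print(cm_energy_collider(1000))      # 2000 GeV available in a collider
print(cm_energy_fixed_target(1000))  # only ~43 GeV against a fixed target
```

Because the fixed-target figure grows only with the square root of the beam energy (most of the energy goes into the motion of the collision products), colliders are the preferred tool for producing the heaviest particles.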

Particle accelerators come in two basic types - linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (non-flat-screen) television sets and computer monitors use this method to accelerate electrons.

An appreciation of the power of interactive forces in the analytic field not only challenges many traditionally held beliefs about the nature of therapeutic action, but also requires us to recognize the untenability of the traditional view that the analyst can be an objective observer of the work. Where patient and analyst assume that the analyst is in a position to be an objective interpreter of the patient's experience, this may itself reflect a form of collusive enactment and a convergence of the needs of both to see the analyst as an authority. If patient and analyst both need to believe that the analyst is the omniscient other or the benevolent authority to whom one can entrust oneself, the structure of the relationship may serve to obscure recognition of that fact. It encourages the belief that, as has been put, wherever a coordinating system becomes as complex as the mind or brain, 'indeterminacy' immediately arises - not necessarily because prediction is unobtainable in principle, but because so many subjective factors bear on obtaining the right prediction, because so many things are yet to be known, and because we cannot know what influences really occur behind the protective wall of the skull. History is not and cannot be determinate. The supposed causes may produce only the consequences we expect, and this has rarely been more true than of those whose thoughts and interactions in the psychoanalytic relationship unfold in ways that no dramatist would ever dare to conceive.

An appreciation of interactive factors also allows us to consider that, to whatever degree the patient's perceptions of the analyst are plausible and even valid (Ferenczi 1933, Little 1951, Levenson 1973, Searles 1975, Gill 1982, Hoffman 1983), this may be due to the patient's expertise in stimulating precisely this kind of responsiveness in the analyst. The reverse is true as well: though patient and analyst each will have unique vulnerabilities, sensitivities, strengths, and needs, we must consider why such peculiarities have excited the particular qualities or sensibilities of either patient or analyst at a given moment and not at others. At any moment patient or analyst might be involved in some kind of collusive enactment (Racker 1957, 1959, Grotstein 1981, McDougall 1979). Such considerations may help explain why clinicians often seem to practice in ways that contradict their own stated beliefs, theoretical positions, and principles.

Despite the differences that occur within and between the diverse analytic traditions, an interactive view of the analytic field has theoretical and technical implications that bridge all psychoanalytic perspectives, and that none of us can ignore. Its premise lies in the recognition that analyst and patient cannot simply avoid having an impact on each other, even if both are totally silent. This requires us to realize that even if a treatment is productive or successful, we cannot be clear whether this is due to our deliberate technical interventions or to aspects of the interaction that have eluded our awareness.

Psychoanalysts of diverse orientations increasingly have come to recognize that patient and analyst are continually influencing and being influenced by each other in a dialectical way, often without awareness. This has radical implications for views of psychoanalytic technique. Where these psychoanalysts disagree is in their conceptions of what the specific implications of an interactive view of the analytic field might be.

It is therefore useful and necessary to distinguish between the theory of technique, which relates to what we do with awareness and intention, and the theory of therapeutic action, which deals with what is healing in the psychoanalytic interaction whether or not it evolves from our ‘technique’. Recognizing this can allow us to expand our knowledge of the complex and subtle factors that account for therapeutic action. This can ultimately become the most effective basis for refining and developing our understanding of how best to use ourselves to advance the analytic work and to achieve more profound and incisive kinds of psychoanalytic engagement, no matter what our theoretical orientation.

An appreciation of the power of interactive forces in the analytic field not only challenges many traditionally held beliefs about the nature of therapeutic action, but also requires us to recognize the untenability of the traditional view that the analyst can be an objective participant in the work. It also helps us to grasp the extent to which presumably therapeutic interpretations, for example, can be ways of harassing, demeaning, patronizing, impinging on, penetrating, or violating the patient, or particular ways of gratifying, supporting, or complying, among several other possibilities. Where patient and analyst assume that the analyst can be an objective interpreter of the patient’s experience, this may actually reflect a form of collusive enactment and a convergence of the needs of both to see the analyst as an authority. If patient and analyst both have needs to believe that the analyst is the omniscient other or the benevolent authority to whom one can entrust oneself, the structure of the relationship might serve to obscure recognition of the fact that they are enacting such a drama. In this regard, Winnicott (1969) has noted that there are times when ‘analyses’ can serve as holding operations and become interminable, without any real growth occurring.

An interactive perspective also helps to clarify why the analyst’s ‘abstinence’ sometimes carries as much risk of negative iatrogenic consequences as does active intervention. Although silence at times obviously can be respectful and facilitating, at other times it can be cruel and sadistic, or it can be based on fear of engagement, among a host of other possible meanings and contributing functions.

The contextual meaning of the patient’s free association also has to be reconsidered from such a perspective. Usually viewed as the medium of analytic work, free association may at times be a profound form of resistance, a way to avoid rather than engage in an analytic process. Alternatively, it can reflect a form of compliance or collusion, conscious or unconscious, with the analyst’s needs, fears, or resistances.

Amid the welter of competing or complementary theories that have characterized psychoanalysis over the century of its existence, the concept of transference and conviction of its central importance in the therapeutic process form a unifying theme. None of Freud's epochal discoveries - the power of the dynamic unconscious, the meaningfulness of the dream, the universality of intrapsychic conflict - has been more heuristically productive or more clinically valuable than his demonstration that human beings regularly and inevitably repeat, with the analyst and with other important figures in their current lives, patterns of relationship, of fantasy, and of conflict with the crucial figures of their childhood - primarily their parents.

Even for Freud, however, the awareness of this phenomenon and the understanding of its specific significance in the analytic situation itself came gradually. The flamboyant transference events in Breuer's patient Anna O and the unfortunate outcome with the patient Dora served to consolidate in Freud's mind a view of transference as a resistance phenomenon, an obstacle to the recollection of traumatic events that, in his view at the time, formed the true essence of the psychoanalytic process. Emphasis in this early period, thus, was on the 'management' of the transference, on finding ways to prevent its interference with the proper business of the analysis - recognizing, always, the inevitability of its occurrence. Freud was most concerned about the interferences generated by the 'negative' (i.e., hostile) and the erotized transference; the 'positive' transference he considered 'unobjectionable', the vehicle of success in psychoanalysis.

Freud was also concerned to distinguish the analytic transference from the effects of suggestion in the hypnotic treatment he had learned in France, where he had studied with Professor Charcot at the Salpêtrière hospital, and which had been the forerunner of his own psychoanalytic technique. He, and his early followers and students, were at great pains to define the transference as a spontaneous product of the analytic situation, emerging from the patient rather than imposed by the analyst. Ultimately, Freud came to view as essential for analytic cure the development of a new mental structure, the 'transference neurosis' - a re-creation of the original neurosis in the analytic situation itself, with the patient experiencing the analyst as the object of his or her infantile wishes and the focus of his or her pathogenic conflicts. The crucial importance of the transference neurosis - its very reality as a clinical phenomenon - has been and continues to be a matter of debate among psychoanalysts to this day.

Over the succeeding decades several themes appear and reappear. One to which Freud alluded is that of the uniqueness versus the ubiquity of transference: is it a special creation of the analytic situation or is it an inevitable and universal aspect of all human relations? More central, and perhaps more heated, has been the continuing debate over the primacy of transference interpretation in what Strachey called the 'mutative' effects of analysis - for example, whether such interpretations are simply more convincing than others or are the only kind that is truly therapeutically effective. Echoes of this debate have resounded through the years and are readily discernible in the most recent literature. Finally, are all of the patient's reactions to the analyst in the analytic situation to be considered transference, or do some partake of the 'real', 'non-neurotic' relationship or of the 'working alliance'?

It should be mentioned at the outset that while resistance is, in certain fundamental respects, an operational equivalent of defence, its scope is far larger and more complicated. Whatever their nature and motivations, resistances to the psychoanalytic process employ an array of mechanisms that sometimes defy classification in the way that fundamental, genetically determined defences, derived from important and common developmental trends, can be classified. From falling asleep to brilliant argument, there is a limitless and mobile range of devices with which the patient may protect the current integration of his personality, including his system of permanent defences. In fact, resistances of a surface, conscious type, related to individual character and to educational and cultural background, often present themselves at the patient’s first confrontations with a unique and often puzzling treatment method. While some of these phenomena are continuous with deeper resistances, they must often be met at their own level. Their management calls to a greater extent on the much-neglected faculty of informed and reflective common sense than do the less readily accessible and explicable dynamisms that inevitably supervene in analytic work, even when these initial surface resistances have been largely or wholly mastered. Related to these surface phenomena is the specific influence of the immediate cultural climate, reflected in the general attitude of many young people (Anna Freud 1968) toward the psychoanalytic process and its goals.

When Freud gave up the use of hypnosis for several reasons, beginning with his personal difficulty in inducing the hypnotic state and culminating in his ultimate and adequate reason - that it bypassed the essential lever of lasting therapeutic change, the confrontation with the repressing forces themselves - he turned to the method of waking discourse with the patient, in which insistence, conveyed with a sense of infallibility and accompanied by head pressure and release, was the essential tool for the overcoming of resistance (Breuer and Freud 1893-1895). Although Freud had observed various forms of resistance (in a general sense) before - for example, inability to be hypnotized, willful rejection of hypnosis, selective refusal to discuss certain topics under hypnosis, adverse reactions to testing for trance states - it was the effectiveness of insistence in inducing the patient to fill memory gaps or to accept the physician’s constructions that led Freud to a first and enduring formulation: since effort

- psychic work - by the physician was required, a psychic force, a resistance opposed to the pathogenic ideas’ becoming conscious (or being remembered), had to be overcome. This was thought to be the same psychic force that had initiated the symptom formation by preventing the original pathogenic ideas from achieving adequate affective discharge and establishing adequate associations - in short, from remaining or becoming conscious. The motive for invoking such a force would be the abolition (or avoidance) of some form of psychic distress or pain, such as shame, self-reproach, fear of harm, or an equivalent cause for rejecting or wishing to forget the experience. The agency that repudiates the pathogenic constellation of ideas is, in later terms, clearly the ego, and especially its character. It was thought important to show the patient that his resistance was the same as the original ‘repulsion’ which had initiated pathogenesis. From this the step was short to the essentially equivalent and permanent concepts of defence and, at first, repression. That is, though Freud gave tremendous weight to the effectiveness of the hand pressure manoeuvre, he saw it essentially as distracting the patient’s will and conscious attention and thus facilitating the emergence of latent ideas (or images). From a present-day point of view, one cannot but think of the powerful transference excited by an infallible parental figure in a procedure only one step removed from the relative abdication of will and consciousness involved in hypnosis, and suspect that this quasi-archaic qualitative pattern of relationship was more important to effectiveness or failure than was the exchange of psychic energy postulated by Freud.
In this sense, the ‘laying on of hands’, granted its effect on attention, was probably even more significant in inducing transference regression than in the role that the great discoverer assigned to it.

What is important, in any event, is the establishment of a viable scientific working concept of resistance to the therapeutic process as the manifestation of a reactivated intrapsychic conflict in a new interpersonal context. This, in its essentials, persists to this day in psychoanalytic work, in the concept of ego resistances.

At the same time as this development, less explicitly formulated but often described or inferred, was the frankly rejecting or hostile or unruly attitude of the patient, sometimes evoking spontaneous antagonistic reactions in the physician. In occasional direct references in the early work and in the choice of figurative phraseology for years thereafter, Freud recognized this ‘balky child’ type of struggle against the doctor’s efforts. One need only recall Elisabeth von R., who would tell Freud that she was not better, “with a sly look of satisfaction” at his discomfiture (Breuer and Freud 1893-1895). When deep hypnosis failed with her, Freud “was glad enough that on this occasion she refrained from triumphantly protesting ‘I am not asleep, you know; I cannot be hypnotized’”.
This was the early appearance of a particular type of ego-syntonic struggle with the physician that remains potentially important during any analysis: the negative transference, whatever its particular nuances of motivation. This is, of course, a manifestly different phenomenon from the earnest, effortful struggles of the cooperative patient whose associations fail to occur to him, or who forgets his dream, or who comes at the wrong hour, to his extreme humiliation. Still, there is an important dynamic relationship between the two sets of phenomena.

Nonetheless, Freud made the analysis of resistance the central obligation of analytic work and proceeded from primitive beginnings, with rapidly increasing sophistication, both technical and psychopathologic, to ideas that remain valid to this day: that conscious knowledge transmitted to the patient may have no effect, or an adverse one, on the mobilization of what is similar or identical in the unconscious; that the repressing forces, the resistances, are more like infiltrates than discrete foreign-body capsules in their relation to preconscious associative systems; that the physician must begin with the surface and proceed centripetally; that hysterical symptoms are more often serial and multiple than mononuclear; and that the resistances participate in all productions and must be dealt with at every step of analytic work - among other matters of equal significance (Breuer and Freud 1893-1895).

Freud always maintained the central concept of resistance, and bequeathed it (reinforced later by the structural theory) to the generations of analysts who have followed him. Still, as the years went on, he elaborated the general scope of resistance far beyond the basic concept of intrapsychic defence and anticathexis, recognizing that a great variety and range of mechanisms could impede the psychoanalysis as a recognizable process or, beyond this, render it ineffective, reverse expected therapeutic responses, or extend indefinitely the patient’s dependence on the analyst. Once resistance was extended beyond its direct equation with the anticathexis of defences, the variety of its sources - not to speak of its manifestations - multiplied rapidly. To remark on only a few: the secondary gains of illness (Freud 1905); the ‘external’ resistances, for example, the hostility of the patient’s family to the treatment (Freud 1917); and the persistence of the illness itself, with its detachment, superciliousness, and mechanical compliance as weapons for frustrating the analyst, as with the severely troubled young girl (Freud 1920). There was also the investment in retaining symptomatic modes of conflict solution and, most crucially, the subtly evolving concept of ‘transference-resistance’, in its oscillating pluralistic sense (Breuer and Freud 1893-1895; Freud 1912, 1917).
In his last writings, conspicuously in Analysis Terminable and Interminable (1937), in considering the various factors in human personality that obstruct or render ineffectual the successful end of the analytic procedure, Freud offered a variety of psychodynamic considerations that could be subsumed in the extended or broadened concept of resistance: the question of the constitutional strength of instincts and their relation to ego strength; the problem of the accessibility of latent conflicts when undisturbed by the patient’s life situation; (briefly but pointedly) the impingement of the analyst’s personality on the analytic situation and process; the existence of certain qualities of the libidinal cathexes - especially undue adhesiveness or excessive mobility; rigid character structure; and the existence of certain sex-linked ‘bedrock’ conflicts that Freud regarded as biologically determined (insoluble penis envy in the female, and the male’s persisting conflict with his passivity). Finally, and most formidable, there was the cluster of dynamisms and phenomena that Freud, beginning in Beyond the Pleasure Principle (1920) and The Ego and the Id (1923), attributed consistently and with deepening conviction to the operation of a death instinct: the ‘unconscious sense of guilt’ and the need for punishment, the repetition compulsion, the negative therapeutic reaction, and the more general operations of the need to suffer or to die, directed toward the outer or inner world. Yet it remains an inexorable truth that the resistances underlying certain intractable cases, or certain limitations implicit in psychoanalytic work, are often formidable indeed, and cannot be abolished by theoretical position any more than they can be thus created.

The varied clinical manifestations of resistance are dealt with extensively throughout Freud’s own writings, in many individual papers of other analysts, and in comprehensive works on analytic technique, for example, those of Fenichel (1941), Glover (1955), and more recently Greenson (1967); here only selective and occasional reference is made to their kaleidoscopic variety.

When free association and interpretation displaced hypnosis and its derivative primitive techniques, psychoanalysis as we now construe it came into being. To the extent that free association was the patient’s active participation, it was in this sphere that his ‘resistance’ to the new technique was most clearly recognized as such: cessation, slowing, circumlocution, lack of informative or relevant content, emotional detachment, and obsessional doubt or circumstantiality became established as obvious impediments to the early (no longer exclusive but still radically important) topographic goal: to convert unconscious ideas, largely via the interpretation of preconscious derivatives, into conscious ideas. Only with time and increasing sophistication did it become evident that fluency, even vividness of associative content, and tendentious ‘relevancy’ itself can, like over-compliant acceptance of interpretation, conceal and carry out resistances that are the more formidable because expressed in such ‘good behaviour’.

One may define resistance (including a liberal and augmenting paraphrase of Freud’s own most pithy definition [The Interpretation of Dreams, 1900]) as anything of essentially intrapsychic significance in the patient that impedes or interrupts the progress of psychoanalytic work or interferes with its basic purposes and goals. In specifying ‘in the patient’ one does not underestimate the possibly decisive importance of the analyst’s resistances, but rather sets apart the ‘counterresistance’ as a different matter, in a practical sense, requiring separate study. One may concur in a general way with Glover’s statement (1955) that “however we may approach the mental apparatus there is no part of its function that cannot serve the purposes of mental defence and therefore give rise during the analysis to the phenomena of resistance.” One may also concur with his formulation that the most successful resistances (in contrast with those employing manifest expressions) are silent, but disagree with the paradoxical sequel “. . . we might say that the sign of their existence is our unawareness of them.” For the absence of important material is itself a sign, and becoming aware of such an absence is necessary, where possible.

Freud, in his technical papers and in many other writings, despite his reluctance in this direction, did lay down the general and essential technical principles and precepts for analytic practice. We must note, however, that the clear and useful technical precepts lie largely in what may be regarded as the ‘tactical sphere’, i.e., they deal with the manifest process phenomena of ego resistances. Other resistances, those largely contained in the ‘silent’ group - for example, delayed or unsuccessful symptomatic alteration, omission of decisive conflict material from free association or [more often] from the transference neurosis, inability to accept termination of the analysis, and allied matters - belong to the ‘strategic sphere’, relating to the depths of the patient’s psychopathology and personality structure and to his total reactions to the psychoanalytic situation, process, and the person of the analyst. This use of the terms ‘strategic’ and ‘tactical’ differs from their use by others, for example, Kaiser (1934). While one cannot presume to offer simple precepts for the ready liquidation of the massive silent resistances, one may hope to contribute something, however slight, to understanding them better and thus, potentially, to their better management; some of these considerations, for example, iatrogenic regression, have been discussed in other contexts (1961, 1966). In the ‘strategic’ arena of resistance, so often manifested by total or relative ‘absence’, it is the informed surmise regarding the existence of the silent territory, by way of ongoing reconstructive activity, which is the first and essential ‘activity’ of the analyst. Beyond this lie the manifold and subtle potentialities of the shaping and selection of interpretive direction and emphasis, and the tactful indication of tendentious distortion or absence.

Because of a possible variety of factors, beginning with the fascination that the verbal statement of unconscious content exerts on analyst and patient alike (of itself a frequent resistance or counterresistance), the priority of the analysis of resistance over the analysis of content, as discretely separate procedures, did not readily come to be carried out. This may have been owing to the difficulties of dealing with the more complicated resistances or of developing an adequate methodology in this arena, or even to the fact that timed and tactful reference to content (or its general nature) sometimes seems the only way of mobilizing, and thus exposing, the corresponding resistance for interpretation and ‘working through’ - an echo of Freud’s early, never fully relinquished diphasic process (1940).

Since this is not a technical paper, an extended discussion of the evolution of views on methods of resistance analysis is not undertaken, although such views bear inevitably on our immediate subject matter. These approaches range from the strict systematic analysis of character resistances of Wilhelm Reich (1933), or the absolute exclusion of content interpretation of Kaiser (1934), to the special efforts toward dramatization of the transference of Ferenczi and Rank (1925), or Ferenczi’s own experiments with active techniques of deprivation and (on the other hand) the gratification of regressed transference wishes in adults (for example, 1919, 1920, 1930, 1931, 1932). Developments in ego psychology (for example, Anna Freud’s classical contribution on the mechanisms of defence [1936]) brought the variety and importance of defence mechanisms securely into the foreground of analytic work, and the subsequent and now widely accepted priority of defence analysis has rectified a great deal of the original [and not entirely inexplicable] lag in this important, if not exclusive, sphere of resistance analysis. Concomitant with the more widespread acceptance of the essentiality and priority (in principle) of resistance analysis over content interpretation, there is usually a more flexible view of the technical application of the essential precepts, permitting interpretive mobility, according to intuitive or reasoned judgement, between the psychic structures, in accordance with Anna Freud’s (1936) principle of ‘equidistance’. Apart from other considerations, there is an intrinsic conceptual difficulty in the latter intellectual process, i.e., in specifying a resistance without suggesting that against which it is directed (Waelder 1960).
There is also a general broadening of the scope of interpretive method. Witness, for example, Loewenstein’s ‘reconstruction upward’ (1951) and Stone’s differently derived but often allied conception, the ‘integrative interpretation’ (1951), both of which recognize that resistance may be directed ‘upward’, against the integration of experience, rather than exclusively against the infantile or the past. Similar considerations are also reflected in Hartmann’s ‘principle of multiple appeal’ (1951).

It may nonetheless be of note that while the emphasis on resistance in Freud’s early clinical presentations is roughly proportionate to his theoretical statements, his methods of dealing with the concealed and more formidable resistances are not clear, except in certain active interventions, such as the magical intestinal prognosis in the “Wolf Man” (1918), or the ‘time limit’ in the same case, or the principle that at a certain point phobic patients should confront their symptoms directly (1910), or the suggestion, with the homosexual woman (1920), of transfer to a woman analyst. In these manoeuvres and attitudes there is recognition that (1) interpretation, the prime working instrument of analysis, may often reach an impasse in relation to powerful ‘strategic’ resistances, and (2) elements in the personal relationship of the analytic situation, specifically the transference, may subvert the most skilful analytic work by producing massive although ‘silent’ resistances to ultimate goals, and that sometimes, where such elements are formidable, they may have to be dealt with directly, in the patient’s actual living situation.

Freud’s own interest in active techniques stimulated Ferenczi to extreme developments in this sphere (1912, 1920), later combined with his oppositely oriented methods of indulgence (1930). As time went on, noninterpretive methods, particularly those involving gratifications of transference wishes, whether libidinal or masochistic, were set aside with increasing severity, in recognition of their contravention of the indispensability of the undistorted transference and the unique importance of transference analysis in analytic work. The same has been largely true of tendentious, selective instinctual frustrations (Ferenczi 1919, 1920). However, there is no doubt that the use of interpretive alternatives (sometimes suggested for the deliberate control of obstinate resistance phenomena in this sphere) has been sharpened by - and partially coloured by - the earlier experiments in prohibition, whose transference implications were not fully apparent at the time of their introduction. The type of active intervention introduced by Freud (the time limit, the confrontation of symptoms), confined in actuality to the sphere of the demonstrable clinical relationship, has retained a certain optional place in our work, although the potential transference meaning and impact of such interventions, with corresponding variations or limitations of effectiveness, are increasingly understood and considered. The broad general principle of abstinence in the psychoanalytic situation, stated by Freud in its sharpest epitome in 1919, remains a basic and indispensable context of psychoanalytic technique. The nuances of its application remain open to, in fact require, continuing study (Stone 1961, 1966).

In addition to important developments in ego psychology and characterology (for conspicuous examples, Anna Freud 1936, Kris 1956, Hartmann 1951, Loewenstein 1951, Waelder 1930), the principal factor in deepening, broadening, and complicating the conceptual problem of resistance, and thus in modifying the strict ladder-like sequential approach (Reich 1933) to the analysis of resistance and content respectively, even in principle, has been the progressive emergence of transference analysis as the central and decisive task of analytic work. For, to state it over-succinctly, and thus to risk some inaccuracy: the transference is at once the most difficult focus of resistances and (simultaneously) an indispensable element in the therapeutic effort. Given the mature capacity for working alliance, it is the central dynamism of the patient’s participation in the analytic process and the proximal or remote source of all significant resistances, excepting those manifest phenomena originating in the conscious personal or cultural attitudes and experiences of the adult patient, or those deriving from the inevitable cohesive-conservative forces in the patient’s personality, for which we must still summon briefly the Goethe-Freud ‘witch’, metapsychology (Freud 1937).

In relation to the ‘tactical’, i.e., process, resistances, an overall view of what is immediate and confronting - for example, the threatening emergence of ego-dystonic sexual or aggressive material - may be adequate. For any adequate access to what may be called the ‘strategic’ sphere of resistance, however, one must have a tentative working formulation of the total psychic situation in mind, including an informed surmise regarding large and essential unconscious trends. Such a suggested procedure is admittedly open to discussion on more than one score, and it involves one immediately in some basic epistemological problems of psychoanalysis. Unfortunately, we cannot become involved in this fascinating sphere of dialectic in this brief essay on a large subject. Nevertheless, in his early work Freud relied enthusiastically on his own capacity to fill gaps in the patient’s memory through informed inferences from the available data, and then, with an aura of infallibility, actively persuaded the patient to accept these constructions. With the further elaboration of psychoanalysis as process, however - in the sense of the increasing importance of free association, of the analyst’s relative passivity, and of other characteristics of the process as we now know it - there have inevitably been some important modifications of the attitudes reflected in such procedures. And while Freud’s view that the resistances are operative at every step of the analytic work has never been revised or revoked, there exists in many minds a paradoxical mystique to the effect that the patient’s free associations as such, unimpeded (and uninterpreted), could ultimately provide the whole and meaningful story of his neurosis, in the sense of direct information. This is, of course, manifestly at variance with Freud’s basic assumptions about the role of resistance, and the germane roles of defence and conflict in the origin of illness.

Nonetheless, in Freud’s Recommendations (1912) is his advice against attempting to reconstruct the essentials of a case while the case is in progress. Such a reconstruction, he assumes, would be undertaken for scientific reasons. The caution, nevertheless, rests on both scientific and therapeutic grounds - on the assumption that the analyst’s receptiveness to new data and his capacity for evenly suspended attention would be impaired by such an effort. It is true, of course, that rigid preoccupation with an intellectual formulation can impair these capacities. Even so, it is also true that the ‘formulation’ or structuring of a case can and largely does go on preconsciously, in some respects even unconsciously, and usually quite spontaneously. One must assume, at the very least, that some such process underlies the analyst’s first perception of a ‘resistance’. Some have thought that Freud would have disagreed with the deliberate use of such a process. Still, its use, whatever the form, is a necessity, and at times it requires and should have the hypercathexis of conscious and concentrated reflection. One may, of course, assign the more purposive intellectual processes to periods outside hours, and thus better preserve the other equally important responses to the dual intellectual demand of psychoanalytic technique. The ‘voice of the intellect’, all the same, should not be deprived of its essential place in analytic work. It must never, of course, be allowed to foreclose mobile intuitive perceptiveness or openness to unexpected data. Nor must ongoing formulations in the mind of the analyst be allowed to cramp the spontaneity of the patient’s associations; they should remain ‘in the analyst’s head’. To epitomize the technical situation: strategic considerations require varying degrees of reflective thought, possibly outside hours.
Except for the perspective and criticism they silently lend to understanding, such formulations should not influence the natural and spontaneous, often intuitive, responses of the disciplined analyst to the unending and variable nuances of his patient’s ‘tactics’. In relation to any category of clinical psychoanalytic problem, it is the structure of the transference neurosis and its unfolding, with the adumbrative material in characterology, symptom formation, personal and clinical history, and the clues from specific data of the psychoanalytic process, taken as an ensemble, that provides the most reliable basis for general tentative reconstruction and thus for the understanding of resistances. While we must marshal our entire body of data, theory, and technology to see the transference neurosis as an epitome of the patient’s emotional life, our comprehension of it is nonetheless based essentially on something that is right before us. Again, the total ensemble is essential, and the objectively observable phenomena of the transference neurosis are of crucial and central valence.

In the background data, the large outlines of life history are uniquely important because they represent, or at least strikingly suggest, the patient’s gross strategies of survival and growth, of avoidance and affirmation. One may infer that they will be invoked again in the confrontation with the analyst, in his pluralistic significance. To choose some oversimplified and fragmentary illustrations: occupational commitment to work with children, and the mood in which it is carried out, together with the general character of manifest sexual adaptation, can contribute to a rational surmise about whether neurotic childlessness is based predominantly on disturbances of the Oedipus complex, on an original inability to achieve an adequate psychic separation from parent representations, or on the vicissitudes of extreme sibling rivalry. It must surely illuminate both illness and analytic process if one knows that a patient lives, by choice, the breadth of an ocean removed from parents and siblings with whom there has been no evident quarrel, when this is not a crucial matter of occupational opportunity or equivalently important reality. Similarly, a male patient’s gross psychosexual biography helps us to understand which ‘side’ of the incestuous transference is more likely to be surfacing in his first paroxysm of heterosexual ‘acting out’. While it is true that dreams, parapraxes, and other traditionally dependable psychoanalytic material may dramatically reveal the ego-dystonic directions of impulse and fantasy life, and the specific nature of opposing forces, it is only the composite situation, historical and current, that reveals the prevailing or alternative defences, the large-scale economic patterns, and the preferred or stable, i.e., most strongly overdetermined, trends of conflict solution.

Tactical problems of resistance were earliest observed largely in disturbances of free association, which, by frequent tacit assumption, would, or in principle could, lead without assistance to the ultimate genetic truth. This truth was construed to be the awareness of previously repressed memory (or the acceptance of convincing and germane constructions). As time went on, in Freud’s own writing, terms of conative import appeared, such as ‘tendency’ or, more vividly, ‘impulse’. However, the critical etiological and (reciprocally) therapeutic importance of memory has, of course, never really been lost. For while the recovery of traumatic memories, with abreaction, is still dramatic in its therapeutic effect (for example, in war neuroses or equivalent civilian experiences, and occasionally in isolated sexual experiences of childhood or adolescence), neuroses of isolated traumatic origin are rare in current psychoanalytic experience. Traumata are usually multiple and repetitive, often serving to crystallize, dramatize, and fix (sometimes even to ‘cover’) more chronic disturbances, such as distortions or pathological pressures in the instinctual life, against the background of larger problems of basic object relationships. Freud was already becoming aware of the complex structure of neuroses when he wrote his general discussion for the Studies on Hysteria (Breuer and Freud 1893-1895). Thus, to put it all too briefly, when structuralized impulses or general reaction tendencies can truly be accepted as memory, i.e., as matters of the past, other than in a tentative explanatory sense, much of the analytic work with the dynamics of the transference neurosis has necessarily been accomplished. One does not readily give up a love or hatred, personal or national, only because one learns that it is based on a crushing defeat of the remote past.

The manifest communicative phenomena of resistance remain very important, just as the common cold remains important in clinical medicine. It will never cease to be important to tell a patient that he is avoiding the emergence of sexual fantasies, that his blank silence covers latent thoughts about the analyst, or (in more sophisticated measure) that apparently enthusiastic erotic fantasies about the analyst conceal and include a wish to humiliate or degrade him. However, we can be better prepared, even for these problems, by ongoing holistic reconstruction. Surely we are better prepared for the formidable resistances of patients who apparently do ‘tell all’ or even ‘feel all’, in a most convincing way and in all sincerity, yet may finish an apparently thorough analysis without having touched certain nuclear conflicts of their lives and characters or (more often) without having met the transference neurosis with a sense of affective reality. I refer here not to the instances described by Freud (1937) in which such conflicts remain dormant because current life does not impinge on them, but to those in which ‘acting out’ in life, or the solution in severe symptoms, is desperately elected by the personality in apparently paradoxical preference to the subjective vicissitudes of the transference neurosis (Stone 1966).

In brief, here is a tentative formulation of the respective natures of two distinguishable groups of resistance phenomena, ultimately and vestigially related, and existing in varying degree in all analyses. Usually, however, one or the other is predominant, and in a practical and prognostic sense they are quite different: (1) Those that appear largely as discernible impediments of the psychoanalytic process in its immediate operational sense. These are usual in the neuroses, in persons who have achieved satisfactory separation of the ‘self’ from the primary object, but whose lives are disturbed by the residues of instinctual and other intrapsychic conflicts in relation to the unconscious representations of early objects and thus to transference objects. (2) Those that may be similarly manifested at times, but may be relatively or even exaggeratedly free of such impediments, where the essential avoidance is of genuine and effective diphasic involvement in the transference neurosis, with regard to fundamental and critical conflicts, and thus of the potential relinquishment of symptomatic solutions and the ultimate satisfactory separation from the analyst. In this context, among other phenomena, there may be large-scale hiatuses in the analytic material in the usual experiential sense, or there may be a striking absence of available and appropriate cues of connection with the transference. This complex of phenomena may repeat an original disturbance in ‘separation-individuation’ (Mahler 1965). Alternatively, other severe disturbances in early object relationships, or related pregenital (particularly oral) conflicts, may have produced tenacious narcissistic avoidance of transference involvement, or facade involvement, or the alternative of inveterate regressed and ambivalent dependency.
Dependable and largely affirmative secondary identifications have usually not been achieved originally, and this phenomenon, related to basic disturbances of separation, contributes importantly to the variously manifested fears of the transference.

Obviously, the phenomena of the two groups may overlap; there may be deceptively benign appearances in the more severe group. In the troublesome phenomenon of ‘acting out’, for example, one may deal with a transitory resistance to an emergent transference fragment, in some instances due to a delay of effective interpretation, or one may be confronted by a deep-seated, variably structuralized, and sometimes even ego-syntonic ‘refusal’ to accept the verbal mode of communication with an unresponsive transference parent, in dealing with insistent, disturbing, and gross affects and impulses.

Freud (1925) pointed out that everything said in the analytic situation must have some coefficient of reference to the situation in which it is said. This is, of course, consistent not only with reflective common sense but also with the theory of transference and the current view of the central position of the transference neurosis in analytic work. Furthermore, despite his earliest view of the ‘false connection’ as pure resistance (Breuer and Freud 1893-1895), and a continuing keen awareness of this aspect of transference, Freud early established the (non-conflictual) positive transference as the analyst’s chief ally against resistances. Nor did he ever stint in his appreciation of the primitive driving power of the transference and its indispensable function of conferring a vivid and living sense of reality on the analytic process (Freud 1912). In past communications I have held that the transference is the central dynamism of the entire psychoanalytic situation, and that the transference neurosis provides the one framework which gives essential and accessible form to the potentially panpsychic scope of free association (Stone 1961, 1966). In this frame of reference the irredentist drive to reunion with the primal mother, as opposed to the benign processes of maturation and separation, underlies neurotic conflict in its broadest sense and is the basis of what I have called the ‘primordial transference’, whose striving is toward renewed physical approximation or merger. Speech, which is the veritable stuff of psychoanalysis, serves as the chief ‘bridge’ of mastery for the progressive somatic separations of earliest childhood. 
The ‘mature transference’, in continuum, alternative, and contrast, is that series and complex of attitudes, contingent on maturation and on benign predisposing elements of early object relationships (conspicuously, the wish to be understood, to learn, and to be taught), which enables increasing somatic separation in a continuing affirmative context of object relationship, as later reflected in the psychoanalytic situation. In this interplay, speech, our essential working tool, plays oscillating, curiously intermediate roles, ranging from the threat of regression in the direction of its primitive oral substrate to its ultimately purely communicative-referential function linked with insight (Stone 1961, 1966).

Nonetheless, the origin of the ‘transference’ as we usually perceive it clinically, and as the term is traditionally employed, is in the primordial transference. Be it essentially the classical triadic incestuous complex, or an oral drive toward incorporation or toward permanent nursing dependency, or a sadomasochistic striving toward a parent, it will be re-experienced in the analytic situation, in good part in regressive response to its deprivations (Macalpine 1950), and will produce the central, and ultimately the most formidable, manifest resistance, the transference-resistance.

The term ‘transference-resistance’, while sometimes used in varying references, meant originally the resistance to effective insight into the genetic origins and prototypes of the transference, expressed in the very fact of its emergence (originally, the ‘false connection’ described by Freud [Breuer and Freud 1893-1895]). Later, as the transference became established in its own autochthonous validity, the same resistance could be viewed as an obstruction to genetic understanding of the transference, and thus putatively to its dissolution. Actually, such dissolution (using this word in a relative and pragmatic sense) is contingent on much germane analytic work, on analysis of the dynamics of the attitudes represented in the transference neurosis, on working through, and on complicated and gradual responsive emotional processes in the patient (Stone 1966). Nevertheless, genuine genetic insight is indispensable for the demarcation of the transference from the real relationship and for the intellectual incentive toward its dissolution within the framework of the therapeutic alliance.

With regard to the ‘resistance to awareness of transference’: while in some patients analysis is characterized by the immediate emergence of intense (even stormy) transference reactions, most patients experience these emergent attitudes as essentially ego-dystonic, except in the sense of the attenuated derivatives that enter (or vitiate) the therapeutic alliance, or in the sense of chronic characterological reactions that would appear in other parallel situations, however superficial and approximate the parallel might be.

The clinical actuality of emergent transference requires analysis in its usual technical sense, including the prior analysis of defence. Transference may appear in dreams long before it is emotionally manifest; in parapraxes; in symptomatic reactions; in acting out within the analytic situation; or, most formidably, in acting out in the patient’s essential life situation. Except in cases of dangerous acting out, or of very intense anxiety or equivalent symptoms, which can constitute emergencies, the technical approach involves the same patient, centripetal address to the surface prescribed for analysis of resistance in general. However, I would suggest a modification of the classical precept that one does not interpret the transference until it becomes a manifest resistance, at which point interpretation is obligatory. The resistance to awareness should be interpreted, and its content brought to awareness, when the analyst believes that the libidinal or aggressive investment of the analyst’s person is economically of sufficient importance to influence the dynamics of the analytic situation and the patient’s everyday life.

It is useful, for clarity in an essential direction, to strip the matter of nuances, reservations, and exceptions. The avoidance of awareness of transference derives from all the hazards that accompany consciousness: accessibility of the voluntary nervous system, and therefore heightened ‘temptation’ to action; heightened conflict in relation to the sanctions and satisfactions of impulse materialization; the multiple subjective dangers of communication of ‘I-you’ impulses and wishes, or germane fears, to an object invested with parental authority; the heightened sense of responsibility (thus, guilt) connected with the same complex; and, very far from least, the fear of direct humiliating disappointment, the narcissistic wound of rejection or, perhaps worst of all, of no affective response. The avoidance of this helplessness of impact plays an important part. There is also the exceedingly important fact that transference conflicts remaining outside awareness retain their unique access to autoplastic symptomatic expression, a compact and narcissistically omnipotent, if painful, solution, without direct challenge and confrontation with alternative (and essentially ‘hopeless’) solutions.

Why, then, if such fears weigh heavily against the analytic effort and the ultimate therapeutic advantage of awareness, does the patient cling tenaciously to his view of the analyst and the system of wishes connected with this view, once it has become established in his consciousness? In the earliest view, when the cognitive elements in analysis were heavily preponderant, not only in technique but also in the understanding of process, the reduction of such clinging to its genetic origins was thought to be the essential goal of the analytic effort, and itself the essential therapeutic mechanism. Still, why is the patient not willing, like the historian Lecky’s dinner partner, to ‘let bygones be bygones’? Unless one accepts this aversion to recall or reconstruction, this preference for ‘present pain’, as a primary built-in aversion, itself an unexplained fact of ‘human nature’, one must look further. One may reason that the patient rejects these elements of ‘insight’ because they vitiate or diminish both the affective and cognitive significance of this central object relationship, which is a current materialization of crucial unconscious wish and fantasy, originally warded off. If it is to be given up, why was it pried out of its secure nest in the unconscious? Such resolution is always felt, at least incidentally, as an attack on the patient’s narcissism and on his secure sense of self, secondarily reestablished. Moreover, to the extent that there is a genuine translation of the subjectively experienced somatic drive elements into verbal and ideational terms related to past objects, there is an inevitable step toward separation from the current object that parallels the original and corresponding developmental movement.

An essential dynamic difference from the past lies in the different somatic and psychological context in which the renewed struggle is fought. Old desires, old hatreds, old irredentist urges toward mastery have been reawakened in a mature and resourceful adult, in certain spheres still helpless subjectively but no longer literally and objectively so, a fact of which he is also aware. Freud (1910) pointed out that this great quantitative discrepancy between infantile conflict and adult resources makes possible and eases therapeutic change through insight. In many important respects, this remains true. However, the remorseless dialectic of psychoanalysis again asserts itself. Truly effective insight requires validating emotional experience, which is only rarely achieved through recollection alone. The affective realities of the transference neurosis are necessary (and, in any case, inevitable), and with this experience comes the renewal of the ancient struggle, in which, in varying degree, the maturity and resources of the analysand often play a role at variance with his capacity for understanding. This is true not only of the subjective quality and experience of his strivings but also of the resources which support his resistances, in either phase of the transference involvement. Whether the wish is to seduce, to cling, to defeat and humiliate, to spite, or to win love, mature resources of mind, and sometimes of body, may be invoked to serve this purpose, including what may occasionally be an uncanny intuitiveness regarding the analyst’s personal traits, especially his vulnerabilities.

The persistence of old desires for gratification and the urge to consummate them, or the given urges to restore and maintain an original relationship with an omnipotent (and omniscient) parent, are intelligible to everyday modes of thought. That the transference, like the neurosis itself, may also entail guilt, anxiety, frustration, disappointment, and narcissistic hurt is another matter. If it gives so much trouble, why does it reappear? Freud’s latter-day explanation involved the complex general theory of primary masochism and the repetition compulsion. One cannot, in a brief discussion, reopen a disputation that has already occasioned voluminous writing. In ultimate condensation, the operational view to which I hold is that the elements to be understood are, perhaps: (1) the renewed, unregenerate drive for gratification of previously warded-off wishes, whether libidinal or aggressive, based on the presentation of an actual object who bears significant functional ‘resemblances’ to the indispensable parent of early childhood, in a climate and structure of instinctual abstinence; and

(2) the latent alternative urge to understand, assimilate, perhaps alter parental response, or otherwise master a poignantly painful situation as it was experienced in a state of relative helplessness in the past. Both may be viewed as independent of adult motivations, although the power of the first may at times importantly subserve such motivations, and the second may often be phenomenologically congruent with them. Implicit in both, in contrast with the experienced plasticities and varieties of mature ego development, is the persistent and continuous theme of adhesion to the psychic representation of the decisive original parent figure or a perceptually variant substitute. If, as I believe, the struggle against original separation from the primal mother, with its later phase specifications, as opposed to the powerful urges toward independent development, provides the underlying basis for developmental and, later, neurotic conflict, then these conflicting tendencies, in the sense of their profundity, provide a certain parallel to the Thanatos-Eros struggle that assumed a decisive role in Freud’s final contributions. In a recent study of aggression (Stone 1971), I examined Freud’s views on this subject. Although I find the existence of a profound ‘alternative’ impulse to die at least conceptually tenable and susceptible of clinical inferential support, it is my conviction, from both observation and inference, that aggression is an essentially instrumental phenomenon, that it can serve self-preservative and sexual impulses alike, and that it is thus, in its original forms, pitted against a postulated latent impulse to die, as it is against external threats to life. 
These urges and instrumentalities find primal organismic expression and experience in the phenomenon of birth and the immediate neonatal period, the biological prototype of all subsequent specifications, elaborations, and transmutations of the experience of separation. At the very outset the ‘conflict’ may find expression in the delay of breathing or, shortly thereafter, in the disinclination to suck. There is thus an intertwining of the two conceptions of basic conflict. It may be that time will validate Freud’s latter-day views of the fundament of human conflict. For the time being, however, I hold to what seems an empirically more accessible and heuristically more useful view of the ultimate human intrapsychic struggle. Thus the originally unmastered or regressively reactivated struggle around separation, revived by developmental conflict, would in this schema represent the ‘bedrock’ of ultimate resistances, although never, at least in theory, utterly and finally insusceptible to influence. If we assume that the vicissitudes of object relationships, initiated by the special relationship of the human infant to his family, are fundamental in the accessible processes of personality (thus, structural) development and thus of the neuroses, and that, in ‘mirror image’, the transference and thus the transference-resistance have a comparable strategic position in the psychoanalytic process, can we extend these assumptions into the detailed technical phenomenology of process resistance in its endless variety of expression? I believe that this extension is altogether valid.

Whether or not one thinks of it as ‘motivation’ in its usual sense, one can without extravagance postulate an even more intense cohesiveness of the defensive organization at the first signal of the stimulus that contributed to its establishment and its basic strategies in the first place, i.e., the analyst as transference object. In the subjective sense, the regressive trend of the transference, fostered by the total structure of the psychoanalytic situation (i.e., the basic rule of free association and the systematic deprivations of the personal relationship), confronts the patient with one who is perceived ultimately as his first and all-important object, the prototypical source of all gratification, all deprivation, all rejection, all punishment, the object involved in the primordial serial experience of separation (Stone 1961). This may seem an exaggeratedly magniloquent way to view a practitioner who puts himself in a sitting position, usually in an armchair, listens, tries to understand, and then interprets, when he can, toward a therapeutic end. To a large portion of the adult patient’s personality, the ‘observing’ portion of his ego, the portion that enters the therapeutic alliance, that is just what he is and what he should remain. To another portion, largely unchanged from its past, sequestered in the unconscious but influential in derivative and indirect ways, he is a formidable object. It is in this field of force that, along with the drive toward better solutions, the range of clinical transferences as we know them is awakened. And the entire effort to translate the patient’s drives for reunion and contact, whether libidinal or aggressive, into genuine language, insight, and voluntary control (or appropriate conative accomplishment elsewhere) is ‘resisted’, as such translation was originally, as an expression (or at least a precursor) of separation, thus repeating aspects of the original developmental conflict. 
It is, however, also true that the later and clinically more accessible vicissitudes of childhood create more accessible resistances within the postulated metapsychological context created by the infant-mother relationship. This is true even in those patients in whom the phenomena of bodily unity or its approximations have been largely renounced, not only as a physical fait accompli in perceptual and linguistic fact but also through deployment of cathexis among other essential intrapsychic representations. These changes remain subject to regression, or to the primary investment of certain phase strivings, conspicuously the Oedipus complex, with excessive libidinal or aggressive cathexis. Such strivings, paradigmatically the incest complex, are in themselves the narrowed, potentially adaptive, maturational expressions of the basic conflict aroused by separation. If the analyst is, to this infantile portion of the patient’s personality, an indispensable parent, because cognition is in this reference subordinate to drive, it follows that the analyst becomes the central object in the complicated infantile system of desires, needs, and fears that have previously been incorporated in symptoms and character distortions. The patient must, furthermore, tell these ‘secrets’ to the very object of this complex of disturbing impulses. This is a new vicissitude, not usually encountered in childhood, where they were guarded, even within the patient’s own personality, by the very existence of the unconscious. Ordinarily, he does not even have to ‘tell himself’ about them; and he is, to a considerable degree, identified with his parents, originally in his ego, then, in a punitive or disciplinary sense, in his superego. To be sure, the adult ‘observing’ portion of his personality, except where matters of adult guilt, embarrassment, or shame interfere, usually cooperates with the analyst. 
It can at least try to maintain the flow of derivative associations, which give the analyst material for informed inferences. The tolerant and accepting attitude of the analyst, tested by the patient’s rational and intuitive capacities, and even more decisively his interpretative activity, which suggests to the unredeemed child in the patient that he ‘knows’ (or at least surmises) already, gradually overcome the patient’s fear of his own warded-off material and finally the fear of its frank expression.

There are, then, three broad aspects of the relationship between resistance and transference. Assuming technical adequacy, the proportional importance of each will vary with the individual patient, especially with the depth of psychopathology. First, the resistance to awareness of the transference and its subjective elaboration in the transference neurosis; second, the resistance to the dynamic and genetic reduction of the transference neurosis, and ultimately of the transference attachment itself, once established in awareness; third, the transference presentation of the analyst to the ‘experiencing’ portion of the patient’s ego, as id object and as externalized superego simultaneously, in juxtaposition to the therapeutic alliance between the analyst in his real function and the rational ‘observing’ portion of the patient’s ego. These phenomena give intelligible dynamic meaning to the resistances ordinarily observed in the cognitive-communicative aspects of the analytic process, the process or ‘tactical’ resistances, largely deriving from the ego under the pressure or threat of the superego.

The term ‘working through’ was used by Freud (1914) in connection with the observation that the neurotic structure yields only after a peak manifestation of resistance has apparently been achieved. The patient appears to require time, repetition, and a sort of increasing familiarity with the forces involved for real change to occur. Freud originally thought of the energy transactions involved as having some relation to the phenomenon of abreaction in the earlier cathartic method. One is impressed with the insistent recurrence of transference affects, conspicuously irrational anger in essentially rational patients, as though the structuralized tendency from which they derive can be reduced only by repetitive re-enactment and gradual diminution of affect. Since circumscribed symptom formations and equivalent forms of neurotic suffering (and gratification) play an ongoing and inevitable economic role in the psychoanalytic situation and process, apart from having usually been the basis for its initiation, one may assume that they bear an important relationship to working through. Even when extinguished, shortly before or long since, under the influence of the transference, their continued latent existence (or potentiality) stands opposed to the vicissitudes of the current transference neurosis and to its gradual relinquishment via working through. This is true whether one thinks of the symptom in the quasi-neurophysiological sense of Breuer’s early formulation of pathways of ‘lowered resistance’ (Breuer and Freud 1893-1895) or in a more empirical sense, as a perennially seductive regressive condensation of impulse, gratification, and punishment. A useful and well-grounded concept, allied with the struggle against separation, is the relationship of working through to the process of mourning (Freud 1917).

While from the adult point of view the gratifications may be small and the exchange a crucial change for the worse, the symptom is nevertheless autoplastic, narcissistic in an isolated sense, already structuralized, and subject to no outside interference (except by the analysis): an expression of localized infantile omnipotent fantasy, however large or small this fantasy kingdom may be. It is also relatively free from both the challenges and the sanctions of the world of reality, and from the temporarily disruptive intrusions of new elements into the narcissistically invested conscious personality organization. In working through, there is the diphasic and arduous problem of restoring original or potential object cathexes in the transference neurosis, reducing their pathological intensities or distortions, and then deploying them in relation to the outer world. One may thus think of ‘working through’ as opposed to the renewal of symptom formation, and as repeating some postulated vicissitude of one of the earliest conceptions of ‘transference’: the infantile transition from autoerotism to object love (Ferenczi 1908-9). In this sense, the clinging to the incestuous object, represented in the clinical transference, would represent an intermediate process.

There is thus a tenacious reluctance to be overcome before the ‘observing’ ego might seduce the involved portion from its inveterate clinging to the actual transference object or to its autoplastically equivalent symptomatic representation. The postulated two portions of the ego (Freud 1940, Sterba 1934, in different references) are, after all, ‘of the same blood’, to put it mildly, and the urge to reunion in integrated function, the libidinal (synthetic) bond, is quite strong. This affinity between ego divisions may, of course, take an opposite and adverse turn, a triumph of the ‘resistance’, as in instances of chronic severe transference regression, where the adult segment of the ego is ‘pulled down’ with the other and remains recalcitrant to interpretative effort (Freud 1940). While this is often contingent on the depth of manifest or latent illness, it may be amplified by iatrogenic factors, such as excessive and superfluous deprivation in inappropriate and essentially irrelevant spheres. These are considerations whose importance becomes increasingly convincing with the passage of time.

It is important to mention, even if briefly, that certain special factors, sometimes extrinsic to the analysis as such, may indefinitely prolong apparently satisfactory analyses. Real guilt, for example, may not be faced. Emotional distress based on real-life problems may not be confronted and accepted as such. A person of the type described by Freud (1916) as an ‘exception’, who feels himself to have been abused by fortune or fate, even if in other respects not more ill than others, may consciously or unconsciously reject the psychoanalytic discipline or the instinctual renunciations derived from its insights. Fixed and unpromising life situations or organic incapacities may permit so little current or anticipated gratification that the attractiveness of the regressive, aim-inhibited analytic relationship is strong in comparison with the barrenness of the extra-analytic situation. This last general consideration is, of course, always an essential (if silent) constituent of the psychoanalytic field of force, especially in relation to the dissolution of the transference-resistance (Stone 1966). Alternatively, and more accessibly, the ‘rules of procedure’ of the analysis itself may be consciously or unconsciously exploited by the patient. He may, in ‘obedience’ to a traditional rule, delay certain decisions to the point of absurdity, invoking the analytic work in support of his neurosis and sometimes in contempt of important obligations in real life. Financial support of the analysis by someone other than the analysand can provide a basis for chronic, concealed ‘acting out’. The analysis itself can, on occasion, become a lever for subtle evasion of the obligations, vicissitudes, and contingent gratifications of everyday life, and thus, paradoxically, become a resistance to its own essential goals and purposes. It may become too much like the dream, to which it bears certain dynamic resemblances (Lewin 1954, 1955).
The analyst’s obligation to be perceptive and tactfully illuminating is no less important in these spheres than in other sectors of his commitment.

It is sometimes thought that by the ‘mature transference’ is meant the ‘therapeutic alliance’, or a group of mature ego functions that enter into such an alliance. There is some blurring and overlapping of conceptual edges in both instances, but the concept as employed here is largely distinct from either, as it is from the primitive transference. Whether the concept is thought by others to comprehend a demonstrated actuality is a further question; that question can, of course, only follow on conceptual clarity. In other words, the purposeful nonrational urge is not dependent on the perception of immediate clinical purposes; it is a true ‘transference’ in the sense that it is displaced (in current relevant form) from the parent of early childhood to the analyst. Its content is largely nonsensual (sometimes transitional, as in the child’s pleasure in so-called dirty words) (Ferenczi 1911) and encompasses a special sphere of object relationship: the wish to understand and to be understood; the wish to be given understanding, i.e., teaching, specifically by the parent (or later surrogate); the wish to be taught ‘controls’ in a nonpunitive way, corresponding to the growing perception of hazard and conflict; and very likely an implicit wish to be provided with and taught channels of substitutive drive discharge. With these there might be a wish, corresponding to an element in Loewald’s description (1960) of the therapeutic process, to be seen in terms of one’s developmental potentialities by the analyst. The list could be extended into many subtleties, details, and variations. One should not omit to specify, however, that in its developments it would include the wish for increasingly accurate interpretation and the wish to facilitate such interpretations by providing adequate material: ultimately, of course, by identification, to participate in the function of the interpreter.
The childhood system of wishes that underlies this transference is a correlate of biological maturation and of the latent (i.e., teachable) autonomous ego functions appearing with it (Hartmann 1939). However, there is a drive-like quality in the particular phenomena that disqualifies any conception of the urge as identical with the functions; no one who has at any time watched a child importune its parents with questions, or experiment with new words, or solicit their interest in a new game, or demand storytelling or reading, can doubt this. That this finds powerful support and integration in the ego identification with a loved parent is undoubtedly true, just as does the identification with an analyst toward whom a positive relationship has been established. That ‘functional pleasure’ participates, certain ego energies perhaps, very likely the ego’s urge to extend its hegemony in the personality (Waelder 1936), is also true; yet the drive element remains, even the special phase patterns and colourations, and with it the importance of object relations, libidinal and aggressive, for a special reason. For just as the primordial transference seeks to undo separation, in a sense to prevent object relationships as we know them, the ‘mature transference’ tends toward separation and individuation (Mahler 1965) and increasing contact with the environment, optimally with a largely affirmative (increasingly neutralized) relationship toward the original object, toward whom (or whose surrogates) a different system of demands is now increasingly directed. A further consideration that leads one to emphasize the drive-like elements in these attitudes as integrated phenomena, as examples of ‘multiple function’, rather than as the discrete exercise of a function or functions, is the conviction that there is a continuing dynamic relation of relative interchangeability between the two series, at least as regards the responses to gratification: a significant zone of complicated energic overlap, possibly including the phenomenon of neutralization.
That the empirical ‘interchangeability’ is limited is true, but this in no way diminishes its decisive importance. In the psychoanalytic situation, both the gratifications offered by the analyst and the freedom of expression of the patient are much more severely limited and concentrated practically entirely (in any immediately demonstrable sense) in the sphere of speech; on the analyst’s side, further, in the transmission of understanding.

Whereas the primordial transference exploits the primitive aspects of speech, the mature transference urges seek the heightened mastery of the outer and inner environment, a mastery to which the mature elements in speech contribute importantly. Likewise, the most clear-cut genetic prototype for the free-association-interpretation dialogue is the original learning and teaching of speech, the dialogue between child and mother. It is interesting that, just as the profundities of understanding between people often include - ‘in the service of the ego’ - transitory introjections and identifications, the very word ‘communication’, which represents the central ego function of speech, is intimately related etymologically, even in certain actual usages, to the word chosen for that major religious sacrament that is the physical ingestion of the body and blood of the Deity. Perhaps this is just another suggestion that the oldest of individual problems does, after all, continue to seek its solution in its own terms, if only in a minimal sense and in channels so remote as to be unrecognisable.

The mature transference is a dynamic and integral part of the ‘therapeutic alliance’, along with the tender aspects of the erotic transference, the even more attenuated (and more dependable) ‘friendly feeling’ of adult type, and the ego identifications with the analyst. Indispensable, of course, are the genuine adult need for help, the crystallizing rational and intuitive appraisals of the analyst, the adult sense of confidence in him, and innumerable other nuances of adult thought and feeling. While these give a driving momentum and power to the analytic process, what is always by its very nature a potential source of resistance, and always requires analysis, is the primordial transference in its various appearances in the specific therapeutic transference. Yet the latter, if well managed, is not only a reflection of the repetition compulsion in its baleful sense, but a living presentation from the id, seeking new solutions, ‘trying again’, so to speak, to find a place in the patient’s conscious and effective life; it has important affirmative potentialities. This has been specifically emphasized by Nunberg (1951), Lagache (1953, 1954), and Loewald (1960), among others. Loewald (1960) has recently elaborated very effectively the idea of ‘ghosts’ seeking to become ‘ancestors’, based on an earlier figure of speech of Freud (1900). The mature transference, in its own infantile right, provides a unique quality of propulsive force, which comes from the world of feeling rather than the world of thought. Viewed in a purely figurative sense, that fraction of the mature transference that derives from ‘conversion’ is like the propulsive fraction of the wind acting on a boat sailing close-hauled against the wind: the strong headwind, the ultimate source of both resistance and propulsion, is the primordial transference. This view, however, should not displace the original and independent, if cognate, origin of the mature transference.
To complete the figure of speech, a favourable tide or current would also be required. This is not to say that the mature transference is itself entirely exempt from analytic clarification and interpretation. For one thing, as in other childhood spheres of experience, there may have been traumas in this sphere: punishments, or serious defects or lacks in parental communication, listening, attention, or interest. This is probably far more important than has previously appeared in our prevalent paradigmatic approach to adult analysis, even taking into account the considerable changes due to the growing interest in ego psychology. ‘Learning’ in the analysis can, of course, be a troublesome intellectualizing resistance. Furthermore, both the patient’s communications and his reception and use of interpretations may exhibit only too clearly, as sometimes with other ego mechanisms, their origin in and tenacious relation to instinctual dynamisms: greediness for the analyst to talk (rarely the opposite), uncritical acceptance (or rejection) of interpretations, parroting without actual assimilation, fluent, ‘rich’, endlessly detailed associations without spontaneous reflection or integration, direct demands for the solution of moral and practical problems entirely within the patient’s own intellectual scope, and a variety of others. Discriminating between a use of speech driven by an essentially instinctual demand and an intellectual or linguistic trait or habit, determined by specific factors in its own developmental sphere, may not always be easy. However, the underlying dynamism remains largely of a character favourable to the purposes and processes of analysis, as it was to the original processes of maturational development, communication, and benign separation. Lagache (1953, 1954) comments on the desirability of separating the current unqualified usage of ‘positive’ and ‘negative’ transference, based on the patient’s immediate state of feeling, from a classification based on the essential effect on the analytic process. In the latter sense, the mature transference is usually a ‘positive transference’.

A few remarks about clinical considerations in the transference neurosis and the problem of transference interpretation may be offered at this point. The whole structural situation of analysis (in contrast with other personal relationships), its dialogue of free association and interpretation, and its deprivations as to most ordinary cognitive and emotional interpersonal exchange tend toward the separation of discrete transferences from one another and from their defences, in character or symptoms, and, with deepening regression, toward the re-enactment of the essentials of the infantile neurosis in the transference neurosis. In ordinary relationships, the ‘cooperative’ response of the other person (gratifying, aggressive, punitive, or in other ways abounding in responsiveness) and the open mobility of the search for alternative or greater satisfactions are factors of profound dynamic and economic influence, so that only an extraordinary situation, or a transference of pathological intensity, can occasion comparable regression.

It is a curious fact that whereas the dynamic meaning and importance of the transference neurosis have been well established since Freud gave this phenomenon a central position in his clinical thinking, the clinical reference, when the term is used, remains variable and ambiguous. For example, Greenson, in his paper of 1965, speaks of it as appearing “when the analyst and the analysis become the central concern in the patient’s life.” Yet one may question certain aspects of Greenson’s definition, for the term ‘central’ is justifiable only insofar as it applies to the analyst’s symbolic position in relation to the patient’s experiencing ego (Sterba 1934) and to the symbolically decisive position that he correspondingly assumes in relation to the other important figures in the patient’s current life. Although the analysis is in any case, and for many reasons, exceedingly important to the seriously involved patient, there is a free, observing portion of his ego that is involved, but not in the same sense as the portion caught up in the transference regression and revived infantile conflicts. There is, of course, always the integrated adult personality, however diluted it may seem at times, to whom the analysis is one of many important realistic life activities. Only rarely, although it unavoidably does occur, does the analysis factually outstrip in importance the other major concerns, attachments, and responsibilities of the patient’s life, and perhaps it is not desirable that this should occur. On the other hand, if construed with proper attention to the economic considerations, the idea is important both theoretically and clinically. In the theoretical direction, we may assume that there is a continuing system of object relationships and conflict situations, most important in its unconscious representations but often participating in all others, deriving in a successive series of transferences from the experience of separation from the original object, the mother.
In this sense, the analyst is, for the uniquely important portion of the patient’s personality, the portion that ‘never grew up’, a central figure. In the clinical sense, the importance of the transference neurosis lies in its outlining for us the essential and central analytic tasks, providing, amid adjacent currents of relative fugacity, a secure cognitive base for analytic work. By its inclusion of the patient’s essential psychopathological processes and tendencies in their original functional connections, it offers, in its resolution or marked reduction, the most formidable lever for an analytic cure. The transference neurosis must be seen in its interweaving with the patient’s extra-analytic system of personal contacts. The relationship to the analyst may influence the course of relationships to others, in the same sense that the clinical neurosis did, except that the former is alloplastic, proportionately exposed, and subject to constant interpretation. It is also an important fact that, except in those rare instances where the original dyadic relationship appears to return, the analyst, even in strictly transference spheres, cannot be assigned all the transference roles simultaneously. Other actors are required. He may at times oscillate with confusing rapidity between the status of mother and father, but he usually remains predominantly in one of these roles for long periods, someone else representing the other. Moreover, apart from ‘acting out’, complicated and mutually inconsistent attitudes, anterior to awareness and verbalization, may require the seeking of other transference objects: husband or wife, friend, another analyst, and so forth. Children, even the patient’s own children, may be invested with early strivings of the patient, displaced from the analysis, to permit the emergence or maintenance of another system of strivings.
These persons, of course, may be more or less aware of the strivings mobilized in the patient by the analysis, and may even experience toward him the impulses that he would wish to call forth in the analyst. Transference interpretation therefore often has, inescapably, a somewhat paradoxical inclusiveness, which is an important reality of technique. There is another aspect: the dynamic and economic impact of the intimate and actualized dramatis personae of the transference neurosis on the progress of the analysis as such, on the patient’s motivations, and on his real-life avenues for recovery. For the persons in his milieu may fulfil their ‘positive’ or ‘negative’ roles in the transference only too well, in the sense that an analyst motivated by a ‘blind’ countertransference may do the same. Apart from their roles in the transference drama, which may ease or impede interpretative effectiveness, they can provide the substantial and dependable real-life gratifications that ultimately ease the analysis of the residual analytic transferences; or their incapacities or attitudes may occasion an overload of the anaclitic and instinctual needs in the transference, rendering the same process far more difficult. In the most unhappy instances, there can be a serious undercutting of the motivations for basic change.

There is also the fundamental question of the role of transference interpretation. Here, variances remain as to details and as to the emphasis placed on other important aspects of the therapeutic process; there are still many who, while not doubting the value of transference interpretation, are inclined to doubt its uniqueness and to stress the importance of economic considerations in determining the choice between transference and extratransference interpretation (in a sense, the necessarily ‘distributed’ character of a variable fraction of interpretation). There is the fact that the extra-analytic life of the patient often provides indispensable data for the understanding of detailed complexities of his psychic functioning, because of the sheer variety of its references, some of which cannot be reproduced in the relationship to the psychoanalyst. For example, there is no repartee (in the ordinary sense) in the analysis. The way the patient handles a dialogue with an angry employer may be importantly revealing. The same may be true of the quality of his reaction to a real danger of dismissal. There are not only the ‘realities’ but also the ‘formal’ aspects of his responses. These expressions of his personality remain important, though his ‘acting out’ of the transference (assuming this was the case) may have been even more revealing and, of course, requires transference interpretation. Furthermore, these expressions remain useful, if discriminatingly and conservatively treated, even though they are inevitably subject to that epistemological reservation which haunts so much of the data presented in the analytic situation. Of course, the ‘positive’ transference facilitates the effectiveness of interpretations, enabling the patient to listen to them receptively and to take them seriously.

In an operational sense, then, extratransference interpretations cannot be set aside or underestimated. However, the unique effectiveness of transference interpretations is not thereby disestablished. No other interpretation is free of the reservation that the analyst does not substantially know the ‘other person’ involved in the deep affection, quarrelling, criticism, or whatever is being reported. No other situation provides the patient with the combined sense of cognitive acquisition and the experience of complete personal tolerance and acceptance that is implicit in an interpretation made by an individual who is the object of the emotions, drives, or even defences that are active at the time. There is no doubt that such interpretations must not only (in common with all others) be offered with personal tactfulness but must also be offered with special care as to their intellectual reasonableness in relation to the immediate context, lest they defeat their essential purpose. It is not very likely that a patient who has just been jilted in a long-standing love affair and is suffering exceedingly will find useful an immediate interpretation that his suffering is due to the analyst’s failure to reciprocate his love, although a dynamism in this general sphere may ultimately be shown, and be acceptable to the patient.
On the other hand, once the transference neurosis is established, with its accompanying subtle (sometimes gross) colourations of the patient’s story, transference interpretations are indicated. For if all of the patient’s libido and aggression is not, in fact, invested in the analyst, the analyst has at least an unconscious role in all important emotional transactions; and if the assumption is correct that the regressive drive mobilized by the analytic situation tends toward the restoration of a single all-encompassing relationship, specified pragmatically in the individual case by the actually attained level of development, then there is a dynamic factor at work importantly meriting interpretation as such, to the extent that available material supports it. This would be the immediate clinical application of the material regarding a ‘cognitive lag’.

Freud’s first formal reference to transference (Breuer and Freud 1893-1895) set the tone for all that followed. In discussing resistance and obstacles to effective cathartic (analytic) work, he offers as one possibility that ‘the patient is frightened at finding that she is transferring on to the figure of the physician the distressing ideas which arise from the content of the analysis’. Transference onto the physician takes place through a ‘false connection’. Freud then offers an example of a woman who developed a hysterical symptom based on her wish many years earlier (and now relegated to the unconscious) that the man she was talking to at the time might boldly take the initiative and give her a kiss. He then described how, toward the end of one session, a similar wish came up in the patient toward himself - Freud. The patient was horrified and unable to work in the next hour: an obstacle to the therapeutic work that was removed once Freud had discovered its basis and pointed it out to the patient. In response, the patient could recall the pathogenic recollections that accounted for her reactions to Freud. The unconscious wish, according to Freud, had become conscious but was linked to his person through a ‘false connection’ created by the transference.

Important for the present issues is the finding that Freud’s monumental discovery of transference was founded upon his realization that his patient’s conscious fantasy about him was based on an earlier experience with another man. This displacement from an earlier figure (in later writings this person would often be linked to the patient’s father or another childhood figure) was seen as having no foundation in the analyst’s behaviour and as based entirely on the patient’s inner wish. Freud repeatedly characterized such responses as real for the patient though unfounded in the actualities of the analytic relationship.

Once again, in his well-known postscript to the case of Dora, Freud (1905) showed an appreciation of the unconscious basis for transference, though he maintained as his clinical reference point some type of conscious allusion to a reaction toward the analyst. Freud defined transferences as a special class of mental structures that are for the most part unconscious. Descriptively, he identified them as new editions or facsimiles of the impulses and phantasies that are aroused and made conscious during the progress of the analysis; they replace some earlier person by the person of the physician. Freud stated that some transferences differ from their earlier models in no way except the substitution of the physician for the earlier figure; these he supposed to be new impressions or reprints. Other transferences, he stated, are more ingeniously constructed and have been subjected to a modifying influence he termed sublimation; the implication was that these transferences took advantage of some real peculiarity in the physician’s person or circumstances and attached themselves to that factor. These transferences he considered revised editions. Through transference, the past of the patient is revived as belonging to the present. Even with the patient Dora, the main transference was seen as a replacement of her father by Freud, and much of this found expression through conscious comparisons, such as her question about whether Freud was keeping secrets from her as had her father. Other manifest concerns that Dora expressed in her relationship with Freud were traced to the relationship with Herr K.

Throughout his discussion, Freud maintained the clinical view of transference as involving some direct reference to himself as the analyst. While he clearly stated that transference structures are largely unconscious, he evidently stressed the role of unrecognized displacements and the patient’s unawareness of the intrapsychic and genetic sources of her direct responses to the analyst. It is this peculiarity of the conceptualization of transference - a recognition of its unconscious basis, which is seldom specified in any detail, and a simultaneous maintenance of the idea that it is expressed through direct references to the analyst - that has contributed to much uncertainty in this area.

Freud and others have treated manifest and conscious fantasies about the analyst as if they represented either the direct awareness of a fantasy influencing the patient’s psychopathology or the breakthrough of a previously unconscious fantasy or memory, originally attached to an earlier figure. This has caused considerable confusion; for all practical purposes, conscious fantasies about the analyst and defences against them have been taken as the substance of the patient’s transference neurosis, while the role of unconscious fantasies has been neglected.

While Freud and other analysts have at times stressed the critical role of unconscious fantasy constellations in the development of neurosis, in their actual clinical work conscious fantasies are often taken at face value and held responsible for the patient’s illness. Some of this contradiction has been rationalized away with the idea that these conscious fantasies represent direct breakthroughs of previously unconscious fantasies, a position adopted despite the acknowledgment in other contexts (Arlow 1969, Brenner 1976) that defences and resistances are always at work and that pure breakthroughs are either extremely rare or nonexistent (the conscious product is always a compromise and always contains some degree of disguise).

While this view pays lip service to the idea of nondistorted reactions by the patient, there has been virtually no consideration of his continuous, essentially sound functioning, or of his conscious and unconscious perceptions. This is in keeping with the overriding stress on pathological unconscious fantasies in the etiology of neuroses and in transference, to the neglect of unconscious perceptions and introjects, a factor neglected to this day.

Most of what Freud had to say about unconscious fantasies and derivatives appeared in papers unrelated to technique and transference. In an important contribution of 1908, ‘Hysterical Phantasies and Their Relation to Bisexuality’, he specifically identified the role of unconscious fantasies in symptom formation, borrowing heavily from his insights into dreams. Freud had discovered that hysterical symptoms are based on fantasies that represent the satisfaction of wishes. He noted, however, that while these fantasies can initially be conscious or unconscious, the critical factor in neurosogenesis is the presence of an unconscious fantasy expressing itself through hysterical symptoms and attacks. Freud felt that at times these unconscious fantasies can quickly be made conscious, and that the unconscious fantasy may be a derivative of a formerly conscious fantasy, suggesting thereby that the disguise involves the unconscious rather than the conscious fantasy. In this early use of the concept of derivatives, then, it was not the conscious fantasy that was the derivative of the underlying fantasy, but the reverse.

In his paper on the dynamics of transference, Freud (1912) described transferences as based on a ‘stereotype plate’ that is constantly repeated, ‘repeated afresh’, during a person’s life. The underlying fantasies were seen as partly accessible to consciousness and partly unconscious. Transference, then, is the introduction of one of these stereotype plates into the patient’s relationship with the analyst.

It was here also that Freud stated that when associations fail or become blocked, they have become connected with the analyst. Freud stressed the role of unconscious complexes in psychopathology and suggested that they are represented consciously, and that their roots in the unconscious have to be traced out. The key to analysis is the distortion of pathogenic material expressed through the patient’s transference.

In ‘Remembering, Repeating and Working-Through’, Freud (1914) saw transference as involving repetitions of the past in the actual relationship with the analyst. In stressing, once again, the extent to which the patient experiences these transferences as real and contemporary, Freud again used the term transference to refer to direct reactions to the analyst. In his paper on transference love (1915), Freud is clearly alluding to conscious erotic wishes and fantasies about the analyst: he stated that he was discussing situations in which women patients declare their love for a male analyst and make direct demands for the return of his love, using such demands as resistances. Similar thinking is revealed in An Outline of Psycho-Analysis (1940), in which Freud discusses how the patient sees the analyst as a reincarnation of figures from his childhood and transfers feelings and reactions based on this prototype. Freud once again referred to the positive and negative attitudes taken toward the analyst and to the plastic clarity with which patients experience such transferences.

The clearest evidence for Freud’s clinical definition of transference appears in his presentation of the opening phase of the analysis of the Rat Man (1909). Freud’s notes on this case reveal that, with one exception, each time Freud used the term transference he was referring to a conscious fantasy or illusion about himself or his family. Consistently, Freud would attempt to identify the genetic basis for these transferences; the main unconscious aspect was the mechanism of displacement. It followed, then, that resistance, and in particular transference resistance, became defined as efforts by the patient to avoid the expression or realization of conscious fantasies about the analyst, although the term could be extended to include unconscious avoidance as well. This is a reminder that the definition of resistance depends largely on the definition of transference - that is to say, Freud took allusions toward an outside person as displacements from himself, from ‘the transference’. In this context, it is well to recall that Freud’s original definition of acting out (Freud 1905) alluded to behaviours directed toward the analyst, such as Dora’s flight from analysis, and to a lesser extent to actions involving other persons.

Freud’s narrow view of transference as concerning direct references to the analyst is also reflected in one of his rare comments on the nature of material from patients (Freud 1937). In discussing the kinds of material that patients put at the disposal of analysts for recovering lost pathogenic memories, Freud refers to dreams, free associations, the repetition of affects, actions performed by the patient both inside and outside the analytic situation, and the relation of transference that becomes established toward the analyst. In addition, his archaeological model of repressed unconscious memories can be seen to imply the discovery of previously repressed fantasies in relatively intact form, although it also leaves room for fragmented representations. Finally, we may note a comparable comment by Freud in the Outline (1940): “We gather the material for our work from a variety of sources - from what is conveyed to us by the information given us by the patient and by his free associations, from what he shows us in his transference, from what we reach by interpreting his dreams and from what he betrays by his slips or parapraxes.”

Moreover, Freud tended to divorce his discussion of the transference neurosis and transferences from his consideration of the nature of psychopathology. In keeping with this trend, his discussion of the nature of unconscious fantasies and processes, and of derivative communication, appeared primarily in two metapsychological papers - Repression (Freud 1915) and The Unconscious (Freud 1915). In both papers he was concerned with communication between the unconscious mind and the preconscious or conscious mind. He noted that this takes place by means of derivatives that express and represent unconscious instinctual impulses. He also pointed out that unconscious fantasies can be highly organized and logical even though outside the awareness of the patient, suggesting again the possibility of the direct breakthrough of such fantasy material. In these writings, it is the unconscious fantasy that expresses itself consciously through derivatives as substitute formations such as symptoms or preconscious thought formations. What has been repressed, Freud noted, can become conscious only if it is sufficiently disguised. On this basis, unconscious fantasies can appear in a patient’s free associations (the reference here is to free association rather than to transference) through remote and distorted derivative expressions. These are substitute formations that constitute the return of the repressed - the repressed instinctual impulses modified by defensive operations such as displacement.

It must be said that Freud left considerable room for uncertainty regarding his conceptualization of transference. Theoretically, he implied that transferences are based on unconscious fantasies and memories derived from early experiences and brought into play in the relationship with the analyst. Yet he never applied his insights into the nature of derivative communications to the subject of transference. As a result, his clinical referent for transference remained, throughout his writings, that of a direct reference to the analyst. While he acknowledged the important role of unconscious processes and contents, he tended to take the patient’s conscious fantasies about the analyst at face value and to understand them as direct representations displaced from the past. A major contradiction thereby unfolded, in that Freud correctly understood neuroses to be based on unconscious fantasy constellations, including unconscious transference fantasies, and yet he worked analytically with the patient’s conscious fantasies toward himself as analyst. Freud’s contention that sometimes unconscious fantasies break through unmodified into conscious awareness is clearly insufficient justification for this approach. There is abundant clinical evidence that unconscious fantasy constellations are always expressed through derivative formations, and that even when elements of the underlying unconscious fantasy break through in unmodified form - or are recovered through interpretation - there always remains an additional concealed element. Further, at the point of realization of an undisguised unconscious fantasy, it seems likely that its expression would itself function as a disguised and defensive derivative of a different and still repressed unconscious fantasy (Gill 1963).

The failure by analysts to maintain the essential definition of transference - as based on an unconscious fantasy constellation expressed, almost without exception, through derivatives - has led to many mistaken formulations regarding the nature of psychopathology, the analytic process itself, and the techniques of the psychoanalyst and psychotherapist. In their discussions of the neuroses, analysts have consistently maintained and documented the thesis that psychopathological syndromes are based on unconscious processes and contents - fantasy constellations. It seems evident that analytic work with manifest fantasies per se cannot provide access to, or interpretations of, these unconscious constellations.

The need to clarify the contextual significance of ‘transference’ - what it serves to achieve, prevent, or avoid - thus becomes apparent. For example, relating to the analyst on the basis of some preconceived fantasy, rather than as the person he or she is, can function to prevent the possibility of engaging meaningfully and experiencing the anxiety a more mutual and intimate engagement might arouse.

An appreciation of interactive factors also allows us to consider that, to whatever degree the patient’s perceptions of the analyst are plausible and even valid (Ferenczi, 1933; Little, 1951; Levenson, 1972; Searles, 1975; Gill, 1982; Hoffman, 1983), this may be due to the patient’s expertise at stimulating precisely this kind of responsiveness in the analyst. The reverse is true as well. Thus, though patient and analyst each will have unique vulnerabilities, sensitivities, strengths, and needs, we must consider why particular qualities or sensitivities of either patient or analyst are engaged at a given moment and not at others. At any moment patient or analyst might be involved in some kind of collusive enactment (Racker, 1957, 1968; Levenson, 1972, 1983; Sandler, 1976; Bion, 1967, 1983; Ogden, 1979; Grotstein, 1981; McDougall, 1979). These considerations help to illuminate why clinicians often seem to practice in ways that contradict their own stated beliefs and theoretical positions.

The powerful impact of unwitting communication between patient and analyst is, of course, one reason the analyst’s countertransference experience can be a source of vital data about the patient and may become the ‘key’ to understanding aspects of the interactions that might otherwise remain impenetrable (Heimann, 1950).

An appreciation of interactive factors also requires us to reconsider what constitutes an analytic ‘mistake’. In this regard Winnicott (1956, 1963) expressed the view that there are times when our patients need us to fail: ‘In the end the patient uses the analyst’s failures, often quite small ones, perhaps manoeuvred by the patient. The operative factor is that the patient now hates the analyst for the failure that originally came as an environmental factor, outside the infant’s area of omnipotent control, but that is now staged in the transference. So in the end we succeed by failing - failing the patient’s way. This is a long distance from the simple theory of cure by corrective experience’ (Winnicott, 1963).

Fromm-Reichmann (1939, 1950, 1952) emphasized that at times the analyst’s mistakes may become the basis for a ‘golden (analytic) opportunity’. From this vantage point we might consider that how an analyst deals with his or her own inevitable fallibility may be one of the defining aspects of his or her technique.

An appreciation of interactive considerations thus requires us to rethink important issues of technique and the question of how we define ‘analysis’. It also requires us to consider that the patient’s so-called ‘analyzability’ may depend more on the nature of the analyst’s participation than has previously been recognized. The dilemma is how to move into a new mode of thinking about clinical technique that is not beset by the inherent limitations of traditional thinking or by those of more radical new perspectives.

Others before have thought of the psychoanalytic situation and process as having a general unconscious meaning that reproduces certain fundamental aspects of early development. For example, Greenacre, and Spitz in 1956, offered views of the psychoanalytic situation and of the origins of transference based largely on the mother-child relationship of the first months of life. Greenacre used the term ‘primary transference’ (with two alternatives). In so far as the ideas of Greenacre and Spitz emphasize the prototypic position of the first months of life, as reproduced in the current situation, there are subtle but important differences from the view presented here. Nacht and Viderman in 1960 extended related ideas to their conceptual extreme, employing metapsychological terminology. One can readily understand the regressive transference drive set up by the situation as having such a general direction, i.e., toward primitive quasi-union, a reservation that Spitz accepted and specified in response to Anna Freud. It is the activation of this drive and its opposing cognate that underlies the construction of the psychoanalytic situation, which is seen primarily as a state of separation, of ‘deprivation-in-intimacy’.

With the prolonged and strictly abstinent contact of the classical analytic situation, there is inevitably for the patient some growing and paradoxical experience of cognitive and emotional deprivation in the personal sphere - the cognitive and emotional modalities being in certain respects overlapping or interchangeable, in the same sense that the giving of interpretations may satisfy to varying degrees either cognitive or emotional requirements. The patient also renounces the important expressive function of locomotion. Even gesture and other bodily expressions, if developed beyond a certain conventional communicative degree, tend, by interpretive pressure, to be translated into the mainstream of oral-vocal-auditory language. The suppression of hand activity, considering both its phylogenetic and ontogenetic relation to the mouth (Hoffer 1949), exquisitely epitomizes the general burdening of the function of speech with its latent instinctual components, especially the oral aggressions.

From the objective features of this real and purposive adult relationship, one may infer that its unconscious impact lies in reproducing, in its primary and most extensive form, the superimposed series of basic separation experiences in the child’s relation to his mother. In this sense, the analyst would represent the mother-of-separation, as differentiated from the traditional physician who, by contrast, represents the mother associated with intimate bodily care. This latent unconscious continuum-polarity facilitates the oscillation from ‘psychosomatic’ reactions and primitive archaic impulses and fantasies up to the integration of impulse and fantasy life within the scope of the ego’s control and activities (Stone 1961).

Within this structure, the critical function of speech is seen in a similar perspective, as a continuous telescopic phenomenon ranging from its primitive meanings as physiological contact and as resolution of excess or residual primitive oral drive tensions, through the conveyance of expressive or demanding or other primitive communications, on up to its role as a securely established autonomous ego function, genuinely communicative in a referential-symbolic sense. To the extent that an important fraction of human impulse life is directed against separation from birth onward, the role of speech, which develops rapidly as the modalities of actual bodily intimacy are disappearing or becoming stringently attenuated (Sharpe 1940), has a unique importance as a bridge over the state of bodily separation. In the instinctual contribution to speech, considered as a phenomenon of organic or maturational ‘multiple function’ (Waelder 1936), the cannibalistic urges loom large; they, and more manifestly their civilized cognates (in part derivative), the introjective tendencies, always retain their capacity for re-emergence as such. In this view, the most primitive and summary form of mastery of separation, fantasized oral incorporation, is in a continuous line of development with the highest form of objective dialogue between adults. The demonstrable level of response of the given patient, in this general unconscious setting, will be determined (in ideal principle) by his effectively attained level of psychosexual development and ego functioning in its broadest sense, and by his potentiality for regression.

Advances in our understanding of the therapeutic action of psychoanalysis should be based on deeper insight into the psychoanalytic process. By ‘psychoanalytic process’ is meant the significant interactions between patient and analyst that ultimately lead to structural changes in the patient’s personality. Today, after more than fifty years of psychoanalytic investigation and practice, we are in a position to appreciate, if not yet fully to understand, the role that interaction with the environment plays in the formation, development, and continued integrity of the psychic apparatus. Psychoanalytic ego-psychology, based on a variety of investigations concerned with ego-development, has given us some tools to deal with the central problem of the relationship between the development of psychic structure and interaction with other psychic structures, and of the connexion between ego-formation and object-relations.

If ‘structural changes in the patient’s personality’ means anything, it must mean that we assume ego-development to be resumed in the therapeutic process of psychoanalysis. This resumption of ego-development is contingent on the relationship with a new object, the analyst. The nature and the effects of this new relationship are what should be clarified in a fruitful attempt to correlate our understanding of the significance of object-relations for the formation and development of the psychic apparatus with the dynamics of the therapeutic process.

Problems of essentially established psychoanalytic theory and tradition - concerning object-relations, the phenomenon of transference, the relations between instinctual drives and ego, and the function of the analyst in the analytic situation - have to be dealt with; it is unavoidable, for the sake of clarification, to digress at times from the central theme in order to deal with such problems. Thus the present discussion is anything but a systematic presentation of the subject-matter. Nor is it an attempt to suggest modifications or variations in technique. A better understanding of the therapeutic action of psychoanalysis may lead to changes in technique, but anything such clarification may entail as far as technique is concerned would have to be worked out carefully and is not the topic here.

While the fact of an object-relationship between patient and analyst is taken for granted, classical formulations concerning therapeutic action and concerning the role of the analyst in the analytic relationship do not reflect our present understanding of the dynamic organization of the psychic apparatus, and not merely of the ego. Modern psychoanalytic ego-psychology is, directly or indirectly, far more than an addition to the psychoanalytic theory of instinctual drives. It is the elaboration of a more comprehensive theory of the dynamic organization of the psychic apparatus, and psychoanalysis is in the process of integrating our knowledge of instinctual drives, gained during earlier stages of its history, into such a psychological theory. The impact that psychoanalytic ego-psychology has had on the development of psychoanalysis suggests that ego-psychology is not concerned with just another part of the psychic apparatus, but gives a new dimension to the conception of the psychic apparatus as an undivided whole.

In analysis we have ample opportunities to observe and investigate primitive as well as more advanced interaction-processes, that is, interactions between patient and analyst that lead to, or form steps in, ego-integration and disintegration. Such interactions, or integrative (and disintegrative) experiences, occur often, but they do not often as such become the focus of attention and observation, and they frequently go unnoticed. Apart from the difficulty for the analyst of self-observation while in interaction with his patient, there is a specific reason, stemming from theoretical bias, why such interactions not only go unnoticed but are frequently denied. The theoretical bias is the view of the psychic apparatus as a closed system. Thus the analyst is seen, not as a co-actor on the analytic stage on which the childhood development, culminating in the infantile neurosis, is restaged and reactivated in the development, crystallization, and resolution of the transference neurosis, but as a reflecting mirror, albeit of the unconscious, characterized by scrupulous neutrality.

This neutrality of the analyst is required (1) in the interest of scientific objectivity, to keep the field of observation from being contaminated by the analyst’s own emotional intrusions, and (2) to guarantee a tabula rasa for the patient’s transferences. While the latter reason is closely related to the general demand for scientific objectivity and avoidance of the interference of the personal equation, it has its specific relevance for the analytic procedure as such in so far as the analyst is supposed to function not only as an observer of certain processes, but as a mirror that actively reflects back to the patient the latter’s conscious and particularly his unconscious processes through communications. A specific aspect of this neutrality is that the analyst must avoid falling into the role of the environmental figure (or its opposite) the relationship to whom the patient is transferring to the analyst. Instead of falling into the assigned role, he must be objective and neutral enough to reflect back to the patient what role the latter has assigned to the analyst and to himself in the transference situation. Nevertheless, such objectivity and neutrality need to be understood more clearly as to their meaning in a therapeutic setting.

Ego-development is a process of increasingly higher integration and differentiation of the psychic apparatus, and it does not stop at any given point except in neurosis and psychosis, although it is true that there is normally a marked consolidation of ego-organization around the period of the Oedipus complex. Another consolidation normally takes place toward the end of adolescence, and further, often less marked and less visible, consolidations occur at various other life-stages. These later consolidations - and this is important - follow periods of relative ego-disorganization and reorganization, characterized by ego-regression. Erikson has described certain types of such periods of ego-regression with subsequent new consolidations as identity crises. An analysis can be characterized, from this standpoint, as a period or periods of induced ego-disorganization and reorganization. The promotion of the transference neurosis is the induction of such ego-disorganization and reorganization. Analysis is thus understood as an intervention designed to set ego-development in motion, be it from a point of relative arrest, or to promote what we conceive of as a healthier direction or comprehensiveness of such development. This is achieved by the promotion and use of (controlled) regression. This regression is one important aspect under which the transference neurosis can be understood. The transference neurosis, in the sense of reactivation of the childhood neurosis, is set in motion not simply by the technical skill of the analyst, but by the fact that the analyst makes himself available for the development of a new ‘object-relationship’ between the patient and the analyst.
The patient tends to make this potentially new object-relationship into an old one. On the other hand, to the extent that the patient develops a ‘positive transference’ (not in the sense of transference as resistance, but in the sense in which ‘transference’ carries the whole process of an analysis), he keeps this potentiality of a new object-relationship alive through all the various stages of resistance. The patient can dare to take the plunge into the regressive crisis of the transference neurosis, which brings him face to face again with his childhood anxieties and conflicts, if he can hold on to the potentiality of a new object-relationship, represented by the analyst.

We know from analytic as well as from life experience that new spurts of self-development may be intimately connected with such ‘regressive’ rediscoveries of oneself as may occur through the establishment of new object-relationships, and this means: new discovery of ‘objects’. Significantly, it is a new discovery of objects, and not a discovery of new objects, because the essence of such new object-relationships is the opportunity they offer for rediscovery of the early paths of the development of object-relations, leading to a new way of relating to objects and of being oneself. This new discovery of oneself and of objects, this reorganization of ego and objects, is made possible by the encounter with a ‘new object’ that has to possess certain qualifications to promote the process. Such a new object-relationship, for which the analyst holds himself available to the patient and to which the patient has to hold on throughout the analysis, is one meaning of the term ‘positive transference’.

What, then, is the neutrality of the analyst? Its significance stems from the encounter with a potentially new object, the analyst, which new object has to possess certain qualifications to be able to promote the process of ego-reorganization implicit in the transference neurosis. One of these qualifications is objectivity. This objectivity cannot mean the avoidance of being available to the patient as an object. The objectivity of the analyst has reference to the patient’s transference distortions. Increasingly, through the objective analysis of them, the analyst becomes not only potentially but actually available as a new object, by eliminating in stages the impediments, represented by these transferences, to a new object-relationship. There is a tendency to consider the analyst’s availability as an object merely as a device on his part to attract transferences onto himself. His availability is seen in his being a screen or mirror onto which the patient projects his transferences, and which reflects them back to him in the form of interpretations. In this view, at the ideal endpoint of the analysis no further transference occurs, no projections are thrown onto the mirror, and the mirror, now having nothing to reflect, can be discarded.

This is only a half-truth. The analyst in actuality does not only reflect the transference distortions. In his interpretations he implies aspects of undistorted reality that the patient begins to grasp step by step as the transferences are interpreted. This undistorted reality is mediated to the patient by the analyst, mostly by the process of chiselling away the transference distortions, or, as Freud has beautifully put it, using an expression of Leonardo da Vinci, ‘per via di levare’, as in sculpturing, not ‘per via di porre’, as in painting. In sculpturing, the figure to be created comes into being by taking away from the material; in painting, by adding something to the canvas. In analysis, we bring out the true form by taking away the neurotic distortions. However, as in sculpture, we must have, if only in rudiments, an image of that which needs to be brought into its own. The patient, by the way in which he contributes himself to the analytic work, provides rudiments of such an image, fragmented and imbedded in distortion as it may be - an image that the analyst has to focus in his mind, thus holding it in safe keeping for the patient to whom it is mostly lost. It is this tenuous reciprocal tie that represents the germ of a new object-relationship.

The objectivity of the analyst regarding the patient’s transference distortions, his neutrality in this sense, should not be confused with the ‘neutral’ attitude of the pure scientist toward his subject of study. Nonetheless, the relationship between a scientific observer and his subject of study has been taken as the model for the analytic relationship, with the following deviations: the subject, under the specific conditions of the analytic experiment, directs his activities toward the observer, and the observer communicates his findings directly to the subject with the goal of modifying the findings. These deviations from the model, however, change the whole structure of the relationship to such an extent that the model is not representative and useful but, in fact, very much misleading. As the subject directs his activities toward the analyst, the latter is no longer merely an observer to the subject; as the observer communicates his findings to the patient, the latter is no longer merely a subject of study to the ‘observer’.

While the relationship between analyst and patient does not possess the structure scientist-scientific subject, and is not characterized by neutrality in that sense on the part of the analyst, the analyst may become a scientific observer to the extent to which he is able to observe objectively the patient and himself in interaction. The interaction itself, however, cannot be adequately represented by the model of scientific neutrality. To use this model is itself unscientific, since it is based on faulty observation; the confusion about the issue of countertransference relates to this. It hardly needs to be pointed out that such a view in no way denies or reduces the role that scientific knowledge, understanding, and methodology play in the analytic process, nor does it have anything to do with advocating an emotionally charged attitude toward the patient or ‘role-taking’. It is, rather, an attempt to disentangle the justified requirement of objectivity and neutrality from a model of neutrality that has its origin in propositions that may be untenable.

One of these propositions is that therapeutic analysis is an objective scientific research method, of a special nature to be sure, but falling within the general category of science as an objective, detached study of natural phenomena, their genesis and interrelations. The ideal image of the analyst is that of a detached scientist. The research method and the investigative procedure in themselves, carried out by such a scientist, are said to be therapeutic. Yet it is not self-explanatory why a research project should have a therapeutic effect on the subject of study. The therapeutic effect appears to have something to do with the requirement, in analysis, that the subject, the patient himself, gradually become an associate, as it were, in the research work - that he himself become increasingly engaged in the ‘scientific project’, which is, of course, directed at himself. We speak of the patient’s observing ego, on which we need to be able to rely to a certain extent, which we attempt to strengthen, and with which we collaborate. Here we encounter the functioning of what is known under the general title ‘identification’. Patient and analyst identify to an increasing degree, if the analysis proceeds, in their ego-activity of scientifically guided self-scrutiny.

If the possibility and gradual development of such identification are, as is generally claimed, a requirement for a successful analysis, this introduces a factor that has nothing to do with scientific detachment and the neutrality of a mirror (‘mirror’ in this sense having been for the most part used to denote the ‘properties’ of the analyst as a ‘scientific instrument’; a psychodynamic understanding of the mirror as it functions in human life may re-establish it as an appropriate description of at least certain aspects of the analyst’s function). This identification relates to the development of a new object-relationship and is, in fact, the foundation for it.

The transference neurosis takes place in the influential presence of the analyst and, as the analysis progresses, ever more ‘in the presence’ and under the eyes of the patient’s observing ego. The scrutiny, carried out by the analyst and by the patient, is an organizing, ‘synthetic’ ego-activity. The development of an ego function is dependent on interaction. Neither the self-scrutiny, nor the freer, healthier development of the psychic apparatus whose resumption is contingent upon such scrutiny, takes place in the vacuum of scientific laboratory conditions. They take place in the presence of a favourable environment, by interaction with it. One could say that in the analytic process this environmental element, as happens in the original development, becomes increasingly internalized as what we call the observing ego of the patient.

There is another aspect to this issue. Involved in the insistence that the analytic activity is a strictly scientific one (not merely using scientific knowledge and methods) is the notion of the dignity of science. Scientific man is considered by Freud the most advanced form of human development. The scientific stage of the development of man’s conception of the universe has its counterpart in the individual’s state of maturity, according to Totem and Taboo. Scientific self-understanding, to which the patient is helped, is in and by itself therapeutic, following this view, since it implies the movement toward a stage of human evolution not previously reached. The patient is led toward the maturity of scientific man, who understands himself and external reality not in animistic or religious terms but in terms of objective science. There is little doubt that what is called the scientific exploration of the universe, including the self, may lead to greater mastery over it (within certain limits of which we are becoming painfully aware). The activity of mastering it, however, is not itself a scientific activity. If scientific objectivity is assumed to be the most mature stage of man’s understanding of the universe, representing the highest degree of the individual’s state of maturity, we may have a personal stake in viewing psychoanalytic therapy as a purely scientific activity and its effects as due to such scientific objectivity. Beyond the issue of such an investment, it may be necessary and timely to question the assumption, handed to us from the nineteenth century, that the scientific approach to the world and the self represents a higher and more mature evolutionary stage of man than the religious way of life. That question, however, cannot be pursued here.

Through the objective interpretation of the transference distortions, the analyst increasingly becomes available to the patient as a new object. This is so not primarily in the sense of an object not previously met; rather, the newness consists in the patient’s rediscovery of the early paths of the development of object-relations, leading to a new way of relating to objects and of being oneself. Through all the transference distortions the patient reveals rudiments at least of that core (of himself and of ‘objects’) which has been distorted. It is this core, rudimentary and vague as it may be, to which the analyst has reference when he interprets transferences and defences, and not some abstract idea of reality or normality, if he is to reach the patient. If the analyst keeps his central focus on this emerging core, he avoids moulding the patient in the analyst’s own image or imposing on the patient his own concept of what the patient should become. It requires an objectivity and neutrality the essence of which is love and respect for the individual and for individual development. This love and respect represent that counterpart in ‘reality’, in interaction with which the organization and reorganization of ego and psychic apparatus take place.

The parent-child relationship can serve as a model, in that the parent ideally is in an empathic relationship of understanding the child’s particular stage in development, yet is ahead in his vision of the child’s future and mediates this vision to the child in his dealing with him. This vision, informed by the parent’s own experience and knowledge of growth and future, is, ideally, a more articulate and more integrated version of the core of being which the child presents to the parent. This ‘more’ that the parent sees and knows, he mediates to the child so that the child in identification with it can grow. The child, by internalizing aspects of the parents, also internalizes the parent’s image of the child - an image mediated to the child in the thousand different ways of being handled, bodily and emotionally. Early identification as part of ego-development, built up through introjection of maternal aspects, includes introjection of the mother’s image of the child. Part of what is introjected is the image of the child as seen, felt, smelled, heard, touched by the mother. It would perhaps be more correct to add that what happens here is not wholly a process of introjection, if introjection is understood as a term for an intrapsychic activity. The bodily handling of and concern with the child, the manner in which the child is fed, touched, cleaned, the way it is looked at, talked to, called by name, recognized and re-recognized - all these and many other ways of communicating with the child, and communicating to him his identity, sameness, unity, and individuality, shape and mould him so that he can begin to identify himself, to feel and recognize himself as one and as separate from others yet with others. The child begins to experience himself as a centred unit by being centred upon.

In analysis, if it is to be a process leading to structural changes, interactions of a comparable nature have to take place. At this point I wish only to suggest, by sketching these interactions during early development, the positive nature of the neutrality required - a neutrality which includes the capacity for mature object-relations, as manifested in the parent by his or her ability to follow and, at the same time, be ahead of the child’s development.

Mature object-relations are not characterized by a sameness of relatedness but by an optimal range of relatedness and by the ability to relate to different objects according to their particular levels of maturity. In analysis, a mature object-relationship is maintained with a given patient if the analyst relates to the patient in tune with the shifting levels of development manifested by the patient at different times, but always from the viewpoint of potential growth, that is, from the viewpoint of the future. It is the fear of moulding the patient in one’s own image that has prevented analysis from coming to grips with the dimension of the future in analytic theory and practice, a strange omission considering the fact that growth and development are at the centre of all psychoanalytic concern. A fresh and deeper approach to the superego problem cannot be taken without facing this issue.

The foregoing is intended to indicate that the activities of the analyst, and specifically his interpretations and the ways in which they are integrated by the patient, need to be considered and understood in terms of the psychodynamics of the ego. Such psychodynamics cannot be worked out without proper attention to the functioning of integrative processes in the ego-reality field, beginning with such processes as introjection, identification, projection (of which we know something), and progressing to their genetic derivatives, modifications, and transformations in later life-stages (of which we understand very little, except in so far as they are used for defensive purposes). The more intact the ego of the patient, the more of this integration taking place in the analytic process occurs without being noticed, or at least without being considered and conceptualized as an essential element in the analytic process. ‘Classical’ analysis with ‘classical’ cases easily leaves such essential elements of the analytic process unrecognized, not because they are absent, but because they are as difficult to see in such cases as it was difficult to discover ‘classical’ psychodynamics in average people. Cases with obvious ego defects magnify what also occurs in the typical analysis of the neuroses, just as in neurotics we see magnified the psychodynamics of human beings in general. This is not to say, however, that there is no difference between the analysis of the classical psychoneuroses and that of cases with obvious ego defects. In the latter, especially in borderline cases and psychoses, processes such as those described in the child-parent relationship take place in the therapeutic situation on levels proportionately closer and more similar to those of the early child-parent relationship.
The further we move away from gross ego defect cases, the more do these integrative processes take place on higher levels of sublimation and by modes of communication which show much more complex stages of organization.

The elaboration of the structural point of view in psychoanalytic theory has brought with it the danger of isolating the different structures of the psychic apparatus from one another. It may look nowadays as though the ego is a creature of and functioning within external reality, whereas the area of the instinctual drives, of the id, is as such unrelated to the external world. To use Freud’s archeological simile, it is as though the functional relationship between the deeper strata of an excavation and their external environment were denied because these deeper strata are not in a functional relationship with the present-day environment; as though it were maintained that the architectural structures of deeper, earlier strata are due to purely ‘internal’ processes, in contrast to the functional interrelatedness between present architectural structures (higher, later strata) and the external environment that we see and live in. The id, however - in the archeological analogy comparable to some deeper, earlier stratum - as such integrates with its correspondingly ‘early’ external environment, just as the ego integrates with the ego’s more ‘recent’ external reality. The id deals with and is a creature of ‘adaptation’ just as much as the ego - but on a very different level of organization.

This issue has already confronted us: it is related to the conception of the psychic apparatus as a closed system, and this view has a bearing on the traditional notion of the analyst’s neutrality and of his function as a mirror. It is in this context that the concept of instinctual drives, particularly as regards their relation to objects, as formulated in psychoanalytic theory, must be examined. Freud writes: “The true beginning of scientific activity consists . . . in describing phenomena and then in proceeding to group, classify and correlate them. Even at the stage of description it is not possible to avoid applying certain abstract ideas to the material in hand, ideas derived from somewhere or other but certainly not from the new observations alone. Such ideas - which will later become the basic concepts of the science - are still more indispensable as the material is further worked over. They must at first necessarily possess some degree of indefiniteness; there can be no question of any clear delimitation of their content. So long as they remain in this condition, we come to an understanding about their meaning by making repeated references to the material of observation from which they appear to have been derived, but upon which, in fact, they have been imposed. Thus, strictly speaking, they are in the nature of conventions - although everything depends on their not being arbitrarily chosen but determined by their having significant relations to the empirical material, relations that we seem to sense before we can clearly recognize and discover them. It is only after more thorough investigation of the field of observation that we are able to formulate its basic scientific concepts with increased precision, and progressively so to modify them that they become serviceable and consistent over a wide area. Then, indeed, the time may have come to confine them in definitions. The advance of knowledge, however, does not tolerate any rigidity even in definitions. Physics furnishes an excellent illustration of the way in which even ‘basic concepts’ that have been established in the form of definitions are constantly being altered in their content.” The concept of instinct (Trieb), Freud goes on to say, is such a basic concept, “conventional but still partially obscure,” and thus open to alterations in its content.

Freud defines instinct as a stimulus: a stimulus arising not in the outer world but ‘from within the organism’. He adds that “a better term for an instinctual stimulus is a need,” and says that such “stimuli are the sign of an internal world.” Freud lays explicit stress on one implication of his whole consideration of instincts, namely that it implies the concept of purpose, in the form of what he calls a biological postulate. This postulate runs as follows: the nervous system is an apparatus that has the function of getting rid of the stimuli that reach it, or of reducing them to the lowest possible level. An instinct is a stimulus from within reaching the nervous system. Since an instinct is a stimulus arising within the organism and acting ‘always as a constant force’, it obliges ‘the nervous system to renounce its ideal intention of keeping off stimuli’ and compels it ‘to undertake involved and interconnected activities by which the external world is so changed as to afford satisfaction to the internal source of stimulation’.

Instinct being an inner stimulus reaching the nervous apparatus, the object of an instinct is ‘the thing concerning which or through which the instinct is able to achieve its aim’, this aim being satisfaction. The object of an instinct is further described as ‘what is most variable about an instinct’, ‘not originally connected with it’, and as becoming ‘assigned to it only in consequence of being peculiarly fitted to make satisfaction possible’. Thus we see that instinctual drives are conceived of as ‘intrapsychic’, originally not related to objects.

In his later writings Freud gradually moves away from this position. Instincts are no longer defined as (inner) stimuli with which the nervous apparatus deals according to the scheme of the reflex arc; instinct, in Beyond the Pleasure Principle, is seen as ‘an urge inherent in organic life to restore an earlier state of things that the living entity has been obliged to abandon under the pressure of external disturbing forces’. Here Freud describes instinct in terms equivalent to those he used earlier in describing the function of the nervous apparatus itself - the nervous apparatus, the ‘living entity’, in its interchange with ‘external disturbing forces’. Instinct is no longer an intrapsychic stimulus, but an expression of the function, the ‘urge’, of the nervous apparatus to deal with environment. The intimate and fundamental relationship of instincts, especially in so far as libido (sexual instincts, Eros) is concerned, with objects, is more clearly brought out in The Problem of Anxiety, until finally, in An Outline of Psycho-Analysis, ‘the aim of the first of these basic instincts [Eros] is to establish ever greater unities and to preserve them thus - in short, to bind together’. It is noteworthy that not only is the relatedness to objects implicit here: the aim of the instinct Eros is no longer formulated in terms of a contentless ‘satisfaction’, or satisfaction in the sense of abolishing stimuli, but the aim is clearly seen as integration; it is ‘to bind together’. While Freud feels that his earlier formula, ‘to the effect that instincts tend toward a return to an earlier [inanimate] state’, remains applicable to the destructive or death instinct, ‘we are unable to apply the formula’ to Eros (the love instinct).

The basic concept instinct has thus changed its content since Freud wrote Instincts and Their Vicissitudes. In his later writings he does not take as his starting point and model the reflex-arc scheme of a self-contained, closed system, but bases his considerations on a much broader, more modern biological framework. It should be clear from the last quotation that it is not to the ego alone that he assigns the function of synthesis, of binding together. Eros, one of the two basic instincts, is itself an integrating force. This is in accordance with his concept of primary narcissism as first formulated in On Narcissism: An Introduction, and further elaborated in his later writings, notably in Civilization and Its Discontents, where objects and reality, far from being originally not connected with the libido, are seen as becoming gradually differentiated from a primary narcissistic identity of ‘inner’ and ‘outer’ world.

In his conception of Eros, Freud moves away from an opposition between instinctual drives and ego, and toward a view according to which instinctual drives become moulded, channelled, focussed, tamed, transformed, and sublimated in and by the ego organization, an organization that is more complex and more sharply elaborated and articulated than the drive-organization called the id. In so far as the ego is an organization that continues, much more than it opposes, the inherent tendencies of the drive-organization, the concept Eros encompasses in one term one of the two basic tendencies or ‘purposes’ of the psychic apparatus as manifested on both levels of organization.

Seen in such a perspective, instinctual drives are as primarily related to ‘objects’, to the ‘external world’, as the ego is. The organization of this outer world, of these ‘objects’, corresponds to the level of drive-organization rather than of ego-organization. In other words, instinctual drives organize environment and are organized by it no less than is true of the ego and its reality. It is this mutuality of organization, in the sense of organizing each other, which constitutes the inextricable interrelatedness of ‘inner’ and ‘outer’ world. It would be justified to speak of primary and secondary processes not only in regard to the psychic apparatus but also in regard to the outer world in so far as its psychological structure is concerned. The qualitative difference between the two levels of organization might be indicated terminologically by speaking of environment as correlative to drives, and of reality as correlative to ego. Instinctual drives can be seen as originally not connected with objects only in the sense that ‘originally’ the world is not organized by the primitive psychic apparatus in such a way that objects are differentiated. Out of an ‘undifferentiated stage’ emerge what have been termed part-objects or object-nuclei. A more appropriate term for such pre-stages of an object-world might be ‘shapes’, in the sense of configurations of an indeterminate degree and fluidity of organization, and without the connotation of object-fragments.

The preceding excursion into some problems of instinct-theory is intended to show that the issue of object-relations in psychoanalytic theory has suffered from a formulation of the instinct-concept according to which instincts, as inner stimuli, are contrasted with outer stimuli, both, although in different ways, affecting the psychic apparatus. Inner and outer stimuli, terms for inner and outer world on a certain level of abstraction, are thus conceived as originally unrelated or even opposed to each other, running parallel, as it were, in their relation to the nervous apparatus. While Freud, in his general trend of thought and in many formulations, moved away from this framework, psychoanalytic theory has remained under its sway except in the realm of ego-psychology. The development of ego-psychology unfortunately had to take place in relative isolation from instinct-theory. It is true that our understanding of instinctual drives has also progressed. Yet the extremely fruitful concept of organization (the two aspects of which are integration and differentiation) has been insufficiently, if at all, applied to the understanding of instinctual drives, and instinct-theory has remained under the aegis of the antiquated stimulus-reflex-arc conceptual model - a mechanistic frame of reference far removed from modern psychological and biological thought. The scheme of the reflex arc, as Freud says in Instincts and Their Vicissitudes, has been given to us by physiology. But this was the mechanistic physiology of the nineteenth century. Ego-psychology began its development in a quite different climate, as is clear from Freud’s biological reflections in Beyond the Pleasure Principle. Thus it has come about that the ego is seen as an organ of adaptation to, and of integration and differentiation with and of, the outer world, whereas the instinctual drives were left behind in the realm of stimulus-reflex physiology.
This, and specifically the conception of instinct as an ‘inner’ stimulus impinging on the nervous apparatus, has affected the formulations concerning the role of ‘objects’ in libidinal development and, by extension, has vitiated the understanding of the object-relationship between patient and analyst in psychoanalytic treatment.

In discussing aspects of the analytic situation and the therapeutic process in analysis, it will be useful to dwell further on the dynamics of interaction in the early stages of development.

The mother recognizes and fulfils the need of the infant. Both recognition and fulfilment of a need are at first beyond the ability of the infant, not merely the fulfilment. The understanding recognition of the infant’s need on the part of the mother represents a gathering together of as yet undifferentiated urges of the infant, urges which in the acts of recognition and fulfilment by the mother undergo a first organization into some directed drive. In a remarkable passage in the ‘Project for a Scientific Psychology’, in a section called ‘The Experience of Satisfaction’, Freud discusses this constellation in its consequences for the further organization of the psychic apparatus and in its significance as the origin of communication. Gradually, both recognition and satisfaction of the need come within the range of the growing infant’s own abilities. The processes by which this occurs are generally subsumed under the headings identification and introjection. Access to them has to be made available by the environment, here the mother, who performs this function in her acts of recognition and fulfilment of the need. These acts are not merely necessary for the physical survival of the infant but necessary as well for its psychological development, in so far as they organize, in successive steps, the infant’s uncoordinated urges. The whole complex dynamic constellation is one of mutual responsiveness, in which nothing is introjected by the infant that is not brought to it by the mother, although brought by her often unconsciously. A prerequisite for introjection and identification is the gathering mediation of structure and direction by the mother in her caring activities. As the mediating environment conveys structure, the infant’s experience begins to gain structure and direction: the environment begins to ‘take shape’ in the experience of the infant.
It is at this point that identification, introjection, and projection emerge as more defined processes of organization of the psychic apparatus and of environment.
