25 Scientific Facts Everyone Should Know

geirj
Posts: 719
Joined: 2007-06-19

I'm sure anyone who is on Facebook is getting bombarded by requests to post "25 Things About Me". Because I'm not that narcissistic (and because I only have about three interesting things to say about myself), I'd rather post something along the lines of "25 Scientific Facts Everyone Should Know".

 

I am quite open about my atheism on my Facebook page, despite having numerous Christian friends. Can anyone point me to such a list, or would someone be willing to compile one? It would be nice if the list had some focus on poorly understood science (like the Big Bang) that people dismiss in favor of a god.

Nobody I know was brainwashed into being an atheist.

Why Believe?


Archeopteryx
Superfan
Posts: 1037
Joined: 2007-09-09

It would be one more note in the "25 things" meme.

 

So far I have filled out 25 things about me, 25 things about my friends, and 25 words that alliterate with my name.

 

If you haven't seen the others yet, you might.

A place common to all will be maintained by none. A religion common to all is perhaps not much different.


Cpt_pineapple
atheist
Posts: 5492
Joined: 2007-04-12

1] For every action there is an equal and opposite criticism

 

2] Only matter and empty space exist. Everything else is opinion

 

3] You can't win, you can't break even, you can't even quit the game

 

4] 98% of stats only make sense 54% of the time

 

 

 

That's a start at least.

deludedgod
Rational VIP! / Scientist / Deluded God
Posts: 3221
Joined: 2007-01-28
This is the list I have compiled

I dislike the term facts; after all, a fact in science is a piece of data. What you mean is concepts. I prefer to say that there are 25 crucial scientific concepts that everyone should know. I've organized the following into physics, chemistry and biology. I trust it is more than sufficient for your purposes. Below is the list I have compiled. This took me fucking ages, as you can imagine.

1. Newton’s Laws of Motion

Everyone should know Newton’s Laws. What makes them such a vast achievement is the fact that they present the first universal picture of motion. They can predict the motions of planets just as well as they can the motion of tennis balls. They can describe rotation, translation, orbit and oscillation. Such vast breadth was unheard of before Newton.

Newton’s First Law states that any object which is moving in a straight line at a constant velocity will continue along that path unless an external force acts upon it. The crucial proviso “in a straight line” was added by Descartes. Note that “constant velocity” includes “not moving” (a constant velocity of zero).

Newton’s Second Law, then, follows quite neatly. It states that a force on a body will cause it to accelerate. Since velocity is a vector (and so, therefore, is acceleration, its rate of change), this includes travelling at a constant speed while changing direction. An obvious example is motion in a circle. If an object travels around a circle at a constant speed, it is still changing velocity because the direction of its velocity is constantly changing along the tangent. This change is sustained by a centripetal force directed toward the center. It is for this reason that satellites are able to orbit the Earth. The quantitative expression of Newton’s Second Law is that force is proportional to acceleration, or equivalently that force is equal to the rate of change of momentum. The constant of proportionality is mass, also called inertia. Inertia is a more general concept: a measure of a body’s reluctance to deviate from a uniform straight line path. There is a rotational equivalent, called the moment of inertia, which is a measure of a body’s reluctance to change its angular velocity.

Newton’s Third Law: This is a frequently misunderstood and simplified law. The correct way to formulate it is as follows:

If object a exerts a force F_ab on object b, then object b exerts a force F_ba on object a such that F_ba = -F_ab. The crucial negative sign indicates that the force that body a exerts on body b is opposite to that which body b exerts on body a, and the equality indicates that the forces are of equal magnitude. The other crucial aspect is that the two forces in a Newton’s Third Law pair act on different bodies. It is also important to realize that just because two forces are equal and opposite does not make them a Newton’s Third Law pair. For example, if you stand on Earth’s surface, the Earth exerts a gravitational force on you toward its center of mass. You in turn exert an equal and opposite gravitational force on the Earth toward your center of mass. However, the reason you do not accelerate toward Earth’s center is that you exert a downward contact force on the ground whose magnitude is your weight (the definition of true weight is the force being exerted on you due to gravity), and as a result the ground exerts an equal and opposing contact force on you. Since this contact force points away from Earth’s center and is equal and opposite to the gravitational force, then by Newton’s First Law you do not accelerate. Clearly, though, the contact force and the gravitational force are not a Newton’s Third Law pair, even though they are equal and opposite. Firstly, they are exerted on the same body, whereas third-law pairs act on different bodies; secondly, they are different types of forces (one is electromagnetic and the other is gravitational), and Newton’s Third Law specifies that the nature of the reaction force is the same as that of the action force.

The reason for this law is clear. Imagine a system of objects which is isolated from the rest of the universe, in other words one in which the only forces present are those resulting from the objects inside it. If a group of objects a exerted a force on other objects b in the system, and those objects b did not in turn exert a reaction force on the objects a, then the net force associated with the system would be nonzero, and by Newton’s Second Law the system would continue to accelerate forever. It is not possible for objects in a closed system to spontaneously generate a net force on the whole system. This is why, for example, I cannot lift myself off the ground by pulling on my shoelaces.
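A minimal numerical sketch in Python makes the Second Law concrete (the mass, time step and initial height are arbitrary illustrative numbers):

# Integrating Newton's second law, F = ma, for a ball in free fall.
m = 0.5             # mass in kg (arbitrary)
g = -9.81           # gravitational acceleration in m/s^2
dt = 0.01           # time step in s
x, v, t = 100.0, 0.0, 0.0   # initial height (m), velocity (m/s), time (s)

while x > 0:
    F = m * g       # the only force acting is gravity
    a = F / m       # Newton's second law: a = F/m
    v += a * dt     # a is the rate of change of velocity
    x += v * dt     # v is the rate of change of position
    t += dt

print(f"hits the ground after about {t:.2f} s, moving at {v:.1f} m/s")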

2. Inverse Square Laws

A vast number of physical phenomena can be modeled using this concept. We begin with the concept of a point source: for example, a source of light in an isotropic surrounding region of space, or a point mass causing gravitational force to be felt by other masses. In these and many other cases, the intensity of the physical quantity emanating from the point source is inversely proportional to the square of the distance from the source. In other words, the closer you are to the point mass, the more gravitational force you feel; the closer you are to a light source, the more intense the light you receive. If this is the case, then the source quantity is said to exhibit spherical symmetry: the locus of all points where the intensity of the source quantity is the same is a sphere whose center is the point source (in other words, the set of all points which are the same particular distance from the source). As a result, our formulation of the intensity of the source quantity takes the following form:

I = k/(4πr^2)

Where I is the intensity of the source quantity, r is the distance from the source to the point in space where the intensity is being considered, and k is a constant characterizing the strength of the source.
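A quick sketch of the falloff (the source strength k is arbitrary):

# Inverse square falloff: I = k / (4 pi r^2).
import math

k = 100.0   # arbitrary source strength
for r in [1.0, 2.0, 4.0, 8.0]:
    I = k / (4 * math.pi * r**2)
    print(f"r = {r}: I = {I:.3f}")
# Each doubling of the distance cuts the intensity to a quarter.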


3. Faraday’s Law

Prior to Faraday, the only known way to generate electricity was with a chemical cell (the same principle is employed in modern batteries). It was Faraday and Oersted who formulated the relation between electricity and magnetism and thus provided the basis for electrodynamics and modern electricity generation. Before them, magnetism had been thought to be a peculiar property of lodestones. Legend has it that Oersted was preparing a lecture on magnetism when he noticed that a current-carrying wire was causing the needle of a compass to move. What Oersted realized was that moving charges are the true cause of all magnetic phenomena. Faraday’s investigation led to a crucial discovery: changing magnetic fields create electric fields. All vector fields (such as electric fields, magnetic fields and gravitational fields) can be modeled using field lines, where the lines point in the direction of the force that would be felt by a conventional test object. For example, the electric field lines that emanate from a point charge are the force lines that a positive unit point test charge would feel. As a consequence, field lines can also be used to model the intensity of the source: the denser the lines, the stronger the field.

What Faraday realized was that the change in the number of magnetic field lines through a region of space per unit time is equal to the work done in moving a unit positive test charge around the boundary line of that region. In other words, by changing a magnetic field through a region of space, we can get charges to flow: the changing magnetic field establishes an electromotive force, a force which causes charges to move, and hence a current. The criterion Faraday established is that the wire (or whatever medium the charges are in) must cut the lines of magnetic flux (“flux lines” is merely another name for the field lines). Since the strength of a magnetic field, or any field, can be defined as the density of flux lines through a region, we can restate Faraday’s Law: the electromotive force generated around the boundary of a surface is equal to the rate of change of flux linkage through the surface.
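A minimal sketch, assuming an arbitrary sinusoidal flux waveform, estimating the induced emf as the (negative) rate of change of flux:

# Faraday's law: emf = -d(flux)/dt, estimated by a finite difference.
import math

def flux(t):
    # arbitrary example waveform: 0.05 Wb amplitude, 50 Hz
    return 0.05 * math.sin(2 * math.pi * 50 * t)

dt = 1e-6
t = 0.002
emf = -(flux(t + dt) - flux(t)) / dt   # numerical estimate of -dPhi/dt
print(f"induced emf at t = {t} s is about {emf:.2f} V")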


4. Lenz’s Law

Consider the following discussion between a teacher and students:

*Teacher draws a solenoidal coil of wire on the board*

Teacher: What will happen as a bar magnet is moved closer to the wire?

Students: A current is induced, and hence the coil becomes a temporary magnet

Teacher: Alright. Now, let’s say that the side of the solenoid toward which the magnet is closest is its South pole, and the North pole of the bar magnet is closest to the solenoid. What happens as it is moved?

Students: The magnet feels a force which directs it toward the solenoid. Hence, it accelerates.

Teacher: And if it accelerates toward the coil, then by Faraday’s Law, what happens?

Students: The rate of flux cutting increases, hence the induced electromotive force increases.

Teacher: And if the emf increases, what happens to the magnet?

Students: The force exerted on the magnet increases.

Teacher: Hence, by Faraday’s Law…

Students: The rate of flux cutting increases and therefore emf increases and therefore the magnet accelerates, and then the rate of flux cutting increases ad infinitum.

Teacher: Precisely. Why is this absurd?

Students: Because kinetic energy is being generated out of nothing.

Teacher: Precisely.

Everything in the described scenario above is a direct consequence of Faraday’s Law, except for one thing:

Teacher: “Alright. Now, let’s say that the side of the solenoid toward which the magnet is closest is its South pole...”

Here is where the impossibility arises. If the moving magnet induced a current whose field attracted the magnet toward the solenoid, the fields would combine and continue to multiply to infinity. This cannot be the case. Here is where Mr. Lenz is kind enough to save the day.

Lenz’s law states that the induced emf in a wire due to a magnetic field must oppose the motion which creates it.

Thus, we cannot have the North pole of a magnet induce a South pole on the close side of a solenoid. The current is being produced by the motion of the magnet and the resulting emf must oppose the motion creating it. Hence, it must be a North pole as well. Thus, whereas the above impossibility is an example of a self-perpetuating positive feedback loop which does not work, Lenz’s Law is a direct application of a negative feedback principle which works perfectly well.

Consider a motor, for example. As the coil rotates through the magnetic field due to the force on the current (which is always perpendicular to the field), the electrons experience motion which is perpendicular to the current. This produces an emf which must oppose the current. If it did not, then the current would increase, which would increase the force, which would increase the angular velocity, which would increase the emf, and so on. This is a violation of conservation of energy.

So, let’s apply Lenz’s law. The emf opposes the current. As the current increases, the opposing force increases. Hence, the current decreases, hence the angular velocity decreases, hence the opposing force decreases, hence the current increases, etc. etc. So, we have a vastly more sensible result when we apply Lenz’s Law. The emf comes to equilibrium with the current.
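A toy simulation (with made-up motor constants; an illustration of the feedback, not a realistic motor model) shows the current and back-emf settling to equilibrium:

# Lenz's law as negative feedback in a crude DC motor model.
V = 12.0     # supply voltage (V)
R = 2.0      # winding resistance (ohms)
kb = 0.1     # back-emf constant (V per rad/s)
kt = 0.1     # torque constant (N*m per A)
J = 0.01     # rotor inertia (kg*m^2)
w = 0.0      # angular velocity (rad/s)
dt = 0.001   # time step (s)

for step in range(20000):
    back_emf = kb * w          # the induced emf opposes the supply
    i = (V - back_emf) / R     # so the current falls as the motor speeds up
    w += (kt * i / J) * dt     # unloaded rotor accelerates until i -> 0

print(f"speed {w:.1f} rad/s, back-emf {kb * w:.2f} V, current {(V - kb * w) / R:.3f} A")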

5. Hubble’s Law

In terms of demonstrating the nature of the universe as a whole, Hubble’s Law is a spectacular achievement. For most of history, the universe was held to be infinite and static. Our understanding of the universe was confounded by Olbers’ Paradox. A fundamental principle of cosmology is the isotropy of the universe: it is uniform in all directions. If the universe were infinite in terms of space and time, a curious paradox arises. It follows (from it being spatially infinite) that every direction we could look along ends on a star. It follows (from it being temporally infinite) that the starlight has had an infinite amount of time to get here. It would seem, therefore, that no matter where we turn our head (the universe is, after all, isotropic) we should see a dazzling white sky, all the time, regardless of whether or not we face the sun. But we do not. The night sky is dark.


Hubble’s crucial observations led to the rethinking of the universe in terms of time. There are two key principles that come from his investigations:

1. Galaxies are receding from each other at a velocity proportional to their distance from each other. This is Hubble’s Law.

2. Light from distant parts of the universe is redshifted because the wavelength is increased by the expansion of the universe itself.

The crucial corollary here is that if the universe is expanding, and the galaxies are receding from each other, then by extrapolating these conditions backwards it is clear that there was a point in time when the distance between them was negligible. This is the essence of the central principle of modern cosmology: Big Bang theory. It is also the solution to Olbers’ Paradox. The universe is not infinite in time, and light from distant stars has not had an infinite amount of time to arrive.
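A quick sketch, taking H0 as roughly 70 km/s per megaparsec (a commonly quoted modern value):

# Hubble's law: v = H0 * d, plus the naive age estimate 1/H0.
H0 = 70.0                  # km/s per Mpc
km_per_Mpc = 3.086e19      # kilometres in one megaparsec

for d in [10, 100, 1000]:  # distances in Mpc
    print(f"a galaxy at {d} Mpc recedes at about {H0 * d:.0f} km/s")

# Running the expansion backwards, all separations vanish roughly 1/H0 ago.
age_s = km_per_Mpc / H0    # 1/H0 in seconds
print(f"1/H0 is about {age_s / 3.156e16:.1f} billion years")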

6. The Conservation of Momentum

This is one of the most important principles in mechanics. The momentum of a body is the product of its mass and velocity: p = mv. The net momentum of a closed system is always a conserved quantity. Since momentum is a vector, this means, more precisely, that each vector component is conserved. In classical mechanics this is a direct consequence of Newton’s Second Law, which states that dp/dt = F_net. There are entire disciplines based on this principle, such as rocket science. A crucial related result is that the motion of a system of particles can be modeled as the motion of the center of mass of the system. The center of mass is the mass-weighted average of the positions of all the elements in the system. In other words, if we have a collection of N elements such that the ith element has mass m_i and position vector r_i, then the center of mass of the system is:

R = Σ(m_i r_i) / Σ(m_i), where the sums run over i = 1 to N

This result is very important. It allows us, in many situations, to treat extended objects as point particles where all the mass of the object is concentrated at the center of mass.
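A sketch of a one-dimensional elastic collision (arbitrary masses and velocities), checking that momentum is conserved and that the center of mass moves uniformly throughout:

# Momentum conservation in a 1D perfectly elastic collision.
m1, v1 = 2.0, 3.0    # kg, m/s (arbitrary)
m2, v2 = 1.0, -1.0

p_before = m1 * v1 + m2 * v2

# Standard elastic-collision result (conserves momentum and kinetic energy):
v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

p_after = m1 * v1_after + m2 * v2_after
print(f"momentum before: {p_before}, after: {p_after}")   # identical

# The center of mass velocity (the mass-weighted average) never changes:
print(f"center of mass moves at {p_before / (m1 + m2):.3f} m/s throughout")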

7. The Relativity Principle

The concept of a frame of reference is one of the most fundamental in all of physics. Articulating precisely what it means to say that something is moving is a fundamental problem in physics. After all, are you, reading this article at your computer, sitting still, or moving around the sun?

We introduce the concept of a frame of reference. Consider a coordinate system O which is that of a stationary observer with respect to the Earth (all our conventions here are defined with respect to the Earth’s surface). For simplicity we will discuss motion over regions of space that are small relative to the size of the Earth, because simple vector addition of displacements fails over curved surfaces.

Let us suppose that the motion of an object a from the perspective of O is such that its position vector is r(t), where t is time and the position vector points from the origin to the object. Now suppose there is another reference frame O' (O-prime) which coincides with O when t = 0.


Let us suppose that O' is moving with velocity v0 in the frame of reference of O. Thus at time t, in the coordinate system of O, the origin of O' will be located at position vector v0t. The position vector of the object a in the coordinate system of O', denoted r'(t), is then simply:

r'(t) = r(t) - v0t

This is simply a consequence of vector addition (this will not hold over curved surfaces, however).

Now suppose we wish to find the velocity of object a recorded in frames O and O'. We simply differentiate to find:

dr’/dt = dr/dt-v0

What of the acceleration? Differentiate again to find:

d^2r'/dt^2 = d^2r/dt^2

This is the crucial result, for it tells us that the laws of physics are the same in all frames moving uniformly relative to one another. This is famously known as the relativity principle. It is clear that if observer O concludes that body a is moving with constant velocity, and, therefore, subject to zero net force, then observer O' will agree with this conclusion. Furthermore, if observer O concludes that body a is accelerating, and, therefore, subject to a force, then observer O' will remain in agreement. It follows that Newton's laws of motion are equally valid in the frames of reference of the moving and the stationary observer. Such frames are termed inertial frames of reference. A more precise way to state the relativity principle is that no physical experiment can distinguish between inertial frames.
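A numerical check of this invariance, for an arbitrarily chosen trajectory r(t) = t^3 (so the exact acceleration is 6t):

# The Galilean transformation r'(t) = r(t) - v0*t leaves acceleration unchanged.
v0 = 5.0     # relative velocity of frame O' (arbitrary)
dt = 1e-4

def r(t):                 # position in frame O
    return t**3

def r_dash(t):            # position in frame O'
    return r(t) - v0 * t

def second_derivative(f, t):
    # central finite-difference estimate of f''(t)
    return (f(t + dt) - 2 * f(t) + f(t - dt)) / dt**2

t = 2.0
print(second_derivative(r, t))        # ~12.0 in frame O
print(second_derivative(r_dash, t))   # ~12.0 in frame O' as well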

Consider, for example, a sound wave in stationary air. At STP, this will propagate at 343 m/s. Any experimenter in any frame of reference should obtain this value. An observer who is moving will measure a different value for the speed of the wave in air, but he will also measure a different value for the speed of the medium, and therefore his result for the speed of sound with respect to stationary air will be the same. To put it another way, any wave whose speed is measured to be the same in all inertial frames must be capable of propagating without a medium; otherwise we would be able to distinguish between two inertial frames, which would violate the principle of relativity.

8. The invariance of light speed

Maxwell’s equations completely specify electric and magnetic fields as being created by source charges and by changing magnetic and electric fields respectively. The solutions to Maxwell’s equations for the electric and magnetic fields are wave solutions, and several important pieces of information can be extracted from them. One is that the speed of light in a vacuum is constant: it can be expressed solely in terms of the constants of proportionality associated with electric and magnetic fields in the vacuum, c = 1/sqrt(μ0ε0). These constants link the strength of electric and magnetic fields to the variables associated with them, such as distance and charge magnitude. Recalling our relativity principle, it is not possible for these values to depend on the choice of reference frame in which we measure them. It follows that the speed of light is the same in all inertial frames of reference, because otherwise we could distinguish between inertial frames by measuring the values of the magnetic and electric constants, which would violate the relativity principle. This crucial realization led to the formulation of special relativity.
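The number itself drops out of the two vacuum constants:

# The speed of light from the vacuum constants: c = 1/sqrt(mu0 * eps0).
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m
eps0 = 8.854187817e-12      # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.0f} m/s")   # about 3.00e8 m/s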

9. The Conservation of Energy

This is arguably the most important principle in physics. It has applications in every discipline of physics and is employed to solve a vast range of problems. We have already met the concept of a conserved quantity when discussing momentum. This principle is heavily used in the analysis of motion, and is enshrined in the first law of thermodynamics, which states that the total change in the internal energy of a system is equal to the energy transferred to the system minus the work the system does on the surroundings.

 In mechanics, the most important formulation is given like so:

The increase/decrease of the kinetic energy of a body is equal to the decrease/increase of potential energy of the body

The mechanical energy associated with a system is that which is subject to change by mechanical forces. It includes the mechanical potential and kinetic energy. The potential energy associated with a system is the energy which is stored in the system. For example, a charged particle in the presence of an electrostatic field has potential energy associated with it. In the absence of any other forces, this energy will be converted into kinetic energy (the charge will accelerate in the direction of the field lines). Conversely, if we do mechanical work against an electric field by moving a charged particle, then the kinetic energy we initially give the particle is used to do work against the field, and so is converted into electric potential energy.

It is important to specify exactly what we mean by potential. In a gravitational field, for example, we must do work on a body to move it away from a field source, because gravity is always attractive. As a result the potential in a gravitational field is always negative, since the potential at a point r is the work we do on the body in moving it from a point where the gravitational field is zero to r. Clearly this is only meaningful if the energy expended is independent of the path taken; otherwise there would be infinitely many different potentials at a particular point. Potential is not a property of the body in question, since it depends on where the body is. In the presence of, say, friction, the concept of potential therefore becomes meaningless.
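A quick sketch for free fall (arbitrary numbers): the sum of kinetic and potential energy is the same at every height.

# Conservation of mechanical energy in free fall.
m, g, h0 = 1.0, 9.81, 50.0   # mass (kg), gravity (m/s^2), drop height (m)

for h in [50.0, 40.0, 25.0, 10.0, 0.0]:
    pe = m * g * h           # potential energy at height h
    ke = m * g * (h0 - h)    # kinetic energy gained falling from h0 to h
    print(f"h = {h:5.1f} m: PE + KE = {pe + ke:.1f} J")
# The total is m*g*h0 = 490.5 J at every height.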

10. The Gas Laws

The gas laws are historically important because the gas model based on them was the first scientific model to be formulated using the modern scientific method. Ideal gases are very simple to understand. They are composed of single particles with no forces between them, which only undergo elastic collisions (in other words, no kinetic energy is lost during the collisions). A system of ideal gas particles is governed by three parameters, which give rise to an equation of state that completely specifies the macroscopic properties of the ideal gas.


The experimental basis of the ideal gas laws was formulated by Boyle, Dalton, Gay-Lussac and Charles. These simple experiments employ the fundamental principle of the scientific method, whereby we attempt to find the relationship between an input and an output variable by designing an experiment which controls all the other variables. Boyle’s famous experiment involved altering the pressure of a gas and measuring the resulting volume of the system. This resulted in Boyle’s Law: at a constant temperature, the pressure of a gas is inversely proportional to its volume. The constancy of temperature is a crucial proviso, for it is the other state variable of the gas, and if it is not controlled there is no way to determine the relationship. With three variables, there are three experiments we could perform, in each of which we hold one of the three variables constant, alter a second, and measure the third. Thus we have Boyle’s Law (pressure and volume are inversely proportional with temperature held constant), Charles’ Law (volume and temperature are directly proportional with pressure constant) and Gay-Lussac’s Law (pressure and temperature are directly proportional with volume constant). Thus we have:

PV=k1

V=k2T

P=k3T

Where k1, k2 and k3 are constants. These equations can be combined to give:

PV=kT where k is a constant.

This is the equation of state of an ideal gas. When we combine this with Avogadro’s Law, which states that equal volumes of ideal gases at the same temperature and pressure contain the same number of particles, we get:

PV=nRT where n is the number of moles of gas (see below) and R is the molar gas constant, approximately 8.314 joules per mole per kelvin.
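A one-line application of the equation of state: the volume of one mole of an ideal gas at standard temperature and pressure.

# Ideal gas equation of state: PV = nRT.
n = 1.0          # moles
R = 8.314        # J / (mol K)
T = 273.15       # K (0 degrees Celsius)
P = 101325.0     # Pa (1 atm)

V = n * R * T / P
print(f"V = {V * 1000:.1f} litres")   # about 22.4 L, the familiar molar volume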

11. The Second Law of Thermodynamics

Although originally formulated during the development of heat engines, the laws of thermodynamics are the most universal in all of physics. Thermodynamics alone has the capacity to explain more phenomena than every other discipline combined, and the second law of thermodynamics is the one with the most explanatory power.

Let us consider 1000 coins in a box, all facing heads up. The box is a closed system, which, by definition, does not exchange energy with the rest of the universe. States of high order have low probability: if we shake the box, it is vastly more probable that we will end up with a chaotic arrangement of coins than with the arrangement we had previously. Thus the law can be restated: closed systems tend to progress from states of low probability to states of high probability. This movement towards high probability is progressive, and in order for it to be reversed, one would have to break the closed nature of the system. In this case, it would require someone to open the box and rearrange the coins.

The Second Law of Thermodynamics therefore, in general terms, dictates that energy, regardless of how hard we try, always “spreads out”, by which we mean that it becomes converted into less useful forms that are probabilistically very, very difficult to retrieve back into ordered states. This governs our lives. Eggs do not unbreak; glasses do not unshatter. Entropy is highly directional, for in any given system there are only a small number of ordered states and a vast number of disordered states, so the probability of a disordered state is overwhelmingly greater (the entropy is proportional to the logarithm of the number of ways a state can be realized). Heat, being the random hubbub of molecular motion, is the most singularly chaotic and disordered form of energy, and is ultimately, therefore, almost impossible to retrieve into ordered states.

Now let us try to formulate this in a practical manner. Consider a heat engine. If a gas cylinder does work on a piston by pushing it out, then in order for the piston to continue to do useful work, work must be done on the system to reset it. For this to be possible, the piston cannot transfer all of its work to the engine’s output. The easiest way to picture this is with a water wheel that works via gravity. Water flows from above through the paddles, turning them, and then falls into a reservoir below. If the system extracted all of the water’s gravitational potential energy, there would be no way for it to continue to do work unless more water flowed in. This cannot continue forever; in the same manner, it is impossible to continuously inject energy into a piston to get it to do work on the wheel of a train, for the gas cannot expand forever. Thus we must construct a cyclical system. If we want the water to continue to do work on the wheel, we must reset the system by doing work on the water to raise it back up. This is the essence of the Second Law of Thermodynamics in its classical form, in the context of engine work: no cyclical process can extract 100% of the energy input to do useful work on a system. Eventually the system will wind down (the gradient between the input and the reservoir slowly erodes). A simple corollary is that the entropy of a system can only be decreased in the context of an increase in entropy outside the system.

The reason this can be used to explain so many phenomena is that the probability of a state is a concept that can be applied to any physical system. Regardless of whether we are talking about gas molecules, coins in a box, or book pages, the increase in entropy occurs, as mentioned, because high entropy states are states with a greater probability. To understand this it is necessary to distinguish between the microstates and the macrostates of a closed system. The former are measures of the particles themselves: velocity, position, and so on. Macrostates are measures of the variables of the system: temperature, pressure, volume, and so on. Pressure serves as an easy variable for explanation. Imagine a box with a barrier in the center, where both sides are of equal volume and are filled with gas molecules, one side having 6 times the number of molecules of the other. Assuming constant temperature, it follows from the combined gas law that that side will have 6 times the pressure of the other. This is effectively a restriction on the number of possible microstates the system could take, because there are fewer ways to arrange the molecules such that one side has six times the pressure of the other. This is called a potential, or a thermodynamic disequilibrium. Once the barrier is removed, the molecules will tend to equilibrate, since equal pressure throughout the system is the macrostate represented by the greatest number of microstates, and hence the greatest probability. Hence we say the system moves towards equilibrium.

 


Counting the microstates corresponding to a given macrostate is a simple exercise in combinatorial mathematics. As we saw with the coins, high entropy macrostates have a greater number of microstates corresponding to them. This, basic probability, is the reason that things tend towards disorder.
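A sketch of that counting for the coins, using the binomial coefficient (N coins, k heads):

# Microstates per macrostate for N coins: C(N, k) ways to have k heads.
from math import comb

N = 1000
for k in [0, 100, 250, 500]:
    print(f"{k} heads: {comb(N, k):.3e} microstates")
# All-heads is a single microstate; 500 heads has on the order of 1e299.
# That ratio is why a shaken box essentially never returns to all-heads.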

12. Malus' Law

From Maxwell’s equations we can show that a light wave is in fact an electric field oscillating perpendicular to a magnetic field, such that both fields are perpendicular to the direction of propagation. Thus, for a given direction of propagation, there are an infinite number of planar axis sets that could represent the magnetic and electric fields. Usually we specify a light wave using the direction and magnitude of the electric field component, since it is much stronger. If light waves are incident on a plane, there are an infinite number of planes in which the electric field could oscillate, because there are an infinite number of directions perpendicular to the incident rays. Certain surfaces are said to plane-polarize light because they only admit a particular component of the electric field. As a consequence, the light passing through such a surface oscillates in one specific plane. This surface is said to be a polarizer. For many purposes, we put one polarizer behind another to polarize light a second time; the second polarizer is known as an analyzer. If the light is already polarized, then the analyzer will only let a component of the polarized light through. If we project the electric field vector from the polarizer onto the plane of the analyzer, we find the component of the polarized light that is allowed through the analyzer. Because the intensity of light is proportional to the square of the electric field strength (this can be derived from Maxwell’s equations), we have:

I = I0 cos^2(θ), where θ is the angle between the planes of polarization of the analyzer and the polarizer, and I0 is the intensity of the incident polarized light.
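A quick sketch of the cos^2 falloff:

# Malus' law: transmitted intensity I = I0 * cos^2(theta).
import math

I0 = 1.0   # incident polarized intensity (normalized)
for deg in [0, 30, 45, 60, 90]:
    I = I0 * math.cos(math.radians(deg)) ** 2
    print(f"{deg:2d} degrees: transmitted fraction {I:.3f}")
# Crossed polarizers (90 degrees) transmit nothing.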


The reason I picked this is that every piece of electronic equipment using a liquid crystal display (such as, most likely, the computer you are sitting at) uses this principle. A liquid crystal has the ability to rotate the plane of polarization. The crystal is placed between a polarizer and an analyzer which are cross-polarized, in other words perpendicular. Normally no light would be admitted through the analyzer, since the polarized light has no component in the analyzer's plane, but with a liquid crystal to twist the plane of polarization, the analyzer will allow light through. By applying an electric field across the liquid crystal, we can force the alignment of the crystals so as to allow light through in certain areas of the display. This is the basis of all liquid crystal displays.

13. The Ideomotor Effect

Throughout history a vast number of things have been ascribed to supernatural forces and mysterious “energies”. In one swoop the bulk of these were crushed by two men: Michael Faraday and Michel Chevreul. Both among the greatest scientists of their age, their work on the ideomotor effect formalized a set of control mechanisms, which now run throughout modern science, for preventing observer bias and confirmation bias. Chevreul’s discovery of the ideomotor effect was the formalization of the concept of the double blind trial. What Chevreul and Faraday realized was that many of the effects of the spiritual fads that swept Europe at the time, such as the Ouija board and dowsing, were not the result of supernatural forces but of unconscious movements on the part of the people taking part in the experiments. That is the ideomotor effect. The modern double blind trial begins with Chevreul’s work. If, for example, we want to test a drug, we need to control for effects that result not from the drug, but unconsciously from the people taking part in the experiment. In a randomized drug trial, for example, a certain fraction of people are actually taking a placebo. Obviously the subjects do not know what they are receiving, as that would defeat the purpose of the control. This is single-blinding. What Chevreul crucially realized was that it is just as important for the experimenter to be blinded. In other words, the experimenter cannot know which doses are placebos and which are not; otherwise they might give unconscious cues to the subjects. When the experimenter also does not know what the controls are, the experiment is said to be double blinded.

14. The quantization of the subshells of electrons


Much work was done on atomic structure in the 20th century. The first modern concept of the atom (after Democritus) was put forth by Dalton. Dalton was the first to realize that there are different types of atoms and that these are responsible for the different properties of substances. This was completely formalized by Mendeleev (see #15). The structure of the atom was uncovered in the 20th century: the cathode ray experiments demonstrated that atoms contain charged particles, the alpha-scattering experiment demonstrated the existence of the nucleus, and atomic spectra demonstrated the existence of discrete energy levels of electrons. The Schrodinger model employed the analytical tools of quantum mechanics to deduce the shapes of the energy levels. The quantum mechanical model of the atom gives us a probabilistic interpretation of electron orbitals: an electron orbital is a region of space described by the probability density function associated with an electron. Modern atomic theory states that any electron in an atom can be completely described by four quantum numbers: the principal quantum number, the azimuthal quantum number, the magnetic quantum number, and the spin quantum number, denoted (respectively) n, l, ml and ms.

 

The principal quantum number denotes the quantized energy levels. It can take on any positive integer value. In general:

For the nth principal quantum number, the shell can hold at most 2n^2 electrons.

This is the largest classification term for electrons. It is followed by the azimuthal quantum number, which describes the set of probability spaces within each principal quantum number. For the nth principal quantum number, the value of l can be any integer in the range 0 to (n-1). These integers correspond to subshells. The l = 0 subshell is known as the s-subshell, l = 1 is the p-subshell, l = 2 is the d-subshell, and l = 3 is the f-subshell.

Magnetic quantum number:

The magnetic quantum number is denoted ml. It specifies which of the orbitals (each able to contain two electrons) within a given subshell an electron occupies. For a given value of l, the magnetic quantum number can take any integer value from -l to +l. Thus the s-subshell has only one orbital, the p-subshell has 3, the d-subshell has 5, and so on. These orbitals, in general, have different shapes: the single s orbital is spherical, while the three mutually orthogonal p orbitals are dumbbell shaped.

 

Spin quantum number:

This is the final number needed to specify an electron. Each orbital can contain two electrons, and the spin of an electron can take one of two possible values, +1/2 and -1/2. The Pauli exclusion principle states that no orbital can contain two electrons of the same spin.
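A sketch enumerating the four quantum numbers for each shell and confirming the 2n^2 count quoted above:

# Count electron states per shell by enumerating (l, ml, ms).
def electrons_in_shell(n):
    count = 0
    for l in range(n):                   # azimuthal: 0 .. n-1
        for ml in range(-l, l + 1):      # magnetic: -l .. +l
            for ms in (-0.5, +0.5):      # spin: two values per orbital
                count += 1
    return count

for n in range(1, 5):
    print(f"n = {n}: {electrons_in_shell(n)} electrons (2n^2 = {2 * n * n})")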

15. The Organization of the Periodic Table

Dalton was the first to realize that there are different types of atoms and that these result in the different properties of substances. What properties of atoms produce the different natures of substances remained unknown until Mendeleev formulated the periodic table, which is still the most important tool in all of chemistry. Chemistry is, in many ways, the study of electrons. Modern chemistry specifies that all properties and trends of chemical substances can be deduced by knowing the number of electrons and protons, because this specifies the electron structure of the atom (by applying the Schrodinger model). Merely by knowing the number of protons (and hence electrons, since atoms are electrically neutral), and hence the structure of the electrons, we can deduce the reactivity of the species towards other species, its electronegativity, its electron affinity and its atomic radius. Mendeleev’s key insight was that the properties of the elements recur periodically when the atomic species are placed in order (he worked with atomic masses; the modern table is ordered by atomic number, the number of protons, which defines the species). Mendeleev was able to predict the existence and properties of elements that were yet unknown at the time.


16. Avogadro’s Constant

We consider the mass of an atom to be the sum of the masses of its protons and neutrons (electrons, having negligible mass by comparison, are not counted), and the mass of a molecule to be the sum of the masses of its atoms. However, when we examine macroscopic systems of molecules and atoms, containing vast numbers of particles, we use measurement units, typically grams, that are too unwieldy for discussing individual atoms and molecules. Avogadro’s constant is important because it allows us to do a vast number of calculations by converting the mass of a system into a number of particles of particular reactants and products, whose ratio can be deduced just by looking at the reaction. Avogadro’s constant links the formula mass of a particular atom or molecule (which is given in terms of atomic mass units) to grams. The constant is defined as the number of particles in exactly 12.00 g of carbon-12, and its value is 6.02x10^23. This number of particles is called one mole. The mole is the most convenient unit in all of chemistry and is a fundamental unit of measurement: a measure of the number of particles in a known mass of a particular compound. Avogadro’s Law states that equal volumes of ideal gases at the same temperature and pressure contain the same number of moles.
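A quick worked example (water, with a formula mass of about 18.02 atomic mass units):

# Using Avogadro's constant to turn grams into particle counts.
NA = 6.022e23              # particles per mole
molar_mass_water = 18.02   # g/mol

grams = 100.0
moles = grams / molar_mass_water
print(f"{grams} g of water is {moles:.2f} mol, about {moles * NA:.2e} molecules")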

17. Lavoisier's Law

This is the most fundamental principle of modern chemistry. As the father of modern chemistry, Lavoisier was the first to formulate the concepts of stoichiometry and mass conservation. Previous theories, such as phlogiston theory, had failed to recognize this key principle of mass conservation: while a chemical reaction can alter chemical bonds and the substances present, the total mass of substance before and after a reaction remains conserved. The mass of reactants lost is equal to the mass of products gained. By very careful measurement, Lavoisier was able to deduce the principle of stoichiometric coefficients, which tell us how many particles of particular reactants are required to form the products, and in turn how many particles of products are formed.
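A quick check on a familiar reaction, 2H2 + O2 -> 2H2O, using approximate molar masses:

# Mass conservation with stoichiometric coefficients.
H2, O2, H2O = 2.016, 32.00, 18.02   # g/mol, approximate

mass_reactants = 2 * H2 + 1 * O2    # per unit of reaction
mass_products = 2 * H2O
print(f"reactants: {mass_reactants:.2f} g, products: {mass_products:.2f} g")
# Equal (to rounding), as Lavoisier's law requires.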

18. The homologous series of organic chemistry

Organic chemistry is sometimes called carbon chemistry: it is the study of carbon compounds. Because each carbon atom can form four chemical bonds, carbon gives rise to highly regular structures of repeating units. The properties of organic molecules are determined by three things: the functional group, the chain length and the isomeric configuration. Organic molecules are distinguishable because they have carbon chains, called alkyl chains, to which functional groups such as alcohol, carboxylic acid and ester groups are attached.

A homologous series is therefore a series of organic molecules which differ only in their alkyl chain length. The alkyl chain is composed of singly bonded carbon atoms saturated with hydrogen (C-C and C-H single bonds); these bonds are the most common in organic chemistry and constitute the bulk of the structure of large organic molecules. The alkanes are composed purely of alkyl chains and are denoted R-H, where R is an alkyl chain of arbitrary length. The first four are methane (CH4), ethane (C2H6), propane (C3H8) and butane (C4H10), although obviously we can make far longer alkanes.


These chains are then attached to different functional groups, which determine particular properties of the molecules: for example, the hydroxyl group (-OH) of alcohols and the carboxyl group (-COOH) of carboxylic acids.
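The alkane series follows the general formula CnH2n+2, which a two-line sketch can generate:

# The alkane homologous series: CnH2n+2.
for n in range(1, 7):
    print(f"C{n}H{2 * n + 2}")   # C1H4 (methane), C2H6 (ethane), ...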


19. Optical isomerism

Consider a molecule which consists of a carbon with four groupings around it. Because the bonds are formed by shared electron pairs, the molecule arranges itself spatially such that the electron pairs are as far away from each other as possible, because they repel each other electrostatically. If there are four negative charge groupings around the central atom, then the most favorable structure is tetrahedral, since mathematically this is the best way to arrange all the bonds so that they are maximally separated. If there are four distinct groupings attached to a particular carbon, then it exhibits optical isomerism. In organic chemistry, isomerism occurs when two molecules have identical compositions in terms of atoms but have different bonding order and structure (structural isomerism), functional groups (functional group isomerism), or relative spatial arrangement (stereoisomerism). In the case of optical isomerism (which is a type of stereoisomerism), the fact that the four tetrahedral groups are distinct results in two mirror images. These two molecules cannot be superimposed on each other, analogous to the left and right hands. Note that this is only possible because of the tetrahedral arrangement: if the molecule were planar, the optical isomerism would no longer exist. The two optical isomers are called enantiomers.


20. The lock and key enzyme model

In general, all proteins bind to other molecules, whether those other molecules are other proteins, particular small organic molecules being operated on, polynucleotides, etc. For example, all enzymes bind the molecules they catalyze, called substrates, at the active site. In addition, most proteins are allosteric: they can flip between multiple distinct conformations which act like on-off switches (this is what allows cells to compute responses to their environment, a phenomenon called signal integration). Thus, most proteins have multiple sites at which to bind molecules, some of these molecules being those that the protein operates on, others being molecules controlling the activity of the protein.


This is where we get the lock and key enzyme model: the enzyme is structurally folded such that the active site of the enzyme matches its substrate. Given that the cell has to maintain all of its complicated, ordered, stepwise pathways while being a confusing hodgepodge of molecules bashing into each other, how is this ordered existence achieved in a non-orderly cytoplasm? To understand this in any meaningful way requires some understanding of protein kinetics. The function of any protein is determined by how it binds to other molecules. Some molecules (including other proteins) serve to regulate other proteins by up-regulating or down-regulating their function with respect to other molecules. Some molecules act as substrates to enzymes, where they bind and are acted upon by the catalytic active site of a protein. But in all cases where any molecule binds to a protein, it does so by means of a large number of non-covalent interactions with whatever molecule it binds to.

This holds true for all interactions. Proteins interact with DNA via non-covalent interactions, with other proteins via non-covalent interactions, and with small molecular ligands via non-covalent interactions. Non-covalent interactions are the basis of supramolecular structures such as ribosomes (held together by non-covalent interactions), proteasomes (held together by... well, have a guess), etc. The strength of the non-covalent fit between a protein and a ligand is determined by the number of non-covalent interactions and the precise orientation of the protein with respect to the ligand. If the protein forms a large number of non-covalent interactions, the protein-ligand complex will be more stable than if it were formed by fewer interactions. This gives rise to the concept of binding equilibrium. Consider a group of proteins and ligands diffusing in a cell. They will encounter each other and form complexes, then dissociate, and so forth, similar to a chemical reaction. Eventually they will reach an equilibrium where the rate of association is equal to the rate of dissociation. The point is that the ratio of the concentration of complexes to that of free ligand and protein reflects the strength of the non-covalent interaction. Thus, protein-ligand interaction is a probabilistic consideration, based on thermodynamics.

This is the primary way in which pathways in the cell are restricted. Proteins and potential ligands are violently colliding all the time in the cell, but only if they actually fit (hence form strong noncovalent contacts) will there be a high probability of the complex being maintained. If not, thermal buffeting in the cell will simply rip it apart. If the molecule doesn’t fit, it is not energetically favorable for a complex to form. As a result, proteins only respond to their specific ligands, be they other proteins, small molecules, or DNA, or (sometimes) all three. Many proteins have active sites which have been so precisely tuned for their ligands by evolution that there is little more that can be done to make the binding stronger.
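A sketch of that equilibrium in code, comparing a tight and a weak binder. The bound fraction [L]/(Kd + [L]) is the standard single-site result; the Kd values here are purely illustrative:

# Equilibrium binding: fraction of protein bound at ligand concentration L.
Kd_tight, Kd_weak = 1e-9, 1e-5   # dissociation constants, molar (illustrative)

for L in [1e-9, 1e-7, 1e-5]:     # free ligand concentrations, molar
    tight = L / (Kd_tight + L)   # many non-covalent contacts: low Kd
    weak = L / (Kd_weak + L)     # poor fit: high Kd
    print(f"[L] = {L:.0e} M: bound {tight:.2f} (tight) vs {weak:.4f} (weak)")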


21. The DNA double helix

Although chromosomes had been observable since the 1800s using a light microscope, the structure of DNA was not worked out until the work of Watson and Crick.

DNA is a double helix: a long, unbranched polymer made up of nucleotides. A nucleotide consists of a ring-structured sugar, a phosphate, and a chemical called a base, of which there are four types. The bases hold the information in DNA, whilst the sugar and phosphate merely provide the backbone. A piece of ssDNA (single-stranded DNA) is shaped like a ladder which has been cut in half along its long axis. The rungs are complementary: each base recognizes only a specific counterpart and binds to it very tightly by forming hydrogen bonds (individually weak, but collectively very strong given that a DNA molecule is millions of bases in length). The base called adenine only recognizes thymine, and cytosine only recognizes guanine. In this way, DNA can be assembled by templated polymerization: given a single strand of DNA, i.e. half a ladder, the other half can be synthesized because free nucleotides will slot into their correct places on the strand, each recognizing only one counterpart. In polymerization, the monomers are strung together by a reactive bond. In this case, a nucleoside triphosphate is hydrolyzed (split by water) such that the pyrophosphate (two of the phosphates) is displaced and a highly reactive phosphate bond is left on the nucleotide, via which it joins its fellow nucleotide already on the strand. In this way, each incoming nucleotide carries its own reactive bond to join the growing chain. This is called tail polymerization.
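A sketch of complementarity as code (the template sequence is an arbitrary example):

# Templated polymerization: the pairing rules fully determine the new strand.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[base] for base in strand)

template = "ATGCCGTTA"
print(complement(template))   # TACGGCAAT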


22. Amino acid encoding

The discovery of the DNA double helix allowed for the breakthrough in discovering the precise mechanism by which DNA holds information. Although the functions of DNA extend beyond protein encoding, the encoding of proteins, which constitute the bulk of the solid mass of biological organisms and give rise to all of their properties, is the most important function of DNA.

DNA is a cipher; that is to say, it is a direct substitution representation of the sequential structure of another unbranched polymer, the polypeptide (some DNA, however, codes for RNA genes), constructed from a different monomer class: amino acids. The order of the amino acids determines the structure and function of the final product for which the DNA codes, the protein. A substitution cipher is one in which one set of functional expressions is replaced with another.

In this case, each amino acid is encoded by a triplet of nucleotides called a codon, with particular codons dictating the stop and start of translation. As this table shows (the codons are written in terms of mRNA, where uracil, U, takes the place of thymine), DNA is a substitution cipher like so:

 

Codons                       Amino acid    Letter

GCA GCC GCG GCU              Ala           A
AGA AGG CGA CGC CGG CGU      Arg           R
GAC GAU                      Asp           D
AAC AAU                      Asn           N
UGC UGU                      Cys           C
GAA GAG                      Glu           E
CAA CAG                      Gln           Q
GGA GGC GGG GGU              Gly           G
CAC CAU                      His           H
AUA AUC AUU                  Ile           I
UUA UUG CUA CUC CUG CUU      Leu           L
AAA AAG                      Lys           K
AUG                          Met           M
UUC UUU                      Phe           F
CCA CCC CCG CCU              Pro           P
AGC AGU UCA UCC UCG UCU      Ser           S
ACA ACC ACG ACU              Thr           T
UGG                          Trp           W
UAC UAU                      Tyr           Y
GUA GUC GUG GUU              Val           V
UAA UAG UGA                  Stop          -
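A sketch of reading the cipher in code, using a hand-picked fragment of the table above (a full dictionary would carry all 64 codons; the mRNA string is an arbitrary example):

# Translating codons, triplet by triplet, until a stop codon.
CODON_TABLE = {
    "AUG": "M", "GCA": "A", "UGG": "W", "UUU": "F",
    "UAA": "*", "UAG": "*", "UGA": "*",   # * marks the stop codons
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):   # read in steps of three
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":                      # stop codon: end of translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("AUGGCAUGGUUUUAA"))   # MAWF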

 

23. Le Chatelier’s Principle

We have already met an example of negative feedback in Lenz’s Law. A more general principle (of which Lenz’s Law is an example) is Le Chatelier’s Principle. The cells that constitute biological systems are governed by complex sets of switches, regulators, pathways and loops. This maze of control is managed by the cell using a variety of mechanisms, such as signal integration. A crucial method of control for all cellular systems (and one which has applications beyond biology) is the negative feedback loop.

There are countless examples of this in biological systems. A metabolic pathway can self-regulate when its end product inhibits an enzyme earlier in the pathway, for example.

By far the most important formulation of this is Le Chatelier’s principle. It states that a system at equilibrium will tend to act so as to oppose perturbations imposed upon it. In chemistry and biology this is tremendously important in the study of chemical equilibrium. We saw above, in discussing entropy, that systems tend to move toward states of equilibrium. Once in equilibrium, they tend to resist changes to that equilibrium. This principle is exploited in all industrial chemistry, where we shift the conditions on a chemical system to force it to produce as much product as possible.
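A toy equilibrium A <=> B with equilibrium constant K = 2 (all numbers arbitrary) shows the idea: perturb the system with extra A and it partially converts the excess to B, opposing the change.

# Le Chatelier's principle for A <=> B with K = [B]/[A] = 2.
K = 2.0
A, B = 1.0, 2.0   # at equilibrium: B/A = K

A += 3.0          # perturbation: inject extra A
# Let x of A convert to B so that (B + x)/(A - x) = K again.
x = (K * A - B) / (1 + K)
A, B = A - x, B + x
print(f"re-equilibrated: A = {A:.2f}, B = {B:.2f}, B/A = {B / A:.2f}")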

24. The cell doctrine

Discovered by the renowned pathologist Rudolf Virchow, this principle states omnis cellula e cellula: every cell arises from another cell. This is the cell doctrine. Cells are produced by the replication of other cells, and it is this repeated replication over the last 3.8 billion years that has given rise to all biological life on this planet. All current replicators are the descendants of previous replicators. The cell doctrine is closely linked to the notion of common descent (see number 25). This is the fundamental principle of pathology and cell biology.

25. The Darwinian theory of natural selection

This is the most important and celebrated principle in biology. Prior to the work of Darwin and Huxley, there was no unifying principle in biology. Modern biology very much begins aboard the HMS Beagle, in the same way that modern chemistry begins with Lavoisier. Darwin came to the crucial realization of common descent. He was the first to make the crucial link that artificial selection, the same process that humans have been practicing for thousands of years with domesticated crops and dogs, is a microcosm of a much larger and longer process that occurs in nature, natural selection, which is responsible for all biological life. Darwin’s other crucial realization was that the process was driven by variation in inherited characteristics. The idea of “frequency of particles of inheritance” came later, with the development of genetics. The formalization of evolution in terms of the science of genetics (founded by Mendel) was taken up by a highly prestigious group of scientists (such as Haldane) in the 1930s. They formulated the modern synthesis, which is the current genetic theory of natural selection. It is compactly stated in five principles of evolutionary biology:

Evolution: Over time, the characteristics of a lineage change.

Common Descent: All organisms have diverged from a common ancestor.

Gradualism: Every organism is related to every other, some only distantly. Radical changes in phenotype and genotype have occurred through incremental processes by which lineages diverge from a common ancestor.

Gene Frequency: The method by which evolution (the change in lineages) occurs is by changes in gene frequencies of populations. It is the change in proportion of individuals which have certain characteristics that determines the characteristic divergence of a lineage.

Natural Selection: The process by which gene frequencies are altered is characterized by the variations of organisms in a population, and by how those variations determine the ability of the organism to survive and reproduce. The selection of some alleles over others in a population will accordingly alter the frequency of genetic particles and hence the phenotype of a lineage.
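A sketch of that last principle in code, using the standard one-locus haploid selection recursion p' = p(1+s)/(1+ps); the starting frequency and selective advantage are arbitrary:

# Natural selection as change in gene frequency over generations.
p = 0.01   # initial frequency of the favoured allele
s = 0.05   # its selective advantage

for generation in range(1, 401):
    p = p * (1 + s) / (1 + p * s)
    if generation % 100 == 0:
        print(f"generation {generation}: p = {p:.4f}")
# A 5% advantage carries the allele from 1% to near fixation in a few
# hundred generations: imperceptible per generation, decisive cumulatively.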

 

 

"Physical reality” isn’t some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes “non-physical explanations” because the very notion of “non-physical explanation” is contradictory.

-Me



Cpt_pineapple
atheist
Posts: 5492
Joined: 2007-04-12

25) Snell's law

 

 

Consider a wave in a medium with permittivity epsilon1 and permeability mu1 passing into a medium with permittivity epsilon2 and permeability mu2.

The incident wave is

E = y*E0*e^[-j[kx*X - kz*Z]]

 

And the transmitted wave is

 

E = y*E0*e^[-j[ktx*X - ktz*Z]]

 

 

Where k is the wave vector defined by

 

k = 2[pi]f*SQRT[epsilon*mu]

 

 

where f is the frequency, and the speed of light in that medium is v = 1/SQRT[epsilon*mu]

 

 

kx is the x-component of the incident wave vector, given by kx = k1*sin[angle of incidence]

 

ktx is the x-component of the transmitted wave vector, given by ktx = k2*sin[angle of refraction]

 

 

Since the media have different epsilon and mu, the speed is different in each, hence k1 =/= k2.

 

 

There is, however, a phase matching condition at the boundary: kx = ktx

 

 

hence k1*sin[angle of incidence] = k2*sin[angle of refraction]

 

k1 is just 2[pi]f/v1

 

k2 is 2[pi]f/v2

 

where v is the speed of light in that medium

 

 

since the frequency does not change

 

and by multiplying both sides by the speed of light in vacuum, "c", then defining n1 = c/v1 and n2 = c/v2, we get

 

n1*sin[angle of incidence] = n2*sin[angle of refraction]
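In code (air into glass with n = 1.5, a typical value):

# Snell's law: n1*sin(theta1) = n2*sin(theta2).
import math

n1, n2 = 1.0, 1.5              # air, glass
theta1 = math.radians(45.0)    # angle of incidence

theta2 = math.asin(n1 * math.sin(theta1) / n2)
print(f"refracted at {math.degrees(theta2):.1f} degrees")   # about 28.1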

 

 

 

 


Hambydammit
High Level Donor / Moderator / RRS Core Member
Posts: 8657
Joined: 2006-10-22

 

Deludedgod wrote:
This took me fucking ages, as you can imagine.

DG, you have a nearly unique talent for making me feel incredibly stupid just by typing words.  If I didn't say it clearly enough in your 2 year thread, I'm glad you continue to hang around with us simpletons.

 

 

Atheism isn't a lot like religion at all. Unless by "religion" you mean "not religion". --Ciarin

http://hambydammit.wordpress.com/


aiia
Superfan
Posts: 1923
Joined: 2006-09-12

deludedgod

that was absolutely resplendent

copied and pasted for reference

thanks


Shaitian
Posts: 386
Joined: 2006-07-15

Hambydammit wrote:

 

Deludedgod wrote:
This took me fucking ages, as you can imagine.

DG, you have a nearly unique talent for making me feel incredibly stupid just by typing words.  If I didn't say it clearly enough in your 2 year thread, I'm glad you continue to hang around with us simpletons.

 

 



I second this!

I also copied and saved this as well. Thanks!

 


Vastet
atheist / Blogger / Superfan
Posts: 13234
Joined: 2006-12-25

Holy shit nuggets. That post will take me a while to read. Thanks for your work DG! Laughing out loud

 

I would add Murphy's law...but it almost seems wrong....

Enlightened Atheist, Gaming God.