What is “Light-year” ?

A light-year (also light year or lightyear) is an astronomical unit of length equal to just under 10 trillion kilometres (about 6 trillion miles). As defined by the International Astronomical Union, it is the distance that light travels in a vacuum in one Julian year (365.25 days).


What is “Astronomical seeing” ?

Astronomical seeing

Astronomical seeing refers to the blurring and twinkling of astronomical objects such as stars caused by turbulent mixing in the Earth’s atmosphere varying the optical refractive index. The astronomical seeing conditions on a given night at a given location describe how much the Earth’s atmosphere perturbs the images of stars as seen through a telescope.

The most common seeing measurement is the diameter of the seeing disc (the point spread function for imaging through the atmosphere). The point spread function diameter (seeing disc diameter or “seeing”) is a reference to the best possible angular resolution which can be achieved by an optical telescope in a long photographic exposure, and corresponds to the diameter of the fuzzy blob seen when observing a point-like star through the atmosphere. The size of the seeing disc is determined by the astronomical seeing conditions at the time of the observation. The best conditions give a seeing disk diameter of ~0.4 arcseconds and are found at high-altitude observatories on small islands such as Mauna Kea or La Palma.
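These numbers can be related through the Fried parameter r0 (discussed later in this article): the long-exposure, seeing-limited FWHM is approximately 0.98 λ/r0. A minimal sketch with illustrative values:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def seeing_fwhm_arcsec(wavelength_m, r0_m):
    # Seeing-limited long-exposure FWHM ~ 0.98 * lambda / r0 (in radians),
    # converted to arcseconds
    return 0.98 * wavelength_m / r0_m * RAD_TO_ARCSEC

# r0 = 10 cm at 500 nm, typical of a good site:
print(round(seeing_fwhm_arcsec(500e-9, 0.10), 2))   # ~1.01 arcsec
# r0 = 25 cm, excellent high-altitude conditions:
print(round(seeing_fwhm_arcsec(500e-9, 0.25), 2))   # ~0.40 arcsec
```

The chosen r0 values are assumptions for illustration; real values vary from night to night and site to site.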

Seeing is one of the biggest problems for Earth-based astronomy: while large telescopes theoretically have milli-arcsecond resolution, the real image will never be better than the average seeing disc during the observation. This can easily mean a factor of 100 between the potential and practical resolution. Starting in the 1990s, new adaptive optics systems have been introduced that can help correct for these effects, dramatically improving the resolution of ground-based telescopes.

The image fluctuations seen when looking at the bottom of a lake on a windy day are caused by refractive index fluctuations, but in the case of a lake they don’t result from turbulent mixing.

Typical short-exposure negative image of a binary star as seen through atmospheric seeing. Each star should appear as a single Airy pattern, but the atmosphere causes the images of the two stars to break up into two patterns of speckles (one pattern above left, the other below right). The speckles are a little difficult to make out in this image due to the coarse pixel size on the camera used (see the simulated images below for a clearer example). The speckles move around rapidly, so that each star appears as a single fuzzy blob in long exposure images (called a seeing disc). The telescope used had a diameter of about 7r0 (see definition of r0 below, and example simulated image through a 7r0 telescope).

The effects of atmospheric seeing were indirectly responsible for the belief that there were canals on Mars. In viewing a bright object such as Mars, occasionally a still patch of air will come in front of the planet, resulting in a brief moment of clarity. Before the use of charge-coupled devices, there was no way of recording the image of the planet in the brief moment other than having the observer remember the image and draw it later. This had the effect of making the image of the planet dependent on the observer’s memory and preconceptions, which led to the belief that Mars had linear features.

The effects of atmospheric seeing are qualitatively similar throughout the visible and near infra-red wavebands. At large telescopes the long exposure image resolution is generally slightly higher at longer wavelengths, and the timescale for the changes in the dancing speckle patterns is substantially longer.

The distortion changes at a high rate, typically more frequently than 100 times a second. In a typical astronomical image of a star with an exposure time of seconds or even minutes, the different distortions average out as a filled disc called the point spread function or “seeing disc”. The diameter of the seeing disk, most often defined as the full width at half maximum, is a measure of the astronomical seeing conditions.

It follows from this definition that seeing is always a variable quantity, different from place to place, from night to night and even variable on a scale of minutes. Astronomers often talk about “good” nights with a low average seeing disc diameter, and “bad” nights where the seeing diameter was so high that all observations were worthless.

Slow motion movie of what you see through a telescope when you look at a star at high magnification. The telescope used had a diameter of about 7r0 (see definition of r0 below, and example simulated image through a 7r0 telescope). Notice how the star breaks up into multiple blobs (speckles) — entirely an atmospheric effect. Some telescope vibration is also noticeable.

The FWHM of the seeing disc is usually measured in arcseconds, abbreviated with the symbol (″). Seeing of 1.0″ is good for an average astronomical site; the seeing of an urban environment is usually much worse. Good seeing nights tend to be clear, cold nights without wind gusts. Warm air rises (convection), degrading the seeing, as do wind and clouds. At the best high-altitude mountaintop observatories the wind brings in stable air which has not previously been in contact with the ground, sometimes providing seeing as good as 0.4″.

In reality the pattern of blobs in the images changes very rapidly, so that long exposure photographs would just show a single large blurred blob in the centre for each telescope diameter. The diameter (FWHM) of the large blurred blob in long exposure images is called the seeing disc diameter, and is independent of the telescope diameter used (as long as adaptive optics correction isn’t applied).

It is first useful to give a brief overview of the basic theory of optical propagation through the atmosphere. In the standard classical theory, light is treated as an oscillation in a field ψ. For monochromatic plane waves arriving from a distant point source with wave-vector k:

ψ(r, t) = A exp[i(φ + 2πνt − k·r)]

where ψ(r, t) is the complex field at position r and time t, with real and imaginary parts corresponding to the electric and magnetic field components, φ represents a phase offset, ν is the frequency of the light determined by ν = c|k|/2π, and A is the amplitude of the light.

It is important to emphasise that the atmospherically induced phase perturbation φ_a and amplitude perturbation χ_a describe the effect of the Earth’s atmosphere, and the timescales for any changes in these functions will be set by the speed of refractive index fluctuations in the atmosphere.

A description of the nature of the wavefront perturbations introduced by the atmosphere is provided by the Kolmogorov model developed by Tatarski, based partly on the studies of turbulence by the Russian mathematician Andrey Kolmogorov. This model is supported by a variety of experimental measurements and is widely used in simulations of astronomical imaging. The model assumes that the wavefront perturbations are brought about by variations in the refractive index of the atmosphere. These refractive index variations lead directly to phase fluctuations described by φ_a, but any amplitude fluctuations are only brought about as a second-order effect while the perturbed wavefronts propagate from the perturbing atmospheric layer to the telescope. For all reasonable models of the Earth’s atmosphere at optical and infra-red wavelengths the instantaneous imaging performance is dominated by the phase fluctuations. The amplitude fluctuations described by χ_a have negligible effect on the structure of the images seen in the focus of a large telescope.

The Fried parameter r0, frequently used to describe the atmospheric conditions at astronomical observatories, is commonly defined by

r0 = [0.423 k² (cos ζ)⁻¹ ∫ C_N²(h) dh]^(−3/5)

where k = 2π/λ is the wavenumber, ζ is the zenith angle of the observation, and C_N²(h) is the refractive index structure constant at height h above the telescope.

If turbulent evolution is assumed to occur on slow timescales (the frozen-turbulence, or Taylor, hypothesis), then the timescale t0 is simply proportional to r0 divided by the mean wind speed.
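With the commonly quoted constant of proportionality this becomes t0 ≈ 0.314 r0 / v̄. A minimal sketch with illustrative numbers:

```python
def coherence_time_s(r0_m, wind_speed_ms):
    # Atmospheric coherence time t0 ~ 0.314 * r0 / v_mean, assuming the
    # turbulence pattern is blown rigidly across the aperture
    return 0.314 * r0_m / wind_speed_ms

# r0 = 10 cm carried across the telescope at 10 m/s:
print(round(coherence_time_s(0.10, 10.0) * 1000, 2))  # ~3.14 ms
```

Millisecond-scale coherence times like this are why adaptive optics systems must run at hundreds of corrections per second.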

Atmospherically distorted wavefronts are commonly simulated by filtering white noise with the Kolmogorov spectrum and taking the real part of a Fourier transform:

φ_a(x) = Re{ FT[ R(k) K(k) ] }

where R(k) is a 2-dimensional square array of independent random complex numbers which have a Gaussian distribution about zero and a white noise spectrum, K(k) is the (real) Fourier amplitude expected from the Kolmogorov (or Von Karman) spectrum, Re{} represents taking the real part, and FT[] represents a discrete Fourier transform of the resulting 2-dimensional square array (typically an FFT).
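A minimal sketch of that recipe (the overall normalisation is schematic, since conventions differ between references; all names are illustrative):

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """One random Kolmogorov phase screen via the FFT recipe above.
    n: grid points per side; dx: metres per pixel; r0: Fried parameter (m)."""
    rng = np.random.default_rng(seed)
    # R(k): independent complex Gaussian random numbers with zero mean
    # and a white-noise spectrum
    R = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    # spatial-frequency grid (cycles per metre)
    f = np.fft.fftfreq(n, d=dx)
    f2 = f[None, :] ** 2 + f[:, None] ** 2
    f2[0, 0] = np.inf  # drop the undefined piston (zero-frequency) term
    # K(k): Fourier amplitude of the Kolmogorov spectrum; the power goes as
    # 0.023 r0^(-5/3) |k|^(-11/3), so the amplitude is its square root
    K = np.sqrt(0.023 * r0 ** (-5.0 / 3.0)) * f2 ** (-11.0 / 12.0)
    # phase screen = real part of the discrete Fourier transform of R*K
    return np.real(np.fft.ifft2(R * K))

screen = kolmogorov_phase_screen(128, 0.02, r0=0.1)
```

Repeated calls with different seeds give statistically independent screens, which can then be placed in the pupil plane of a simulated telescope to produce speckle images like those shown above.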


What is “Vernal equinox” ?

Vernal equinox

An equinox occurs twice a year, when the plane of the Earth’s equator passes the center of the Sun. At this time the tilt of the Earth’s axis is inclined neither away from nor towards the Sun. The term equinox can also be used in a broader sense, meaning the date when such a passage happens. The name “equinox” is derived from the Latin aequus (equal) and nox (night), because around the equinox, night and day are about equal length.

The equinoxes are the only times when the terminator is perpendicular to the Earth’s Equator. Thus the Northern and Southern hemispheres are illuminated equally.

Another meaning of equinox is the date when day and night are the same length. The equinox isn’t exactly the same as the day when day and night are of equal length for two reasons. Firstly, because of the size of the sun, the top of the disk rises above the horizon when the center of the disk is still below the horizon. Secondly, the Earth’s atmosphere refracts sunlight, which means that an observer can experience light (daytime) even before the first glimpse of the sun’s disk has risen above the horizon. To avoid this ambiguity the term equilux is sometimes used in this sense. Times of sunset and sunrise vary with an observer’s location (longitude and latitude), so the dates when day and night are of exactly equal length likewise depend on location. For places near the equator the daytime is always longer than the night, so they would never experience an equinox by this definition.

Illumination of Earth by the Sun at the March equinox.

The Earth in its orbit around the Sun causes the Sun to appear on the celestial sphere moving over the ecliptic, which is tilted on the Equator (white).

Diagram of the Earth’s seasons as seen from the north. Far right: December solstice.

Diagram of the Earth’s seasons as seen from the south. Far left: June solstice.

When Julius Caesar established his calendar in 45 BC he set March 25 as the spring equinox. Since a Julian year is slightly longer than an actual year the calendar drifted with respect to the equinox, such that the equinox was occurring on about 21 March in AD 300 and by AD 1500 it had reached 11 March.

This drift induced Pope Gregory XIII to create the modern Gregorian calendar. The Pope wanted to restore the edicts of the Council of Nicaea of AD 325 concerning the date of Easter. The shift in the date of the equinox that occurred between the 4th and the 16th centuries was therefore annulled with the Gregorian calendar, but nothing was done for the first four centuries of the Julian calendar: the leap days of 29 February in AD 100, AD 200 and AD 300, and the days created by the irregular application of leap years between the assassination of Caesar and the decree of Augustus rearranging the calendar in AD 8, remained in effect. This left the equinox four days earlier than in Caesar’s time.

On the day of the equinox, the center of the Sun spends a roughly equal amount of time above and below the horizon at every location on the Earth, so night and day are about the same length. The word equinox derives from the Latin words aequus and nox (night). In reality, the day is longer than the night at an equinox. Day is usually defined as the period when sunlight reaches the ground in the absence of local obstacles. From the Earth, the Sun appears as a disc rather than a point of light, so when the center of the Sun is below the horizon, its upper edge is visible. Furthermore, the atmosphere refracts light, so even when the upper limb of the Sun is 0.4 degree below the horizon, its rays curve over the horizon to the ground. In sunrise/sunset tables, the assumed semidiameter (apparent radius) of the Sun is 16 minutes of arc and the atmospheric refraction is assumed to be 34 minutes of arc. Their combination means that when the upper limb of the Sun is on the visible horizon, its center is 50 minutes of arc below the geometric horizon, which is the intersection with the celestial sphere of a horizontal plane through the eye of the observer. These effects make the day about 14 minutes longer than the night at the Equator and longer still towards the Poles. The real equality of day and night only happens in places far enough from the Equator to have a seasonal difference in day length of at least 7 minutes, and it actually occurs a few days towards the winter side of each equinox.
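The ~14-minute figure can be reproduced from the 50-arcminute depression quoted above, using the standard sunrise hour-angle formula (the values here are illustrative):

```python
import math

def equinox_day_length_h(h0_deg=-50.0 / 60.0, lat_deg=0.0):
    """Day length in hours at an equinox (solar declination 0), counting the
    Sun as 'up' whenever its centre is above altitude h0 (default -50',
    i.e. the 16' semidiameter plus 34' of refraction quoted above)."""
    h0 = math.radians(h0_deg)
    lat = math.radians(lat_deg)
    cos_H = math.sin(h0) / math.cos(lat)   # sunrise hour angle, declination 0
    H = math.degrees(math.acos(cos_H))
    return 2.0 * H / 15.0                  # 15 degrees of hour angle per hour

day = equinox_day_length_h()               # at the Equator
print(round((day - (24.0 - day)) * 60, 1)) # day minus night: ~13.3 minutes
```

Moving the latitude away from the Equator lengthens the day further, matching the statement that the effect grows towards the Poles.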

In the half year centered on the June solstice, the Sun rises north of east and sets north of west, which means longer days with shorter nights for the Northern Hemisphere and shorter days with longer nights for the Southern Hemisphere. In the half year centered on the December solstice, the Sun rises south of east and sets south of west and the durations of day and night are reversed.

Some of the statements above can be made clearer by picturing the day arc. The pictures show this for every hour on equinox day. In addition, some ‘ghost’ suns are also indicated below the horizon, up to 18° below it; the Sun in such areas still causes twilight. The depictions presented below can be used for both the Northern Hemisphere and the Southern Hemisphere. The observer is understood to be sitting near the tree on the island depicted in the middle of the ocean; the green arrows give the cardinal directions.

Day arc at 0° latitude. The arc passes through the zenith, resulting in almost no shadows at high noon.

Day arc at 20° latitude. The Sun culminates at 70° altitude and its path at sunrise and sunset occurs at a steep 70° angle to the horizon. Twilight still lasts about one hour.

Day arc at 50° latitude. Twilight lasts almost two hours.

Day arc at 70° latitude. The Sun culminates at no more than 20° altitude and its daily path at sunrise and sunset is at a shallow 20° angle to the horizon. Twilight lasts for more than four hours; in fact, there is barely any night.

Day arc at 90° latitude. If it were not for atmospheric refraction, the Sun would be on the horizon all the time.

Because of the precession of the Earth’s axis, the position of the vernal point on the celestial sphere changes over time, and the equatorial and the ecliptic coordinate systems change accordingly. Thus when specifying celestial coordinates for an object, one has to specify at what time the vernal point and the celestial equator are taken. That reference time is called the equinox of date.

The autumnal equinox is at ecliptic longitude 180° and at right ascension 12h.

The upper culmination of the vernal point is considered the start of the sidereal day for the observer. The hour angle of the vernal point is, by definition, the observer’s sidereal time.
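Since sidereal time is, by definition, the hour angle of the vernal point, the hour angle of any object follows by subtracting its right ascension. A minimal sketch (times in hours; the function name is illustrative):

```python
def hour_angle(lst_hours, ra_hours):
    """Hour angle = local sidereal time - right ascension,
    wrapped into the range [-12, 12) hours."""
    return (lst_hours - ra_hours + 12.0) % 24.0 - 12.0

# The vernal point has RA = 0h, so its hour angle equals the sidereal time:
print(hour_angle(6.0, 0.0))    # 6.0
# A star at RA = 12h culminates (hour angle 0) when LST = 12h:
print(hour_angle(12.0, 12.0))  # 0.0
```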

The same is true in western tropical astrology: the vernal equinox is the first point of the sign of Aries. In this system, it is of no significance that the equinoxes shift over time with respect to the fixed stars.

A number of traditional spring and autumn festivals are celebrated on the date of the equinoxes.

Equinox is a phenomenon that can occur on any planet with a significant tilt to its rotational axis. Most dramatic of these is Saturn, where an equinox places its normally majestic ring system edge-on to the Sun; as a result, the rings are visible from Earth only as a thin line. When seen from above – a view humans obtained for the first time during an equinox from the Cassini space probe in 2009 – the rings receive very little sunshine, indeed more planetshine than light from the Sun.

This lack of sunshine occurs once every 14.7 Earth years and can last a few weeks before and after the exact equinox. Saturn’s most recent exact equinox was on August 11, 2009; its next will take place on April 30, 2024.


What is “Orbital node” ?

Orbital node

An orbital node is one of the two points where an orbit crosses a plane of reference to which it is inclined. An orbit which is contained in the plane of reference has no nodes.

The line of nodes is the intersection of the object’s orbital plane with the plane of reference. It passes through the two nodes.

For the orbit of the Earth around the Sun, the important line of nodes is the intersection of the plane of the ecliptic with the plane of the equator. Twice a year this line points toward the Sun; on those days, night and day have roughly equal length, and the corresponding points on the Sun’s apparent orbit are named the equinoxes.

For the orbit of the Moon around the Earth, the reference plane is taken to be the ecliptic, not the equatorial plane. The gravitational pull of the Sun upon the Moon causes its nodes, called the lunar nodes, to precess gradually westward, performing a complete circle in approximately 18.6 years.


What is “Altitude” ?



The horizontal coordinate system is sometimes also called the az/el system, the Alt/Az system, or the altazimuth system.

The horizontal coordinate system is fixed to the Earth, not the stars. Therefore, the altitude and azimuth of an object changes with time, as the object appears to drift across the sky. In addition, because the horizontal system is defined by the observer’s local horizon, the same object viewed from different locations on Earth at the same time will have different values of altitude and azimuth.

Note that the above considerations are strictly speaking true for the geometric horizon only: the horizon as it would appear for an observer at sea level on a perfectly smooth Earth without an atmosphere. In practice the apparent horizon has a negative altitude, whose absolute value gets larger as the observer ascends higher above sea level, due to the curvature of the Earth. In addition, atmospheric refraction causes celestial objects very close to the horizon to appear about half a degree higher than they would if there were no atmosphere.


What is “Solar and lunar eclipses” ?

Solar and lunar eclipses

The Moon is the only natural satellite of the Earth and the 5th largest moon in the Solar System. It is the largest natural satellite of a planet in the Solar System relative to the size of its primary, having 27% the diameter and 60% the density of Earth, resulting in 1⁄81 its mass. Among satellites with known densities, the Moon is the 2nd densest, after Io, a satellite of Jupiter.

The Moon is in synchronous rotation with Earth, always showing the same face with its near side marked by dark volcanic maria that fill between the bright ancient crustal highlands and the prominent impact craters. It is the brightest object in the sky after the Sun, although its surface is actually dark, with a reflectance just slightly higher than that of worn asphalt. Its prominence in the sky and its regular cycle of phases have, since ancient times, made the Moon an important cultural influence on language, calendars, art and mythology. The Moon’s gravitational influence produces the ocean tides and the minute lengthening of the day. The Moon’s current orbital distance, about thirty times the diameter of the Earth, causes it to appear almost the same size in the sky as the Sun, allowing it to cover the Sun nearly precisely in total solar eclipses. This matching of apparent visual size is a coincidence. The Moon’s linear distance from the Earth is currently increasing at a rate of 3.82 ± 0.07 cm per year, but this rate isn’t constant.

The Moon is thought to have formed nearly 4.5 billion years ago, not long after the Earth. Although there have been several hypotheses for its origin in the past, the current most widely accepted explanation is that the Moon formed from the debris left over after a giant impact between Earth and a Mars-sized body.

Since the Apollo 17 mission in 1972, the Moon has been visited only by unmanned spacecraft, notably by the final Soviet Lunokhod rover. Since 2004, Japan, China, India, the United States, and the European Space Agency have each sent lunar orbiters. These spacecraft have contributed to confirming the discovery of lunar water ice in permanently shadowed craters at the poles and bound into the lunar regolith. Future manned missions to the Moon have been planned, including government as well as privately funded efforts. The Moon remains, under the Outer Space Treaty, free to all nations to explore for peaceful purposes.

The English proper name for Earth’s natural satellite is “the Moon”. The noun moon derives from moone, which developed from mone (1135), which derives from Old English mōna (dating from before 725), which, like all Germanic language cognates, ultimately stems from Proto-Germanic *mǣnōn.

The principal modern English adjective pertaining to the Moon is lunar, derived from the Latin Luna. Another less common adjective is selenic, derived from the Ancient Greek Selene, from which the prefix “seleno-” (as in selenography) is derived.

The prevailing hypothesis today is that the Earth–Moon system formed as a result of a giant impact, where a Mars-sized body collided with the newly formed proto-Earth, blasting material into orbit around it, which accreted to form the Moon. This hypothesis perhaps best explains the evidence, although not perfectly. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: “You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don’t come to our conference unless you have something to tell about the Moon’s birth.” At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most popular. “Before the conference, there were partisans of the three ‘traditional’ theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn’t think the debate would ever be resolved. Afterward there were essentially only two groups: the giant impact camp and the agnostics.”

Giant impacts are thought to have been common in the early Solar System. Computer simulations modelling a giant impact are consistent with measurements of the angular momentum of the Earth–Moon system and the small size of the lunar core. These simulations also show that most of the Moon came from the impactor, not from the proto-Earth. However, more recent tests suggest that more of the Moon coalesced from the Earth rather than from the impactor. Meteorites show that other inner Solar System bodies such as Mars and Vesta have very different oxygen and tungsten isotopic compositions to the Earth, while the Earth and Moon have near-identical isotopic compositions. Post-impact mixing of the vaporized material between the forming Earth and Moon could have equalized their isotopic compositions, although this is debated.

Although the giant impact hypothesis explains many lines of evidence well, some difficulties remain that it does not fully account for, most of them involving the Moon’s composition.

In 2001, a team at the Carnegie Institution of Washington reported the most precise measurement of the isotopic signatures of lunar rocks. To their surprise, the team found that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System. Since most of the material that went into orbit to form the Moon was thought to come from Theia, this observation was unexpected. In 2007, researchers from the California Institute of Technology announced that there was less than a 1% chance that Theia and Earth had identical isotopic signatures. Published in 2012, an analysis of titanium isotopes in Apollo lunar samples showed that the Moon has the same composition as the Earth, which conflicts with what is expected if the Moon formed far from Earth’s orbit or from Theia. Variations on the giant impact hypothesis may explain these data.

The dark and relatively featureless lunar plains which can clearly be seen with the naked eye are called maria, since they were believed by ancient astronomers to be filled with water. They are now known to be vast solidified pools of ancient basaltic lava. While similar to terrestrial basalts, the mare basalts have much higher abundances of iron and are completely lacking in minerals altered by water. The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side maria.

The concentration of maria on the near side likely reflects the substantially thicker crust of the highlands of the far side, which may have formed in a slow-velocity impact of a second terran moon a few tens of millions of years after the formation of the moons themselves.

In the years since, signatures of water have been found on the lunar surface. In 1994, the bistatic radar experiment on the Clementine spacecraft indicated the existence of small, frozen pockets of water close to the surface. However, later radar observations by Arecibo suggest these findings may rather be rocks ejected from young impact craters. In 1998, the neutron spectrometer on the Lunar Prospector spacecraft indicated that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions. In 2008, an analysis of volcanic lava beads brought back to Earth aboard Apollo 15 showed small amounts of water to exist in the interior of the beads.

In May 2011, Erik Hauri et al. reported 615–1410 ppm water in melt inclusions in lunar sample 74220, the famous high-titanium “orange glass soil” of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth’s upper mantle. While of considerable selenological interest, Hauri’s announcement affords little comfort to would-be lunar colonists—the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument.

The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill these basins. These anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that aren’t linked to mare volcanism.

The Moon has an external magnetic field of about 1–100 nanoteslas, less than one-hundredth that of the Earth. It does not currently have a global dipolar magnetic field, as would be generated by a liquid metal core geodynamo, and only has crustal magnetization, probably acquired early in lunar history when a geodynamo was still operating. Alternatively, some of the remnant magnetization may be from transient magnetic fields generated during large impact events, through the expansion of an impact-generated plasma cloud in the presence of an ambient magnetic field—this is supported by the apparent location of the largest crustal magnetizations near the antipodes of the giant impact basins.

The Moon is exceptionally large relative to the Earth: a quarter the diameter of the planet and 1/81 its mass. It is the largest moon in the Solar System relative to the size of its planet, though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto’s mass.

The Moon is in synchronous rotation: it rotates about its axis in about the same time it takes to orbit the Earth. This results in it nearly always keeping the same face turned towards the Earth. The Moon used to rotate at a faster rate, but early in its history, its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by the Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often called the “dark side”, but in fact, it is illuminated as often as the near side: once per lunar day, during the new moon phase we observe on Earth when the near side is dark.

The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the 2nd brightest object in the sky after the Sun. This is partly due to the brightness enhancement of the opposition effect; at quarter phase, the Moon is only one-tenth as bright, rather than half as bright, as at full moon.

Additionally, colour constancy in the visual system recalibrates the relations between the colours of an object and its surroundings, and since the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the centre, with no limb darkening, due to the reflective properties of lunar soil, which reflects more light back towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the Moon illusion, first described in the 7th century BC. The full moon subtends an arc of about 0.52° in the sky, roughly the same apparent size as the Sun (see eclipses).

The highest altitude of the Moon in the sky varies: while it has nearly the same limit as the Sun, it alters with the lunar phase and with the season of the year, with the full moon highest during winter. The 18.6-year nodal cycle also has an influence: when the ascending node of the lunar orbit is at the vernal equinox, the lunar declination can go as far as 28° each month. This means the Moon can go overhead at latitudes up to 28° from the equator, instead of only 18°. The orientation of the Moon’s crescent also depends on the latitude of the observation site: close to the equator, an observer can see a smile-shaped crescent Moon.
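The 28° and 18° limits follow from adding or subtracting the inclination of the Moon’s orbit (~5.1°) to the obliquity of the ecliptic (~23.4°). A minimal sketch:

```python
obliquity = 23.44     # tilt of Earth's axis relative to the ecliptic, degrees
lunar_incl = 5.14     # inclination of the Moon's orbit to the ecliptic, degrees

# Ascending node at the vernal equinox: the two tilts add
max_declination = obliquity + lunar_incl
# Half a nodal cycle (~9.3 years) later: the tilts partially cancel
min_max_declination = obliquity - lunar_incl

print(round(max_declination, 1))      # 28.6
print(round(min_max_declination, 1))  # 18.3
```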

As the Moon is continuously blocking our view of a half-degree-wide circular area of the sky, the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to the Earth, occultations of individual stars aren’t visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted.

During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was “perfectly smooth”. In 1609, Galileo Galilei drew one of the first telescopic drawings of the Moon in his book Sidereus Nuncius and noted that it wasn’t smooth but had mountains and craters. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–36 Mappa Selenographica of Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book Der Mond, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography. Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions. This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s, leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology.

The Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent unmanned probes on both flyby and impact/lander missions. Spacecraft from the Soviet Union’s Luna program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958, the first man-made object to escape Earth’s gravity and pass near the Moon was Luna 1; the first man-made object to impact the lunar surface was Luna 2; and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959.

What is “Limb darkening” ?

Limb darkening

Crucial to understanding limb darkening is the idea of optical depth. An optical depth of unity is that thickness of absorbing gas from which a fraction 1/e of the photons can escape. This is what defines the visible edge of a star, since it is at an optical depth of unity that the star becomes opaque. The radiation reaching us is closely approximated by the sum of all the emission along the entire line of sight, up to that point where the optical depth is unity. When we look near the edge of a star, we cannot “see” to the same depth as when we look at the center, because the line of sight must travel at an oblique angle through the stellar gas when looking near the limb. In other words, the solar radius at which we see the optical depth as being unity increases as we move our line of sight towards the limb.
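
The 1/e escape fraction and the slant-path geometry can be sketched numerically. A minimal illustration (the exponential attenuation law is standard; the simple cos θ slant factor assumes a plane-parallel atmosphere):

```python
import math

# Fraction of photons escaping from optical depth tau is exp(-tau);
# tau = 1 defines the visible "surface" of the star.
def escape_fraction(tau):
    return math.exp(-tau)

# Looking at angle theta from the local vertical, the slant path
# reaches tau = 1 at a shallower physical depth: the vertical
# optical depth actually probed is only cos(theta).
def vertical_depth_probed(theta_deg, slant_tau=1.0):
    return slant_tau * math.cos(math.radians(theta_deg))

print(escape_fraction(1.0))        # ~0.368, i.e. 1/e
print(vertical_depth_probed(0))    # disc centre: tau_vertical = 1
print(vertical_depth_probed(80))   # near the limb: ~0.17
```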

The second effect is the fact that the effective temperature of the stellar atmosphere decreases with increasing distance from the center of the star. The radiation emitted from a gas is a strongly increasing function of temperature. For a black body, for example, the spectrally integrated intensity is proportional to the fourth power of the temperature (Stefan–Boltzmann law). Since, when we look at a star, the radiation to a first approximation comes from the point at which the optical depth is unity, and that point is deeper in when looking at the center, the temperature will be higher, and the intensity will be greater, than when we look at the limb.
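
Together these two effects produce the classical limb-darkening law. In the Eddington approximation for a grey atmosphere, the centre-to-limb variation takes the simple linear form I(μ)/I(centre) = (2 + 3μ)/5, where μ is the cosine of the angle from disc centre; treating this as exact for a real star is an idealization:

```python
# Eddington-approximation limb darkening for a grey atmosphere:
# I(mu)/I(centre) = (2 + 3*mu)/5, mu = cos(angle from disc centre).
def eddington_limb_darkening(mu):
    return 0.4 + 0.6 * mu

print(eddington_limb_darkening(1.0))  # disc centre: 1.0
print(eddington_limb_darkening(0.0))  # extreme limb: 0.4
```

So even this idealized star appears only 40% as bright at the extreme limb as at disc centre.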

In fact, the temperature in the atmosphere of a star does not always decrease with increasing height, and for certain spectral lines, the optical depth is unity in a region of increasing temperature. In this case we see the phenomenon of “limb brightening”. At very long and very short (EUV to X-ray) wavelengths the situation becomes much more complicated. A coronal emission such as soft X-radiation will be optically thin and thus be characteristically limb-brightened. Further complication comes from the existence of rough (three-dimensional) structure. The classical analysis of stellar limb darkening, as described below, assumes the existence of a smooth hydrostatic equilibrium, and at some level of precision this assumption must fail (most obviously in sunspots and faculae, but generally everywhere). The analysis of these effects is presently in its infancy because of its computational difficulty.

What is “Visual system” ?

Visual system

The visual system is the part of the central nervous system which gives organisms the ability to process visual detail, as well as enabling the formation of several non-image photo response functions. It detects and interprets information from visible light to build a representation of the surrounding environment. The visual system carries out a number of complex tasks, including the reception of light and the formation of monocular representations; the buildup of a binocular perception from a pair of two-dimensional projections; the identification and categorization of visual objects; assessing distances to and between objects; and guiding body movements in relation to visual objects. The psychological process of interpreting visual information is known as visual perception, a lack of which is called blindness. Non-image-forming visual functions, independent of visual perception, include the pupillary light reflex and circadian photoentrainment.

Different species are able to see different parts of the light spectrum; for example, bees can see into the ultraviolet, while pit vipers can accurately target prey with their pit organs, which are sensitive to infrared radiation. The eye of a swordfish can generate heat to better cope with detecting its prey at depths of 2000 feet.

In the second half of the 19th century, many motifs of the nervous system were identified, such as the neuron doctrine and brain localization, which related to the neuron being the basic unit of the nervous system and functional localisation in the brain, respectively. These would become tenets of the fledgling neuroscience and would support further understanding of the visual system.

The notion that the cerebral cortex is divided into functionally distinct cortices now known to be responsible for capacities such as touch, movement (motor cortex), and vision (visual cortex), was first proposed by Franz Joseph Gall in 1810. Evidence for functionally distinct areas of the brain (and, specifically, of the cerebral cortex) mounted throughout the 19th century with discoveries by Paul Broca of the language center (1861), and Gustav Fritsch and Edouard Hitzig of the motor cortex (1871). Based on selective damage to parts of the brain and the functional effects this would produce (lesion studies), David Ferrier proposed that visual function was localized to the parietal lobe of the brain in 1876. In 1881, Hermann Munk more accurately located vision in the occipital lobe, where the primary visual cortex is now known to be.

The eye is a complex biological device. The functioning of a camera is often compared with the workings of the eye, mostly since both focus light from external objects in the field of view onto a light-sensitive medium. In the case of the camera, this medium is film or an electronic sensor; in the case of the eye, it is an array of visual receptors. With this simple geometrical similarity, based on the laws of optics, the eye functions as a transducer, as does a CCD camera.

Light entering the eye is refracted as it passes through the cornea. It then passes through the pupil and is further refracted by the lens. The cornea and lens act together as a compound lens to project an inverted image onto the retina.
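
As a back-of-envelope illustration of this compound-lens picture, one can add the approximate powers of the cornea and relaxed lens; the dioptre values below are typical textbook figures, not from this text, and the thin-lenses-in-contact assumption ignores their separation:

```python
# Back-of-envelope eye optics, assuming typical textbook values
# (cornea ~43 dioptres, relaxed lens ~17 dioptres) and treating
# the two as thin lenses in contact.
CORNEA_D = 43.0
LENS_D = 17.0

total_power = CORNEA_D + LENS_D          # ~60 dioptres
focal_length_mm = 1000.0 / total_power   # ~17 mm, roughly the
                                         # cornea-to-retina distance
print(total_power, round(focal_length_mm, 1))
```

The recovered focal length of about 17 mm is close to the axial length available inside the eye, which is why a sharp inverted image can land on the retina.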

The retina consists of a large number of photoreceptor cells which contain particular protein molecules called opsins. In humans, two types of opsins are involved in conscious vision: rod opsins and cone opsins. (A third type, melanopsin, found in some retinal ganglion cells (RGCs) and part of the body clock mechanism, is probably not involved in conscious vision, as these RGCs don’t project to the lateral geniculate nucleus (LGN) but to the pretectal olivary nucleus (PON).) An opsin absorbs a photon (a particle of light) and transmits a signal to the cell through a signal transduction pathway, resulting in hyper-polarization of the photoreceptor. (For more information, see Photoreceptor cell.)

Rods and cones differ in function. Rods are found primarily in the periphery of the retina and are used to see at low levels of light. Cones are found primarily in the center of the retina. There are three types of cones that differ in the wavelengths of light they absorb; they are usually called short or blue, middle or green, and long or red. Cones are used primarily to distinguish colour and other features of the visual world at normal levels of light.

In the retina, the photo-receptors synapse directly onto bipolar cells, which in turn synapse onto ganglion cells of the outermost layer, which will then conduct action potentials to the brain. A significant amount of visual processing arises from the patterns of communication between neurons in the retina. About 130 million photo-receptors absorb light, yet roughly 1.2 million axons of ganglion cells transmit information from the retina to the brain. The processing in the retina includes the formation of center-surround receptive fields of bipolar and ganglion cells in the retina, as well as convergence and divergence from photoreceptor to bipolar cell. In addition, other neurons in the retina, particularly horizontal and amacrine cells, transmit information laterally, resulting in more complex receptive fields that can be either indifferent to colour and sensitive to motion or sensitive to colour and indifferent to motion.

Mechanism of generating visual signals: The retina adapts to change in light through the use of the rods. In the dark, the chromophore retinal has a bent shape called cis-retinal. When light is present, the retinal changes to a straight form called trans-retinal and breaks away from the opsin. This is called bleaching because the purified rhodopsin changes from violet to colorless in the light. At baseline in the dark, the rhodopsin absorbs no light and releases glutamate, which inhibits the bipolar cell. This inhibits the release of neurotransmitters from the bipolar cells to the ganglion cell. When there is light present, glutamate secretion ceases, thus no longer inhibiting the bipolar cell from releasing neurotransmitters to the ganglion cell, and therefore an image can be detected.

A 2006 University of Pennsylvania study calculated the approximate bandwidth of human retinas to be about 8960 kilobits per second, whereas guinea pig retinas transfer at about 875 kilobits per second.

In the visual system, retinal, technically called retinene1 or “retinaldehyde”, is a light-sensitive molecule found in the rods and cones of the retina. Retinal is the fundamental structure involved in the transduction of light into visual signals, i.e. nerve impulses in the ocular system of the central nervous system. In the presence of light, the retinal molecule changes configuration and as a result a nerve impulse is generated.

The information about the image via the eye is transmitted to the brain along the optic nerve. Different populations of ganglion cells in the retina send information to the brain through the optic nerve. About 90% of the axons in the optic nerve go to the lateral geniculate nucleus in the thalamus. These axons originate from the M, P, and K ganglion cells in the retina, see above. This parallel processing is important for reconstructing the visual world; each type of information will go through a different route to perception. Another population sends information to the superior colliculus in the midbrain, which assists in controlling eye movements as well as other motor responses.

A final population of photosensitive ganglion cells, containing melanopsin, sends information via the retinohypothalamic tract to the pretectum (pupillary reflex), to several structures involved in the control of circadian rhythms and sleep such as the suprachiasmatic nucleus (SCN, the biological clock), and to the ventrolateral preoptic nucleus (VLPO, a region involved in sleep regulation). A recently discovered role for photoreceptive ganglion cells is that they mediate conscious and unconscious vision – acting as rudimentary visual brightness detectors as shown in rodless coneless eyes.

The optic nerves from both eyes meet and cross at the optic chiasm, at the base of the hypothalamus of the brain. At this point the information coming from both eyes is combined and then splits according to the visual field. The corresponding halves of the field of view are sent to the left and right halves of the brain, respectively, to be processed. That is, the right side of primary visual cortex deals with the left half of the field of view from both eyes, and similarly for the left brain. A small region in the center of the field of view is processed redundantly by both halves of the brain.

Information from the right visual field travels in the left optic tract. Information from the left visual field travels in the right optic tract. Each optic tract terminates in the lateral geniculate nucleus (LGN) in the thalamus.

The lateral geniculate nucleus is a sensory relay nucleus in the thalamus of the brain. The LGN consists of six layers in humans and other primates starting with the catarrhines, including the Cercopithecidae and apes. Layers 1, 4, and 6 correspond to information from the contralateral (crossed) fibers of the nasal retina (temporal visual field); layers 2, 3, and 5 correspond to information from the ipsilateral (uncrossed) fibers of the temporal retina (nasal visual field). Layer one (1) contains M cells, which correspond to the M (magnocellular) cells of the optic nerve of the opposite eye and are concerned with depth or motion. Layers four and six (4 & 6) of the LGN also connect to the opposite eye, but to the P cells (color and edges) of the optic nerve. By contrast, layers two, three and five (2, 3, & 5) of the LGN connect to the M cells and P (parvocellular) cells of the optic nerve for the same side of the brain as its respective LGN. Spread out, the six layers of the LGN are the area of a credit card and about three times its thickness. The LGN is rolled up into two ellipsoids about the size and shape of two small birds’ eggs. In between the six layers are smaller cells that receive information from the K cells (color) in the retina. The neurons of the LGN then relay the visual image to the primary visual cortex (V1), which is located at the back of the brain (caudal end) in the occipital lobe in and close to the calcarine sulcus. The LGN isn’t just a simple relay station but is also a center for processing; it receives reciprocal input from the cortical and subcortical layers and reciprocal innervation from the visual cortex.
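
The layer wiring described above can be summarized as a small lookup table. The sketch below follows the conventional assignment of layers 1–2 to magnocellular (M) and 3–6 to parvocellular (P) cells; the eye assignments (1, 4, 6 contralateral; 2, 3, 5 ipsilateral) are from the text:

```python
# Tabulating the human/primate LGN layer wiring described above.
# "contra" = crossed fibers from the opposite eye's nasal retina,
# "ipsi"   = uncrossed fibers from the same-side eye's temporal retina.
lgn_layers = {
    1: {"eye": "contra", "cell_type": "M"},
    2: {"eye": "ipsi",   "cell_type": "M"},
    3: {"eye": "ipsi",   "cell_type": "P"},
    4: {"eye": "contra", "cell_type": "P"},
    5: {"eye": "ipsi",   "cell_type": "P"},
    6: {"eye": "contra", "cell_type": "P"},
}

contra = sorted(k for k, v in lgn_layers.items() if v["eye"] == "contra")
print("contralateral layers:", contra)  # [1, 4, 6], matching the text
```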

The optic radiations, one on each side of the brain, carry information from the thalamic lateral geniculate nucleus to layer 4 of the visual cortex. The P layer neurons of the LGN relay to V1 layer 4C β. The M layer neurons relay to V1 layer 4C α. The K layer neurons in the LGN relay to large neurons called blobs in layers 2 and 3 of V1.

There is a direct correspondence from an angular position in the field of view of the eye, all the way through the optic tract to a nerve position in V1. At this juncture in V1, the image path ceases to be straightforward; there is more cross-connection within the visual cortex.

The visual cortex is the largest system in the human brain and is responsible for processing the visual image. It lies at the rear of the brain, above the cerebellum. The region that receives information directly from the LGN is called the primary visual cortex, (also called V1 and striate cortex). Visual information then flows through a cortical hierarchy. These areas include V2, V3, V4 and area V5/MT (the exact connectivity depends on the species of the animal). These secondary visual areas (collectively termed the extrastriate visual cortex) process a wide variety of visual primitives. Neurons in V1 and V2 respond selectively to bars of specific orientations, or combinations of bars. These are believed to support edge and corner detection. Similarly, basic information about colour and motion is processed here.

As visual information passes forward through the visual hierarchy, the complexity of the neural representations increases. Whereas a V1 neuron may respond selectively to a line segment of a particular orientation in a particular retinotopic location, neurons in the lateral occipital complex respond selectively to complete objects, and neurons in visual association cortex may respond selectively to human faces, or to a particular object.

Along with this increasing complexity of neural representation may come a level of specialization of processing into two distinct pathways: the dorsal stream and the ventral stream. The dorsal stream, commonly referred to as the “where” stream, is involved in spatial attention (covert and overt), and communicates with regions that control eye movements and hand movements. More recently, this area has been called the “how” stream to emphasize its role in guiding behaviors to spatial locations. The ventral stream, commonly referred to as the “what” stream, is involved in the recognition, identification and categorization of visual stimuli.

However, there is still much debate about the degree of specialization within these two pathways, since they are in fact heavily interconnected.

Along with proprioception and vestibular function, the visual system plays an important role in the ability of an individual to control balance and maintain an upright posture. When these three sources of information are isolated and balance is tested, it has been found that vision is the most significant contributor to balance, playing a bigger role than either of the two other intrinsic mechanisms. The clarity with which an individual can see their environment, as well as the size of the visual field, the individual’s susceptibility to light and glare, and depth perception all play important roles in providing a feedback loop to the brain on the body’s movement through the environment. Anything that affects any of these variables can have a negative effect on balance and maintaining posture. This effect has been seen in research involving elderly subjects when compared to young controls, in glaucoma patients compared to age-matched controls, in cataract patients before and after surgery, and even in something as simple as wearing safety goggles. Monocular vision has also been shown to negatively impact balance, which was seen in the previously referenced cataract and glaucoma studies, as well as in healthy children and adults.

What is “Colour constancy” ?

Colour constancy

Color constancy is an example of subjective constancy and a feature of the human colour perception system which ensures that the perceived colour of objects remains relatively constant under varying illumination conditions. A green apple for instance looks green to us at midday, when the main illumination is white sunlight, and also at sunset, when the main illumination is red. This helps us identify objects.

The physiological basis for colour constancy is thought to involve specialized neurons in the primary visual cortex that compute local ratios of cone activity, which is the same calculation that Land’s retinex algorithm uses to achieve colour constancy. These specialized cells are called double-opponent cells because they compute both colour opponency and spatial opponency. Double-opponent cells were first described by Nigel Daw in the goldfish retina. There was considerable debate about the existence of these cells in the primate visual system; their existence was eventually proven using reverse-correlation receptive field mapping and special stimuli that selectively activate single cone classes at a time, so-called “cone-isolating” stimuli.

Color constancy works only if the incident illumination contains a range of wavelengths. The different cone cells of the eye register different ranges of wavelengths of the light reflected by every object in the scene. From this information, the visual system attempts to determine the approximate composition of the illuminating light. This illumination is then discounted in order to obtain the object’s “true color” or reflectance: the wavelengths of light the object reflects. This reflectance then largely determines the perceived color.

The effect was described in 1971 by Edwin H. Land, who formulated “retinex theory” to explain it. The word “retinex” is a portmanteau formed from “retina” and “cortex”, suggesting that both the eye and the brain are involved in the processing.

The effect can be experimentally demonstrated as follows. A display called a “Mondrian” consisting of numerous colored patches is shown to a person. The display is illuminated by three white lights, one projected through a red filter, one projected through a green filter, and one projected through a blue filter. The person is asked to adjust the intensity of the lights so that a particular patch in the display appears white. The experimenter then measures the intensities of red, green, and blue light reflected from this white-appearing patch. Then the experimenter asks the person to identify the colour of a neighboring patch, which, for example, appears green. Then the experimenter adjusts the lights so that the intensities of red, blue, and green light reflected from the green patch are the same as were originally measured from the white patch. The person shows colour constancy in that the green patch continues to appear green, the white patch continues to appear white, and all the remaining patches continue to have their original colors.

Color constancy is a desirable feature of computer vision, and many algorithms have been developed for this purpose. These include several retinex algorithms. These algorithms receive as input the red/green/blue values of each pixel of the image and attempt to estimate the reflectances of each point. One such algorithm operates as follows: the maximal red value rmax of all pixels is determined, and also the maximal green value gmax and the maximal blue value bmax. Assuming that the scene contains objects which reflect all red light, and objects which reflect all green light and still others which reflect all blue light, one can then deduce that the illuminating light source is described by (rmax, gmax, bmax). For each pixel with values (r, g, b) its reflectance is estimated as (r/rmax, g/gmax, b/bmax). The original retinex algorithm proposed by Land and McCann uses a localized version of this principle.
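
The white-patch estimate described in this paragraph is easy to sketch. A minimal, illustrative version (real retinex implementations, including Land and McCann’s, are localized and considerably more involved):

```python
# Minimal sketch of the "white patch" estimate described above:
# the brightest value seen in each channel is taken to describe
# the illuminant, and each pixel is divided by it to estimate
# that point's reflectance.
def estimate_reflectances(pixels):
    """pixels: list of (r, g, b) tuples with positive values."""
    r_max = max(p[0] for p in pixels)
    g_max = max(p[1] for p in pixels)
    b_max = max(p[2] for p in pixels)
    return [(r / r_max, g / g_max, b / b_max) for r, g, b in pixels]

image = [(200, 100, 50), (100, 100, 100), (50, 25, 100)]
print(estimate_reflectances(image))
```

The estimate only works when the scene really does contain near-perfect reflectors in each channel, which is exactly the assumption stated in the text.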

Although retinex models are still widely used in computer vision, they have been shown not to accurately model human colour perception.

What is “Opposition effect” ?

Opposition effect

The opposition surge is the brightening of a rough surface, or an object with many particles, when illuminated from directly behind the observer. The term is most widely used in astronomy, where generally it refers to the sudden noticeable increase in the brightness of a celestial body such as a planet, moon, or comet as its phase angle of observation approaches zero. It is so named because the reflected light from the Moon and Mars appears significantly brighter than predicted when at astronomical opposition. Two physical mechanisms have been proposed for this observational phenomenon: shadow hiding and coherent backscatter.

The phase angle is defined as the angle between the observer, the observed object and the source of light. In the case of the solar system, the light source is the Sun, and the observer is situated on Earth.
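
This definition translates directly into a dot-product computation: the phase angle is the angle at the object between the direction to the light source and the direction to the observer. A small sketch using illustrative 2-D position vectors:

```python
import math

# Phase angle: the angle at the observed object between the
# directions to the light source (Sun) and to the observer (Earth).
def phase_angle_deg(obj_to_sun, obj_to_earth):
    dot = sum(a * b for a, b in zip(obj_to_sun, obj_to_earth))
    norm = math.hypot(*obj_to_sun) * math.hypot(*obj_to_earth)
    c = max(-1.0, min(1.0, dot / norm))  # clamp rounding noise
    return math.degrees(math.acos(c))

# At opposition the Sun and the Earth lie in the same direction
# as seen from the object, so the phase angle is ~0 degrees.
print(phase_angle_deg((1.0, 0.0), (1.0, 0.0)))  # 0.0
print(phase_angle_deg((1.0, 0.0), (0.0, 1.0)))  # 90.0
```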

As the phase angle of an object lit by the Sun decreases, the object’s brightness rapidly increases. This is mainly due to the increased area lit, but is also partly due to the intrinsic brightness of the part that is sunlit. This is affected by such factors as the angle at which light reflected from the object is observed. For this reason, a full moon is more than twice as bright as the moon at first or third quarter, even though the visible area illuminated appears to be exactly twice as large.
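
The purely geometric part of this claim (the factor of two in lit area) can be checked with the standard illuminated-fraction formula (1 + cos α)/2; as the text notes, the observed full-to-quarter brightness ratio is larger than this geometry alone predicts:

```python
import math

# Fraction of the visible disc that is sunlit at phase angle alpha
# (pure geometry, ignoring the surface's scattering behaviour).
def illuminated_fraction(alpha_deg):
    return (1.0 + math.cos(math.radians(alpha_deg))) / 2.0

full = illuminated_fraction(0)      # 1.0 at full moon
quarter = illuminated_fraction(90)  # 0.5 at first/third quarter
print(full / quarter)               # area alone predicts only 2x
```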

When the angle of reflection is close to the angle at which the light’s rays hit the surface, this intrinsic brightness is usually close to its maximum. At a phase angle of zero degrees, these shadow areas become negligible. The celestial body in effect becomes an imperfect mirror. When phase angles approach zero, there is a sudden increase in apparent brightness, and this sudden increase is referred to as the opposition surge.

The effect is particularly pronounced on regolith surfaces of airless bodies in the solar system. The usual major cause of the effect is that a surface’s small pores and pits that would otherwise be in shadow at other incidence angles become lit up when the observer is almost in the same line as the source of illumination. The effect is usually only visible for a very small range of phase angles near zero. For bodies whose reflectance properties have been quantitatively studied, details of the opposition effect — its strength and angular extent — are described by two of the Hapke parameters. In the case of planetary rings, an opposition surge is due to the covering of shadows on the ring particles. This explanation was first proposed by Hugo von Seeliger in 1887.
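
The two Hapke parameters mentioned here (a surge amplitude and an angular width) enter through a shadow-hiding surge term of the form B(α) = 1 + B₀ / (1 + tan(α/2)/h). A sketch with made-up parameter values, for illustration only:

```python
import math

# Illustrative shadow-hiding opposition surge term from Hapke's
# model. B0 sets the surge strength and h its angular width; the
# values used below are invented purely for display.
def opposition_surge_factor(alpha_deg, B0=1.0, h=0.05):
    t = math.tan(math.radians(alpha_deg) / 2.0)
    return 1.0 + B0 / (1.0 + t / h)

print(opposition_surge_factor(0.0))   # 2.0: full surge at alpha = 0
print(opposition_surge_factor(20.0))  # ~1.2: surge mostly gone
```

The narrow width parameter h is what confines the surge to the very small range of phase angles near zero described in the text.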

An alternate theory for the increase in brightness during opposition is that of coherent backscatter. In the case of coherent backscatter, the reflected light is enhanced at narrow angles if the size of the scatterers in the surface of the body is comparable to the wavelength of light and the distance between scattering particles is greater than a wavelength. The increase in brightness is due to the reflected light combining coherently with the emitted light.

The existence of the opposition surge was first recorded in 1956 by Tom Gehrels during his study of the reflected light from an asteroid. Gehrels’ later studies showed that the same effect could be shown in the moon’s brightness. He coined the term “opposition effect” for the phenomenon, but the more intuitive “opposition surge” is now more widely used.

Since Gehrels’ early studies, an opposition surge has been noted for most airless solar system bodies. No such surge has been reported for the gas giants, nor for bodies with pronounced atmospheres.

In the case of the Moon, B. J. Buratti et al. have suggested that its brightness increases by some 40% between a phase angle of 4° and one of 0°, and that the increase is greater for the rougher-surfaced highland areas than for the relatively smooth maria. As for the principal mechanism of the phenomenon, measurements indicate that the opposition effect exhibits only a small wavelength dependence: the surge is 3–4% larger at 0.41 μm than at 1.00 μm. This result suggests that the principal cause of the lunar opposition surge is shadow-hiding rather than coherent backscatter.

What is “Tidal locking” ?

Tidal locking

Tidal locking occurs when the gravitational gradient makes one side of an astronomical body always face another, an effect known as synchronous rotation. For example, the same side of the Earth’s Moon always faces the Earth. A tidally locked body takes just as long to rotate around its own axis as it does to revolve around its partner. This causes one hemisphere constantly to face the partner body. Usually, at any given time only the satellite is tidally locked around the larger body, but if the difference in mass between the two bodies and their physical separation is small, each may be tidally locked to the other, as is the case between Pluto and Charon. This effect is employed to stabilize some artificial satellites.

The change in rotation rate necessary to tidally lock a body B to a larger body A is caused by the torque applied by A’s gravity on bulges it has induced on B by tidal forces.

A’s gravity produces a tidal force on B which distorts its gravitational equilibrium shape slightly so that it becomes elongated along the axis oriented toward A, and conversely, is slightly reduced in dimension in directions perpendicular to this axis. These distortions are known as tidal bulges. When B isn’t yet tidally locked, the bulges travel over its surface, with one of the two “high” tidal bulges traveling close to the point where body A is overhead. For large astronomical bodies which are near-spherical due to self-gravitation, the tidal distortion produces a slightly prolate spheroid – i.e., an axially symmetric ellipsoid that is elongated along its major axis. Smaller bodies also experience distortion, but this distortion is less regular.

The material of B exerts resistance to this periodic reshaping caused by the tidal force. In effect, some time is required to reshape B to the gravitational equilibrium shape, by which time the forming bulges have already been carried some distance away from the A–B axis by B’s rotation. Seen from a vantage point in space, the points of maximum bulge extension are displaced from the axis oriented towards A. If B’s rotation period is shorter than its orbital period, the bulges are carried forward of the axis oriented towards A in the direction of rotation, whereas if B’s rotation period is longer the bulges lag behind instead.

Since the bulges are now displaced from the A–B axis, A’s gravitational pull on the mass in them exerts a torque on B. The torque on the A-facing bulge acts to bring B’s rotation in line with its orbital period, while the “back” bulge which faces away from A acts in the opposite sense. However, the bulge on the A-facing side is closer to A than the back bulge by a distance of approximately B’s diameter, and so experiences a slightly stronger gravitational force and torque. The net resulting torque from both bulges, then, is always in the direction which acts to synchronize B’s rotation with its orbital period, leading eventually to tidal locking.

The angular momentum of the whole A–B system is conserved in this process, so that when B slows down and loses rotational angular momentum, its orbital angular momentum is boosted by a similar amount. This results in a raising of B’s orbit about A in tandem with its rotational slowdown. For the other case where B starts off rotating too slowly, tidal locking both speeds up its rotation, and lowers its orbit.

The tidal locking effect is also experienced by the larger body A, but at a slower rate because B’s gravitational effect is weaker due to B’s smaller size. For example, the Earth’s rotation is gradually slowing down because of the Moon, by an amount that becomes noticeable over geological time in some fossils. For bodies of similar size the effect may be of comparable size for both, and both may become tidally locked to each other. The dwarf planet Pluto and its satellite Charon are good examples of this—Charon is only visible from one hemisphere of Pluto and vice versa.

Finally, in some cases where the orbit is eccentric and the tidal effect is relatively weak, the smaller body may end up in an orbital resonance, rather than tidally locked. Here the ratio of rotation period to orbital period is some well-defined fraction different from 1:1. A well known case is the rotation of Mercury—locked to its orbit around the Sun in a 3:2 resonance.

Most significant moons in the Solar System are tidally locked with their primaries, since they orbit very closely and tidal force increases rapidly with decreasing distance. Notable exceptions are the irregular outer satellites of the gas giant planets, which orbit much farther away than the large well-known moons.

The tidal locking situation for asteroid moons is largely unknown, but closely orbiting binaries are expected to be tidally locked, as well as contact binaries.

The Moon’s rotation and orbital periods are tidally locked with each other, so no matter when the Moon is observed from the Earth the same hemisphere of the Moon is always seen. The far side of the Moon wasn’t seen in its entirety until 1959, when photographs were transmitted from the Soviet spacecraft Luna 3.

Despite the Moon’s rotational and orbital periods being exactly locked, about 59% of the Moon’s total surface may be seen with repeated observations from Earth due to the phenomena of libration and parallax. Librations are primarily caused by the Moon’s varying orbital speed due to the eccentricity of its orbit: this allows up to about 6° more along its perimeter to be seen from Earth. Parallax is a geometric effect: at the surface of the Earth we are offset from the line through the centers of Earth and Moon, and because of this we can observe a bit more around the side of the Moon when it is on our local horizon.
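
The parallax contribution can be estimated with one line of trigonometry, using mean values for the Earth’s radius and the Moon’s distance:

```python
import math

# Extra lunar longitude visible due to parallax: an observer on
# the Earth's surface sits up to one Earth radius off the line
# through the centers of Earth and Moon when the Moon is on the
# local horizon.
EARTH_RADIUS_KM = 6371.0
MOON_DISTANCE_KM = 384400.0  # mean Earth-Moon distance

parallax_deg = math.degrees(math.asin(EARTH_RADIUS_KM / MOON_DISTANCE_KM))
print(round(parallax_deg, 2))  # ~0.95 deg of extra perimeter
```

Roughly one degree — small compared with the ~6° from libration, consistent with “a bit more around the side”.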

It was thought for some time that Mercury was tidally locked with the Sun. This was because whenever Mercury was best placed for observation, the same side faced inward. Radar observations in 1965 demonstrated instead that Mercury has a 3:2 spin–orbit resonance, rotating three times for every two revolutions around the Sun, which results in the same positioning at those observation points. The eccentricity of Mercury’s orbit makes this 3:2 resonance stable.
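The 3:2 resonance means Mercury completes exactly three rotations for every two orbits, so its sidereal rotation period is two-thirds of its orbital period; it also follows that one solar day on Mercury (noon to noon) spans two full orbits, since three rotations minus two revolutions leaves one net turn relative to the Sun. A quick check, assuming the commonly quoted 87.97-day orbital period (a round published value, not from the text):

```python
ORBITAL_PERIOD_DAYS = 87.97   # Mercury's orbital period (round published value)

# 3 rotations per 2 orbits -> rotation period = 2/3 of the orbital period
rotation_period = ORBITAL_PERIOD_DAYS * 2 / 3
print(f"rotation period ~ {rotation_period:.1f} days")   # ~58.6 days

# 3 rotations - 2 revolutions = 1 net turn relative to the Sun,
# so one solar day spans two full orbits
solar_day = 2 * ORBITAL_PERIOD_DAYS
print(f"solar day ~ {solar_day:.1f} days")               # ~175.9 days
```

The 58.6-day rotation period is what the 1965 radar observations measured, ruling out the 88-day period that a 1:1 lock would have required.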

Close binary stars throughout the universe are expected to be tidally locked with each other, and extrasolar planets that have been found to orbit their primaries extremely closely are also thought to be tidally locked to them. An unusual example, confirmed by MOST, is Tau Boötis, a star tidally locked by a planet. The tidal locking is almost certainly mutual.

Even knowing the size and density of the satellite leaves many parameters that must be estimated, so that any calculated locking times are expected to be inaccurate, even to factors of ten. Further, during the tidal locking phase the orbital radius a may have been significantly different from that observed nowadays due to subsequent tidal acceleration, and the locking time is extremely sensitive to this value.

Assuming a spherical satellite and typical parameter values, the locking timescale can be estimated as t_lock ≈ 6 a⁶RμQ / (m_s m_p²) × 10¹⁰ years, where a is the orbital radius, R the satellite’s radius, μ its rigidity, Q its dissipation function, m_s its mass, and m_p the planet’s mass, with masses in kilograms, distances in metres, and μ in N·m⁻². μ can be roughly taken as 3×10¹⁰ N·m⁻² for rocky objects and 4×10⁹ N·m⁻² for icy ones.
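As a rough illustration, a commonly quoted simplified estimate, t_lock ≈ 6 a⁶RμQ / (m_s m_p²) × 10¹⁰ years (masses in kilograms, distances in metres, μ in N·m⁻²), can be evaluated for Earth–Moon-like parameters. The input values below are round published figures and Q = 100 is a conventional guess, so, as the text stresses, the result is only order-of-magnitude:

```python
def lock_time_years(a_m, r_sat_m, mu, q_factor, m_sat_kg, m_planet_kg):
    """Rough tidal-locking timescale (years) from the simplified estimate
    t_lock ~ 6 a^6 R mu Q / (m_s m_p^2) x 10^10, SI units throughout."""
    return 6 * a_m**6 * r_sat_m * mu * q_factor / (m_sat_kg * m_planet_kg**2) * 1e10

# Earth-Moon-like inputs (round published values; mu = 3e10 N m^-2 for a
# rocky body, Q = 100 is a conventional guess)
t = lock_time_years(
    a_m=3.84e8,          # orbital radius (m)
    r_sat_m=1.74e6,      # satellite radius (m)
    mu=3e10,             # rigidity, rocky body (N m^-2)
    q_factor=100,        # dissipation function (dimensionless)
    m_sat_kg=7.35e22,    # satellite mass (kg)
    m_planet_kg=5.97e24, # planet mass (kg)
)
print(f"t_lock ~ {t:.1e} years")
```

The result comes out in the hundreds of millions of years, short compared with the age of the Solar System, consistent with the Moon having locked long ago; the a⁶ factor is why such estimates are so sensitive to the assumed orbital radius.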

Note the extremely strong dependence on orbital radius a.

For the locking of a primary body to its moon as in the case of Pluto, satellite and primary body parameters can be interchanged.

One conclusion is that, other things being equal, a large moon will lock faster than a smaller moon at the same orbital radius from the planet, because the satellite’s mass grows as the cube of its radius. A possible example of this is in the Saturn system, where Hyperion is not tidally locked, while the larger Iapetus, which orbits at a greater distance, is. However, this is not clear-cut, because Hyperion also experiences strong driving from the nearby Titan, which forces its rotation to be chaotic.

The above formulae for the timescale of locking may be off by orders of magnitude, because they ignore the frequency dependence of the bodies’ tidal response.
