The Hubble Space Telescope (HST) returns stunning images of objects up to 10 billion light-years across the Universe. What if someone asked you, "What can Hubble see on really close objects in 'our neighborhood,' such as the Moon or Pluto?" The answer may be surprising and is not at all difficult to compute. The purpose of this writeup is to encourage the curious to be only a pocket calculator away from such 'answers' at all times.
How sharp the image can be depends on the diameter of the primary optics (lens or mirror) and the wavelength of light being observed. Diameter and wavelength combine to determine the "Resolving Power": the minimum angle between two distant objects (such as double stars) that the telescope can just resolve as "being two objects". The resolving angle, called theta (θ), is expressed in units such as radians, degrees, or arc-sec. The two length measurements need only be converted to identical units to produce (radian) angles; I like to use inches. For white light the average wavelength is very near 22 millionths of an inch (0.000022 inches). As for Hubble, the 'length' across the primary mirror diameter is 94.5 inches.
In radians, the HST resolving power, or smallest angle theta that can be resolved, is: θ = 1.22λ/d = 1.22 × 22E-6 / 94.5 = 0.000000284 radians. So now the physical size of a 'pixel' projected on a distant object [for that telescope] can be determined. All that is needed is the distance to the object; for Pluto that is 30 AU = 2.8E+9 miles. The pixel size is Distance × Theta = 2.8E+9 × 0.000000284 = 794 miles. Why are the units miles? Because the pixel size comes out in the same units as the distance to the object, in this example, miles.
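The arithmetic above fits in a few lines of code; here is a sketch, with Python standing in for the pocket calculator and using only the numbers from the text:

```python
# HST diffraction-limited resolving angle and projected pixel size at Pluto.
wavelength = 22e-6       # average white-light wavelength, inches
diameter = 94.5          # HST primary mirror diameter, inches

theta = 1.22 * wavelength / diameter    # resolving angle, radians (~2.84e-7)
print(f"theta = {theta:.3e} rad")

distance_pluto = 2.8e9                  # 30 AU expressed in miles
pixel = distance_pluto * theta          # projected 'pixel' size, in miles
print(f"pixel at Pluto = {pixel:.0f} miles")   # about 795 miles
```

Change `distance_pluto` to any other distance (in any unit) and the pixel size comes back in that same unit.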
I like to think about the resolution angle of a telescope (in radians) being equal to the reciprocal of (No. of λ's) needed to stretch across the diameter of the primary mirror; actually, it is 1/(No. of λ's) needed to stretch across 82% of the mirror diameter.
Pluto is about 1450 miles in diameter. So 1450/794 is 1.82, or about 2 pixels. Make a pixel array 2x2 for a total of 4 pixels and light from Pluto will just fill that 'image'. But the circular shape of Pluto cannot be represented by 4 pixels (actually, 1.82² ≈ 3.3 sq-pix). Each pixel is a square sensor and can only have one intensity; this means that HST looking at Pluto will have less than 4 pixels of useful information somewhere in the Hubble image array, as shown on the Pluto image below.
An interesting exercise (above) is to draw square arrays or matrices from 2x2 to 10x10 (tic-tac-toe is 3x3) and fill in squares to represent a circle. For comparison, the Epson MX-100 used a 9x8 dot matrix to create printed ASCII characters; small print plus the mind's eye help reduce 'stair steps' to produce a smooth character appearance. A circle is impossible to represent on a single 2x2 pixel image and looks like a (+) on a 3x3 matrix. The 4 corners of the plus in a 3x3 do collect a bit of light, which helps the mind's eye start to interpret a circular shape. A 9x9 pixel grid begins looking pretty circular in the cartoon diagram.
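You can play with this matrix exercise without graph paper. The sketch below fills an n x n grid wherever a cell center falls inside a circle drawn through the centers of the edge cells; that fill rule is my own choice (one of several reasonable ones), but it reproduces the (+) on a 3x3 and a convincingly round shape on a 9x9:

```python
# Fill cells of an n x n grid whose centers lie inside a circle of
# radius (n-1)/2 centered on the grid (i.e., through the edge-cell centers).
def circle_grid(n):
    r = (n - 1) / 2.0
    rows = []
    for i in range(n):
        row = ""
        for j in range(n):
            # offset of cell (i, j)'s center from the grid center
            x, y = j + 0.5 - n / 2.0, i + 0.5 - n / 2.0
            row += "#" if x * x + y * y <= r * r else "."
        rows.append(row)
    return rows

for n in (3, 9):                       # the plus sign, then a rounder circle
    print("\n".join(circle_grid(n)), "\n")
```

On the 3x3 grid this prints the (+) described above; on the 9x9 the stair steps are small enough that the eye reads it as a circle.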
The right-hand image (above) shows the optimum pixel size overlaid on Pluto. Keep in mind that an actual HST pixel array could have larger or smaller pixels than the optimum shown. Likely, the CCD has smaller pixels; it is easy to overkill a smidge on pixel smallness (more pixels) and difficult to overkill on resolution (that needs a larger telescope mirror).
If the pixels were larger, then the resolution HST is capable of would not be realized (heresy!). If the pixels were, say, 4 times smaller, adjacent pairs of pixels would contain identical information since the telescope could not resolve that smaller radian angle. The diffraction limit bounds useful magnification; more pixels or more magnification yields no improvement in the image.
NASA is clever beyond my ability to articulate. Algorithms can superimpose multiple images, and dithering (accurately known motion of the image across pixel boundaries) can permit determination of the rough size and color of objects and features to more accuracy than resolution alone would permit from a single image. The 'dithering' technique works because Hubble's pointing accuracy angle is several times better than its resolving angle (θ). Colors can be enhanced. I saw an HST 'movie' of Pluto showing rotation and a dark feature; the image was, of course, blurry and lacking detail but looked extremely spherical. The JWST will image Pluto detail about 4x better than Hubble, but will still be quite limited. We have to wait until 2015 for the New Horizons spacecraft (by JHUAPL for NASA) to return really great pictures of Pluto, given that all goes well. Can't wait...
What about the Moon? An optimum Hubble pixel there is 360 feet square. A license tag would have to be about 0.4 miles tall and over 1 mile wide for Hubble to be able to read the tag number. On the other hand, a full pixel array at the Moon's distance would be 30,000 by 30,000 pixels, or would represent a 900-megapixel photo. The Moon is so bright that an ND filter would probably have to be inserted to protect the CCD array.
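The Moon numbers check out the same way. The distance and diameter below (about 238,900 and 2,159 miles) are standard values I have assumed, not figures from the text:

```python
# Project the HST resolving angle onto the Moon and size the pixel array.
theta = 1.22 * 22e-6 / 94.5           # HST resolving angle, radians

moon_distance_ft = 238_900 * 5280     # assumed Earth-Moon distance, miles -> feet
pixel_ft = moon_distance_ft * theta   # optimum pixel size on the Moon, feet

moon_diameter_ft = 2_159 * 5280       # assumed Moon diameter, miles -> feet
pixels_across = moon_diameter_ft / pixel_ft   # pixels needed to span the Moon

print(f"pixel = {pixel_ft:.0f} ft, array ~ {pixels_across:.0f} pixels across")
```

This lands near 360 feet per pixel and roughly 30,000 pixels across the disk, matching the 900-megapixel figure above.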
θ = 1.22λ/d came from diffraction-of-light experiments performed by Fraunhofer. Light from a point source was made to fall on a narrow slit of width (b), and the light passing through the slit was made to fall on a screen located at a distance (S). The light was found to be diffracted into bright and dark regions in a direction perpendicular to the length of the slit. The center region behind the slit was bright and dimmed to zero on each side at angles of mλ/b, where m is any integer ≠ zero. If the slit is replaced by a circular aperture the expression changes to θ = mλ/d, where the values of m are no longer integers. m = 1.22 (Lommel's first zero) gives the first minimum, which yields the resolving power of circular-aperture telescopes: θ = 1.22λ/d. It is Fraunhofer diffraction of light that sets the maximum resolving power of any telescope.
There are currently a host of telescopes in orbit that cover the electromagnetic spectrum from x-ray to infrared, plus very large-aperture ground-based radio telescopes. Infrared telescopes need a larger aperture (primary mirror) to achieve the same resolution as those for visible light because of the longer wavelength of IR light. The James Webb Space Telescope (JWST) is scheduled for launch in 2018. The JWST has a 256-inch segmented mirror made of gold-coated beryllium that will reflect wavelengths from 0.6 microns to 27 microns.
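To put a number on that aperture-versus-wavelength trade: since θ = 1.22λ/d, the diameter needed for a given resolving angle scales directly with wavelength. The sketch below asks what mirror would match HST's visible-light resolution at JWST's longest wavelength (the 550 nm visible wavelength and 2.4 m HST diameter are my assumed round numbers):

```python
# Required aperture scales linearly with wavelength for fixed resolution,
# because theta = 1.22 * wavelength / diameter.
vis = 0.55e-6            # assumed visible wavelength, meters (~550 nm)
ir = 27e-6               # long end of JWST's range, meters
d_hst = 2.4              # HST primary diameter, meters (94.5 inches)

d_needed = d_hst * ir / vis   # diameter matching HST's visible resolution at 27 microns
print(f"would need a {d_needed:.0f} m mirror")
```

The answer is over a hundred meters, which is why long-wavelength telescopes are always resolution-starved compared to visible ones of similar size.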
Two really neat capabilities of the JWST: 1) From the Wien displacement law one can predict that the JWST will be able to see light from bodies as cold as about minus 266 °F (T = 2898/27 ≈ 107 K). And 2) Infrared (IR) light is far less attenuated by dust than visible light, which will allow JWST to see incredibly more clearly to the 4-million-solar-mass monster black hole at the center of our galaxy (Sagittarius A*).
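The Wien arithmetic is a one-liner; here it is with the Fahrenheit conversion spelled out (the 2898 micron-kelvin value of the Wien constant is the rounded figure used in the text):

```python
# Wien displacement law: peak-emission wavelength (microns) = 2898 / T (kelvin).
# Invert it to find the temperature whose peak falls at JWST's 27-micron cutoff.
b = 2898.0                       # Wien displacement constant, micron*kelvin
lam = 27.0                       # JWST long-wavelength cutoff, microns

t_kelvin = b / lam               # ~107 K
t_fahrenheit = (t_kelvin - 273.15) * 9 / 5 + 32
print(f"{t_kelvin:.0f} K is about {t_fahrenheit:.0f} F")
```

So a body whose thermal glow peaks right at JWST's cutoff sits around 107 K, roughly minus 266 °F.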
When an optical system is advertised as "diffraction limited" it means that the "figure" or shape of the primary is a parabola to an accuracy of 1/4 to 1/10 of a wavelength of the light it will be collecting. For Hubble, the 94.5-inch-diameter mirror surface was ground and polished parabolic to within about 0.000022/10, or 0.000002 inches (2 millionths of an inch). At least, that was the original plan. The mirror was ground to the required accuracy, but the figure was not parabolic, which led to the enormous disappointment when Hubble saw 'first light'. As buffs know, the first servicing mission to Hubble (on orbit) installed a corrective optical element (COSTAR) that made the optical system focus as a parabola. COSTAR has since been replaced.
A telescope cannot do better than diffraction-limited optics will produce. But it can do worse. On Earth, 'seeing' affects image quality; choose a high-altitude site with dark skies away from light pollution. Seeing can be improved with adaptive optics, actively controlling a deformable mirror surface to help cancel atmospheric disturbance. An on-orbit location removes atmospheric degradation entirely. I do remember when HST had perturbation problems due to the attitude control system reacting to "in and out of sunlight" heating; that must be totally solved at this point. The JWST will have a "sun shade".