The Original Paper—A New Physics-Based Model for Light Interaction with the Retina of the Human Eye and the Vision Process
(NOTE: This text was written some time ago and, although it remains a basic exposition of the concept, updated and summarizing thoughts can be read in the Running Commentary at ghuth.com.)
A RECENT SUMMARY
I describe what I believe to be a straightforward, and above all predictive, model based on geometry for the interaction of light with the receptor outer segments of the retina of the eye. The model is based on the concept that electromagnetic radiation interacts with spatial “nano-antennas” that function: 1) in the near field of the light wave (i.e., at dimensions at or smaller than the wavelength of light) and, 2) in the femtosecond (10^-15 second) time domain. A spatial antenna has traditionally been considered an engineering construction. The presence of these antennas means that light interacts as the wave of classical physics, as opposed to the abstract construction in which photons “rain down on” and somehow interact with pigment molecules. In essence, the eye evolved to detect light as an electromagnetic wave.
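As a quick numeric check of the femtosecond claim (my own back-of-envelope sketch, not part of the original argument), the oscillation period of a mid-band visible wave is indeed on the femtosecond scale:

```python
# Oscillation period of a visible light wave: T = wavelength / c.
wavelength_m = 550e-9   # 550 nm, roughly the middle of the visible band
c = 2.998e8             # speed of light in vacuum, m/s

period_s = wavelength_m / c
print(f"Period of a 550 nm wave: {period_s * 1e15:.2f} femtoseconds")
```

So any structure responding within a single optical cycle must operate in the 10^-15 second domain, as stated above.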
The proposal that nanospatial antennas exist between individual receptors, when applied to Osterberg’s classic 1935 measurement of receptor distribution, makes it instantly clear that the retina is a trichromatic diffractive surface and not a statistically distributed array of receptors.
I am going to get ahead of myself and state the conclusion of this work: these antenna structures, comprising the nanostructural array plane of receptor outer segments, absorb light as a wave and, immediately, in the femtosecond time frame, transduce this energy into what are termed “quantum confined electrons”. The function of the plane of retinal outer segments is then to effect a translation, at each nano detection site, from light wave to quantized electron particle.
END OF SUMMARY
I propose that the light detection centers on the retinal surface are the three discrete spatial dimensionalities that exist between individual cone and rod receptors. The interior of the receptors themselves, and the retinal/rhodopsin complexes that they contain, perform the function of “electron quantum confinement” or “EQC” spaces. The vertical stack of EQC spaces formed by the coin-stacking of thylakoid disks within the body of each receptor would be considered, in a term much used today, a “nanowire”.
The antenna dimensionality corresponding to the center-to-center distances between two receptors determines the light wavelength absorbed.
Since the retina presents three such spatial dimensionalities (i.e., cone/cone, cone/rod and rod/rod) the retina detects (or, in antenna terms, is “tuned to”) three discrete narrow light wavelengths.
One might then envision the retina in an entirely new way as a logically spaced array of “nanowires” with the spaces between them serving as light energy accepting sites.
Then… as opposed to the pure quantum construction in which “photons rain down on pigment molecules and are somehow absorbed” via an absurd “quantum catch” hypothesis, it becomes apparent that the eye fundamentally evolved to detect light as the electromagnetic wave of classical physics.
To further define the nature of EQC centers in this new context, I would propose that there is actually only one generic rhodopsin/retinal complex that is common to all receptors. The outer rhodopsin protein forms a “spatially adaptable cage” for the retinal molecules that it contains. Its role is structural. Its size/configuration is modified (genetically) to “fit” the differing volumes of the cones and rods.
It is the isomerization (configurational change) of the retinal molecules contained structurally within this cage that represents the fundamental signal-producing (“electron particle initiation”) event for subsequent vision processing. It has long been known, but seemingly disregarded, that this molecule is dichroically oriented orthogonal to the direction of light. This is consistent with the antenna explanation for light interaction and totally inconsistent with the traditional photon incidence concept.
The ubiquitous rhodopsin/retinal complex therefore represents an “inert molecular architectural structure” whose dimensions vary to fit within the different sized (cone and rod) receptors.
ADDED IN PROOF:
I would add in support of this new view of light interaction with the retina of the eye that the same light interaction mechanism has recently been found in a rather dramatic solid state discovery – an unexpected interaction of visible light with an etched surface nanostructure on silicon. This discovery was made by Canham in England in 1991, with the phenomenon subsequently being termed “porous silicon”. It is fascinating how similar this nanostructure is to the receptors of the retina of the eye.
It is instructive to note how this nanostructure absorbs different light wavelengths in the visible region without the presence of any molecular pigment species. The only structure evident is an “inert” composite of silicon “pillars” (“receptors”) and intervening pores.
In an experimental effort I carried out, I demonstrated that, provided the silicon “pillars” are reduced to electron quantum confinement dimensions (< 10 nm in silicon), the wavelength of the visible light interaction is controlled by the geometric dimensionality of the intervening pores (i.e., the pillar-to-pillar spacing), exactly replicating the retinal model proposed herein.
The following diagram presents a simple geometric “Rosetta Stone” that forms the basis for this new model of light interaction with the retina. It demonstrates that an admixture of abstract circles of two diameters yields exactly three lengths (i.e., center-to-center distances). All follows from this simple geometrical premise.
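The “two diameters yield three lengths” premise can be sketched in a few lines. The diameters used below are the cone and rod values (1.8 and 1.0 microns) quoted later in this text; the arithmetic of pairing is the only point being illustrated:

```python
# Two receptor diameters yield exactly three distinct
# center-to-center (antenna) distances.
cone_d = 1.8   # cone diameter, microns (figure quoted later in the text)
rod_d = 1.0    # rod diameter, microns

appositions = {
    "cone/cone": (cone_d + cone_d) / 2,
    "cone/rod":  (cone_d + rod_d) / 2,
    "rod/rod":   (rod_d + rod_d) / 2,
}
for pair, dist in appositions.items():
    print(f"{pair}: {dist:.1f} um center-to-center")
```

The three distances (1.8, 1.4, and 1.0 microns) are all distinct, which is the whole geometric content of the “Rosetta Stone”.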
Note that these three antenna appositions are NOT the three primary “colors”. It will become apparent in further reading of this work that associating the term color with light interaction at this plane is inaccurate, and this shortcut has had, in my view, tragically misleading consequences in the field of vision. What are actually detected at this point are three narrowly “tuned” (in antenna terms) wavelengths or, in the terminology of optics, “narrow filters”.
These three wavelengths geometrically define exactly the long and short limits and, crucially, the exact middle, of the visual band. This will be shown to have fundamental consequences in explaining, for example, the color constancy of vision, and to give guidance, finally, as to how the “hues” that we term color are synthesized according to the work of Edwin Land.
Geometry then provides the fundamental basis for understanding why nature evolved two sizes of retinal receptors – the cones and rods. It will become clear how this geometric construction constitutes the predictive requirement for assigning the ability of any species to sense color.
My conclusions derive from the now well documented distribution of cones and rod receptors on the retinal surface. This distribution was measured as early as 1935 by Osterberg with his data shown in Figure 2. This curve has been presented in almost every text on vision and the eye and is therefore widely accepted as dogma in the field. It is important to note that there seems absolutely no disagreement about this data.
But… the curve portrays a huge asymmetry in the distribution of the larger cones and smaller rods that comprise the retina. In fact, cones comprise >99% of the receptors in the central one degree (the “fovea”) of the retinal surface. Microscopic measurements made from retinal sections show that beyond this angle the smaller rods are statistically introduced into the rapidly decreasing density of cones. Proceeding outward, the density of rods rapidly becomes predominant.
The statistical nature of the cone/rod distribution is shown in a drawing from Pirenne (M.H. Pirenne, VISION AND THE EYE, Chapman and Hall, Ltd, 1967), with Section “A” indicating the edge of the fovea at approximately one degree. Shown are the distribution of cones (large circles) and the statistically distributed introduction of rods (small dots) in this region. Pirenne notes that the first rod receptor appears at 0.13 mm from the foveal center. I had used this figure in another context to illustrate the distribution of “classes of cones”, a premise that I believe is entirely false. In the spirit of this work, in which receptor appositions define the light wavelength detected, I have labeled a few cone/cone appositions red, the cone/rod appositions green, and the fewer rod/rod appositions blue.
THE GEOMETRICAL ANTENNA PREMISE OF THIS WORK PRESENTS FOR THE FIRST TIME A RATIONAL EXPLANATION FOR OSTERBERG’S FINDINGS OF RECEPTOR DISTRIBUTION.
To begin discussion of the geometrical antenna premise, we will amplify Osterberg’s data from the fovea to 17-18 degrees, where the admixture of cones and rods essentially ceases. At greater angles the retina is composed primarily of rod receptors with very few remaining cones.
Figure 3, using the expanded Osterberg receptor density data, simply counts the number of cone/rod appositions from ~2 to 12 degrees. Surprisingly, three distinct narrow wavelength-responsive regions emerge. A distinct peak is revealed at a retinal angle of 7-8 degrees. This retinal angle corresponds to the point where the density of rods is first sufficient to completely surround each remaining cone, and this is exactly what happens. This order of eight rods around each cone is shown in the following, again from Pirenne (Plate 6, labelled “close to the yellow spot”):
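The apposition-counting idea can be illustrated with a toy calculation. The density profiles below are invented placeholders tuned only to mimic the qualitative shape of Osterberg’s curves (falling cones, rising rods); they are NOT his actual data. The point is that the product of the two densities, a proxy for the number of cone/rod neighbour pairs, necessarily peaks at an intermediate angle:

```python
# Toy model: cone density falls with eccentricity, rod density rises.
# Both profiles are placeholders, NOT Osterberg's measured data.
def cone_density(deg):
    return max(0.0, 150.0 - 10.0 * deg)   # arbitrary decaying profile

def rod_density(deg):
    return 12.0 * deg                     # arbitrary rising profile

# Take the cone/rod apposition count as proportional to the product
# of the two densities at each retinal angle.
angles = [d / 2 for d in range(4, 25)]    # 2.0 .. 12.0 degrees
pairs = [(a, cone_density(a) * rod_density(a)) for a in angles]

peak_angle, _ = max(pairs, key=lambda p: p[1])
print(f"cone/rod apposition count peaks near {peak_angle} degrees")
```

With these placeholder profiles the peak lands at 7.5 degrees, but that number is built in by my choice of constants; only the existence of an intermediate peak carries over from the argument above.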
Thus, a spatial order emerges from an otherwise statistical distribution of receptors.
The three regions of sensitivity defined by the density of receptor appositions are not to be confused with the usual pigment spectral response curves found in every vision text. These three regions are distinct bands of narrow, or geometrically constant, wavelength response. Using old terminology, the central band would be said to have “total green response”.
A number of new conclusions can be drawn from the appearance of these three bands. I would propose that they represent the fundamental nature of the trichromicity of vision and in that sense represent the primary “wavelengths” rather than primary colors.
Since these bands detect only narrow wavelengths, this cannot mean that the spectral hues we term “color” are involved. I would propose that the plane of receptor outer segments on the retina does not detect color at all, and that the use of this “shortcut” has led over the years to a great misunderstanding in the science of vision.
In fact, I propose that the outer bands of long and short wavelength response (cone/cone and rod/rod appositions) constitute the exact wavelength limits of the visual band. It is the diameter of the receptors that determines the limits of the visual band. The diameter of the cone, in determining the center-to-center antenna distance between cones in the fovea, determines the response of vision to long wavelengths. Correspondingly, the smaller diameter of rods determines the limit of short wavelength response.
It follows geometrically that the central band of the three defines the exact center of the visual band – or 550 nm. Computer simulations show that the eccentricity at which this peak occurs (7-8 degrees) corresponds to the angle where this wavelength is refracted onto the retinal surface. It is seen therefore (for the first time?) that electromagnetic wavelength is determined geometrically in a biological system. Nature does not require a laboratory spectrometer to measure wavelength.
This geometric midband reference undoubtedly forms the basis for the color constancy of the vision process. It would also seem to be the reference “fulcrum” that Edwin Land deduced must be present in his work on color vision. I will discuss this further below.
One then notes another curious geometric finding: the measured ratio of the diameters of retinal cones to rods, 1.8 vs 1.0 microns, is: a.) the only ratio that can result in the octagonal arrangement of rods around cones that appears on the retina, and, b.) the ratio that corresponds to the width of the visual band extending from 700 to 400 nm. Curious!
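The second “curious” coincidence is a one-line arithmetic check:

```python
# Cone/rod diameter ratio versus the long/short wavelength ratio of
# the visual band, using the figures quoted in the text.
diameter_ratio = 1.8 / 1.0      # cone vs rod diameter, microns
wavelength_ratio = 700 / 400    # long vs short limit of the band, nm

print(diameter_ratio, wavelength_ratio)
```

The two ratios, 1.80 and 1.75, agree to within about three percent.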
These geometric principles teach that the retina can then be seen to be logically divided into three distinct regions of defined narrow band wavelength sensitivity as follows:
1) The all-cone/cone-apposition fovea, which using this geometric construction will be tuned to, and actually define, the precise long wavelength limit of visual response.
2) The mid-band sensitive region discussed above, peaking at 7-8 degrees, where the octagonal order of cone/rod appositions predominates. This point defines the exact mid-band response.
3) The short wavelength sensitive peripheral retina, where rod/rod appositions predominate, tuned to and defining the exact short wavelength limit of visual response.
The following predictions can be made from these geometric considerations (and there are many more):
a.) The plan of light interaction with the retina shows that this surface is actually structured as a circularly symmetric diffractometer. Computer simulations, measurements of chromatic “aberration”, etc., all indicate that light falls on the retina in this type of pattern. It seems that such a retinal response, for which we have provided a geometric basis, is the one that would have evolved, i.e., the eye is a direct manifestation of the physical laws for the refraction of light – nothing more!
We must now question what this tells us about the overall vision process.
It should be obvious that a diffractive retinal response indicates that formation of the visual image cannot involve a “photograph-like” imaging process that vision science has for so long assumed. THE FOURIER TRANSFORMATION PROCESS IS INVOLVED.
As noted above, the accepted trichromicity of the vision process is validated, but now from an exact physical or geometrical viewpoint! We assert therefore that there are not “three classes of color sensitive cones directly sensing the visible spectrum in overlapping spectral response” – a belief that has been held for so long. If, experimentally, a region of the retina is found to be sensitive to a specific wavelength (as it certainly will be), this would be due to measurement of a specific apposition length that exists between two adjacent receptors or, alternatively, between a central receptor and the gradient of refractive indices formed by the surrounding medium, that happens to exist in the region of the measurement.
The singular role of retinal receptors proposed herein is to provide variable spacing between adjacent quantum confined electron spaces. It is the pattern of this spacing that defines the plan of optical wavelength sensitivity on the retinal surface!
Further, it seems from an examination of the longitudinal structure of retinal receptors that it is the inner segment of the receptor that defines interreceptor spacing and therefore determines the wavelength sensitivity of the light interactive outer segments.
It is interesting that when viewed in this manner all of the biological structures of the retina can “fall away” and the retina can be considered purely as a logically spaced array of electron quantum confinement centers!
We propose therefore that there is only one ubiquitous “pigment species” common to all receptors, both rod and cone – the basic retinal/rhodopsin complex – and I would rather refer to the role of this species as a generic “electron or energy sink”. In support of this assertion please see more detailed thoughts under “Are There Three Classes of Cones…” and, more importantly, the reference to Snyder’s thoughts on this subject under “Additional Thoughts…” elsewhere on this webpage. This is also in consonance with the work of Wald who, after finding one retinal pigment species, was unsuccessful in finding the “other two” that he believed must be there. The specific wavelength response of any area of the retina is determined by the spacing function resulting from the dimensionality of the inner segment of the receptor.
The mid-band response region seems, however, to be performing an additional function. All dimensionalities in the mid-band responsive peak are identical, i.e., “tuned” to the central mid-band wavelength. But the factor that does vary with retinal angle is the number density of receptors! This geometrically defined pattern on the retina might then be seen as the actual physical embodiment of a Fourier transform – the translation of frequency/wavelength information into a spatial design.
One must continue the thought process here – it might seem that the wavelength response of single receptors, for so long thought to be their central function, is not important at all! As explained in the following, it is simply their density in the retinal array and their spatial distribution that are important!
Continuing… let us think along one radial of the circularly symmetric diffractometric structure extending from the fovea to 20 degrees. At the beginning (the fovea) and end (the start of the peripheral retina), the long and short wavelength appositions (cone/cone and rod/rod) define the endpoints of the spectral response. And the ratio of their sizes defines the “bandwidth” of the detection system – and the ratio of the size of cones to rods does correspond to the visible band! (see below). It would seem that the mid-band dimensioned centers perform two functions. First, their number density defines the peak as calculated above, which must be directly related to the refractive properties of the body of the eye. They then provide the unique and necessary fixed wavelength reference point at 7 1/2 degrees required by Land (see below) and, additionally, define the shape of the spectral response relative to the normalizing mid-band 7 1/2 degree peak!
The wavelength response on the retinal surface is then both “fixed” in position on the retina and simultaneously normalized by this distribution.
Again, a physical picture of the Fourier transform seems to come through!
I realize that this concept bears on the definition of a pigment. I simply propose that, observationally, nature has evolved photosensitive structures (in this case the retina of the eye) that detect light using a system comprising: a.) a “quantum spatial surround”, which acts as a classical, wave optics “antenna dimensionality”, with this space necessarily being adjacent to, b.) central quantum confined electron spaces (corresponding to the “absorbing mass”). The latter function in retinal receptors I propose is performed by the retinal/rhodopsin protein complex, i.e., this single entity acts as a ubiquitous, central electron “energy sink”. It is the dimensionality of the “surround” that defines the wavelength of the interaction.

There is justification for this modeling in, for example, the history of phosphor development for radar applications during World War II, where development of a red phosphor remained constantly beyond reach. I believe the reason was that long wavelength response, in antenna terms, requires the largest spatially coherent domain around the QC electron center, and that this should be (and was) the most difficult to implement.

There is much more background here (summarized below), including the relevance of Kuhn’s “photon funnel” experiment, which employed artificial lipid membranes to elegantly demonstrate that energy from the absorption of a single photon is transduced over many molecular distances! I have attempted to amplify this result, proposing that the necessarily lossless energy transduction occurs via a mechanically interactive soliton mechanism. The thylakoid disks contained within retinal receptors present exactly the same membranous situation and should behave in the photon funnel manner. The function of their lateral width should prove to be to thermalize absorbed light energy, ultimately presenting it to the retinal/rhodopsin complexes contained within the disks.
b.) The presence of a diffractometric surface on the retina indicates that the vision process must take place in the frequency or Fourier domain and therefore that some sort of 2-D frequency transform is being effected in the processing of visual images. I will outline below how I believe that this happens. This implies that the retina must be located at the focal (or Fourier) plane of the optical system of the eye – and not as heretofore presumed at the intensity-only sensitive image plane (which corresponds to the location of film in a camera).
THE EYE DOES NOT FUNCTION AS A CAMERA!
c.) Without going into the mathematics of 2-D Fourier transforms (which I believe has put off many investigators) the Fourier equation contains two terms relating light amplitude (related to intensity) and phase (related to direction) to image formation at the Fourier plane. At this plane of an optical lensing system (such as the eye) image information is thus “encoded” in two forms – intensity and phase. Any retinal detection “device” (comprising two adjacent receptors) must therefore possess the ability to detect both aspects of light – which, as I will indicate, I believe that they do. In a simple argument of economy, it would seem that nature would utilize both sources of information to process images. NASA, for example, for this reason uses this method for retrieving images from space. Fourier methods are also used in processing images in medical magnetic resonance imaging (MRI) scanners.
One must differentiate here between the electromagnetic-radiation-sensitive diffractometric spatial pattern that I propose is the response of the retinal surface and the “frequency space” of the Fourier transform. As I will discuss below, I propose that the ability of retinal devices to detect both amplitude and phase at every point on the retina actually brings the focal (or Fourier) plane of the eye into coincidence with the image plane. This would seem to be the physical meaning of the Fourier equation. This is then an entirely new image formation process. No photograph that registers light amplitude only can accomplish this.
d.) To quote Brian Hagan (referenced elsewhere on this page) “Under Fourier transformation, the big becomes small and vice versa”. Again without going into the mathematics, the 2-D Fourier transform of even a large “outline sketch” image (a “black and white” sketch drawing, if you will) is a small central spot, i.e., the bulk of information needed to reconstruct such an image is encoded both in intensity and phase in the rays passing through this spot (and, to the degree of angle viewed, around the spot). This is beautifully shown in Figure 4 which is an optical transform taken from Caulfield. I propose that this is the function of the all-cone central fovea of the retina – to discern the basic “outline sketch” or at most the “general outline” of the visual image that we perceive. The fovea consists of a matrix of approximately 15,000 x 15,000 receptor elements sensitive to intensity and phase which should be sufficient for this task. And, as the model predicts, information is acquired by the fovea solely at long wavelengths by the cone/cone appositions of this small central retinal area. We then have the situation where the outline of the perceived image is “decoded” from at most a narrow band of long wavelengths via an optical transform in the small foveal area.
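The “big becomes small” behavior is easy to demonstrate numerically. The sketch below (a standard Fourier optics exercise, not a model of the retina) low-pass filters a simple bright shape by keeping only a small central spot of its 2-D transform and checks how much of the image survives reconstruction:

```python
import numpy as np

N = 256
img = np.zeros((N, N))
img[96:160, 96:160] = 1.0   # a simple bright patch standing in for a sketch

# 2-D Fourier transform with the zero-frequency "spot" shifted to the center.
F = np.fft.fftshift(np.fft.fft2(img))

# Keep only a 32x32 central spot of the transform plane, discard the rest.
mask = np.zeros_like(F)
c = N // 2
mask[c - 16:c + 16, c - 16:c + 16] = 1.0

recon = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
err = np.linalg.norm(recon - img) / np.linalg.norm(img)
print(f"relative error reconstructing from the central spot alone: {err:.2f}")
```

Despite discarding over 98% of the transform plane, the reconstruction recovers the gross shape with only modest error, which is the sense in which a small central region of the Fourier plane can carry the “outline sketch”.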
I would think that this is probably the reason why long wavelength (“red”) illumination has been found optimum for use in the low light level conditions of aircraft cockpits, submarines, etc., i.e., to preserve the greatest amount of “outline” detail at the lowest possible levels of illumination.
e.) The measured cone/rod morphology of the retina would seem to indicate that (all?) image processing occurs to retinal angles of ~17-20 degrees (as measured from the central fovea). These are the surrounding concentric rings of mid-band to short wavelength sensitivity described above. It is these regions which, in the frequency domain, supply the “added photographic detail” (and “color” to be discussed) to the final image perceived by the brain. The cone/rod assemblies in this region I believe (predict) will be found to be connected in underlying neural “circuitry” to the central fovea for the purpose of relating this information (via Land’s concept – see below) to the appropriate part of the foveally determined “outline sketch”.
f.) The predominantly rod-containing retina beyond 20 degrees (the “peripheral retina”) I believe functions as a wide angle “light meter” providing the “intensity normalization signal” for the image information presented by the imaging region of the retina. I would propose that this is the reason why the function of rods has been so misunderstood – I see absolutely no physics mechanism that would indicate that they in themselves are more sensitive to light as has been reported over and over for so long. It is rather the totality of the rod-containing area that is fundamentally involved in low light level detection.
This concept predicts that it should be chiefly short wavelength radiation that controls dilation of the pupil and thus light entrance to the eye. Might this be experimentally demonstrated? Conversely, solely long wavelength radiation should have minimal effect on pupil dilation. Might data exist to show this? (SEE NEW LINK ON INDEX PAGE REFERRING TO THIS POINT).
g.) With the above model defined, I must stop here and explain how I believe the sensation of color is perceived following from this model. The reader will note that I have not used any color terms, i.e., “red”, “green”, etc., in the above discussion.
I do not believe that the sensation of color that we perceive should be associated with individual retinal receptors at all and, in fact, does not follow from localized detection sites on the retinal surface. It would eliminate much of the confusion that has clouded the many years of vision research if this sensation of color were completely detached from any consideration of light interaction with the retina.
This assertion follows from the truly fundamental teaching of Edwin Land and his now famous series of color perception experiments! I have come to believe that Land’s work on the perception of color represents the seminal work on this subject.
Fundamentally, Land found beautifully, by experimentation external to the eye, that color is seemingly synthesized from a comparison of two “lightness records” or images detected somehow by the eye. One might think of these records as “black and white”, in consonance with the black and white photographic negatives that Land used in his experimentation. To produce the optimum full “color” that we perceive, these images were simultaneously illuminated with a wavelength near the middle of the visible band. Land termed this point a “fulcrum” or “balance point” on either side of which the lightness records were compared. The mechanism for producing color found by Land thus required a fixed wavelength reference point, which he determined experimentally to be precisely at 588 nm. However, he was not able to determine where any of this occurred within the eye, and he went on to develop (with McCann) an abstract (and complex) “Retinex” theory in an attempted explanation. I propose elsewhere on this webpage (under “Additional Thoughts…”) and will develop the idea that “color blindness” may be traced to a slight shift of this reference point caused by some genetically mediated shift in the relative sizes of the two receptors.
I propose that this geometrical model indicates where Land’s “lightness records” arise within the eye. When the image information from the geometrically determined (above) long and short wavelength peaks is added to the central foveally derived outline image, the result is exactly analogous to the external long and short photographic records recorded by Land in his external vision experiments. The mid-band “balance point” that Land found at 588 nm corresponds geometrically to light interaction at the ~7 1/2 degree concentric ring – the position on the retina that corresponds to the peak of the number of cone/rod appositions and thus mid-band response.
The actual comparison of lightness record information is undoubtedly made in the neural circuitry underlying the retina which, when added to the foveally-derived outline sketch, “fills in the photographic detail” that we see. Then, after normalization by the sensitive (due to its large solid angle) short wavelength “light meter” signal derived from the totality of rods of the peripheral retina, “color” is finally perceived – by the brain. I attempt to summarize all of this, together with Land’s teaching, in Figure 5.
This is the basis for my thought that “color” should not be associated with the retina or treated as a direct response of retinal receptors. Only colorless “grey scale” images are produced within the eye and presented to the brain, where “color” is perceived.
The fixed mid-band wavelength reference point on the retinal surface is crucially important. It provides the unique ability of the eye to perceive the same color under different illuminations. Land noted correctly how this followed from his theory – and I now show how this capability is contained within the eye.
Vision is therefore “internally referencing”. I enjoy the thought that in this model vision is “relative” – there is no need to talk about specific wavelengths – only the “endpoints” of the visible band and a specifically determined “mid-point”.
h.) A word about the physics of how I believe the fundamental light detection “devices” of the retina would function. These devices are composed, as stated above, of variable width (lambda / 2n, where n is the refractive index of the absorber) spaces between two adjacent receptors – or, more precisely, between the two quantum confined electron spaces formed by the retinal/rhodopsin complexes contained within each adjacent receptor. The generalized physical spacing between receptor centers then defines the wavelength sensitivity of the structure. The aspect ratio (height to width) of the light-interacting outer segment of the retinal receptors is amazingly high – ~50:1 – indicating that light phase information is probably determined by some electrical “giant dipole” mechanism as light impinges along the length of the receptor device. The coin stack of thylakoid disks within each receptor then serves functionally as a “lengthy electrical quantum wire” whose purpose is to decode the phase angle information of the incoming light along its length… and, finally, to effect the 2-D Fourier transform. This is as opposed to the almost preposterous “quantal catch” probability hypothesis that has traditionally been used to explain the function of receptor length! A diagrammatic representation of the devices that we propose is shown in the following figure:
There is some imprecision in ascribing optical “antenna” length as the center-to-center distance between receptors. The retinal/rhodopsin complexes are seemingly distributed randomly within the lateral surface of the thylakoid disk structure, and it may be the lateral distance between any of these that defines this length. “Antenna” length will then comprise, laterally: a.) some inter-receptor medium (with its refractive index), b.) the lipid membranous thylakoid matrix (whose function I propose will be found to be lossless energy transport and thermalization), and then, finally, c.) the retinal/rhodopsin complex electron sink.
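For concreteness, the lambda/2n spacing rule stated above can be evaluated for the three geometrically defined wavelengths. The refractive index value below is an assumption for illustration, not a measured property of the inter-receptor medium:

```python
# Spacing "tuned" to a vacuum wavelength under the lambda / (2n) rule.
def tuned_spacing_nm(wavelength_nm, n=1.4):   # n = 1.4 is an assumed index
    return wavelength_nm / (2.0 * n)

for wl in (400, 550, 700):   # short limit, mid-band, long limit (nm)
    print(f"{wl} nm -> spacing {tuned_spacing_nm(wl):.0f} nm")
```

With n = 1.4 the implied spacings run from roughly 143 to 250 nm; a different assumed index rescales all three proportionally.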
The general proposal therefore is that nature has evolved the retina of the eye (and the chloroplast of green plants) to detect light as the wave of classical physics while confining the absorbing electron to quantum confined dimensions. This statement in no way negates quantum reality – it would simply seem to be a statement of Bohr’s complementarity in the nanometer spatial domain as expressed by nature. I would suppose, for example, that light in some other situation could be considered to be quantized (photons are real!) and the absorbing electron treated classically. In our world, however, nature would seem by simple observation to treat the situation in biologically evolved photosensitive structures as I have proposed.
i.) Predictiveness – it should be possible, using this antenna methodology, to predict the visual characteristics of any species (dogs, fish, etc.) if their retinal morphology has been measured. The rules would be: a.) overall receptor size, which sets inter-receptor distance, determines the wavelength of the interaction; b.) the ratio of receptor sizes (if more than one size is present) determines the optical bandwidth of the system. It may be found, for example, that the purpose of the conical shape of cone receptors is to “broad-band” the long wavelength response! Examples that I have come upon: a.) the vision of fish is supposed to lie in the infrared, beyond 700 nanometers. Applying this model, a retina sensitive to this spectral region should be composed of receptors larger in size than the human – the cone receptors of trout are seven microns in diameter (versus the one micron value for human cones); b.) the vision of insects is in the ultraviolet, which would indicate smaller receptors – and there are indications that this is the case.
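Rule a.) amounts to a simple proportionality, and can be sketched as a toy calculation. The pairing of a one-micron human cone with ~700 nm response is used here as a hypothetical calibration point, chosen only to illustrate the scaling; it is not a measured constant:

```python
# Sketch of rule a.): if inter-receptor spacing scales with receptor diameter,
# the detected wavelength scales in proportion.
# ASSUMPTION: the (1 micron cone <-> 700 nm) calibration point is hypothetical.

HUMAN_CONE_DIAMETER_UM = 1.0
HUMAN_LONG_WAVELENGTH_NM = 700.0

def predicted_peak_nm(receptor_diameter_um: float) -> float:
    """Peak wavelength predicted by linear scaling with receptor diameter."""
    return HUMAN_LONG_WAVELENGTH_NM * receptor_diameter_um / HUMAN_CONE_DIAMETER_UM

print(predicted_peak_nm(7.0))  # trout-sized cones -> deep into the infrared
print(predicted_peak_nm(0.5))  # smaller receptors -> toward the ultraviolet
```

On this crude scaling the seven-micron trout cone lands far into the infrared, and sub-micron receptors shift toward the ultraviolet – consistent in direction, at least, with the fish and insect examples above.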
Further, this model should also apply to any other biologically evolved photosensitive system. It has long been recognized that the photosensitive chloroplast organelle of green plants is characterized generally by two very distinct and well defined absorption bands at both ends of the visible spectrum. These are shown in Figure 6. The nanometer morphology of such green chloroplasts is characterized by two – and only two – major physical separations: the “grana” and “stroma” of the organelle, as shown in the figure. The function of this organelle is to absorb optical power, as opposed to the low light level imaging requirement of the retina. The natural evolution then would be to extend the quantum confined electron spaces into lengthy two dimensional lamellae, as seen in the chloroplast. I would propose then that the nanometer morphology of other biologically evolved photosensitive systems will be found to be directly related to their light absorption properties. Such a study should be undertaken.
j.) Other curious geometrical factors (seemingly predictive) of biologically evolved photosensitive structures seem to emerge from this new perspective. It is very surprising, for example, that the ratio of the sizes (diameters) of cones to rods of the human retina is ~1.8–2:1, which corresponds to the visible bandwidth (700 to 400 nanometers). It is this size ratio that I propose determines the “bandwidth” of visual response. It also follows that the absolute size of individual receptors determines (by setting inter-receptor distances) the wavelength response of the array/system. Thus, larger receptors are characteristic of the visual organs of infrared-sensitive fish species, and smaller receptors of the ultraviolet vision of insects. There are many references that document this – and it has never been noted. Further, this ratio dictates a unique, geometrically determined retinal motif at the point of peak cone/rod appositions (the center of the visual band): exactly eight smaller rods are able to fit around each larger cone. This is shown in Figure 7, which is an older drawing of the retinal morphology at this point taken from Pirenne (“Vision and the Eye”, M.H. Pirenne, The Pilot Press Ltd., 1948). And, strangely, this same “eight-around-one” retinal motif is seen in the visual organs of seemingly all species, from crustaceans, honeybees, and flies to humans! (see Snyder’s work). Even though the receptor morphology of some of these species takes distorted and seemingly misshapen forms, there is always the eight-around-one motif – quite extraordinary! The “fused rhabdom” morphology of the visual organ of the fly or honeybee, which at first glance seems to be composed of only seven surrounding cells, actually contains the required nine, with the central receptor being divided longitudinally into two sections.
And even further, the same motif can even be seen in the lateral arrangement of the chlorophyll complexes in the morphology of the chloroplast lamellae of photosynthetic plants – which are thus in the same size ratio! This is discussed elsewhere on this page. None of these observations could have been made – and seemingly were not made – starting from the featureless “…a photon interacts…” mental construction. These observations seem to me to be very predictive and may be used to validate the geometrical antenna hypothesis.
Thus the mystery of the ubiquitous eight-around-one motif remains……
It occurs to me that there really is no mystery! Note from the above that the “eight-around-one” motif seemingly is present in three different biological structures that have evolved to be sensitive to the solar spectrum, i.e., the retina of the human eye, the visual organs of all living species, and even the chloroplast organelle of photosynthetic plants and algae. It is the logic of this hypothesis, as discussed above, that this motif actually represents the optical spectrum viewed by the eye – it IS the Fourier objectification of that spectrum as reflected in these spatial structures! See the link “Added Thoughts….” on this webpage for a more detailed discussion of this point. This would seem to be very predictive – as I will discuss.
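The eight-around-one motif admits a small geometric check using idealized rigid circles: how many equal smaller circles can be placed in a tangent ring around one larger circle, and at what size ratio does a ring of exactly eight become snug? (The idealization to perfect, rigid circles is of course an assumption.)

```python
import math

# Geometric check of the "eight-around-one" motif with idealized rigid circles.

def max_ring_count(ratio_large_to_small: float) -> int:
    """Max number of equal small circles fitting in a tangent ring around one large circle."""
    r, R = 1.0, ratio_large_to_small
    half_angle = math.asin(r / (R + r))  # half-angle one ring circle subtends at the center
    return int(math.pi / half_angle)

def snug_ratio(n: int) -> float:
    """Large/small size ratio at which exactly n ring circles touch their neighbors."""
    s = math.sin(math.pi / n)
    return (1.0 - s) / s

print(max_ring_count(1.8))      # at a ~1.8:1 cone/rod ratio, eight rods fit
print(round(snug_ratio(8), 3))  # the snug eight-circle ring occurs near 1.613:1
```

Interestingly, the idealized calculation puts the snug eight-circle ring near a 1.61:1 ratio and would admit a ninth circle as the ratio approaches 2:1, so the exact eight-around-one fit holds toward the lower end of the ~1.8–2:1 range quoted above.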
Continuing the original text……
In a leap of fancy, I have speculated elsewhere on this webpage that this motif could correspond, if one envisions a point on the outer receptor revolving around the inner, to a symmetric two-lobed epitrochoidal figure (the same shape used by Felix Wankel in his unique internal combustion engine). On “both sides” of this “mid-point” of the spectrum, i.e., the all-cone and all-rod regions of the retina, asymmetric epitrochoidal figures are obtained, with a single lobe on only one side!
Can “spatial symmetry” therefore somehow be related to a spectral peak? And… the only ratio of sizes of the two circles that forms a symmetrical epitrochoidal figure corresponds to the eight-around-one motif!
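The rolling-circle construction can be sketched numerically. A minimal check follows that a 2:1 circle ratio traces the symmetric two-lobed (Wankel-type) epitrochoid, while other integer ratios give other lobe counts; the tracing-point offset d used here is an illustrative assumption, since the lobe count depends only on the ratio R/r:

```python
import math

# Epitrochoid: a point at distance d from the center of a circle of radius r
# rolling around the outside of a fixed circle of radius R.
# ASSUMPTION: d = 0.5 is illustrative; the lobe count depends only on R/r.

def epitrochoid_lobes(R: float, r: float, d: float, samples: int = 3600) -> int:
    """Count the lobes as local maxima of the curve's distance from the origin."""
    def rho(theta: float) -> float:
        k = (R + r) / r
        x = (R + r) * math.cos(theta) - d * math.cos(k * theta)
        y = (R + r) * math.sin(theta) - d * math.sin(k * theta)
        return math.hypot(x, y)
    step = 2.0 * math.pi / samples
    lobes = 0
    for i in range(samples):
        prev, here, nxt = rho((i - 1) * step), rho(i * step), rho((i + 1) * step)
        if here > prev and here > nxt:
            lobes += 1
    return lobes

print(epitrochoid_lobes(2.0, 1.0, 0.5))  # 2:1 ratio -> the symmetric two-lobed figure
print(epitrochoid_lobes(3.0, 1.0, 0.5))  # 3:1 ratio -> three lobes
```

The curve's distance from the origin oscillates as cos((R/r)·theta), so counting its maxima recovers R/r lobes whenever that ratio is an integer.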
In an absolute leap of fancy (!) I have also proposed elsewhere on this webpage that this situation may apply to the L shell of the atom – the shell that contains eight (why eight?) electrons – and here we have the “spin” necessary to form an epitrochoid!
NOTE: All of the results noted herein are amply supported by references – not all of which I have included in this short precis.
APPENDICES – ADDED IN PROOF
In 1999 I became interested in modeling the optical system of the eye to see if its refractive properties would produce the diffraction pattern on the retina that I had predicted from geometrical principles (i.e., long wavelengths would be concentrated at the foveal center, with surrounding rings of shorter wavelength response). I was at the time using Optical Research Associates’ “LightTools” optical modeling software to design light-actuated high-power semiconductor devices, so this was not a difficult task. It became quickly apparent, using accepted refractive data for the eye, that this general pattern of wavelength response was present on the retina. I then located (on the web) an investigator who was doing much more sophisticated modeling of the eye using similar optical modeling software. I visited him and explained my non-traditional theory predicting the specific pattern of a diffractometric retinal surface. A very interesting moment then ensued – he touched his computer and brought up the retinal wavelength response pattern that resulted from his modeling – and it was exactly the pattern (to a degree of angle) that I had predicted! I left the geometrical figures summarizing my concept, and I remember his parting comment – that “he would have to forget all that he had previously learned if he were to believe me”! An interesting aside – I wonder, and cannot answer, how anyone can reconcile such a diffractive retinal pattern with the traditionally held view that the retina is the image plane of the eye! This riddle applies to most of the vision research that I have reviewed. Textbook diagrams invariably show an extended image on the retinal surface – as if the eye were acting as a camera. This implies, I guess, that the retina is composed of small “RGB triads of color sensitive cone receptors” – as on the faceplate of a television cathode ray tube – or at least of three RGB layers, as in color film – neither of which is at all evident on the retina.
This is, however, in direct conflict with the distinctly non-uniform pattern of cone and rod receptors on the retinal surface that has been known for seventy years – with these measurements indicating that >99% of all cone receptors are located within the small, millimeter-diameter fovea. Many questions?
A 1999 paper in Nature entitled “The Arrangement of the Three Cone Classes in the Living Retina” by Roorda and Williams (1) is interesting. Their view (and remember that it was published in 1999!) represents the traditional one, as stated in the opening words: “Human colour vision depends on three classes of receptor, the short (S), medium (M), and long (L) wavelength-sensitive cones”. The paper then goes on to state that these cones are “interleaved in a single mosaic” so that, at each point on the retina, “only a single class of cones samples the retinal image”. I believe that this represents the model commonly used in vision science, and all experimental data obtained seemingly must be interpreted to fit into it.
(1) Roorda, A. and Williams, D.R., “The Arrangement of the Three Cone Classes in the Living Retina”, Nature, 397, February 1999
The abstract goes on to state that: “although the topography of human S cones is known (2,3)…” (emphasis mine). If this statement were accurate – namely, that “S” cones have been shown to exist and that they are arranged in some known topography – I would readily concede that my ideas were incorrect.
The references in the paper, however, do not seem at all to support the statements made!
I humbly submit that the Williams reference (2) is simply inconsistent with the above statement. In that reference, and after considerable effort, a single cone receptor is finally located which is defined as an “S” cone. One might question here the conclusion in the Nature paper about a “topography”! How is one to ascribe a topography to a single cone?
The Curcio reference (3) is more curious, and again does not at all support the referenced statement. This paper relates a much more intensive effort than that pursued by Williams to seek the elusive “S” cone, using a number of diverse approaches. But the concluding comment of the paper is most telling, stating that all of the evidence (for the “S” cone) must be viewed as “inferential only”. Again, the emphasis is mine, but this is the exact term used in the paper. I really do not understand what this means, but it is certainly not very positive about the existence of the “S” cone!
(2) Williams, D.R., et al., “Punctate Sensitivity of the Blue-Sensitive Mechanism”, Vision Res., 21, 1357-1375 (1981)
(3) Curcio, C.A., et al., “Distribution and Morphology of Human Cone Photoreceptors Stained with Anti-blue Opsin”, J. Comp. Neurol., 312, 610-624 (1991)
One more note about the Roorda and Williams Nature paper – it is clear from their Figure 1 that the region of the retina that was examined is somewhere outside of the central fovea. I would interpret the area as being near 7–10 degrees of retinal angle, as there are obviously a significant number of rods surrounding each cone. This is the region wherein there is a great interplay of cones and rods, to which I ascribe mid-band response. Further, the method used in this paper of differentiating “M” and “L” cones is a complicated subtraction technique which, in my view, raises many questions. A “mid-band response” should occur in this region, but in my concept – which should be evident to the reader by now – this would be the result of cone/rod appositions. “L” cone, long wavelength response must be due to cone/cone appositions, and the only place that this can occur is in the central fovea – to a retinal angle of a degree or so.
(4) Wald, G., “Blue-Blindness in the Normal Fovea”, J. Opt. Soc. Am., 57, No. 11, November 1960
HERE ONE MIGHT SEE THE ENTRY OF 6/19/02 TO THE LINK “ADDITIONAL THOUGHTS…….” REGARDING A BBC DOCUMENTARY ENTITLED “COLOURFUL NOTIONS” WHICH AGAIN PRESENTS A STRANGE INTERPRETATION OF THE EXISTENCE OF “BLUE CONES”
Additional thoughts about Kuhn’s “photon funnel” experiment.
I will attempt to succinctly sum up my thoughts about the importance of the “photon funnel” experiment of Hans Kuhn at Göttingen in the 1970s. It really forms the basis for this entire concept, namely, that a spatial domain effect exists in single photon interactions with matter. In optical spectroscopy the entire world can be considered to be “Stokes shifted”, i.e., there is energy lost in all processes of optical absorption by matter, resulting in longer wavelength (lower energy) emission in optical fluorescence. An exception was discovered simultaneously in ~1938 by Jelley of Kodak in this country and by Scheibe in Europe. They both found that certain optical dye molecules in solution exhibited fluorescence emission at wavelengths very close to (within a few nanometers of) the excitation wavelength. They termed this phenomenon “resonance radiation”. Implicit in this finding is lossless (or nearly so) energy transport between the points of excitation and emission – using for the moment the concept that they are spatial and separated. Because it was found chemically that these dye molecules were in a state of aggregation to produce the effect, the entities were termed either Jelley (or “J”) aggregates, generally in the U.S., or “Scheibe aggregates” in Europe. What Kuhn subsequently accomplished relevant to this effect was elegant and dramatic. He incorporated optical dye molecules into ordered phospholipid molecular layers using Langmuir-Blodgett methods. This forced, in very much of a summary, energy transduction to occur in controlled fashion in the two dimensional plane of the film. Kuhn found in the first instance that, using simple phospholipid layers with their calculably geometric “empty space” between adjacent lipid molecules, measurement of fluorescent emission produced the traditional, lossy, Stokes-shifted behavior. Then… the elegant part!
Determining that the molecular size of the octadecane molecule would fit nicely into the empty interstices, he intercalated this molecule into the phospholipid layer (a simple matter using LB methods) and… he obtained the Jelley or Scheibe resonance radiation behavior! The experiment has since often been repeated. Then, further and most importantly, he reduced optical excitation intensity to the single photon level and demonstrated (the “photon funnel” experiment itself), by comparing the intensity of energy donor and acceptor peaks, that energy is transduced over distances of 10–100 molecular distances following optical excitation by a single photon! One can follow all of this in Kuhn’s papers, but it is very well summarized, including comments regarding its fundamental importance, by Blinov in Russian Chemical Reviews (52, (8), 1983). The requirement for “space filling” in the PF experiment connotes (at least to me) that energy is being transduced mechanically through the body of the layer (over the many molecular distances that Kuhn found). Lossless mechanical energy transduction, in turn, implies that the mechanism involves solitonic transport. I believe that such lossless solitonic transport is used by nature in, for example, the lateral absorption of energy that I propose takes place in the thylakoid disks of retinal receptors.
It is interesting that the counterpart of Kuhn’s octadecane space-filling molecule in nature (probably in the membrane forming the thylakoid disk) is the cholesterol molecule – cholesterol having been shown to be intercalated into biological membrane in exactly the same position as Kuhn’s artificial octadecane!
I have over the past ten years generated interest in the unique character of the Jelley/Scheibe aggregate result in a number of laboratories around the world (Christiansen’s group in Denmark, Blinowska’s in Poland, Giuseppe Vitiello at the University of Salerno, etc.). A group of us published one paper (in Physics Letters) reporting Christiansen’s computer simulation result indicating that a 2-D “ring” solitonic vibration might exist.
I believe that this experimentally verified spatial distance, associated with the interaction of a single photon, provides justification for the concept of dimensional optical antenna entities.
I gratefully acknowledge the many wonderful technical discussions in the initial stages of the development of this concept with my friend of many years – Felix Guttman of Macquarie University in Sydney, Australia. Also, in retrospect I believe, in re-reading his paper (referenced elsewhere on this webpage), that the very prescient ideas of Brian Hagan had some role in leading me along this line of thought. I had the pleasure of meeting Brian in Sydney and he is an individual with a truly extraordinary scientific thought process.