A Modern Explanation for Light Interaction with the Retina of the Eye Based on Nanostructural Geometry: Rethinking the Vision Process

Gerald C. Huth, Ph.D. (Physics) — Ojai, CA | Tucson, AZ
e-mail: gerald.huth@gmail.com



To be very clear – this work is based on, and derives from, the physics of the interaction of light with the outer segments of retinal receptors. The outer segments are the site on the retina where the primary interaction with light occurs. I am not concerned with chemical or biological processes in the underlying retina. This is an exercise in pure physics, not biology.

The work introduces the modern nanostructural view that light interacts with matter as the wave of classical physics, via spatial dimensionalities (termed “nano-antennas”). Applied to the retina, this implies that the absorption of light takes place in the spatial dimensionalities between adjacent cone and rod receptors, and not, as the ‘pure quantum’ conjecture holds, through photons interacting within the body of the receptors themselves.

The fundamental finding of my work follows from applying the dimensional nano-antenna concept to the retina. Using Osterberg’s 1935 measurement of the distribution of receptors on the retina, calculations of the spatial density of cone and rod appositions reveal that the retina is a precise and distinctly structured diffractive surface. It is not the random distribution of light receptors that has for so long been assumed.
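As an illustration of what “counting appositions” from density data could look like, here is a small sketch of my own. The random-mixing statistic and the density figures below are hypothetical stand-ins, not the counting procedure or the values used in the original paper: it simply assumes that, at a given eccentricity, the fraction of cone–cone, cone–rod, and rod–rod contacts follows from the local cone and rod fractions.

```python
# Hypothetical sketch (NOT the paper's method): given cone and rod
# densities at one retinal eccentricity, estimate what fraction of
# nearest-neighbour contacts ("appositions") are cone-cone, cone-rod,
# and rod-rod, assuming the two receptor types are randomly mixed.

def apposition_fractions(cones_per_mm2, rods_per_mm2):
    """Return the fraction of each apposition type under random mixing."""
    total = cones_per_mm2 + rods_per_mm2
    p_c = cones_per_mm2 / total   # local cone fraction
    p_r = rods_per_mm2 / total    # local rod fraction
    return {
        "cone-cone": p_c ** 2,
        "cone-rod": 2 * p_c * p_r,
        "rod-rod": p_r ** 2,
    }

# Placeholder densities for illustration only.
fovea = apposition_fractions(150_000, 0)        # all-cone centre
periphery = apposition_fractions(5_000, 100_000)  # rod-dominated
mixed = apposition_fractions(60_000, 60_000)      # equal mixture
```

Under this toy statistic the cone–rod fraction peaks (at one half) where cone and rod densities are equal, which is at least qualitatively consistent with the text’s picture of a distinct mixed-apposition zone between the all-cone fovea and the rod-dominated periphery.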

A statistical distribution of receptors reveals a diffractive surface that has not been seen until now! The retina is actually composed of three narrow-band, geometrically determined optical filters. I propose that this surface fundamentally supports the trichromatic nature of visual response.

All follows from this finding… please read on. With reference to recent comments proposing a tetrachromacy of vision, no one should dispute the Young/Helmholtz finding in the 19th century of the trichromatic nature of color vision. This fact is safe and secure.

A quote from a beautiful book, Nature’s Harmonic Unity: A Treatise on Its Relation to Proportional Form, written by Samuel Colman in 1912, adds another, perhaps more fundamental, dimension to the trichromacy of vision in his section “Correlation of Numbers”. I would propose that this definition precisely defines my explanation of vision. From p. 12: “The number three, being the lowest number to have a fixed start, a termination, and also a middle point, is vastly important as representing the smallest or lowest number of members capable of enclosing or outlining any portion of a plane surface.” As presented in this work, and shown in the following figure, it will become clear that the start and termination of this definition are the long and short wavelength limits of the visual band, and the middle point corresponds to the geometrically determined precise center of the band that Edwin Land so presciently foresaw.

I am continually asked about proof for my theory. As the title states, I do not consider this to be a theory at all but rather an explanation – perhaps, in my humble view, the first rational explanation of light interaction in the retina of the eye based on an interpretation of Osterberg’s classic 1935 measurements of the spatial distribution of rods and cones on the retinal surface. What follows are surprising geometric answers to historic and unexplained properties of vision such as color constancy; the ability of the eye to discern single photons (or, as will be taught in this work, single “quantized interactions”); and how the hues that we have termed “color” are synthesized from three “primary” wavelengths that are, on the retina, not yet “colors”. The hues of “color” are not detected on the retina at all. Importantly, this work is supported by the historically important experimental results of such individuals as the Nobelist George Wald and Edwin Land, and is in agreement, in areas of physics, with the strongly expressed thoughts of the Nobelist Willis Lamb.

  • Image of George Wald
  • Image of Edwin Land
  • Image of Willis Lamb

READ THE ORIGINAL PAPER

The fundamental basis of this work teaches that light interacts with an absorbing mass by means of “optical antennas”* (i.e., spatial nano-dimensionalities) instead of the pure quantum idea that “photons interact with pigment molecules, etc.”. The concept of an antenna implies that the initial interaction with the outer segments of retinal receptors involves the wave nature of light. It follows that the eye fundamentally evolved to interact with the electromagnetic wave nature of light and not with photons. Then, by applying this spatial antenna concept to Osterberg’s measurements of the spatial density of cones and rods on the retinal surface, and by the simple process of counting receptor appositions at each retinal angle, the nature of light interaction with this surface and the trichromacy of the vision process is revealed. The retina is at once seen to be a diffractive surface, implying that it is located at the focal (or Fourier) plane of the optics of the eye. Images at this plane, as opposed to traditional thought, are encoded at each site in two terms, corresponding to both the intensity and the phase of absorbed light.

* I am going to insert here that retinal antennae function in the “near field” of light (i.e., at dimensions smaller than the wavelength of light, in the nanometer range) and in the femtosecond (10⁻¹⁵ sec) time domain. Each antenna consists of two regions: the dimension between receptors, which constitutes a variable dimensionality for the initial absorption of the wave nature of light and selects the specific wavelength absorbed; and, immediately adjacent, a smaller region of fixed dimension that functions to “quantum-confine” an electron, which constitutes the ‘absorbing mass’.
Thus, each antenna (each light-detecting site on the retina) absorbs the electromagnetic wave nature of light and translates this absorbed energy into a quantized electron particle that is subsequently used (electrically) in the vision process. Anyone who has ever studied vision will certainly have seen the following curve, reproduced in seemingly every textbook:

It shows that the vast majority of what have been improperly termed “color sensing” cones are constrained to less than one degree of retinal angle (the fovea). Smaller-diameter rod receptors are continually introduced from this point outward to larger angles, ending with their predominance in the peripheral retina.

Proceeding outward from the hexagonally arrayed cones in the fovea to larger angles, and with the continuing introduction of statistically distributed rods, a point is reached at 7–8 degrees where a complete octagonal order of rods-around-cones is seen. From this point to the peripheral retina, the retinal topography reverts to, again, a hexagonal order of the smaller rods. This is the spatial order of the retina – from a hexagonally ordered all-cone fovea, to a completely octagonally ordered state at 7–8 degrees, and, finally, to an again hexagonally ordered array of rods.

The traditional assumption about the eye has been that it behaves as the imaging “camera” that our technology has come to know so well. Vision texts are replete with the “inverted tree” diagram showing an inverted image that encompasses a wide angle of the retina, to perhaps 50–70 degrees. If the camera analogy were the case, the retina would display a uniform spatial ordering of receptors using, for example, the periodic arrays of RGB triads or stripes found on the silicon imaging chips of digital cameras. Although many attempts have been made to find such order, none (until this work!) has been found. The statistical distribution of receptors described by Osterberg obtains.


I cannot help getting ahead of myself here in expanding on the spatial order of receptors that is explained in this work. The hexagonally ordered all-cone fovea doesn’t detect color at all but rather the single wavelength that geometrically defines the exact long-wavelength limit of the visual band. The complete octagonal order observed at 7–8 degrees – the point where rods are present in sufficient numbers to completely surround each remaining cone – has long been in the literature; see Pirenne, “Vision and the Eye”, Plate 6. This order geometrically defines the exact middle wavelength (~550 nm) of the visible band. This is Edwin Land’s “fulcrum”, the basis for the subsequent synthesis of “color”. Additionally, I propose that this fulcrum provides a fixed wavelength reference that explains, again as Land proposed, the color constancy of vision. The single wavelength detected by the again hexagonally ordered rods beyond 20 degrees forms the exact short-wavelength limit of vision.

NOTE: The idea that it is wavelength that continuously varies across the retina is incorrect. It is the density of the detection sites of the three primary wavelengths noted that varies across the retinal surface.


And… I would be remiss if I did not note the totally incorrect statement, again found in every treatise on vision, that “it is the cones that detect color and the rods that detect black and white”. I remember speaking with an individual in the vision research field on this point. Her answer to my query was, “Oh, nobody in vision research really believes that anymore.” I remember asking what, then, they did believe… and didn’t get an answer. But to this day this is still the vision dogma that is taught to students of the subject.

Back to this work… I believe that I present a simple and logical interpretation of Osterberg’s curve, as summarized in Figure 3 in the original paper (link above) and in my Comment posted on May 31st, where this figure is redrawn.

This interpretation implies that light wavelengths refracted by the lens and structure of the eye are detected on the retina in three circular rings surrounding the central fovea*. In addition to verifying the trichromacy of vision, this pattern demonstrates that the retinal surface is actually a diffractive surface and not, as has for so long been incorrectly assumed, a direct imaging surface (like photographic film).

* The position of these rings – long-wavelength sensitivity in the central fovea, mid-band sensitivity at 7–8 degrees, and short wavelengths generally interacting beyond 20 degrees – corresponds to what has traditionally been thought of as an aberration, termed the “longitudinal chromatic aberration” of the eye. This work would demonstrate that this response is not an aberration at all but rather the fundamental basis for the image processing of the eye. In this context I would note the comment from Millidot shown above.

A diffractive surface means that the retina is located not at the intensity-only-sensitive (camera) plane but rather at the focal (or Fourier) plane of the optics of the eye. To satisfy the Fourier equation for derivation of an image, each light-detecting site must possess the ability to encode not only the intensity of light falling on it but also the phase of the incident light.
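The intensity-plus-phase point can be illustrated with a standard Fourier-optics sketch of my own (the aperture, grid size, and NumPy framing are illustrative choices, not anything from the original paper): under the paraxial, thin-lens approximation, the field at a lens’s back focal plane is proportional to the 2-D Fourier transform of the input field, so every focal-plane sample carries both an amplitude and a phase, and the intensity alone is not enough to recover the image.

```python
import numpy as np

# Illustrative sketch: the field at the back focal plane of a thin lens
# is (up to constants) the 2-D Fourier transform of the input field.
# Each focal-plane sample therefore carries BOTH an intensity and a
# phase -- the two quantities the text says a diffractive retinal
# surface would have to encode.

def focal_plane_field(aperture):
    """Return (intensity, phase) of the Fourier-plane field of `aperture`."""
    field = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(field) ** 2, np.angle(field)

# A simple circular aperture as the input field.
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x ** 2 + y ** 2 < (n // 8) ** 2).astype(float)

intensity, phase = focal_plane_field(aperture)
# `intensity` alone (what a camera-plane detector records) discards
# `phase`; only the pair reconstructs the original aperture exactly.
```

Inverting the transform with both quantities recovers the aperture; inverting with intensity alone does not, which is the distinction the paragraph above draws between a camera plane and a Fourier plane.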

I must interject here that perhaps the most fundamental conclusion that follows from this view of light interaction with the retina is that the structure of the eye represents nothing more than a materialization, or objectification, of the physical laws of the refraction of light using a basic geometric principle! There is no need to introduce “design”, intelligent or otherwise.


A bit of text that I have written further summarizes this view of light interaction with the retina:

“It therefore becomes clear that the eye evolved to detect light as the electromagnetic wave of classical physics, and that detection is not based on the notion that ‘photons rain down on the retina and interact with pigment molecules, etc.’ This has been a mistaken assumption of the ‘pure quantum’ thought that has been characteristic of our modern era. In this context see Willis Lamb’s paper ‘The Anti-photon’. There is no longer any need for such absurd constructions in vision science as a ‘quantum catch’ hypothesis to explain the structure of receptors.”

A bit poetically… I believe that the retina of the eye should be visualized as “a logically spaced array of the wave-to-particle transition sites shown to exist in this work, moving through a sea of electromagnetic energy and geometrically extracting three specific wavelengths from that sea to form what we perceive as the visual image and the sensation of the hues of color…”


And as shown in the “Rosetta Stone” diagram above, the geometric construction employed here defines the exact center – 550 nm – of the visual band and, moreover, precisely where this wavelength interacts: at the point of complete octagonal symmetry at a retinal eccentricity of 7–8 degrees. This then provides a fixed wavelength reference that I propose serves as the basis for the color constancy of vision and the synthesis of the hues that we term “color”, as described by Edwin Land. We all agree on the same color because, in essence, we all have the same size/diameter of receptors. In essence, then, nature translates the wavelength of light into a geometric construction!

One can begin to see that “geometric rules” are predictive in understanding vision. For example, the ratio of the sizes of receptors (if two sizes are present) determines the bandwidth of vision. The absolute size of the receptors themselves (which determines receptor-to-receptor distance) defines the wavelength of light interaction. This predicts, for example, that the receptors of insects, whose vision is known to be in the UV region, will be smaller than the human variety – a prediction that has been experimentally verified. AND THERE ARE MANY MORE GEOMETRIC RULES!

What is the meaning of the term “color” (or, more properly, the “hues of color”)? These terms have historically been used as a “shortcut” that, I believe, has had terrible consequences for the science of vision. I would assert that the three discrete electromagnetic wavelengths detected by the retina are actually “primary wavelengths” and not, as has been historically termed, “primary colors” (i.e., using the shorthand: red, green, and blue). The term “color” should be reserved for the subsequent synthesis, by the eye/brain, of hues derived from the three primary wavelengths, according to the teachings and findings of Edwin Land.
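A toy numerical sketch may make the shape of these “geometric rules” concrete. Everything in it is a hypothetical stand-in of my own: the proportionality constant K and the two receptor diameters are placeholder values (chosen only so that the midpoint lands at the 550 nm band centre the text cites), not measurements from the original paper.

```python
# Toy model of the "geometric rules" (placeholder numbers, NOT values
# from the original work): assume the detected wavelength scales
# linearly with centre-to-centre receptor spacing.

K = 275.0  # hypothetical nm of detected wavelength per micron of spacing

def detected_wavelength_nm(spacing_um):
    """Wavelength assumed proportional to receptor spacing (toy rule)."""
    return K * spacing_um

cone_um = 2.5  # placeholder cone diameter, micrometres
rod_um = 1.5   # placeholder rod diameter, micrometres

# Long-wavelength limit: cone-to-cone spacing in the all-cone fovea.
long_nm = detected_wavelength_nm(cone_um)
# Short-wavelength limit: rod-to-rod spacing in the far periphery.
short_nm = detected_wavelength_nm(rod_um)
# Mid-band "fulcrum": mixed cone-rod apposition at 7-8 degrees.
mid_nm = detected_wavelength_nm((cone_um + rod_um) / 2)
```

Under this toy rule the two receptor sizes alone fix the band limits (and hence the bandwidth), and the mixed cone–rod spacing falls exactly halfway between them, which is the structural role the text assigns to Land’s fulcrum; shrinking both diameters shifts the whole band toward the UV, matching the insect-vision prediction mentioned above.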
Following from this basic understanding that three single light wavelengths are detected at the plane of receptor outer segments, the only variable observed in each band is the intensity at the narrow-wavelength detecting sites within each of the primary regions. It is the spatial density of the geometrically identical light-detecting sites that varies across each region. The result is not the usual spectrometric plot shown in vision texts.

One who has studied the work of Edwin Land on color vision might see that the three narrow-wavelength Fourier images detected at the plane of retinal outer segments – which can be considered “black and white” images – exactly coincide with the requirements of Land’s teaching. Land deduced, from measurements made external to the eye, that the sensation of color was synthesized by the eye determining what he described as a ratio of “lightnesses” (similar to intensities) on either side of a fixed mid-band “fulcrum” that he proposed must exist somewhere in the vision process. We now know where that fulcrum and the black and white images reside!

Over the history of vision research a great many results have been reported that did not “fit the model” and were therefore glossed over and, as I see it, disregarded. These results can now be seen to be consistent with this new explanation. For example, the all-cone fovea, now seen to be solely sensitive to the long-wavelength end of the visible band (there are no “classes of cones”!), was early on described by George Wald as being “blue blind”. This is in absolute agreement with this new explanation. Then there is the historic notion that rod receptors are (somehow?) more sensitive to low-level light. It is shown here that the rod-containing peripheral retina (more precisely, the region beyond ~20 degrees) functions as an integrated, large-area “light meter” that controls pupillary constriction and thus light entrance into the eye.
Thus the rods do affect the low-light sensitivity of the eye, but not at all in the manner that has been historically assumed. And… there is much more – read the entire work.

I would add one additional thought. It is fundamental to this explanation that the retina must be in the living state to effect the spatial order that must be inherent in this light-interaction mechanism. There has long been evidence that the spacing between retinal receptors is even dynamically ordered. How has viewing the many dead, frozen sections of electron micrographs led us astray?

Respectfully submitted,
GCH
Tucson, AZ
4.13.10

