But how?
So infrared is just another waveband of light, no different from red, green, blue, yellow, etc.
Red is usually defined as being around 670 nanometers in wavelength, but it can really range anywhere from 660 to 700 nm depending on the sensor system.
Light in cameras like the one presented here is usually detected by an array of tiny detectors, some of which are already sensitive to near-infrared light. Near-infrared light is in the range of roughly 800-1100 nanometers, and some camera sensors are sensitive in this range. To deal with this, an IR-cut filter (often called a 'hot mirror') is placed in front of the sensor to block the infrared light. So often, with a small screwdriver and a bit of patience and steady hands, you can disassemble something like an old Canon digital camera, remove that filter, and you'll get a camera that is sensitive to the near infrared. So that's the basics of sensing, but now let's get into the plant physiology, and why plants are more reflective in the infrared than in the red.
Think of a plant like a solar panel that is sensitive primarily to red light, and secondarily to blue light. These two critical wavebands are used to power photosynthesis. Plants 'appear' green because green light is not used in the specific reactions that capture photons and convert them to electrons. So when you see 'green' in a plant, it's really light that's just not being used by the plant. The plant is strategically absorbing red and blue light (rather than reflecting them). However, plants are in a bit of a pickle. They can't move when their photoreceptors become saturated, and they need to deal with the fact that if their equipment for harvesting light becomes damaged (usually through heat/moisture stress), they'll die. Enter the mesophyll layer. The mesophyll is the inner region of a leaf, composed of two parts: the spongy mesophyll and the palisade mesophyll. This layer has a very specific width that allows for the scattering of infrared light, and that width is maintained by having healthy, water-filled mesophyll cells. When these cells lack turgor (are water stressed), they fail to scatter infrared light, and the leaf heats up, causing cellular damage.
Plants have adapted to scatter infrared light as a survival strategy: they need to deal with excess energy hitting their leaves that they can't make use of. Healthy plants scatter lots of infrared light and appear 'bright' in the infrared because of this. Less healthy plants fail to scatter as much infrared light and will appear darker in the infrared, which means they are absorbing that light (which isn't useful to them) and heating up.
That's why plants appear bright in the infrared.
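If you want to put a rough number on that brightness difference, the usual trick is an index like NDVI, which just compares near-infrared and red reflectance. A minimal Python sketch, with made-up reflectance values purely for illustration:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectance values, just to show the trend:
# a healthy leaf scatters a lot of NIR and absorbs most red light,
# a water-stressed leaf scatters noticeably less NIR.
print(ndvi(0.50, 0.08))   # healthy leaf  -> ~0.72
print(ndvi(0.30, 0.10))   # stressed leaf -> ~0.50
```

Higher values mean more near-infrared being scattered relative to red being absorbed, i.e. happier foliage.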
This was the best comment I've read since I started using Jerboa/Lemmy.
That was a good read. Thank you!
Awesome explanation. Thank you
Good writeup, but you forgot to mention why near-infrared light looks red when seen through a camera instead of being invisible. Why does removing the infrared filter shift infrared light into the visible spectrum, as far as the camera is concerned?
It doesn’t.
If you’re taking a pure “infrared” image it will look like night vision goggles. Since infrared doesn’t have a color that we can see, it just ends up as brightness value data going into the camera’s sensor. It’s just black and white since the sensor only has a brightness value to reproduce.
For this image I used a filter that lets the infrared through while kicking out other wavelengths, making things like foliage brighter and giving it sort of an orange hue. I then used basic color adjustments to make the orange-ish foliage that the camera produces look super bright red. You can alter it to pretty much any color you like. All infrared pictures are ultimately false color, so it's up to you what you want it to look like.
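If anyone wants to try that kind of adjustment themselves, here's one way to do it with numpy and Pillow. This is just a sketch of the idea (boost red, suppress green), not the exact adjustment I used, and the file name and factors are made up:

```python
import numpy as np
from PIL import Image

# Load a full-spectrum / IR-converted photo (hypothetical filename).
img = np.asarray(Image.open("ir_capture.jpg").convert("RGB"), dtype=np.float32)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Push the orange-ish foliage toward saturated red by boosting the red
# channel and pulling green down; tweak the factors to taste.
out = np.stack([
    np.clip(r * 1.25, 0, 255),   # brighten red
    np.clip(g * 0.55, 0, 255),   # suppress green so orange reads as red
    b,                           # leave blue alone
], axis=-1).astype(np.uint8)

Image.fromarray(out).save("ir_false_color.jpg")
```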
Referring to your last sentence: yes I know that, hence my question. Infrared just being brightness data that the sensor picks up makes sense. Thanks for explaining it.
All spot on!
I will mention as an add-on that it’s entirely possible to take IR photos using a standard camera and an IR filter such as a Hoya R72. The downside to this is that normal cameras don’t take in much IR due to filtering so you have to do a long exposure. The image ends up mostly red (since the filter itself is very dark red in visible light) so you then just turn it monochrome. Skies become dark, foliage becomes bright.
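The monochrome conversion can be as simple as keeping just the red channel, since that's where nearly all of the IR signal ends up behind an R72. A rough sketch (file names are made up):

```python
from PIL import Image

# Hypothetical long exposure taken through a Hoya R72 on an unmodified camera.
img = Image.open("r72_long_exposure.jpg").convert("RGB")

# The red channel carries almost all of the IR signal, so using it alone
# as the luminance gives a cleaner monochrome than a plain desaturation.
red, _green, _blue = img.split()
red.save("ir_monochrome.jpg")
```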
It’s all super cool and the best part is nobody can tell you you’re wrong. You just make it look the way you want it to be.
So we could basically use infrared sensors to see the health of our plants?
We do, all the time. The NASA Landsat mission has been detecting in the infrared since its inception, and it currently collects 30 m pixels about every 15 days. Likewise, the Sentinel mission is collecting similar wavebands, and I want to say it has a more frequent revisit? Maybe 10 days?
Also, most modern drones built for things like crop-health work will have a native 4-channel sensor gathering blue, green, red, and near-infrared imagery. Sometimes they'll even throw in multiple near-infrared channels.
Back when I was flying drones for this, we would run either six channels or a five-channel system with an upward-looking sensor for calibration.
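Once you have those four channels, the crop-health math itself is pretty simple. A minimal numpy sketch, assuming the capture is already saved as an (H, W, 4) array in blue/green/red/NIR order (the file name and layout here are just placeholders):

```python
import numpy as np

# Placeholder: a 4-band capture saved as an (H, W, 4) array,
# bands in blue, green, red, NIR order.
bands = np.load("field_bgrn.npy").astype(float)
red, nir = bands[..., 2], bands[..., 3]

# Per-pixel NDVI; healthy canopy pushes toward 1, bare soil sits near 0.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

print("mean NDVI over the field:", float(ndvi.mean()))
```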
I have a writeup on my terrible 1998-ass website. Complete with no ads or monetization, just like 1998 intended.
https://capraobscura.com/infra.html
Long story short: You can rip the infrared filter out of most any digital camera, then use various filters to alter the wavelengths that actually hit the camera’s sensor.
Excellent, this is exactly what I was looking for. I was thinking this combined with printing on transparencies for cyanotypes might be a cool project.