
Raspberry Pi Model B + IR Cam + Honeybees +YouTube Streaming

#1 ·
Greetings Bee Source and collaborating educators.

I am using a variety of cameras, microphones, and other sensors inside a beehive. The primary scope of these projects is to quantitatively monitor honeybees for health, behavior, performance, and production forecasting, somewhat like an ICU in a hospital, but for honeybees. This year I decided to try out the Raspberry Pi Model B and the IR Cam breakout board (from adafruit.com) to make a publicly shared honeybee surveillance system so anyone can watch honeybees make honeycomb. It's fun and I invite you all to watch. I am currently streaming this public project to YouTube and would like to share it with this forum: https://youtu.be/PBvMjU---ys

I am interested in any input, especially regarding the Cam, its default lens, and its position. I rotated the lens counterclockwise 1.5 turns to limit the Cam's focus range to roughly 2 cm to 25 cm. I am not sure, but will find out sooner or later, whether that was a wise adjustment. Space inside the beehive is highly limited; this cam and its IR LED array cannot be larger than 1/2 inch in any dimension to be functional on an annual, recurring basis. Sometimes wax and other debris drops from the cluster, and this will eventually land on the lens. In the past I have used alcohol to clean my glass-lens cams, but I do not know if it is safe to use on the IR Cam lens; it could be plastic and react poorly, so I thought I would inquire with you folks. Has anyone cleaned these lenses, and do you have a recommended protocol? Also, I found it very difficult to rotate the lens, and I doubt I used the proper tool (needle-nose pliers).
This camera might eventually end up entombed in the comb as the cluster gets closer and closer to it. So I will either remove it or leave it, depending on the field of view the bees leave me when they finish drawing comb.

I have the fish-eye lens on order and am curious whether anyone else is already using it, or any of the wide-angle/macro aftermarket lenses. Can anyone share their experience manipulating/exchanging the lens and other related tasks?

Regarding the use of RASPIVID ---OK, some geeky command-line stuff here---

I am trying to tune up my currently running script:
Code:
raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 1500000 -g 50 | ./ffmpeg -re -ar 128000 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/MyYouTubeID

1: I have an issue with audio. I would like to add a stereo broadcast to the stream, but the Model B does not support audio very well, so I am considering the new Pi 3 with either a USB audio interface or a custom ADC/DAC. The current video-only stream may already be taxing the Model B to its limits; the Pi runs hot serving the stream with this script, and I get complaints from the YouTube streaming server, which would prefer a higher stream rate than I am currently providing. I keep this Pi well ventilated and am just planning the next rendition, so to speak; this prototype is disposable, but I would love feedback on what could be better in the next effort. Eventually the audio will be supported by custom acoustic arrays made from the bees' own wax comb (I can explain further as needed), but as far as the wiring goes they are just like standard acoustic transducers/microphones. Is anyone using the Pi 3 to stream both video and at least 2 channels of audio?
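For reference, here is an untested sketch of how the same pipeline might capture real stereo audio on a Pi 3 with a USB sound card, replacing the /dev/zero silence. The ALSA device name hw:1 and the stream key are assumptions, not something I have verified on this hardware:

```shell
# Sketch only: assumes a Pi 3 with a USB sound card enumerated as ALSA
# device hw:1; the YouTube stream key is a placeholder.
raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 1500000 -g 50 | \
  ffmpeg -re \
    -f alsa -ac 2 -i hw:1 \
    -f h264 -i - \
    -vcodec copy \
    -acodec aac -ab 128k -strict experimental \
    -f flv rtmp://a.rtmp.youtube.com/live2/MyYouTubeID
```

The important change is swapping `-f s16le -i /dev/zero` for `-f alsa -i hw:1`, so ffmpeg pulls real samples from the sound card instead of silence.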

This beehive is also monitored with a variety of climate sensors and motion sensors, accelerometers, and a unique climate stable weight change system. The monitoring on this hive is supported by an additional Arduino Yun and another Dallas 1-wire management system.

I'm using the free YouTube Creator Studio BETA to make this production.


regards,
Stephen


Stephen Engel
Program Director
StephensApiary.com
Hivelogger.com
 
#3 ·
That is so cool! I am pea green with envy because I want a live stream like that from all my hives, but I have neither the technical chops, the bandwidth, nor the equipment (my cell phone is a 15-year-old folding model).

Have you thought of having an easily removable protective glass cover that slides over the lens, to protect it from debris and to be cleanable? I frequently use grain alcohol to clean things; I find it works quite well on propolis.

Do you think it would be possible to mark the queen with some type of IR-reflective paint so she'd show up if she ever got out on the edge of the comb? Some people use stick-on circles to mark queens - do they make RFID tags small enough to mark a single insect?

I just love your livestream - I've already added it to my toolbar. I would love to read updates about how your project is going. Thanks for posting about it.

Enj.
 
#4 ·
I do all my ffmpeg encoding with mp4 and audio in mp3. Not sure if the Pi would support that or not. Here's a command line I use...
-c:v libx264 -preset medium -threads 0 -vf kerndeint=thresh=10:map=0:eek:rder=0:sharp=1:twoway=0 -vf hqdn3d -s 1280x720 -sws_flags lanczos -aspect 16:9 -crf 18 -maxrate 2200k -bufsize 1536k -c:a libmp3lame -q:a 4
Not sure why it shows smilies?
 
#5 · (Edited)
That code snippet above shows smilies because certain character combinations are translated (by default) into smilies. The purple "Embarrassment" smiley is 'colon' + 'o' and the blue "Confused" smiley is 'colon' + 's' (the first is a lowercase 'o' as in 'ouch', not a zero).

It is possible to disable the smiley function in a given post. Choose the "Advanced" mode, then look for the "Disable smilies in text" checkbox.

Here is that same code again with the smiley function disabled:

-c:v libx264 -preset medium -threads 0 -vf kerndeint=thresh=10:map=0:order=0:sharp=1:twoway=0 -vf hqdn3d -s 1280x720 -sws_flags lanczos -aspect 16:9 -crf 18 -maxrate 2200k -bufsize 1536k -c:a libmp3lame -q:a 4
However, since I just copied/pasted that from the previous post, some characters may already have been lost along the way. I suggest that Steve repost the code snippet with smilies turned off to be sure the code is correctly displayed.

UPDATE: perhaps a better alternative - leave the smiley checkbox alone:
The "Advanced" mode also offers a "Code" function that should prevent the smilies from being displayed within the code box. Here is an example of that:
Code:
-c:v libx264 -preset medium -threads 0 -vf kerndeint=thresh=10:map=0:order=0:sharp=1:twoway=0 -vf hqdn3d -s 1280x720 -sws_flags lanczos -aspect 16:9 -crf 18 -maxrate 2200k -bufsize 1536k -c:a libmp3lame -q:a 4
To use the "Code" function, look for the icon that looks like a "#" on the Advanced menu.

 
#6 ·
Greetings Enj,
Thank you for your comments.

Yes, small RFID tags exist.
Yes, it is possible to mark the queen with an IR reflector.
Yes, I have tried glass slides and isolating the lens; the only problems are the additional space requirements in a hive (as you know, a huge challenge) and the reduced flexibility in moving the cam. If it's not space, it's the propolis, so one must find a happy medium.

This is a very easy and cheap project - Pi + Cam + IR LEDs < $100 - I got my Arduino stuff from Adafruit.com, and if you want to do something similar, I can probably help out a little.

regards,
Stephen

PS: ditch the clamshell and get a smartphone. What are you seeking, sympathy?
 
#7 ·
PM me and I'll put you in touch with a friend of mine who was designing hive sensor boards for me.

We have a hive scale board designed that uses an I2C interface, and should work well with the Pi family. I wanted to build Apidictor functions into the system so he added an audio channel, which has a bias voltage circuit to drive condenser mic modules. The ADC has sufficient processing capability in it that it could supposedly run FFT analysis standalone.

I've gotten distracted on other projects, so this system is just sitting there looking for a customer. These are not expensive to knock out, maybe $20-ish.

FYI, this fellow hates Arduinos and Raspberry-Pi platforms, and likes to program in Forth, so just stick to the peripheral boards when talking to him. I'm planning to try the Raspberry Pi route myself, as it has more in common with the embedded systems I've worked on professionally.

On the cameras, I've got limited experience with this sort at close range. I expect two problems. First, unless purpose-built for it, I expect a problem with close focus. This may be correctable with an add-on lens, optically similar to a "close-up filter" for DSLR lenses. Second, I expect the bees will propolize it, so you'll want some sort of removable filter or lens anyway, solvent-resistant so it can stand cleaning.

I've used a video inspection snake camera on my hives. When inserted in the entrance of an active hive it shows what bee balling looks like from the inside. They have a rather negative opinion of black snakelike objects intruding into the hive.
 
#9 · (Edited by Moderator)
Phoebee, thanks, but why would I want to talk to anyone who hates Arduino and Raspberry Pi? This IS a Raspberry Pi project.
Also, this is an open public project, so please don't ask me to privately PM you; what I do, I share with everyone, which is why it's posted here. If you're going to comment, make it real and make it something helpful that everyone can benefit from and understand. FFT analysis? If you have only limited experience then read; please don't feel compelled to write. You likely confused more people than you helped.
 
#10 ·
Cybrk,

The reason for the private PM is so I could give you the e-mail of my friend rather than posting it here.

As for his choice of language, stick with what you know. What matters is the I2C interface, which the Pis can handle just fine.

The FFT analysis is to make a modern Apidictor, to analyze the frequency content of the buzz the bees make. The inventor of the Apidictor, back in the '60s, believed you could predict swarms well in advance from a change in the relative amplitude of two frequencies. It's an option once you have a microphone in the hive, and we provided for that on the board. We have a couple of threads here on the subject, and the Beesource deep archives have articles on the Apidictor. Sorry if you don't think that's "real". If you are not interested in a 2-way exchange of ideas, I'm not sure what you expect from the forum. I was just trying to be "open."
 
#11 ·
Thank you for the additional information.
You jumped through many topics without much foundation. Why not invite your friend to this BeeSource forum so he can comment directly? I intend to keep this "open-source" and public. I am familiar with the I2C interface in electronics. I am familiar with the Woods Apidictor in honeybee acoustical analysis. I am also familiar with the fact that bees buzz.
This open-source world of mechatronics and streaming is very new; it's fresh, it's open, and it's for public consumption and collaboration. I think many young entrepreneurs and innovators are out here in the honeybee world. I think I am one of the first using this YouTube Creator Studio BETA tool-set to stream beehive A/V with <$100 in micro-electronics from Arduino/Adafruit. I am here in this public forum to try to build a sustainable, cohesive discussion on it for the future and for the community. I am not trying to be rude, but your comment could be more specific or informative. Are you saying that the I2C interface on a Pi Model B, or any Pi, is the solution to audio streaming with YouTube Creator Studio BETA? If so, then a script or comment sharing your direct experience of how that specific I2C deployment rolled out with the Pi and the YouTube BETA would benefit everyone, rather than being exchanged or hidden in PMs.
 
#12 ·
2200k is what I am using for transcoding scout films into 720p coach's film. I guess I should have been clearer, sorry. I use ffmpeg a lot - almost daily, actually. I haven't seen the build on the Pi...yet.

I'd have to dig into the literature more but I think this would work for what you want or get you close.

-c:v libx264 -preset medium -threads 0 -vf hqdn3d -s 720x480 -crf 18 -maxrate 800k -bufsize 1536k -c:a libmp3lame -q:a 4

You may have to play with the buffer size and maxrate to get what you want. The -crf and -q:a flags were outstanding once I figured out how to use them.
 
#14 ·
Thank you Steve,
I am not familiar with "-crf"; how are you using that? Is it related to setting a hard video-output frame rate of 18 fps?
In regards to "-q:a", I hope to be able to actually use this, or something similar, someday. I assume you are using it to set the quality of the audio you are passing to your codec (maybe you are setting a bit rate?). I do not have an audio interface on my Pi Model B, so I am tricking the YouTube streamer with my "-i /dev/zero" for now. What is your experience with -q:a = 1, 2, 3, 5, 6, n? What did those values do for you, and how did you use these flags? Is this specific to your hardware/codec?

What type of hardware are you using for your coaching streaming service? Do you establish a temporary or continuous relationship with your streamer server? Are you using the YouTube Creator Studio?

My use of the Pi Model B is my key limitation, as the Pi 3+ will come out soon, and from what I understand it will have full A/V support for interfaces at >8 bit, plus much more memory and resources. Even today, I think the Model 3 will support aftermarket A/V boards. Pi is moving fast; it's tough to keep up with all the versions.
 
#15 ·
Ffmpeg is Linux-based but has been ported to many other platforms. I have Linux on all my personal systems, so that's where I use it. The Pi is also Linux-based, so most, if not all, of the commands are the same. I believe Raspbian is just a stripped-down version.

The -crf and mp4 encoding process is explained here: https://trac.ffmpeg.org/wiki/Encode/H.264. I have the newest Pi sitting in a box as I have no time to mess with it right now. First order of business for me will be building a reliable scale which I did in engineering school years ago but it didn't interface with anything.

-q:a is subjective. For what you want, I suspect lower numbers would be just fine.
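To summarize my understanding of the two flags (worth double-checking against the ffmpeg wiki linked above; the file names below are just examples):

```shell
# -crf: x264's constant rate factor, 0 (lossless) to 51 (worst quality);
#       ~23 is the default and ~18 is often described as visually lossless.
#       It is a quality target, not a frame rate.
# -q:a: LAME's VBR quality scale, 0 (best/largest) to 9 (worst/smallest);
#       4 averages roughly 165 kbps.
ffmpeg -i input.mp4 -c:v libx264 -crf 18 -c:a libmp3lame -q:a 4 output.mp4
```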
 
#18 ·
I've got a live BeeCam up, 24/7:

https://www.youtube.com/user/IAmTheWaterbug/live

Actually I've got three channels and three cameras! The cameras have gotten so inexpensive ($55, shipped) that they're cheaper than a Pi + camera + SD card + power supply, etc., and they all have built-in POE support, so I can power them all from a POE switch.

They can't push to YouTube natively, so I have a Pi as an "ffmpeg relay station", with 3 services running ffmpeg with "-vcodec copy", and each instance takes < 10% of CPU.
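One relay instance along those lines might look like this untested sketch (the RTSP URL and stream key are placeholders, since actual camera URLs vary by model):

```shell
# Pull from an IP camera over RTSP and push to YouTube without transcoding;
# with -vcodec/-acodec copy the Pi only relays packets. URL and stream key
# below are placeholders.
ffmpeg -rtsp_transport tcp -i rtsp://192.168.1.50:554/stream1 \
  -vcodec copy -acodec copy \
  -f flv rtmp://a.rtmp.youtube.com/live2/MyStreamKey
```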

I haven't figured out how to overlay weather, temp, sensor data, etc., as soft subtitles on a Stream Now! live stream, and I don't have enough CPU to burn them in with ffmpeg. There's allegedly a way to do this, but I don't know if it will work with a live stream.
 
#19 ·
Wow!!!!! Excellent job with your three channels. I am very impressed and may try out a similar config; thank you for sharing. I do have a suggestion: stream real hive audio from each channel in each hive. I think you will enjoy this audio, and you can set up notifications on both motion and audio to give you a heads-up on your swarm hive.

I am curious: I have had some heat problems with my old Pi pushing content to YouTube, and your config with the Pi 3 looks much less CPU-intensive. Do you have any heat issues, and did you install a heat sink? Is your Pi inside the hive, or just the cam? If you keep the Pi outside, are you using a weatherproof enclosure? I can show you a few scripts you could include on your YouTube page that would deliver local weather data by zip code. See WeatherBug.

Overall this is frigging awesome (ref: Painted Peacock). Great job!!!
 
#20 ·
Thanks! Yes, audio was, and still is, an issue.

My very first attempt at a BeeCam was built with a Pi running this image. Since I was using a pre-packaged image with a write-protected file system, I had problems compiling ffmpeg on it, so I had to set up a 2nd Pi to run ffmpeg as the relay station. I had no idea raspivid could push the video directly to YouTube until I just saw your script.

I tried attaching a USB microphone to the Pi, and I could record audio onto the SD card, but I couldn't figure out how to get it into the stream.

Since YT required an audio stream for live streaming, I added a playlist of royalty-free MP3 tracks from YT to the ffmpeg script on the relay station:

-f concat -safe 0 -i playlist.txt -acodec copy

which provides the classical music soundtrack.
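For anyone replicating this, the concat demuxer expects playlist.txt to contain one `file '...'` line per track. A small sketch that generates such a playlist (the `music` directory and the `make_playlist` helper are my own examples, not part of the original setup):

```shell
# Write a playlist in ffmpeg concat-demuxer format: one "file '...'" line
# per MP3 found in the given directory.
make_playlist() {
  dir="$1"
  out="$2"
  : > "$out"
  for f in "$dir"/*.mp3; do
    [ -e "$f" ] || continue   # directory empty: leave the playlist empty
    printf "file '%s'\n" "$f" >> "$out"
  done
}

make_playlist music playlist.txt
```

The resulting file is what `-f concat -safe 0 -i playlist.txt` consumes.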

I did have a date/time/weather overlay on my Pi cam, but it was horribly done, and I'm sure someone with more bash expertise could have done it all in 2-3 lines!

Then I gave up on the Pi camera and went with an off-the-shelf IP camera. But I kept the 2nd Pi as the ffmpeg relay station, and when I added the 2nd camera I just cloned the relay service. And then this year I added the 3rd camera and cloned it again.

Camera 1 doesn't have a mic. It has a mic input, so I bought a mic, but the mic requires power, which I don't have broken out at the junction box. That's on my list of things to do, but that particular camera takes a lot of work to service, because it's right in front of an active hive and out in the weather, so adding something like a mic requires another hole in the enclosure, gasketing, etc. But for now, that's why I've kept the classical soundtrack for Camera 1.

Camera 2 (swarm cam front view) has a built-in mic, so that's why it has real audio.

Camera 3 (interior view) also lacks a mic, so I just cloned the classical soundtrack for it. I wonder if there's a way for ffmpeg to receive only the audio stream from Cam 2 and mux it in with the video from Cam 3, without transcoding.
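That should be possible with ffmpeg's stream mapping; an untested sketch, with placeholder RTSP URLs and stream key:

```shell
# Take the video stream from Cam 3 (input 0) and the audio stream from
# Cam 2 (input 1), copy both without transcoding, and push to YouTube.
# URLs and stream key are placeholders.
ffmpeg -i rtsp://cam3.local/stream -i rtsp://cam2.local/stream \
  -map 0:v -map 1:a \
  -c copy \
  -f flv rtmp://a.rtmp.youtube.com/live2/MyStreamKey
```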

Cams 2 and 3 are in a chicken coop, so they don't need any weatherproofing, but I don't currently have power in there, so it's all POE:



(old photo, with the Pi cam)

I don't currently have any heat or power issues on the relay Pi, because it's not doing much. With -vcodec copy and -acodec copy it's just relaying packets, and it's not doing any transcoding, so all 3 instances of ffmpeg use <30% of CPU, combined. Back when I had a Pi-based camera I did have some heat issues, but that was because I was running 12 V passive POE over a 140' cable and using a linear voltage regulator at the end. That was burning hot to the touch, but it worked until I pulled it down last month. All that's gone away with POE cameras and POE switches. And POE-powered POE switches.

If I get ambitious next weekend I might pull Cam 1 off the fence and try to add that mic. I'm still using passive POE on that one, and that's a good thing, because I can split the 12 V at the junction box to power the mic.

Sometime down the road I may also want to add a 3rd camera to the swarm trap, to do a comb-building time-lapse. I just figured out how to take a still on command from a Reolink or M7 camera, so now I just need to put a Pi back in the chicken coop to turn on a set of LEDs every 10 minutes.
 
#21 ·
I don't think you can use subtitles with -vcodec copy. I use subtitles on all of my home movies but write them to a .ass file (lol, yes, someone thought that extension was a good idea). This is the script I use to crunch a lot of home movies into subtitled x265 format. Maybe it would help you?

Code:
#!/bin/sh

# Batch-encode movies to subtitled x265; expects a "$file".ass subtitle file
# next to each input. Note: repeated -vf/-filter:v options override each
# other, so the deinterlace, denoise, and subtitle filters are chained here.
for file in "$@"; do
    name=$(echo "$file" | sed -e 's/\....$//')   # input name minus its three-letter extension

    ffmpeg -i "$file" -c:v libx265 -threads 0 -s 1280x720 \
        -vf "kerndeint=thresh=10:map=0:order=0:sharp=1:twoway=0,hqdn3d,subtitles=$file.ass" \
        -sws_flags lanczos -aspect 16:9 -crf 24 -c:a aac -b:a 96k -strict -2 \
        "$file".265.mp4 -y
done
 
#24 ·
I did something very similar with time and weather from openweathermap.org:

Code:
#!/bin/bash

# Refresh /tmp/weather.txt every 10 minutes with "City, Temp F" from
# openweathermap.org (the city id and API key are placeholders).
while true; do
        curl -s 'api.openweathermap.org/data/2.5/weather?id=5388601&APPID=MyOpenWeatherMapAPIKey&units=imperial' \
                | jq -r '"\(.name), \(.main.temp) F"' \
                | tr -d '\n' > /tmp/weather.txt
        sleep 600
done
I don't want to burn in subtitles, because I don't have the CPU power in the Pi to rasterize, overlay, and re-encode. So I want to find a way to do soft subtitles.
 
#25 ·
I dug around some more. I wish I had more time to actually help. I understand ffmpeg better than the other coding.

From, https://forum.videohelp.com/threads/377210-Soft-subbing-video-with-ffmpeg, it looks like you can stream map the .ass file into your video without burning it.

If true, I would set up a cron job to run every 15 or 20 minutes to grab the temp, write it to the .ass file, and then have ffmpeg merge them.

*I may even try this with my home stuff instead of burning it.
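If that thread is right, the merge might look something like this untested sketch (file names are examples; note that an MKV output can carry ASS subtitles as-is, while an MP4 output would need `-c:s mov_text` instead):

```shell
# Mux an .ass subtitle file into a video as a soft-subtitle stream,
# copying video and audio untouched (no re-encode, so no CPU burden).
# File names are examples.
ffmpeg -i input.mp4 -i overlay.ass \
  -map 0 -map 1 \
  -c copy -c:s ass \
  output.mkv
```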
 