Does PHP support webcam integration?


Webcam on the Linux computer

You have to differentiate between two types of cameras: the extremely inexpensive and simple USB webcams that are connected directly to the PC, and the not-so-cheap network cameras that are connected to the network via twisted-pair cable and contain their own small web server. We start with the inexpensive USB cameras; the network cameras follow below. If you are interested in the camera of the Raspberry Pi, you will find specific information in the section on the Raspberry Pi.

USB webcams

The same applies to USB cameras as to TV cards: the corresponding drivers must be loaded before the devices can be used. In Linux, these are the Video4Linux drivers (V4L2 for short). Different kernel modules are required for different camera models, so find out on the web before buying whether the camera in question is supported. A few general remarks: in addition to the USB modules (usbcore, usb-uhci, usb-ohci), which are almost always loaded, you usually also need the input module (input) and Video for Linux 2 (videodev). Sometimes a camera-specific kernel module is required as well. The device entries for accessing the camera are usually generated automatically by the modules; typically the device is /dev/video0. In the early days there was a lot of trouble with webcams and their drivers, but there are now fewer and fewer cameras that require additional drivers. Most of the time, simply plugging the camera in is enough to have it ready for use.

The only thing missing now is the appropriate software to be able to use the devices. If you have Debian, Ubuntu or Mint and a webcam, you can easily get snapshots or streams from the camera. This also applies to the Raspberry Pi, provided the camera is supported. In general, you should proceed as follows.

Test the USB webcam

Choose a webcam that runs with as few special drivers as possible. The best way to identify the drivers used by a webcam is to connect it to a PC or laptop running Linux. After plugging in the camera, some useful information can be obtained with dmesg or lsmod. An excerpt of a typical dmesg output looks like this:

# dmesg
...
[12.621009] Linux video capture interface: v2.00
[12.685253] uvcvideo: Found UVC 1.00 device USB 2.0 Camera (0c45:6340)
[12.712522] input: USB 2.0 Camera as /devices/pci0000:00/0000:00:1d.7/usb1/1-1/1-1:1.0/input/input8
[12.713513] uvcvideo: Found UVC 1.00 device HD 720P webcam (0603:8f01)
[12.718307] input: HD 720P Webcam as /devices/pci0000:00/0000:00:1d.7/usb1/1-4/1-4:1.0/input/input9
[12.718700] usbcore: registered new interface driver uvcvideo
[12.718709] USB Video Class driver (1.1.1)
...

In our case there are two different cameras, but the driver for both is uvcvideo. If the dmesg command does not show such a message, you can use lsmod to view the list of all loaded modules and find the driver belonging to the webcam: unplug the webcam, call lsmod and note the output; then plug in the webcam and call lsmod again. The newly appearing module is the webcam driver.
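The before/after comparison can also be scripted. The following sketch uses simulated module lists; on a real system you would replace the printf lines with `lsmod | sort`:

```shell
# simulated module list before plugging in (real system: lsmod | sort > before.txt)
printf 'usbcore\nvideodev\n' | sort > before.txt
# simulated list after plugging in (real system: lsmod | sort > after.txt)
printf 'usbcore\nuvcvideo\nvideodev\n' | sort > after.txt
# lines that appear only in after.txt are the newly loaded driver modules
comm -13 before.txt after.txt
```

Here the comparison prints `uvcvideo`, i.e. the driver module loaded for the camera.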

Attention: if you have a laptop with a built-in webcam, two webcam modules may be loaded (unless the internal and the external webcam use the same chipset and thus the same driver).

If you see something similar to the output above, you can be satisfied (UVC in particular now seems to work reliably, and no special driver is required). The camera is recognized and bound to an input device. This applies to a whole range of models by now. Usually the device /dev/video0 (or /dev/video1, ...) is created, via which the camera is addressed. To be on the safe side (or before making a purchase), google the name of the webcam and check whether the technical data mentions UVC.

Take a photo with the USB webcam

If the webcam is okay, you can already try it out: install two image viewers, one for X and one for the console, and try to capture an image. With Debian Linux and its descendants this works without any special effort:

apt-get install fswebcam gpicview fbi
fswebcam -v -r "640x480" test.jpg
gpicview test.jpg

The example creates an image called test.jpg in the current directory, which is then displayed (every camera should be able to handle 640x480 pixels). If the computer has no X server running and no graphical interface, the image can be viewed with the "Frame Buffer Imager":

fbi test.jpg

The latter is of course only possible on a real console on the computer, not via an SSH connection or the like.

If the device supports V4L/V4L2 and UVC, there are a number of other tools for further processing the images. My current project only transmits single images at certain times, so no streaming software is required.

The tool used here, fswebcam, is a small but sophisticated webcam program. It can read a number of frames from most V4L2-compatible devices, average them to reduce noise, and draw a caption onto the result. The GD graphics library is used for this; it also compresses the image as PNG or JPEG. GD is installed by default in most distributions; otherwise you have to install it separately. Homepage: http://www.firestorm.cx/fswebcam/.

A number of settings can be made for the snapshots. With a camera for outdoor shots, proprietary camera parameters can also be passed through using the -s option (see below). For indoor shots, the number of parameters is more manageable; a call could look like this:

fswebcam -v -S 5 -F 2 -r "1280x720" -d /dev/video0 --no-banner snap.jpg

The options of the command are relatively simple:

-v   "verbose mode": more detailed output
-F 2 capture 2 frames (which are averaged)
-S 5 skip the first 5 frames and only then start saving (the cameras often have to "settle" first)
-r   resolution of the image, in our case HD with 1280x720 pixels
-d   device to be used, in our case /dev/video0

Finally, the name of the captured image is specified.

Regarding the -S option: the first frames are sometimes not usable because a camera setting made shortly before has not yet taken effect. It therefore makes sense to skip a few frames. With -D you can also insert a delay of n seconds before capturing.

Note: if the resolution is not set manually using the -r option, the program uses the camera's default resolution, which is usually not the optimal one. Typically the default resolution is 352 x 288 pixels, but you get much better pictures if you specify, for example, 800 x 600 pixels. Often, however, the maximum possible resolution (e.g. 1200 x 1600) is also not optimal, and it is better to work with a slightly lower resolution.

With fswebcam and a few shell commands you can do almost everything yourself, you just have to configure it; this includes, for example, capturing an image at regular intervals. Personally, however, I tend to leave such tasks to the cron daemon. Also interesting is the option to automatically label the image at the lower or upper edge, with background and font color, transparency, font size, etc. freely selectable. By default, the label is placed at the bottom of the image. The following options are possible:

--title "xxx"      title (1st line, left)
--subtitle "xxx"   subtitle (2nd line, left)
--timestamp "xxx"  date/time (1st line, right), e.g. "%d.%m.%Y %H:%M"
--info "xxx"       additional info (2nd line, right)

The timestamp option can handle all forms of time formatting supported by the strftime() function. If no font is found (error message from fswebcam), the path to a font must be specified, e.g.:

--font "/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono-Bold.ttf:12"

The --flip option allows (h)orizontal or (v)ertical mirroring of the image (e.g. when the camera is mounted on the ceiling), for example --flip h or --flip v. --rotate rotates the image by the specified angle; only the values 90, 180 and 270 are possible.

--crop allows extracting image details. The dimensions are given in the form WxH (width times height), optionally followed by an offset. For example, --crop 320x240 extracts a central section 320x240 pixels in size, while --crop 100x100,0x0 produces a 100 x 100 pixel section from the upper left corner.

The image can be scaled to a given size using --scale. The dimensions are again given in the form WxH (width times height), e.g. --scale 640x360.

Further options are listed in the extensive manual page of the program.

So that the command line does not become too confusing, it can be split across several lines, as the following example shows (attention: the line break must come directly after the '\'; no further character!):

fswebcam -D 3 -S 10 -F 10 -r 1280x720 -d /dev/video0 \
  --font "/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono-Bold.ttf:12" \
  --title "Camera 1" --subtitle "http://www.netzmafia.de" \
  --timestamp "%d.%m.%Y %H:%M" --info "additional info" $FILE

For title, subtitle, info etc., variable values can also be used, for example by passing them in shell variables. If there are no variable parameters that change with each call, the entire configuration can also be stored in a file, which shortens the call on the command line accordingly.
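For the periodic capture mentioned above, a cron entry could look like this (a hypothetical example; device, options and target path must be adapted to your setup):

```
# crontab entry: take one snapshot every 10 minutes
*/10 * * * *  /usr/bin/fswebcam -q -r 1280x720 -d /dev/video0 --no-banner /tmp/images/snap.jpg
```

The entry is added with crontab -e; the -q option suppresses output so cron does not mail status messages for every capture.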

Determine the controls of the USB webcam

fswebcam also allows querying the camera for adjustable camera-specific controls such as contrast or brightness. All proprietary values that the driver can process are listed with the --list-controls parameter. The following example listing shows the query of two cameras, a "Logilink UA0155" and a "HAMA Digital Eye II".

fswebcam -d /dev/video0 --list-controls
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Available Controls        Current Value   Range
------------------        -------------   -----
Brightness                10 (57%)        -64 - 64
Contrast                  21 (32%)        0 - 64
Saturation                64 (49%)        1 - 128
Hue                       0 (50%)         -40 - 40
Gamma                     72 (0%)         72 - 500
Gain                      0 (0%)          0 - 100
Power Line Frequency      50 Hz           Disabled | 50 Hz | 60 Hz
Sharpness                 4               0 - 6
Backlight Compensation    1               0 - 2
Adjusting resolution from 384x288 to 352x288.
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
There are unsaved changes to the image.

fswebcam -d /dev/video1 --list-controls
--- Opening /dev/video1...
Trying source module v4l2...
/dev/video1 opened.
No input was specified, using the first.
Available Controls               Current Value   Range
------------------               -------------   -----
Brightness                       -128 (0%)       -128 - 127
Contrast                         124 (32%)       60 - 255
Saturation                       70 (35%)        0 - 200
Hue                              5 (52%)         -128 - 127
White Balance Temperature, Auto  True            True | False
Gamma                            9               0 - 10
Power Line Frequency             50 Hz           Disabled | 50 Hz | 60 Hz
White Balance Temperature        4500 (45%)      2800 - 6500
Sharpness                        150 (75%)       0 - 200
Backlight Compensation           6               0 - 10
Adjusting resolution from 384x288 to 432x240.
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
There are unsaved changes to the image.

The comparison already shows that the numerical ranges differ considerably (e.g. the brightness ranges from -64 to +64 for the "UA0155", but from -128 to +127 for the "Digital Eye II").

To avoid the risk of passing invalid values, percentages are always allowed. If, for example, the brightness is specified as 20%, the program calculates the corresponding numerical value (rounded to a whole number); 20% would then be -51 (UA0155) or -77 (Digital Eye II).
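The conversion can be sketched as a linear mapping of the percentage onto the control's range. This is an assumption about how the value is computed; the exact rounding used by fswebcam may differ:

```shell
# assumed mapping: value = min + percent/100 * (max - min), rounded to an integer
percent_to_value() {
  awk -v p="$1" -v lo="$2" -v hi="$3" \
    'BEGIN { v = lo + p/100.0 * (hi - lo); printf "%d\n", (v < 0 ? v - 0.5 : v + 0.5) }'
}
percent_to_value 20 -128 127   # 20 % in the range -128..127 (Digital Eye II) -> -77
```

For the Digital Eye II this reproduces the -77 mentioned above.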

The setting is made via the -s parameter, always in the form "name=value", for example:

... \
  -s "Backlight Compensation=75%" \
  -s "Brightness=35%" \
  -s "Contrast=45%" \
...

Two more cameras were tested: the Logitech C270 USB webcam and the Microsoft LifeCam HD-3000. Both cameras ran immediately under Linux, although the technical data (as usual) only listed Windows as the operating system. Both cameras offer HD resolution (1280 x 720 pixels) and an integrated microphone with noise reduction. Via software interpolation (in the Windows software), the Logitech C270 also allows photos corresponding to a 3-megapixel camera. When capturing images there was a small glitch in the delivered data, but a correct JPG image could still be saved. In weak lighting, however, the images look a bit dull.


The Logitech C270 USB Webcam (left) and the Microsoft LifeCam HD-3000 (right).

The Microsoft LifeCam HD-3000 offered a surprise. Apart from the fact that it worked under Linux without complaint, it also delivered the best images of all four USB cameras. It can also record video in 16:9 widescreen format. Furthermore, manual brightness adjustment can largely be omitted, because the so-called TrueColor technology automatically ensures strong colors in almost all lighting conditions. The software interpolation of the HD-3000 even corresponds to a 4-megapixel camera. Overall a pleasant surprise.

Network webcams

So far only USB webcams have been discussed. The methods described above can also be used, almost unchanged, with cameras that are reached via network cable or WiFi. A network camera likewise delivers individual images or video streams. The advantage of such cameras is, on the one hand, that the camera can be located anywhere in the network and is not tied to the vicinity of a computer. On the other hand, finished JPG images or MPEG video streams are delivered via a web interface, so there are no driver problems whatsoever. The disadvantage is the much higher price (about three to four times that of a good USB camera). The energy requirement is also considerably higher than with USB cameras; after all, the camera usually contains a complete computer. I used four cameras for my tests: an older "Axis 207", which was long a standard in many areas but is no longer up-to-date in terms of image resolution (it just happened to be available), the "Trendnet TV-IP100", which was offered quite inexpensively years ago, the also quite inexpensive Logilink WC0040, and the "Grandtec Megapixel IP Camera", which delivers a high image resolution. Information about the cameras can be found at:

www.axis.com/products/
www.trendnet.com/products/
www.logilink.eu/showproduct/WC0040.htm
www.grand.com.tw/sur_mega_pixel.php

The first three cameras deliver a color image of at most 640 x 480 pixels (so they are no longer the latest models); the Grandtec manages 1280 x 1024 pixels. At this point, however, it is not so much the resolution or other camera properties that matter: the four models are simply intended to illustrate how such cameras are accessed. The methods can often be transferred to other models.

Via a web interface or a supplied Windows tool, not only can the video stream or individual images be called up, but all settings can also be made. When starting up, the camera must first be assigned an IP address; the other basic settings can be made at the same time. The cameras themselves should never be directly accessible "from outside"; instead, their images should be passed through a local web server. When the images are fetched, the camera is of course addressed via the network. The programs curl or wget can be used on the command line to download the images.

The Axis camera has a fixed focus lens, with the others you have to adjust the focus manually.

With the Trendnet TV-IP100, some research was necessary, since by default the delivery of images is only offered via email or FTP. It then turned out to be quite easy: call up View → Page Source in the browser and search the HTML jumble for an IMG tag. It is nicely packaged in a table, but with a somewhat strange file specification.

The appended tail of that file specification is actually unnecessary; it is a trick of the camera software to outwit the browser. Since the page always looks the same, a smart browser would not fetch the current camera image on "Reload" but serve the copy from its cache. Appending a pseudo form entry with constantly changing values ensures that the current image is always fetched. With the Trendnet TV-IP100 you therefore get an image and nothing else when you call up:

curl -o snap.jpg http://192.168.2.100/IMAGE.JPG

With the Logilink WC0040, retrieving a still image is just as easy, in the example with user and password:

curl -o snap.jpg http://user:password@<camera-address>/snapshot.jpg

(user, password and the camera address are placeholders for your own settings.) Accessing an MPEG stream is just as easy here.

The Axis camera does something similar to the Trendnet: it delivers the image via a CGI script, which also ensures that the browser always fetches a fresh image. With the Axis 207, parameters such as brightness, image size, etc. can also be specified in the URL. In the following example the image size is specified:

curl -o snap.jpg 'http://192.168.2.101/axis-cgi/jpg/image.cgi?resolution=640x480'

It is not in the manual, but if you only want an image without any settings, the following is sufficient:

curl -o snap.jpg http://192.168.2.101/axis-cgi/jpg/image.jpg

(For video, the camera's video CGI is used instead of the image CGI.)

The "Grandtec Megapixel IP Camera", on the other hand, put up some resistance, although the advertisement explicitly listed Linux compatibility. Its Windows software (there was nothing for Linux) relies heavily on ActiveX and the like. However, some research gave rise to the suspicion that an ARM Linux was working "under the hood" of the camera, with a familiar web interface.

In any case, the mini web server running on the camera also had a directory called /cgi-bin with some promising commands. Unfortunately, it turned out that the command still.cgi did not deliver a still image at all, but an MJPEG stream; at least without ActiveX or the like, though with a query for username (always "root") and password. That was something to build on. With curl (or wget) the video can be downloaded and saved; curl was used because, unlike wget, it allows the download time to be limited. So you fish out a short video sequence, from which a frame is extracted using avconv (formerly called ffmpeg), and you have a still image:

# get the camera image (stream); -m 3 limits the download to 3 seconds
curl -o video.jpg -m 3 http://root:password@<camera-address>/cgi-bin/still.cgi
# extract an image with avconv (formerly ffmpeg)
avconv -i video.jpg -s 1280x1024 snap.jpg
rm video.jpg

You see, there is always a way to tame a stubborn network camera.

Post processing

After capturing the image into the file snap.jpg, some cosmetic work follows, which can of course be done in the same way for all cameras. With the network cameras we also lack the features for inserting captions into the image. The ImageMagick package helps, especially the convert program it contains. ImageMagick is a free software package for creating and editing pixel graphics. It can read, modify and write almost all common image formats, and images can also be generated dynamically. There are even suitable modules for Perl programmers. ImageMagick is included in many Linux distributions, otherwise it is available at www.imagemagick.org.

The date and time are used for a unique file name for each image. Then the picture is also labeled with the date and time - black or white depending on the brightness of the picture.

# timestamp for the picture
TS=$(date "+%d.%m.%Y %H:%M")
# timestamp for the file name
FILE=$(date "+%d%m%Y%H%M")
# insert black or white text depending on the brightness
# (function Brightness see below; comparison is "greater than"
# because the function returns the inverted brightness)
col=$(Brightness snap.jpg)
if [ $col -gt 50 ]
then
  convert -fill white -gravity SouthEast -pointsize 20 \
    -draw "text 15,15 '$TS'" snap.jpg ${FILE}.jpg
else
  convert -fill black -gravity SouthEast -pointsize 20 \
    -draw "text 15,15 '$TS'" snap.jpg ${FILE}.jpg
fi
rm snap.jpg

Depending on the resolution, the size of an image is around 30 to 150 KB. A day has 86,400 seconds, a month (30 days) 2,592,000 seconds. If you save a picture every 10 seconds in order to later turn the images into a time-lapse movie, that amounts to around 7.8 to 39 GB per month. So keep an eye on the disk space and delete old files in good time.
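The storage estimate can be verified quickly (image sizes of 30 and 150 KB assumed, decimal units):

```shell
# number of images: one every 10 seconds for 30 days
images=$(( 86400 / 10 * 30 ))
echo "$images"            # -> 259200 images per month
# resulting data volume in GB at 30 KB and at 150 KB per image
awk -v n="$images" 'BEGIN { printf "%.1f %.1f\n", n*30/1e6, n*150/1e6 }'
```

This prints 7.8 and 38.9, i.e. the roughly 7.8 to 39 GB per month mentioned above.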

If you like tinkering, you can also shrink the image and save it with a higher compression rate (but lower quality) to save space. Or convert it to black and white, or, or ...

Correct the brightness

If the camera is to provide a reasonably usable image under very different lighting conditions, the brightness often has to be adjusted, unless the camera already does this itself. To do this, the picture is taken twice in quick succession: the first image is only used to determine the brightness, the second image is then "shot" using the determined brightness value, e.g. with the USB webcam:

# get camera image, determine brightness
fswebcam -q -D 3 -S 10 -F 10 -r 1280x720 -d $CAMERA \
  -s "Backlight Compensation=75%" \
  --no-banner $TMP/$FILE
HELL=$(Brightness $TMP/$FILE)
# get camera image, adjust brightness
fswebcam -D 3 -S 10 -F 10 -r 1280x720 -d $CAMERA \
  -s "Backlight Compensation=75%" \
  -s "Brightness=${HELL}%" \
  -s "Contrast=45%" \
  ... $TMP/$FILE

The shell function Brightness (see below) supplies the brightness value in percent. It returns 100 - (determined brightness) percent. If the picture is quite dark, the value for the camera parameter Brightness is thus set relatively high, and vice versa.

How do you actually determine the brightness of an image? Mathematically, the brightness of a grayscale image can be defined as the mean of all gray values, and the contrast as the variance of all gray values. The simplest way to determine the brightness (gray value) of a pixel is therefore (R + G + B) / 3, where R, G and B are the values of the three primary colors red, green and blue. Since each color can have a value between 0 and 255, the brightness value has the same range. The overall brightness of the image is calculated from the brightness of all pixels, as the following C program fragment illustrates (t_img is a structure that contains all image information):

int brightness(t_img *image)
{
    int x, y, gray, avg;
    t_color val;
    double sum = 0.0;

    /* iterate over all pixels and sum up the gray values */
    for (x = 0; x < image->xsize; x++) {
        for (y = 0; y < image->ysize; y++) {
            getpixel(image, x, y, &val);
            gray = (val.R + val.G + val.B) / 3;
            sum += gray;
        }
    }
    avg = sum / (image->xsize * image->ysize);
    return avg;
}

However, this form of brightness determination disregards the perceptual properties of our eyes: we perceive the yellow-green color range as much lighter than the other colors. Other color models take this into account. Instead of the linear conversion, you could also convert the RGB color space to YUV. YUV uses two components to represent the information, the luminance Y and the chrominance (color component), which in turn consists of the two sub-components U and V:

y = r * 0.299 + g * 0.587 + b * 0.114
u = (b - y) * 0.493
v = (r - y) * 0.877
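A quick check of the Y formula, e.g. for a pure yellow pixel (255,255,0) versus a pure blue one (0,0,255), shows how strongly the green component is weighted:

```shell
# luminance Y = 0.299*R + 0.587*G + 0.114*B
y() { awk -v r="$1" -v g="$2" -v b="$3" 'BEGIN { printf "%.1f\n", 0.299*r + 0.587*g + 0.114*b }'; }
y 255 255 0   # yellow -> 225.9
y 0 0 255     # blue   -> 29.1
```

Yellow comes out far brighter than blue, in line with the perception argument above, while the simple (R+G+B)/3 average would rate blue and, say, pure red equally.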

Only the Y value is needed to determine the brightness. Fortunately, you can leave the work to the universal tool ImageMagick instead of programming it yourself. The convert program can not only carry out all kinds of image manipulations but also provide various information about an image; the problem is rather fishing the right value out of the mass of data. To do this, the image is first converted to grayscale, so that each pixel has only one brightness value instead of three. Then the mean brightness ("Mean") is extracted. This value lies between 0 and 1 and must be multiplied by 100 to get a percentage. The percentage is then subtracted from 100 (see above); the command line calculator bc takes care of that. Finally, the decimal places are cut off:

Brightness ()
{
  # determine the image brightness of $1 (percent, integer)
  local data=`convert $1 -colorspace gray -verbose info:`
  local mean=`echo "$data" | sed -n '/^.*[Mm]ean:.*[(]\([0-9.]*\).*$/{s//\1/;p;q;}'`
  echo "100-$mean*100" | bc | sed -e 's/\..*$//'
}

Recognize movement

What if I'm not in my office? Why does the coffee always run out? These days it is relatively easy to find out such things: all you need is a webcam and a suitable program. Motion detection methods are used more and more frequently, for example in surveillance technology and video compression.

A very simple method of detecting movement is to compare two consecutive images to see how many pixels have changed. If the number (or, additionally, the magnitude) of the changes exceeds a threshold value, an alarm is triggered, for example. With static subjects and constant lighting, a comparison with a reference image may also be sufficient: if you subtract the current image from the reference image, all pixels that have not changed yield zero (a black pixel). This procedure works very well, but it reaches its limits when the lighting changes and with small changes in the image.
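The difference method can be simulated in the shell on two tiny "images" given as rows of gray values (a toy example, not what any of the tools below actually does internally):

```shell
# two 3x3 images, one row of three gray values per line
printf '10 10 10\n10 200 10\n10 10 12\n' > img1.txt
printf '10 10 10\n10  50 10\n10 10 10\n' > img2.txt
# count pixels whose gray values differ by more than diff = 10
paste img1.txt img2.txt | awk '{
  for (i = 1; i <= 3; i++) {
    d = $(i) - $(i+3)
    if (d > 10 || d < -10) n++
  }
} END { print n }'
```

Only the centre pixel changed by more than 10, so the count printed is 1; an alarm would be triggered once this count exceeds the chosen threshold.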

If the moving object stands out from the background (an object moving in front of the camera with a white wall behind it), even the slowest movement can be detected. If the background varies a lot and the movement is very slow, the motion detection can be fooled. With outdoor shots, effects such as branches moved by the wind or the neighbour's cat come into play. Unlike a burglar, the house cat should not trigger an alarm. In such cases it is advisable to examine only part of the image.

The following program fragment shows the principle of the difference algorithm. The function returns the number of pixels that differ by more than the value diff in the two images (t_img is again a structure that contains all image information):

int changed(t_img *image1, t_img *image2, int diff)
{
    int x, y, gray1, gray2, diffcount;
    t_color val1, val2;

    /* counter for the differences */
    diffcount = 0;
    /* iterate over all pixels (x, y) */
    for (x = 0; x < image1->xsize; x++) {
        for (y = 0; y < image1->ysize; y++) {
            /* read the current pixel of both images */
            getpixel(image1, x, y, &val1);
            getpixel(image2, x, y, &val2);
            /* calculate the gray values of both pixels */
            gray1 = (val1.R + val1.G + val1.B) / 3;
            gray2 = (val2.R + val2.G + val2.B) / 3;
            /* if the difference is greater than diff, increment the counter */
            if (abs(gray1 - gray2) > diff)
                diffcount++;
        }
    }
    return diffcount;
}

Don't forget that the images can be pre- and post-processed with the ImageMagick tools (convert, mogrify). In addition to various filter functions, upper and lower limits for the brightness can be specified; pixels exceeding or falling below them are colored white or black.

A program for motion detection (MD) can do all of this in a much more sophisticated way. With some cameras, MD is even implemented in the camera itself, and a Windows tool is often supplied with the webcam. Of course the same exists for Linux. One of the best-known representatives of this type of program is motion, which is again started from the command line.

The motion program continuously receives images from any number of webcams or network cameras. If a defined number of pixels changes from one image to the next, the program assumes that something is moving in the monitored area. In this case, motion records a video stream or a series of individual images and saves them on a server. It is also possible to mask areas of the image in order to ignore movements within them. The latter is particularly helpful if, for example, there are trees or bushes in the detection area that sway in the wind, or a cat patrolling its territory.

In addition to working as a motion detector, motion is also suitable for saving snapshots at certain intervals or recording video continuously. The tool also offers the option of viewing the currently received images from anywhere with a browser. During installation, a group named motion is also created; all users who use motion must be added to this group (e.g. with usermod -a -G motion <user>) so that the configuration file can be read.

Numerous settings can be made in the configuration file. The most important ones for first attempts are:

  • videodevice: device file of the webcam.
  • width: Width of the images to be recorded in pixels.
  • height: Height of the pictures to be recorded in pixels.
  • framerate: number of images to be recorded per second, or
  • minimum_frame_time: Waiting time (seconds) between two frames.
  • threshold: Number of pixels that have to change for triggering.
  • quality: Quality of the pictures to be taken.
  • ffmpeg_video_codec: file type of the video generated from the images, e.g. flv.
  • target_dir: Storage location of the images and videos.
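Put together, a minimal configuration fragment for first tests might look like this (hypothetical example values; adjust device, size and path to your setup):

```
videodevice /dev/video0
width 640
height 480
framerate 2
threshold 1500
quality 75
target_dir /tmp/motion
```

The fragment uses only the options listed above; everything else can stay at its default for a first try.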
If no data is to be saved at all, the popular /dev/null can be specified as target_dir. After adapting the configuration file, you can test everything by calling motion manually. If everything works as desired, motion is started as a daemon; if necessary, this must be enabled in the configuration file. If something moves in front of the camera, pictures should now be taken and saved. Later the daemon can be included in the start scripts. If too many images are captured without any noticeable movement, you can reduce the sensitivity using the threshold value (default: 1500).

If this is not successful, part of the image can be masked out. To do this, use a graphics program of your choice to create an image the size of the camera image. The image contains only black and white areas; everything that is black is ignored. In the simplest case this would be a plain black frame. The file is saved in the "pgm" format (Portable Gray Map), and another line is entered in the configuration file:

mask_file /pfad/zur/maskendatei.pgm
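Such a mask can even be generated without a graphics program. The following sketch writes a small ASCII PGM ("P2") with a black top and left edge; for real use the dimensions must of course match the camera resolution (e.g. 640x480):

```shell
# write an 8x6 ASCII PGM: black (0) on the top and left edge, white (255) elsewhere
awk 'BEGIN {
  w = 8; h = 6
  print "P2"; print w, h; print 255        # PGM header: magic, dimensions, max value
  for (y = 0; y < h; y++) {
    line = ""
    for (x = 0; x < w; x++)
      line = line ((x == 0 || y == 0) ? "0 " : "255 ")
    print line
  }
}' > maskendatei.pgm
head -2 maskendatei.pgm
```

The first two lines of the generated file are the PGM magic "P2" and the dimensions "8 6".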

Another feature of motion is that you can react to certain events. The configuration option on_event_start specifies a program or script that motion executes as soon as movement is detected. In this way you can have messages sent to you via SMS, Twitter or email. For initial tests you can, for example, append a timestamp to a file:

on_event_start 'date "+%d%m%Y%H%M" >> /home/testuser/event'

Further information can be found on the Motion homepage and in a magazine article:

www.lavrsen.dk/foswiki/bin/view/Motion
www.lavrsen.dk/foswiki/bin/view/Motion/FrequentlyAskedQuestions
Monitor objects with motion via video

As an alternative to "Motion" there is "Zoneminder", which consists of several modules and is operated via a web interface. In addition to Video4Linux for camera support, an Apache web server, MySQL, PHP and Perl are required; the ffmpeg and libjpeg packages are also used for recording still and moving images. The effort compared to Motion is therefore considerable. A script from the Zoneminder forum can be downloaded to ease the installation and configuration, which is somewhat cumbersome because of the many packages and codecs. You can find out more about Zoneminder on its homepage www.zoneminder.com. The installation script can be found at www.zoneminder.com/forums/viewtopic.php?t=16628, and further information for getting started is in an article in LinuxUser 09/2011.

Copy pictures with SSH / SCP

After taking the photo, you could transfer the file to the web server using FTP. This can be done quite easily with the help of the Perl module Net::FTP, which maps almost all FTP commands to corresponding Perl methods. The following program scans a directory for image files and transfers all of them via the FTP protocol:

#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;

# ------------------ Configuration Section ----------------------------
my $scandir = "/tmp/images";        # where the pictures are
my $server  = "ftp.sonstwo-in.de";  # address of the FTP server
my $ftpuser = "camuser";            # username on the FTP server
my $ftppass = "secret";             # password of the FTP user
# ---------------------------------------------------------------------

my (@files, $file);
opendir(DIR, $scandir) or die("Directory unreadable.");
@files = grep { (/\.jpg$/) && -f "$scandir/$_" } readdir(DIR);
closedir(DIR);
exit(0) if ($#files < 0);   # no image files available

# connect to the server and log in
my $ftp = Net::FTP->new($server, Debug => 0, Passive => 1)
  or die("No connection with $server.");
$ftp->login($ftpuser, $ftppass) or die("Error logging in.");
# transfer mode 'binary'
$ftp->binary();
for $file (@files) {
    $file = $scandir . '/' . $file;
    # send file
    $ftp->put($file);
    # delete local file
    unlink($file);
}
# terminate the FTP connection
$ftp->quit();

With FTP, however, all data including username and password are transmitted in plain text, which I don't like. It therefore seems better to me to transfer via SSH/SCP, where everything is nicely encrypted. This is where the Perl module Net::SCP::Expect comes in. It not only forms the interface to SCP, but can also use 'Expect' to automate the user and password dialog. Otherwise the program works like the previous one; in addition, the target directory on the server is specified when sending:

#!/usr/bin/perl
use strict;
use warnings;
use Net::SCP::Expect;

# ------------------ Configuration Section ----------------------------
my $scandir = "/tmp/images";            # where the pictures are
my $server  = 'server.irgendwo-in.de';  # address of the SCP server
my $scpuser = "camuser";                # username on the server
my $scppass = "secret";                 # password of the SCP user
my $destination = "/home/camuser/";     # target directory on the server
# ---------------------------------------------------------------------

my (@files, $file);
opendir(DIR, $scandir) or die("Directory unreadable.");
@files = grep { (/\.jpg$/) && -f "$scandir/$_" } readdir(DIR);
closedir(DIR);
exit(0) if ($#files < 0);   # no image files available

# connect to the server and log in
my $scp = Net::SCP::Expect->new(host => $server, user => $scpuser, password => $scppass)
  or die("No connection with $server.");
for $file (@files) {
    $file = $scandir . '/' . $file;
    # send file
    $scp->scp($file, $destination);
    # delete local file
    unlink($file);
}
# ending the SCP connection is not necessary

When using SCP, you can also log in without specifying a user and password. To do this, a key pair must be created on the camera computer and the public key transferred to the web server, which can be done with the two commands ssh-keygen and ssh-copy-id: first a key pair (public and private key) is created, then the public key is transmitted to the web server.

The private key then takes the place of the normal password. In contrast to a password, however, it exists as a file that must be protected from unauthorized access. For this reason the private key can additionally be protected with a passphrase. In our case, the passphrase must remain empty, otherwise it would be requested every time the connection is established and no automatic file transfer would be possible.
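As a minimal sketch, such a key can also be generated non-interactively: the -N "" option sets an empty passphrase so the transfer can run unattended (key type, size and file name here are just example choices).

```shell
# Sketch: create an RSA key pair with an EMPTY passphrase (-N "") so
# that automated transfers need no interactive input; -f names the key
# file, -q suppresses the banner output.
ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa" -q
# result: $HOME/.ssh/id_rsa (private key) and $HOME/.ssh/id_rsa.pub (public key)
```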

The following shell script generates the keys and then transfers them (the account "[email protected]" does not exist, of course; it only serves as an example):

#!/bin/bash
# Generates public/private keys and copies them to Netzmafia so that a
# login or the setup of an SSH tunnel is possible without entering a password
#
# first the keys are generated
ssh-keygen -t rsa
ssh-keygen -t dsa
# now there are four files in the directory ~/.ssh:
#   id_rsa  id_dsa  id_rsa.pub  id_dsa.pub
#
# now the public keys are copied to Netzmafia and appended there
# to the file ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i ~/.ssh/id_dsa.pub [email protected]

From now on the local user on the cam server can log in to Netzmafia as "[email protected]" without entering a password. The file(s) can then be transferred with the command line program scp (or in Perl with the Net::SCP::Expect module as in the example above).
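Once key-based login works, the Perl transfer loop above can also be done directly in the shell, e.g. from a cron job. A sketch under the assumption that the keys are already in place; host, paths and the overridable copy command are example values:

```shell
# Sketch: push all JPEGs from a local spool directory to the web server
# via scp (key-based login, no password prompt), deleting each local
# copy only after a successful transfer. The third argument lets the
# transfer command be swapped out (e.g. for testing).
push_images() {
    scandir="$1"
    dest="$2"
    copy="${3:-scp -q}"
    for f in "$scandir"/*.jpg; do
        # glob did not match: no images present
        [ -e "$f" ] || continue
        if $copy "$f" "$dest"; then
            rm -f "$f"     # delete only after success
        fi
    done
}
# typical call from cron (example values):
# push_images /tmp/images camuser@server.example.com:/home/camuser/
```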

If it does not work and a password is still requested, the access rights are almost always to blame. The directory .ssh and the file authorized_keys may only be accessible to the user, and the home directory of the respective user may only be writable by that user.
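The required permissions can be set with a few chmod calls on the server side; a sketch (the function name is made up here, and the octal modes shown are the ones sshd insists on):

```shell
# Sketch: tighten the permissions that key-based SSH login requires.
# Run on the server for the home directory of the camera account.
fix_ssh_perms() {
    home="$1"
    chmod go-w "$home"                      # home dir not writable by group/others
    mkdir -p "$home/.ssh"
    chmod 700 "$home/.ssh"                  # .ssh accessible to the user only
    touch "$home/.ssh/authorized_keys"
    chmod 600 "$home/.ssh/authorized_keys"  # key file: owner read/write only
}
# example: fix_ssh_perms /home/camuser
```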

Show pictures on the web

Often you want to share the webcam images with others. It does not always have to be Facebook: presenting them in an appropriate frame on your own website can be more attractive and, above all, you do not give up the rights to the published images. It is also always a good idea to run the camera server and the web server on separate systems: the camera server sits at home behind the DSL router, while the web server is rented from some provider.

For a webcam application it is sufficient to capture the same image at regular intervals and store it in the appropriate directory on the web server. You may have to use the same trick as the Trendnet camera to outsmart the browser, as described below. Alternatively, the current image could be delivered by a CGI program. You can also keep a certain number of images in rotation (the oldest image is always deleted and a new one added), as outlined in the following Perl listing ($MAX contains the number of images to keep; the files are then called 1.jpg, 2.jpg, 3.jpg and so on):

use strict;
use warnings;
use LWP::Simple;

my $MAX = 10;                      # maximum number of images
my $file_base = '/var/www/cam/';   # image directory
# e.g. request a TV-IP100
my $url = 'http://192.168.2.100/IMAGE.JPG';
...
# "rotate" the image files
my $id = $MAX;
while ($id > 1) {
    my $prev = $id - 1;
    my $old  = $file_base . $id . '.jpg';
    my $pred = $file_base . $prev . '.jpg';
    unlink($old);
    rename($pred, $old);
    $id--;
}
# get the new file
my $file = $file_base . '1' . '.jpg';
my $res = getstore($url, $file);
...

If each file has an individual name (e.g. made up of date and time), you end up with many files that should be deleted from time to time depending on their age, but the presentation itself causes no problems; there are enough ready-made tools for picture galleries. If the name stays the same, however, or only a few files are "rotated" as in the example above, problems with the browser suddenly appear: the browser stores the image locally in its cache. In Firefox, for example, you can inspect the cache by entering "about:cache" in the URL line.
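For the variant with individual names, a sketch of how the age-based cleanup could look (directory and retention period are example values; find's -mtime +N matches files last modified more than N days ago):

```shell
# Sketch: delete timestamp-named webcam images that are older than a
# given number of days; run e.g. once a day from cron.
prune_images() {
    imgdir="$1"
    maxdays="$2"
    # -mtime +N matches files last modified more than N*24 hours ago
    find "$imgdir" -name '*.jpg' -type f -mtime "+$maxdays" -delete
}
# a fresh snapshot could be named by date and time, e.g.:
#   fname="$(date +%Y%m%d-%H%M%S).jpg"
# typical cleanup call (example values):
#   prune_images /var/www/cam 7
```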

The general problem is that when the website is reloaded (or refreshed via a meta instruction), the browser detects that an image with the same name is already in its cache and, instead of loading the new image over the network, serves it from the cache. So the sunrise can still be seen at noon. One possible solution would be to deliver the image via a CGI script; the browser is smart enough to know that it has to request it every time. But we can also use the same trick as the Trendnet camera:
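The CGI variant can additionally forbid caching explicitly via HTTP headers before streaming the image bytes. A minimal sketch as a shell CGI handler (the image path is an example; a real script would add error handling):

```shell
# Sketch of a CGI handler that delivers the current webcam image with
# headers forbidding caching, so the browser re-fetches the picture on
# every refresh even though the URL never changes.
serve_image() {
    img="$1"
    printf 'Content-Type: image/jpeg\r\n'
    printf 'Cache-Control: no-cache, no-store, must-revalidate\r\n'
    printf '\r\n'              # blank line ends the CGI header block
    cat "$img"                 # then the raw image data follows
}
# in a real CGI script the last line would simply be:
# serve_image /var/www/cam/bild.jpg
```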

The browser is fooled by manipulating the name of the image file. This can be done with a little JavaScript on the client side. On the one hand, a refresh interval is defined in the body tag so that the page reloads regularly and a new image appears accordingly. On the other hand, a two-line JavaScript function appends a meaningless random parameter to the image URL, which leads the browser to believe it is something new, so it loads the fresh image. It is important that an ID is added to the image element so that JavaScript can find it on the page.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html>
<head>
<title>Das Bild</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<script type="text/javascript">
RefreshImage = function()   // builds a new image link
{
  // where is the image on the page?
  img = document.getElementById("cam");
  // load it under a "new" name
  img.src = "/webcam/bild.jpg?rand=" + Math.random();
}
</script>
</head>
<!-- the refresh happens here every 30 seconds -->
<body onload="window.setInterval(RefreshImage, 30*1000);">
<!-- here is the image that is reloaded automatically -->
<div align="center">
<img id="cam" src="/webcam/bild.jpg" />
</div>
</body>
</html>

The browser only sees the following:

<body onload="window.setInterval(RefreshImage, 30*1000);">
<div align="center">
<img id="cam" src="bild.jpg?rand=0.9121381653654791" />
</div>
</body>

The solution described is very simple and works with almost every browser, as long as JavaScript is switched on (since there are hardly any websites without JavaScript, this is usually the case). Of course there are other methods: reloading the image using JavaScript and Ajax, delivering the image via a script as mentioned above, or completely dynamic websites using PHP and a content management system.

Night blind?

Is the camera night blind, or should it "see" something at night? Then a small infrared spotlight can help. Some cameras, e.g. those from Trendnet, even have small infrared lighting built in; in the Trendnet camera a photocell ensures that the six infrared LEDs are only supplied with power when it is dark. But not all camera sensors are sensitive to infrared light. You can test this very easily by holding your television remote control in front of the camera (with the infrared LED pointing towards the camera) and pressing a button. If the LED can be seen flashing in the camera image, the camera is at least a candidate for night use.

The self-made infrared spotlight presented here consists of 40 inexpensive infrared LEDs and eight resistors. It can easily be built by a beginner on a breadboard; professionals will of course make a circuit board, especially if more than one spotlight is needed. With a 12 V supply voltage the spotlight draws approx. 200 mA, so it should be possible to switch it on and off from the computer (via a relay or a MOSFET switching transistor). The circuit is so simple that only the wiring needs to be shown in the following picture: five LEDs and a 100 ohm resistor are connected in series in each string.

A suitable housing with a transparent cover must then be added for outdoor use.

Weatherproof housing

Cameras that hang outdoors have to withstand temperature fluctuations, rain, storms and snow. The housings should be waterproof, otherwise moisture gets in, fogs up the lenses or corrodes the electronics inside. Appropriately protected models come in metal housings with lenses covered by sealed glass panes. Such network cameras, e.g. from Mobotix, cost around 500 euros, and weatherproof camera housings alone cost between 80 and 300 euros. In return they are heated from the inside (230 V) and can neither freeze nor fog up.

The alternative is an (unheated) low-cost housing for the USB webcam. A 100 W halogen floodlight from which the entire interior has been removed serves as the weatherproof housing; it offers enough space for the camera. For larger cameras, a bigger housing (for a 150 or 500 W lamp) has to be found.

In the case of the "Logilink UA0155", the rear part of the bracket was sawn off, leaving the part with the ball joint intact. The camera can then simply be glued into the housing while retaining the ability to align it via the ball joint. In order to lead the USB plug out of the housing, the small connection box had to be removed and the cable entry enlarged a little with a round file. The opening is then closed again and, if necessary, sealed with silicone.

If necessary, the housing can be equipped with a heating resistor to prevent the glass pane from icing over in winter. A separate power supply should always be used for this: on the one hand, heating is only needed at sub-zero temperatures, and on the other hand, the USB interface simply does not provide enough power for heating.

Camera server without hard drive

Since the camera server doesn't have much to do, it makes sense to use a so-called thin client, a barebone PC or even the Raspberry Pi. So that there are no moving parts (wear and tear), a solid state disk (SSD), an internal flash memory card or an SD card can serve as mass storage. Logging can be switched off if necessary (either completely or by redirecting it to /dev/null).

However, this brings a serious disadvantage: the internal flash memory card, SD card or SSD of the computer is stressed because the images are first cached on disk before they are transferred to the web server; with the brightness adjustment described above, possibly even twice. The constant deleting and rewriting is, despite all the clever algorithms of the controller chips, poison for the service life of solid-state memory. A RAM disk should therefore be set up for all such files, with even faster access as a side effect. There are two ways to do this:

Using the tmpfs file system:
tmpfs is actually not a pure RAM file system: the data ends up in the hard disk swap as soon as memory in RAM becomes scarce. The following shell command turns the directory /root/tmp into a RAM disk:

# mount -t tmpfs none /root/tmp

or, if the size should be limited:

# mount -t tmpfs -o size=20M none /root/tmp

Only as many resources are dynamically allocated as are currently needed, even if a size was specified; if the drive is empty, it takes up no space in RAM. The partition can be mounted by default at system start-up by adding the following line to the file /etc/fstab:

tmpfs /root/tmp tmpfs defaults,size=20M 0 0

Using the ramfs file system:
In contrast to tmpfs, ramfs stores no data in swap, so it is a pure RAM file system. The commands are almost identical to the above:

sudo mount -t ramfs ramfs /root/tmp

This gives you a RAM disk that also adapts dynamically to the required size. To mount the partition automatically at system start, add the following line to the file /etc/fstab:

ramfs /root/tmp ramfs defaults 0 0

Unlike tmpfs, the ramfs file system has no mount options and therefore offers no way to limit its size. The system may then no longer have any free main memory available and can only swap other data out to the hard disk.



Copyright © Munich University of Applied Sciences, FB 04, Prof. Jürgen Plate