Development of a multispectral system for precision agriculture applications using embedded devices

This document presents advances in the development of prototypes to acquire remote sensing information with Unmanned Aerial Vehicles for applications in precision agriculture. We present the development of two prototypes of multispectral cameras for the blue, green, red, and near infrared bands, using the Tiva® C Series LaunchPad and Raspberry Pi development boards, which presented substantial differences in processing time and image storage. In this document, we describe the design and development of a multispectral information acquisition system to analyze vegetal coverage, initially in an African oil palm plantation. This system couples with an Unmanned Aerial Vehicle, allowing maneuverability in latitude and longitude. This improves data gathering efficiency in small plots, increasing the spatial and temporal resolution compared with a ground-controlled system.


I. Introduction
The current global situation in topics related to the environment and the optimization of the use of natural resources, mainly in cultivated areas, has increased interest in the estimation of bio-geophysical processes on a major scale, regarding their temporal and spatial variability. There is strong climatic variability caused by disturbances and changes in land use, modification of nutrient accumulation patterns, and the growth of the urban population (along with the reduction of the rural population), which produce the urgent need to double agricultural production by 2035, using less water on less land; consequently, this entails higher environmental, economic, and social costs. For this reason, since the last decade of the 20th century, technological tools have begun to be used in the agricultural sector that allow the handling of information databases to manage relevant crop data, within the so-called Precision Agriculture [PA] (Srinivasan, 2006).
PA is an eco-friendly strategy whereby farmers can adjust inputs and cultivation methods at several stages (sowing, application of fertilizers, herbicides, and pesticides, watering, and other farming techniques, as well as harvesting), taking into account the spatial and temporal variability of their fields (Srinivasan, 2001). This methodology seeks to apply agricultural supplies at the right place and time, in order to enhance crop yield, reduce costs, and decrease the use of agrochemicals (Jiménez, Ravelo & Gómez, 2010). A crop management system based on PA seeks answers to the following generic agricultural questions:
• Location: where is it? Useful to determine the kind of crop that exists in a given place, characterized by geographic references such as latitude and longitude.
• Condition: what does it look like? Suitable to know what is happening on the land.
• Trend: what has changed since...? An important question, since it builds on the previous ones; its answer establishes what differences occur in a given area or plot through time.
The tools to acquire and analyze information to answer these questions are tele-detection (or remote sensing) systems, crop variable monitoring systems (i.e., phytomonitoring, either with sensor networks or with in-field information acquisition systems), and harvesting support systems.
Tele-detection, or remote sensing, is the science of gathering and processing information on objects without direct contact with them. This is possible thanks to reflectance, i.e., the interaction of electromagnetic energy between the sensor and the studied object. Advances in hardware and software development have enabled aerial and satellite equipment to acquire information from objects on the surface by measuring coverage reflectance (Jiménez, Jiménez & Fagua, 2013). In consequence, processing concepts and techniques continue to develop, allowing the estimation of spatial and temporal behavior patterns of natural variables that are useful for PA applications (Abril & Butcher, 2001). In-field information processing depends on the number and type of data sources (both in situ and remote), on the level of detail of the analysis, and on the definition of the electromagnetic spectrum bands suitable for the desired application (Sanabria & Archila, 2010).
The physical basis of tele-detection is the reflectance of objects located at a pre-established distance from the sensing element. For many tele-detection applications, it is essential to record a scene with multispectral images, i.e., multiple images obtained in different bands of the electromagnetic spectrum and acquired in several ways, such as with multiple cameras or vidicons (video cameras) fitted with the corresponding filters to select the wavelengths of the desired electromagnetic radiation. The information processed by these systems allows the identification and evaluation of damage to crops, the estimation of crop yield and cultivated surfaces, and the extraction of soil moisture, among other applications (Jiménez, 2010).
When leaves in crops are unhealthy, a coffee-colored pigmentation appears, due to the reduction of leaf reflectance and transmittance in the 400 to 750 nm wavelength range. The vegetation spectral signature (the numeric response in several bands of the electromagnetic spectrum) shows low reflectance in the blue, green, and red bands; by contrast, near infrared reflectance (at about 800 nm) is high.
Spectral information is acquired in the form of images of cultivable plots, and satellites like MODIS, QuickBird, Ikonos, Spot, Aster, and LandSat help in this task. These satellites work essentially in the blue, green, red, and near infrared bands. Other useful systems are the RadarSat system, which operates in the wavelengths corresponding to microwaves (Inoue, 1997), and technologies based on information gathering on the ground, i.e., with spectroradiometers or color sensors (Lee & Searcy, 2000). However, images obtained from these commercial satellite systems usually have either low spatial resolutions, where a pixel represents a broad area (Schale, Keller, & Fischer, 2000), or high spatial resolutions, with a pixel size of less than a meter, but at a prohibitive cost. These factors have made small plots very difficult to sense and increase the difficulty of analysis under cloudy conditions.
In the 21st century, the use of Unmanned Aerial Vehicles (UAV) as remote sensing platforms has increased, due to the benefits of GNSS (Nebiker, Annena, Scherrerb, & Oeschc, 2008) and the ease of acquiring inexpensive digital cameras with features appropriate for this task (Zhou, Ambrosia, Gasiewski, & Bland, 2009). The use of micro-UAV, such as single- or multi-rotor helicopters, in several research projects allows the gathering of useful data for remote sensing applications (Turner, Lucieer, & Wallace, 2013).
The spatial and temporal resolutions of images obtained from UAV are superior to those obtained from satellite and conventional aerial systems; these images can reach a spatial resolution of up to one centimeter per pixel (Hunt, Hively, Fujikawa, Linden, Daughtry, & McCarty, 2010). It is worth noting that by 2013, in Japanese agricultural regions, the number of UAV used for the application of supplies surpassed 2400 units and the number of licensed operators reached 14000. This shows that UAV are powerful tools, capable of working as hyperspectral remote sensing platforms (Uto, Seki, Saito, & Kosugi, 2013).
In this document, we present the design and development of a multispectral information acquisition system with the goal of analyzing vegetal coverage, initially in an African oil palm (Elaeis guineensis) plantation. This in-field information acquisition module couples with a UAV, allowing maneuverability in both latitude and longitude, in order to increase the efficiency of data gathering in small plots, thereby increasing the spatial and temporal resolution compared with a ground-controlled system.

II. Method
The objective of this research project was the design and implementation of a multispectral system embedded in a UAV to obtain images corresponding to the blue (420-500 nm), green (520-600 nm), red (620-750 nm), and near infrared (750-900 nm) wavelengths. Furthermore, we developed an application with a Graphical User Interface (GUI) that allows the collected information to be imported from the digital cameras for every wavelength band and the Normalized Difference Vegetation Index (NDVI) to be determined.
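As a quick reference, the band limits above can be expressed as a small lookup table. The following Python sketch is purely illustrative (the names and helper function are not part of the developed application) and maps a wavelength to its band:

```python
# Spectral bands targeted by the system, as defined in the text
# (wavelength limits in nanometres).
BANDS = {
    "blue": (420, 500),
    "green": (520, 600),
    "red": (620, 750),
    "near_infrared": (750, 900),
}

def band_of(wavelength_nm):
    """Returns the name of the band containing a wavelength, or None."""
    for name, (lo, hi) in BANDS.items():
        if lo <= wavelength_nm <= hi:
            return name
    return None
```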
The design requirements for the adaptation of the device to a UAV were that the system must be compact, light, and controlled by an embedded system, for easy transport and use and low development cost. The need to capture information in the visible and infrared bands implied the simultaneous use of two different image sensors. We chose LinkSprite® JPEG Color Camera TTL Interface sensors because they not only fulfill the requirement of near infrared capture, but also have a serial interface for their control, which makes them easy to use with a microcontroller.

A. Multispectral camera
One of the LinkSprite® cameras was physically modified so that it could not capture infrared light. This modification consisted of adding an infrared light blocking filter over the camera lens (see Figure 1); the filter blocks light with wavelengths above 700 nm, producing a digital camera sensitive only to visible light.
After this infrared filter was set, we carried out some tests to verify its correct operation. Figure 2 shows the differences in images taken with and without the filter. It is easy to observe the variation in the image contrasts, i.e., the capture of images is correct in the visible light spectrum with the modified camera (a), whilst the other camera is still sensitive to infrared light (b).
http://www.icesi.edu.co/revistas/index.php/sistemas_telematica Torres, A., Gómez, A. & Jiménez, A. (2015).

Because of project requirements, the two sensors (cameras) had to be located as closely together as possible, and their supporting structure had to be as light as possible. Therefore, this support structure was made with a RepRap Prusa Mendel 3D printer. This device requires design models contained in ".stl" files; hence, for modeling we used the free version of the SketchUp software, considering the dimensions of the sensors so as to model a structure as light and compact as possible. Figure 3 shows the structure design and Figure 4 displays the printed structure with the sensors mounted on it.

B. First prototype -Control, acquisition, and storage information system -TM4C123GH6PM microcontroller
The system implemented in the first stage used the Tiva® C Series LaunchPad evaluation kit made by Texas Instruments®, a development platform for the TM4C123GH6PM microcontroller. Features of this device include a 32-bit ARM® Cortex®-M4 processing core clocked at 80 MHz, 8 UART peripherals, and 32 KB of RAM, among others. For image information storage, we used the µDrive® serial module, and we built the software application in the Python programming language (Figure 5).
The system operates by sending commands to the main microcontroller to capture and record frames (photographs). This was possible via instructions transmitted through one of the card's serial ports and, optionally, via an external signal emitted by the UAV's radio control or autopilot, allowing remote and programmed operation. The main card accepts these commands and sends the capture order to the cameras; afterwards, it sends the acquire order and the cameras transmit the captured image to the storage module. This module operates with a File Allocation Table (FAT) file system (compatible with Windows®) so the images can subsequently be processed on a computer.
Both the LinkSprite® cameras and the storage module are controlled through their serial ports, using commands that define the actions to execute; for every command sent, the camera generates responses useful to verify whether the command has been recognized. The µDrive® storage module requires similar handling. The microcontroller employed in the system controls these devices, successfully achieving the capture of the crop images (Figure 6).
We present the main commands and a brief description in the following paragraphs:
• Reset: initializes the camera.
• Tomar foto (take picture): the camera captures a frame and stores it in its internal memory, keeping it until another picture is taken or until a reset command appears.
• Leer tamaño (read size): this command requests the file size. The camera returns two binary values corresponding to the Most Significant Byte (MSB) and Least Significant Byte (LSB), respectively. This is used to prepare the hardware or software before receiving the file.
• Detener retención de foto (stop photo hold): this command is issued when a new picture is to be taken; invoking it frees the camera's RAM.
The previous commands are those required by the developed application. There are other commands that correspond to additional camera settings, e.g., compression rate, image size, energy saving mode, serial port speed, etc. We did not modify these parameters because the optimal configuration of the cameras is factory predefined. As the final application requires two images (corresponding to the visible and infrared spectra, respectively), capture must be simultaneous to generate pictures suitable for calculating the NDVI and other vegetation indices. Control of both cameras (with and without the infrared filter) is through a control card (Figure 7) programmed using the free Energia software. The card sends initialization, capture, and frame transmission commands through two of the available UART ports, one for each camera. Since the maximum data packet handled by the µDrive® is 100 bytes and the camera manufacturer recommends that information downloads be in multiples of 8, we chose 96 bytes as a suitable packet size for both devices. The microcontroller, responsible for command generation and data redirection between devices, carries out the image transfer.
We briefly present the command syntax to create a file or add information to an existing one in the µDrive®:
• 0x40 0x74: command type;
• 0x80: data adding mode;
• nombre_archivo: the file name, sent as an ASCII sequence of characters (string) with its extension;
• 0x00: terminator, to indicate the end of "nombre_archivo";
• tamaño_archivo: 4 bytes indicating the number of bytes to be received; in this case, the command "0x00 0x00 0x00 0x60" (96) is sent; and
• datos: the information to store.
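As an illustration, the command frame described by these bullet points can be assembled as follows. Only the byte values come from the text; the function name, the example file name, and the big-endian size encoding are assumptions in this sketch:

```python
# Sketch of the µDrive "create/append file" command frame described
# above. The byte values follow the bullet list; everything else
# (names, encoding details) is an illustrative assumption.

def build_write_command(file_name: str, chunk_size: int = 96) -> bytes:
    frame = bytearray([0x40, 0x74])         # command type
    frame.append(0x80)                       # data adding mode
    frame += file_name.encode("ascii")       # file name with extension
    frame.append(0x00)                       # terminator for the name
    frame += chunk_size.to_bytes(4, "big")   # bytes to receive (0x60 = 96)
    return bytes(frame)

cmd = build_write_command("IMG_IR.JPG")
# The 96-byte image chunk read from the camera would be sent right
# after `cmd` over the serial link.
```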
The image storage procedure consists of sending to the cameras the command to read the size of the generated file, in order to determine the number of packets to transfer. Afterwards, the command to create or add data to a file is sent to the µDrive® device and the first camera receives the read command; the camera then sends 96 bytes, and this data transfer is made directly between the camera and the storage device.
This procedure continues until the image from the first camera is completely transferred and stored; then, data transfer continues with the second camera and its image is stored under a different name. The MicroSD® card in the storage module must be in FAT16 format to ensure correct image storage. The µDrive® module has two light-emitting diodes (LED) that indicate its power state and its information transfer state, respectively. The latter LED blinks while the µDrive® is transferring information; once data transmission ends, it turns off.
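The transfer procedure above amounts to a chunked relay loop. The sketch below is a generic illustration: the three callables stand in for the real serial transactions with the camera and the µDrive® and are assumptions, not the firmware's actual API.

```python
# Generic sketch of the chunked image transfer loop described above.
# `read_file_size`, `read_chunk`, and `write_chunk` stand in for the
# real serial transactions (assumptions for illustration).

CHUNK_SIZE = 96  # packet size chosen for both devices

def transfer_image(read_file_size, read_chunk, write_chunk):
    """Relays one image in CHUNK_SIZE-byte packets; returns bytes moved."""
    size = read_file_size()               # "Leer tamaño" on the camera
    sent = 0
    while sent < size:
        n = min(CHUNK_SIZE, size - sent)  # last packet may be shorter
        write_chunk(read_chunk(sent, n))  # camera -> storage module
        sent += n
    return sent
```

In the real system, `read_chunk` would issue the camera read command at the given offset and `write_chunk` would append the packet to the open µDrive® file.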
Once the images have been captured, the storage device is extracted and connected to a computer for their observation and processing. Due to the low data transfer speed between the cameras and the storage system (90 seconds to acquire and store the images from the two cameras), together with the sequential nature of the procedure, we realized it would be better to replace the microcontroller with one of greater speed and versatility.

C. Second prototype -Control, acquisition, and storage information system -Raspberry Pi model B
For this second prototype, we used the Raspberry Pi model B single-board computer (hereafter, Raspberry), which is based on a Broadcom System on a Chip (SoC) with one ARM®11 core clocked at 700 MHz, 512 MB of RAM, and a Secure Digital® (SD) card as its data storage unit. Other features of this board are one Ethernet port, one HDMI port, and two USB ports. The physical size of the card (85.6 mm x 56.5 mm), its low weight (45 g), and its superior computational power make this board ideal to replace the Tiva® card used in the first prototype; the Raspberry also adds other functionalities to the system. Figure 9 shows the schematic of this prototype. Operation of the system consists of commands sent to the controller of the Raspberry card from a web interface and/or through signals sent from the autopilot, either manually or automatically. We programmed the Raspberry card to control the camera functions (initialization, configuration, and capture), the storage of the images, and their publication on a local web server, in order to facilitate their visualization and management from external systems. Figure 10 shows the basic scheme of the software architecture.
The web interface allows initialization of the capture process, visualization of the image files, and their management. It also enables image packaging in a compressed file and remote system testing from a web browser (see Figure 11). The Raspberry automatically connects to a wireless access point compatible with the IEEE 802.11b/g/n set of standards, so the user should enter the IP address of the Raspberry to gain access to the remote system-testing interface.

From this web interface, the user can activate two operation modes for the capture system. The first is the test or manual capture mode, implemented to capture frames in order to check the system state and to make physical adjustments such as lens focusing or camera alignment; in this mode, images are visualized in the web interface almost immediately after the shot. The second mode is capture through an external command, used in conjunction with the Ardupilot Mega autopilot of the UAV: the Raspberry receives a pulse generated either by manual operation (by radio control) or by waypoints in an autonomous flight plan.
To gain access to the image files from another device connected to the same wireless network as the Raspberry, we implemented a web server (lighttpd) capable of executing PHP code, and we modified a non-commercial gallery called Single File PHP Gallery 4.5.6, written by Kenny Svalgaard. This ".php" file is hosted on the Raspberry web server as the main page; it allows the user to visualize images in specific folders, erase files, execute commands through permission assignments (to test, initialize, and stop program execution), and create compressed files.
All of this is accessible directly from web browsers and is suitable for mobile devices (smartphones, tablets, etc.).
We recommend, before using the system in the field, setting up the wireless network to be used; typically, this network is a mobile hotspot configured on a mobile phone, a wireless router, or a laptop. Once the Raspberry is plugged into a power source, the operating system boots, the web server starts, and the board automatically connects to the wireless network. After a few seconds, the system is ready to operate. For this prototype, we used the 3D-printed structure and coupled it to a shell containing the control card and its transmission peripherals, along with the cameras, always keeping the total weight of the system as low as possible for adequate incorporation into the aerial platform (see Figure 12).

D. Remote image capture
In order to issue the image capture command remotely, it is necessary to use a wireless communication system. For this purpose, we used a generic aeromodeling radio control system, which consists of a transmitter (Figure 13), whose wireless signal propagates at 2.4 GHz, and a receiver (Figure 14), which receives this signal and delivers a Pulse Width Modulation (PWM) signal for each channel. The PWM signal of one of the channels enters the development card and is decoded, so that the capture process is triggered when one of the remote control sticks is in a specific position.
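The decoding step reduces to mapping a measured pulse width to a capture decision. The following Python sketch illustrates the idea; the threshold and valid range are assumptions (hobby RC receivers typically output pulses of roughly 1000-2000 µs), not values measured on the actual system:

```python
# Illustrative sketch of the PWM decoding step. The threshold and
# valid pulse range below are assumptions, not measured values.

def decode_capture(pulse_width_us, threshold_us=1700,
                   min_us=900, max_us=2100):
    """Returns True to trigger a capture, False otherwise,
    or None when the pulse is out of range (treated as a glitch)."""
    if not (min_us <= pulse_width_us <= max_us):
        return None
    return pulse_width_us >= threshold_us
```

On the development card, the pulse width itself would be measured with a timer capture peripheral; only the classification logic is shown here.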

E. Correction of geometric distortion
The images obtained from the sensors present geometric distortion, mainly caused by the optical features of the lens system; this manifests as the absence of an orthogonal projection in the images. With the aim of developing adequate georeferencing algorithms, it is necessary to apply geometric rectification to the captured images. We checked the distortion type using a template consisting of a squared grid on paper. As Figure 15 shows, the captured scene presents barrel distortion, which renders the grid lines as curves whose curvature increases in proportion to their distance from the image's central axis.
To carry out image orthorectification, we used MATLAB® R2014a software together with the Camera Calibrator toolbox, which provides corrected images by modeling lens distortion. The procedure to calibrate and obtain the radial distortion coefficients consists of capturing sets of images of a chessboard-like pattern from several angles and distances. The image processing then maps the camera's spatial position relative to the pattern, producing sets of intrinsic, extrinsic, and lens distortion parameters.
Equations (1) and (2) give the radially distorted pixel locations for the X and Y coordinates, respectively:

x_distorted = x(1 + k1·r² + k2·r⁴)    (1)
y_distorted = y(1 + k1·r² + k2·r⁴)    (2)

where x and y are the undistorted pixel locations, k1 and k2 are the radial distortion coefficients, and r² = x² + y². The k1 and k2 coefficients obtained for the color camera are -0.4220 and 0.2277, respectively, whilst those of the infrared camera are -0.4413 and 0.3109. The difference between the cameras is due to the optical effect of the previously added infrared blocking filter.
Using these coefficients, we corrected the distorted images by solving (1) and (2) for the undistorted pixel positions, obtaining images closer to an orthogonal projection, as Figure 16 shows. This correction allows adequate registration of the images captured by both cameras, matching them pixel by pixel to compute the vegetation indices.
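Numerically, the correction amounts to inverting the radial model. The sketch below uses the color-camera coefficients reported above; the fixed-point inversion scheme is an illustrative choice (the MATLAB toolbox uses its own internal solver), and the coordinates are assumed to be normalized.

```python
# Sketch of the radial distortion model and its numerical inversion.
# K1, K2 are the color-camera coefficients reported in the text; the
# fixed-point iteration is an illustrative assumption.

K1, K2 = -0.4220, 0.2277  # color camera radial distortion coefficients

def distort(x, y, k1=K1, k2=K2):
    """Applies the radial model to normalized coordinates (x, y)."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1=K1, k2=K2, iterations=25):
    """Recovers (x, y) from a distorted point by fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

A quick round trip (distort a point, then undistort it) recovers the original coordinates, which is the property the pixel-by-pixel registration relies on.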

IV. Results
The information acquisition system consists of a UAV with a GPS position module, autopilot system, controller, and payload (the multispectral camera; see Figure 17).

On the ground, there is a control station composed of a telemetry system, an image acquisition system, and a radio control, as Figure 18 shows. Similarly, Figure 19 shows the system GUI, which permits manipulation of the images obtained by the multispectral system. We called this application PIND (Precision Index Normalized Distribution); it evaluates the NDVI for single plants (African oil palm in this case study). Figure 20 shows the images captured by the designed device, i.e., the visible spectrum image and the near infrared image; these images present the terrestrial coverage behavior and allow the extraction of the blue, green, red, and near infrared spectral bands (Figure 21).
Estimation of the NDVI vegetation index is executed by the PIND application, which takes the image corresponding to the red channel (from the visible spectrum camera) and the grayscale image obtained by the infrared camera; it then matches them pixel by pixel and applies (3), the definition of this vegetation index:

NDVI = (NearIR - Red) / (NearIR + Red)    (3)

where NearIR are the pixels of the image in the near infrared spectrum and Red are the pixels of the red band captured by the visible spectrum camera.
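The per-pixel computation can be sketched as follows. This is a minimal illustration of the index definition, assuming the two images are already registered (pixel-aligned) arrays; the function name and the zero-denominator handling are assumptions, not details of the PIND implementation.

```python
import numpy as np

# Minimal sketch of the per-pixel NDVI computation, assuming two
# registered images as equally sized arrays of intensity values.

def ndvi(near_ir, red):
    near_ir = np.asarray(near_ir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    total = near_ir + red
    out = np.zeros_like(total)
    mask = total != 0              # avoid division by zero
    out[mask] = (near_ir[mask] - red[mask]) / total[mask]
    return out                     # in [-1, 1]; healthy vegetation near 1
```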

V. Conclusions
The system developed on the Tiva® C Series LaunchPad card worked with time and processing limitations (it took up to 90 seconds to obtain one image). We observed a considerable improvement using the Raspberry Pi card (only 5 seconds to capture and store an image). The low weight of the Raspberry Pi card also favorably increased the UAV flight time by a few minutes, allowing a greater number of pictures to be taken in less time and increasing processing efficiency.
The response of the developed multispectral system allowed us to determine the Normalized Difference Vegetation Index (NDVI) and to observe changes in plants according to vegetation type, showing that superficial leaves in the studied plants present higher NDVI values. Moreover, this index is suitable to establish vegetal health conditions, since healthy plants have elevated infrared levels compared to red levels (i.e., high NDVI) and sick plants have low infrared levels relative to red ones. This is a good indication of the correct functioning of the designed multispectral system. Advances in research projects carried out by the Universidad de los Llanos, especially in digital signal processing, precision agriculture, and unmanned aerial vehicle operation, have permitted the establishment of spatial behavioral models for phenological variables in the Colombian Orinoquía, and contribute to the development of the region by creating tools adequate for its needs.