FERNANDA MENDONÇA SILVEIRA
A LABVIEW BASED SYNCHRONIZED DATA
ACQUISITION SYSTEM WITH INTEGRATED
WEBCAM FOR WELDING PROCESSES
UNIVERSIDADE FEDERAL DE UBERLÂNDIA
FACULDADE DE ENGENHARIA MECÂNICA
2006
FERNANDA MENDONÇA SILVEIRA
A LABVIEW BASED SYNCHRONIZED DATA ACQUISITION SYSTEM
WITH INTEGRATED WEBCAM FOR WELDING PROCESSES
Dissertation presented to the Post-graduation
Programme in Mechanical Engineering of Federal
University of Uberlândia, as part of the
requirements to obtain the title of MASTER IN
MECHANICAL ENGINEERING.
Concentration Area: Manufacturing Processes.
Supervisor: Prof. PhD. Américo Scotti
UBERLÂNDIA-MG, BRAZIL
2006
CATALOGUING RECORD
Prepared by the UFU Library System / Cataloguing and Classification Sector
S587l
Silveira, Fernanda Mendonça, 1981-
A LabVIEW based synchronized data acquisition system with integrated webcam for welding processes / Fernanda Mendonça Silveira. - Uberlândia, 2006.
156 leaves: ill.
Supervisor: Américo Scotti.
Dissertation (Master's) - Universidade Federal de Uberlândia, Post-graduation Programme in Mechanical Engineering.
Includes bibliography.
1. Welding - Theses. 2. Manufacturing processes - Theses. I. Scotti, Américo. II. Universidade Federal de Uberlândia. Post-graduation Programme in Mechanical Engineering. III. Title.
621.791
FERNANDA MENDONÇA SILVEIRA
A LABVIEW BASED SYNCHRONIZED DATA ACQUISITION SYSTEM
WITH INTEGRATED WEBCAM FOR WELDING PROCESSES
Dissertation APPROVED by the Post-graduation
Programme in Mechanical Engineering of Federal
University of Uberlândia.
Concentration Area: Manufacturing Processes.
Evaluation Committee:
______________________________________
Prof. PhD. Américo Scotti (UFU) – Supervisor
______________________________________
Prof. Dr. Jair Carlos Dutra (UFSC-SC)
______________________________________
Profa. Dra. Ing. Vera Lúcia D. S. Franco (UFU)
UBERLÂNDIA-MG, BRAZIL
2006
I dedicate this work to my family, especially to my mother Edna and my grandfather
Agenor, who helped me so much to reach this point in my life. Thank you!
ACKNOWLEDGEMENT
I would like to thank the Federal University of Uberlândia and the Faculty of
Mechanical Engineering for the opportunity of doing this course.
Cranfield University for the opportunity of using its facilities and the funds provided.
My supervisor in Brazil, Américo Scotti, for devoting his time to giving me the
guidance I needed to carry out this work.
My supervisor in England, David Yapp, for allowing me to develop this work at
Cranfield University.
My colleagues in Brazil for the moments we spent together.
My colleagues in England, especially Gil and Harry, for the patience and help I
received.
Thanks to CNPq and the Programme AlBan for funding this work.
“This work was supported by the Programme AlBan, the European Union Programme
of High Level Scholarships for Latin America, scholarship no. E04M03892BR”
“This work was also supported by CNPq, from Brazil, scholarship no. 132497/2004-2”
SILVEIRA, F. M. A LabVIEW Based Synchronized Data Acquisition System with
Integrated Webcam for Welding Processes. 2006. 156 p. Master Dissertation, Federal
University of Uberlândia, Uberlândia.
Abstract
Data acquisition techniques able to monitor the performance of an automated welding
system could be applied, for example, to a system designed to repair pipes still full of
operating fluid. In this case, the system must precisely control and monitor the heat input
during the weld, and for this purpose it is necessary to develop data acquisition techniques
able to monitor the performance of such a system. Digital oscilloscopes, which are the
equipment usually used for this purpose, are expensive, as are the other solutions offered on
the market. Moreover, those systems do not offer a solution specific to the welding area.
Thus, the aim of this work was to develop a synchronized data acquisition system with an
integrated webcam, suitable for the study of several types of welding processes. It could be
used for the automation of pipeline repair systems, as well as for other types of welding
operations, including experimental evaluations in laboratories. A software package was
developed using LabVIEW as the programming language. It performs synchronized
acquisition of signals and images and post-analysis of the acquired data. The use of low-cost
cameras such as webcams was introduced with the aim of determining arc characteristics,
such as arc length. The acquisition of the sound emitted by the arc during welding was also
considered, which could improve the awareness of a professional working remotely and
could be used in studies related to welding quality. Using the resources available in the
laboratory, a hardware prototype flexible enough for several types of welding processes was
designed. Afterwards, a lower-cost system with a physical configuration more appropriate to
the work environment was built. This system was assessed with two different welding
operations, Gas Metal Arc Welding and Resistance Spot Welding, through which its
limitations and advantages were explored and identified.
Keywords: Data Acquisition. Data Analysis. Welding Processes. Image. Webcam.
SILVEIRA, F. M. Sistema de Aquisição de Dados Sincronizado Integrado com Webcam
para Processos de Soldagem Baseado em LabView. 2006. 156 p. Master Dissertation,
Federal University of Uberlândia, Uberlândia.
Resumo
The development of data acquisition techniques capable of monitoring the performance of
an automated welding system can be applied, for example, to a system capable of repairing
pipelines still full of operating fluid. In that case, the system must precisely control and
monitor the heat input while the weld is made. Digital oscilloscopes, which are generally the
devices used for this purpose, are quite expensive, as are other solutions available on the
market. Moreover, these systems do not offer a solution specific to the welding area. Thus,
the objective of this work was to develop a synchronized data acquisition system integrated
with a webcam for use in the study of the various types of welding processes. This system
could be used both for the automation of pipeline repair systems and for other types of
welding operations, including experimental evaluations in laboratories. A LabVIEW program
was developed that acquires signals and images in a synchronized way and also offers
resources for later analysis of the acquired data. The use of low-cost cameras, such as
webcams, was introduced with the aim of determining characteristics of the voltaic arc, such
as its length. The acquisition of the sound emitted by the arc during welding was also
considered, which could help a professional working remotely and could also be used in
studies related to weld quality. From the resources available in the laboratory, an equipment
prototype flexible enough to serve several types of welding processes was built. Next, a
system with a physical configuration more appropriate to the work environment was built.
This system was evaluated in two different welding operations, MIG welding and resistance
spot welding, through which its limitations and advantages were explored and identified.
Keywords: Data Acquisition. Data Analysis. Welding Processes. Image. Webcam.
LIST OF SYMBOLS AND ABBREVIATIONS
PIG Pipeline Inspection Gauge
WERC Welding Engineering Research Centre
LAPROSOLDA Laboratory for the Development of Welding Processes
AWS American Welding Society
AC Alternating Current
DC Direct Current
GMAW Gas Metal Arc Welding
RSW Resistance Spot Welding
CAPS Cranfield Automated Pipewelding System
CCD Charge Coupled Device
DAQ Data Acquisition
DMA Direct Memory Access
PCI Peripheral Component Interconnect
USB Universal Serial Bus
HS High-Speed
NI National Instruments
AIGND Analog Input Ground
CMRR Common-Mode Rejection Ratio
NRSE Non-Referenced Single-Ended
AISENSE Single-Node Analog Input Sense
ADC Analog-to-Digital Converter
LSB Least Significant Bit
DNL Differential Nonlinearity
DAC Digital-to-Analog Converter
I/O Input/Output
RF Radio Frequency
LAN Local Area Network
PCMCIA Personal Computer Memory Card International Association
GPS Global Positioning System
BNC Bayonet Neill-Concelman (connector)
INDEX
CHAPTER 1 - INTRODUCTION
CHAPTER 2 - LITERATURE REVIEW
2.1 Welding Processes
2.2 Arc Welding Monitoring and Analysis
2.3 Data Acquisition
2.3.1 Concepts
2.3.2 Systems
2.4 LabVIEW
CHAPTER 3 - METHODOLOGY AND DEVELOPMENT
3.1 Hardware Project
3.2 Software Project
3.3 Calibration
3.4 System Configuration
CHAPTER 4 - EVALUATION OF THE SYSTEM "SMART"
4.1 First Set of Experimental Trials - Single Wire GMAW
4.2 Second Set of Experimental Trials - Double Wire GMAW
4.3 Third Set of Experimental Trials - RSW
CHAPTER 5 - DISCUSSION
CHAPTER 6 - FUTURE DEVELOPMENTS
CHAPTER 7 - CONCLUSIONS
CHAPTER 8 - REFERENCES
CHAPTER 1
INTRODUCTION
Underground oil and gas pipelines are susceptible to corrosion and other damage on
the outside surface of the pipe. Many pipelines have now been in service for periods of up to
50 years, and these lines will continue in use for the foreseeable future. It is common for
pipelines to be inspected by an internal Pipeline Inspection Gauge (PIG). This device travels
along the inside of the pipe, propelled by the media flow, and uses ultrasonic or other
sensing methods to determine areas where pipe wall thinning may have occurred, as shown
in Figure 1.1 and Figure 1.2. These areas are then excavated and the pipe is repaired while
still carrying its operating fluid (commonly oil or natural gas). Methods have been developed
to restrict the possibility of burn-through while making a weld repair on a live pipeline, but
these methods depend on accurate control of the heat input. This can be most easily
achieved by applying an automated welding system, where the heat input can be precisely
controlled and monitored. In addition, the use of an automated system offers the possibility of
remote operation of the equipment, so that personnel are not required to be in the
immediate vicinity of the pipeline while the repair takes place.
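The heat input mentioned above is commonly estimated from the arc voltage, welding current and travel speed. The sketch below illustrates that standard first-order estimate; the function name and the 0.8 arc efficiency are illustrative assumptions, not values taken from this work.

```python
def heat_input_kj_per_mm(voltage_v, current_a, travel_speed_mm_s, efficiency=0.8):
    """First-order welding heat input estimate: H = (eta * V * I) / v.

    Returns kJ/mm; eta (arc efficiency) is an assumed typical GMAW
    value, not a figure from this dissertation."""
    return (efficiency * voltage_v * current_a) / (travel_speed_mm_s * 1000.0)

# e.g. 28 V and 250 A at 10 mm/s give 0.56 kJ/mm
h = heat_input_kj_per_mm(28.0, 250.0, 10.0)
```

Monitoring voltage, current and travel speed in real time is therefore enough to track this estimate during a live repair.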
Figure 1.1 - UltraScan Smart PIG (Pipetronix Inc.)
Figure 1.2 – CalScan Smart Pig (Pipetronix Inc.)
Cranfield University - UK, through its Welding Engineering Research Centre (WERC),
has already examined the concept of automating pipeline repair by using a repair welding
head mounted on two bands encircling the pipe, of the type often used for pipeline
construction. The welding head can then be driven round the pipe, and also along the
length of the pipe, with accurate control of the welding torch position at any point round the
pipeline. This system can then be used for making either of the two principal types of welding
repair employed in the field: building up layers of weld metal to replace the steel lost in the
thinned area, or welding half-pipes round the pipe in a clamshell arrangement to cover the
area where thinning has occurred.
In order to develop such a project, a microprocessor-based control system must also
be developed to operate the mechanical hardware. For this purpose, one of the required
tasks is the development of rules from which welding procedures and parameters can be
adjusted to compensate for variations in welding conditions. This task requires analysis of
welding experiments and the development of control algorithms to adjust welding
parameters. Hence, it is also necessary to develop and integrate sensors with the welding
system, and to develop data acquisition techniques to monitor the performance of the
welding.
Data acquisition systems have been improved over the years. Most of them acquire
signals while an on-line analysis of the weld performance is carried out, making it possible to
act on the welding parameters to reach a better working setting. The data can also be
recorded for post-analysis. Since acquisition is usually done in different types of work
environments, portability has also become an important characteristic of these systems.
However, applications of low-cost cameras integrated into data acquisition systems are still
little explored. Since such cameras are much cheaper than high-speed cameras, this line of
study is very worthwhile.
A partnership was therefore established between the Laboratory for the Development
of Welding Processes (LAPROSOLDA, Brazil) of the Federal University of Uberlândia (UFU)
and the Welding Engineering Research Centre (WERC, UK) of Cranfield University. This
work is focused on the development of a portable data acquisition system that could be used
for the automation of pipeline repair systems. Besides electrical signals, the synchronous
acquisition of camera frames (from both low-cost and high-speed cameras) and audio also
had to be offered, providing a more comprehensive analysis of the phenomena. This system
should also be suitable for other types of welding operations, including experimental
evaluations in laboratories.
CHAPTER 2
LITERATURE REVIEW
2.1 Welding Processes
According to ARCON Welding Inc. (2005), the American Welding Society (AWS)
definition of a welding process is "a materials joining process which produces coalescence
of materials by heating them to suitable temperatures with or without the application of
pressure or by the application of pressure alone and with or without the use of filler material".
Following this definition, there are two classes of welding processes: fusion welding (in
which coalescence happens during the fusion of the joint) and pressure welding (in which
coalescence happens through heating below the fusion temperature and applying pressure
to the parts).
According to The Lincoln Electric Inc. (1994), welding used to be done by heating
metals and pounding or ramming them together (pressure welding) until coalescence
between them was obtained. In the early 1800s, it was discovered that a voltaic arc could be
created with a high-voltage electric circuit by bringing two terminals near each other (fusion
welding). The heat produced by the arc could be used for joining the desired parts. Figure
2.1 illustrates a basic arc welding circuit. Since then, welding has developed considerably,
but only in the beginning of the 20th century did the process become commercially available.
Fusion welding is still an important way of joining materials, with the voltaic arc as the main
heat source.
Still according to The Lincoln Electric Inc. (1994), an AC or DC power source, fitted
with whatever controls may be needed, is connected by a ground cable to the workpiece and
by a “hot” cable to an electrode holder of some type, which makes electrical contact with the
welding electrode. When the circuit is energized and the electrode tip is touched to the
grounded workpiece, then withdrawn and held close to the spot of contact, an arc is
created across the gap.
most metals. The heat produced melts the base metal in the vicinity of the arc and any filler
metal supplied by the electrode or by a separately introduced rod or wire. A common pool of
molten metal is produced and solidifies behind the electrode as it moves along the joint
being welded. The result is a fusion bond and the metallurgical unification of the workpieces.
Figure 2.1 - The basic arc-welding circuit (The Lincoln Electric Inc., 1994)
Many studies have been done in order to improve heat efficiency, welding quality
and productivity, as well as adaptability to the work environment and to the many types of
materials. That is why several types of welding processes have emerged. AWS has grouped
them together according to the "mode of energy transfer" as the primary consideration. A
secondary factor is the "influence of capillary attraction in effecting distribution of filler metal"
in the joint. Capillary attraction distinguishes the welding processes grouped under "Brazing"
and "Soldering" from "Arc Welding", "Gas Welding", "Resistance Welding", "Solid State
Welding", and "Other Processes." The welding processes, in their official groupings, are
shown by Table 2.1. This table also shows the American letter designation for each process.
GMAW, illustrated in Figure 2.2, is currently considered the most suitable
welding process for large-diameter transmission pipelines. The AWS defines GMAW as "an
arc welding process which produces coalescence of metals by heating them with an arc
between a continuous filler metal (consumable) electrode and the work piece. Shielding is
obtained entirely from an externally supplied gas or gas mixture." The electrode wire for
GMAW is continuously fed into the arc and deposited as weld metal. This process has many
variations depending on the type of shielding gas, the type of metal transfer, and the type of
metal welded. A number of recent welding procedure developments have improved
productivity using GMAW.
Table 2.1 - Welding and allied processes with the corresponding letter designations by AWS
(according to ARCON Welding Inc., 2005)
Group Welding Process Letter Designation
Arc welding Carbon Arc CAW
Flux Cored Arc FCAW
Gas Metal Arc GMAW
Gas Tungsten Arc GTAW
Plasma Arc PAW
Shielded Metal Arc SMAW
Stud Arc SW
Submerged Arc SAW
Brazing Diffusion Brazing DFB
Dip Brazing DB
Furnace Brazing FB
Induction Brazing IB
Infrared Brazing IRB
Resistance Brazing RB
Torch Brazing TB
Oxyfuel Gas Welding Oxyacetylene Welding OAW
Oxyhydrogen Welding OHW
Pressure Gas Welding PGW
Resistance Welding Flash Welding FW
High Frequency Resistance HFRW
Percussion Welding PEW
Projection Welding RPW
Resistance-Seam Welding RSEW
Resistance-Spot Welding RSW
Upset Welding UW
Solid State Welding Cold Welding CW
Diffusion Welding DFW
Explosion Welding EXW
Forge Welding FOW
Friction Welding FRW
Hot Pressure Welding HPW
Roll Welding ROW
Ultrasonic Welding USW
Soldering Dip Soldering DS
Furnace Soldering FS
Induction Soldering IS
Infrared Soldering IRS
Iron Soldering INS
Resistance Soldering RS
Torch Soldering TS
Wave Soldering WS
Other Welding Processes Electron Beam EBW
Electroslag ESW
Induction IW
Laser Beam LBW
Thermit TW
Figure 2.2 - GMAW – Diagram process (CARY, 1995)
This welding process is the most popular method for automated systems because the
electrode wire is continuous (CARY, 1995). As a continuous-wire process, it has a high
operator factor and, due to its high current-density capability (short electrode extension), it
provides a high deposition rate. Furthermore, it is an all-position welding process, which
allows orbital welding. These are the main reasons why this welding process is used across
industrial manufacturing operations. It is also used for field construction, including pipelines,
and for maintenance and repair work.
GMAW can be classified into Single and Double Wire. Single-wire GMAW uses only
one wire through one torch. The Double Wire class, in turn, can be divided into two other
classes: Single Potential (also, imprecisely, called twin arc), in which two wires are fed
through one contact tip, and Isolated Potential (so-called Tandem), in which there are two
contact tips in the same torch, as shown by Figure 2.3. There is also a combination called
Dual Tandem GMAW, which uses two torches with two wires each, as Figure 2.4 shows.
This is the most productive GMAW method and many studies about it have been done. That
is why the data acquisition system developed in this work was based on the requirements of
Dual Tandem GMAW.
According to Widgery and Blackman (2001), Cranfield University developed the
concept of Dual Tandem GMAW for pipewelding and received funding from BP Exploration
Operating Company and TransCanada Pipelines to develop the Cranfield Automated
Pipewelding System (CAPS). CAPS involves the use of two tandem welding torches fitted on
one pipe welding bug so that four arcs operate simultaneously. The dual tandem head was
fitted with a sensor based control system, removing the need for a skilled operator to
continuously monitor the weld. Eventually, this will be used for adaptive control of the welding
process.
Figure 2.3 - Cranfield Tandem GMAW (WIDGERY; BLACKMAN, 2001)
Figure 2.4 - CAPS Dual Tandem welding torches (WIDGERY; BLACKMAN, 2001)
2.2 Arc Welding Monitoring and Analysis
A low-cost camera, such as a webcam, could be used for capturing the arc welding
image while the weld is done. A recent work (GILSINN et al., 1999) used this type of camera,
which has an embedded web server and can accept aiming and zooming commands, to
export images at about 1Hz. A web-based interface is used for communicating the server
with the camera. Anyone on the internet, running a general purpose browser, can view
images and control the camera. Figure 2.5 shows a top view of a welding rig and a robot arm
with torch. Since discrete part manufacturers using robotic arc welding cells often have
several more geographically distributed plants than welding experts, this kind of control
through the internet allows a remote expert to view the cell in operation and inspect parts
after a weld.
Figure 2.5 - Live Video Feed from Weld Camera (GILSINN et al., 1999)
A more recent work (MASON et al., 2005) presents another application for low-cost
cameras. A great deal of past and present research overlooks one of the most important
components of the resistance spot welding process, the electrode, whose tip profile has a
direct effect on the quality of the weld. As the number of welds increases, the electrode tip
wears down, and the applied current is therefore increased in order to maintain the same
current density. Mason et al. (2005) introduced a way to monitor the electrode profile by
using a compact low-cost camera integrated into a PC.
Another application of a camera monitoring system is arc length measurement. Arc
voltage can give an idea of the arc behaviour, that is, whether the arc is getting longer or
shorter, but not its length as a measurement. Through arc welding images, the arc length
can be calculated. Using a compact arc light sensor, Li and Zhang (2001) managed to
measure the arc length with adequate accuracy. According to them, the arc length
determines the distribution of the arc energy and, thus, the heat input and the width of the
weld. In their work, they aimed at improving the measurement accuracy of arc length by
using the spectrum of arc light at a particular wavelength during gas tungsten arc welding
(GTAW) with argon shielding.
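As a rough illustration of how an arc length could be extracted from a calibrated camera frame, one can threshold the image and measure the vertical extent of the bright region. This is a simplified sketch under assumed calibration, not the spectral method of Li and Zhang (2001):

```python
def arc_length_mm(binary_image, mm_per_pixel):
    """Estimate arc length as the vertical extent of the bright region.

    binary_image: rows of 0/1 flags produced by thresholding a frame;
    mm_per_pixel: spatial calibration of the camera (assumed known)."""
    bright_rows = [i for i, row in enumerate(binary_image) if any(row)]
    if not bright_rows:
        return 0.0  # no arc visible in this frame
    return (max(bright_rows) - min(bright_rows) + 1) * mm_per_pixel

# a bright blob spanning rows 2..5 at 0.5 mm/pixel measures 2.0 mm
frame = [[0, 0], [0, 0], [0, 1], [1, 1], [0, 1], [1, 0], [0, 0]]
length = arc_length_mm(frame, 0.5)
```

In practice the threshold level and the camera calibration dominate the accuracy of such an estimate, which is why spectral methods were pursued.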
Maeda and Ichiyama (1999) developed adaptive control of arc welding by image
processing. A CCD camera with an electronic trigger shutter was used for acquiring images
of the molten weld pool. For this purpose, an image acquisition algorithm was also developed
to detect the pool edge and the centre of the electrode wire in the image. They managed to
develop a system less susceptible to external influences such as spatter during welding.
Research related to welding analysis uses high-speed cameras able to show much
more detail of the welding arc. This type of camera is mainly used by researchers for
metal transfer mode analysis, as shown in Figure 2.6. Bálsamo et al. (2000) show that the
use of high-speed cameras allows a more precise assessment of both the metal transfer
mode and the droplet size. Thus, in their work, they correlate droplet dimensions with current
and voltage signals, on a common time base, through the development of a data acquisition
technique suitable for GMAW metal transfer analysis. Alfaro et al. (2005) also developed
such a system, but one suitable for RSW parameter setup.
Figure 2.6 - Photo sequence of transfer in the globular/short circuit mode (FERRARESI;
FIGUEIREDO; ONG, 2003)
Another research trend is the analysis of the sound emitted by the welding arc, which
has been used more and more in studies related to welding quality. A high-speed data
acquisition system was developed by Mansoor and Huissoon (1999) to record and analyse
the arc sound produced during GMAW. The recorded data were processed to obtain time-
domain, frequency-domain and time-frequency-domain descriptors. Relationships between
these descriptors and the originating weld parameter levels and metal transfer mode were
investigated, since relationships exist between the electrical power supplied to the weld
and the arc sound. Results indicate that the arc sound exhibits distinct characteristics for
each metal transfer mode. The occurrence of spatter and short circuits was also found to be
clearly detectable in the arc sound record.
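Two of the simplest time-domain descriptors of such an arc-sound record can be sketched as follows. This illustrates the idea only; it is not the descriptor set actually used by Mansoor and Huissoon (1999):

```python
import math

def sound_descriptors(samples):
    """Compute RMS amplitude and zero-crossing rate of a sound record.

    RMS tracks acoustic power; the zero-crossing rate (fraction of
    sign changes between consecutive samples) is a crude indicator
    of dominant frequency content."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return rms, crossings / (len(samples) - 1)

# a pure alternating-sign record crosses zero at every step
rms, zcr = sound_descriptors([1.0, -1.0, 1.0, -1.0])
```

Sudden jumps in such descriptors are one way short circuits and spatter events become visible in the sound record.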
Drouet and Nadeau (1982) proposed another application for acoustic analysis. Based
on the measurement of the sound wave produced by the welding arc, their technique
determines the time evolution of the voltage in the column of an electric arc, making it
possible to control the arc length. The technique is based on the property of a current-
modulated arc of generating a sound wave whose amplitude is proportional to the arc
column voltage. The measuring device is not connected to the arc power supply, is well
shielded from electromagnetic fields, and the detected signal is related only to the voltage
drop across the arc itself.
2.3 Data Acquisition
2.3.1 Concepts
Nowadays, data acquisition involves much more than a few sensors for electrical
signals. There are ever-increasing demands on systems to record other types of data as
well. Acquiring more data at the same time requires more powerful computers and
acquisition boards, which means that the capabilities of the computer can significantly affect
the performance of the Data AcQuisition (DAQ) system.
Twenty years ago, PCs were capable of transferring data at rates around 5 MHz,
although today's computers can transfer significantly faster (National Instruments Inc. -
Application Note 007). They are capable of DMA (Direct Memory Access) and interrupt-
based data transfers. DMA transfers increase system throughput by using dedicated
hardware to transfer data directly into system memory. Using this method, the processor is
not burdened with moving data and is therefore free to engage in more complex processing
tasks. To reap the benefits of DMA or interrupt transfers, the DAQ device must be capable of
these transfer types. While PCI and FireWire devices offer both DMA and interrupt-based
transfers, PCMCIA and USB devices only use interrupt-based transfers. Depending on how
much processing is needed during the transfer, the rate at which data is moved from the
DAQ device to PC memory may be affected by the transfer mechanism.
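The load placed on the transfer mechanism can be put into numbers with a simple sustained-throughput estimate. The 2-byte sample size below is an assumption (a typical 12- or 16-bit ADC word), not a figure from the cited application note:

```python
def required_throughput_mb_s(channels, sample_rate_hz, bytes_per_sample=2):
    """Sustained data rate (MB/s) a DAQ device must move to PC memory:
    channels * samples per second * bytes per sample."""
    return channels * sample_rate_hz * bytes_per_sample / 1e6

# four channels sampled at 100 kS/s need 0.8 MB/s of sustained bandwidth
rate = required_throughput_mb_s(4, 100_000)
```

Comparing this figure with the bus bandwidth available (and with the CPU time left over under interrupt-based transfers) is what motivates choosing DMA-capable hardware for high channel counts.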
Besides these capability concerns, the choice of sensors is also very important in the
development of a DAQ system. Also called transducers, sensors are devices that convert
one type of physical phenomenon, such as temperature, strain, pressure or light, into
another (National Instruments Inc. - Application Note 048). The most common transducers
convert physical quantities into electrical quantities, such as voltage or resistance.
Transducer characteristics define many of the signal conditioning requirements of the
measurement system. Table 2.2 summarizes the basic characteristics and conditioning
requirements of some common transducers.
Data acquisition devices can have analog input/output, digital input/output or
counters/timers; a device that has all of them is called a multifunction device. The number of
channels, the sampling rate, the resolution and the input range are the parameters that
specify the analog inputs.
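Two of those parameters, resolution and input range, together fix the smallest voltage step an ideal ADC can resolve (one LSB). A minimal, device-independent sketch, assuming an ideal converter:

```python
def code_width_volts(input_range_v, bits):
    """Width of one LSB of an ideal ADC: the full-scale input range
    divided by the number of digital codes, 2**bits."""
    return input_range_v / (2 ** bits)

# a 12-bit converter spanning -10 V to +10 V resolves about 4.88 mV
lsb = code_width_volts(20.0, 12)
```

Narrowing the input range (or choosing a higher gain) therefore improves the voltage resolution for small signals without changing the converter itself.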
The number of analog channel inputs is specified for both single-ended and
differential inputs on devices that have both input types. Single-ended inputs are all
referenced to a common ground. These inputs are typically used when the input signals are
high level (greater than 1 V), the leads from the signal source to the analog input hardware
are short (less than 15 ft), and all input signals share a common ground reference. If the
signals do not meet these criteria, it is necessary to use differential inputs. With differential
inputs, each input has its own ground reference, and noise errors are reduced because the
common-mode noise picked up by the leads is cancelled out.
Table 2.2 - Transducers and signal conditioning requirements (National Instruments Inc.)

Thermocouple
  Electrical characteristics: low-voltage output; low sensitivity; nonlinear output.
  Signal conditioning requirements: reference temperature sensor (for cold-junction compensation); high amplification; linearization.

RTD
  Electrical characteristics: low resistance (100 ohms typical); low sensitivity; nonlinear output.
  Signal conditioning requirements: current excitation; four-wire/three-wire configuration; linearization.

Strain gauge
  Electrical characteristics: low-resistance device; low sensitivity; nonlinear output.
  Signal conditioning requirements: voltage or current excitation; high amplification; bridge completion; linearization; shunt calibration.

Current output device
  Electrical characteristics: current loop output (4-20 mA typical).
  Signal conditioning requirements: precision resistor.

Thermistor
  Electrical characteristics: resistive device; high resistance and sensitivity; very nonlinear output.
  Signal conditioning requirements: current excitation or voltage excitation with reference resistor; linearization.

Active accelerometer
  Electrical characteristics: high-level voltage or current output; linear output.
  Signal conditioning requirements: power source; moderate amplification.

AC Linear Variable Differential Transformer (LVDT)
  Electrical characteristics: AC voltage output.
  Signal conditioning requirements: AC excitation; demodulation; linearization.
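The linearization step that Table 2.2 lists for thermocouples, RTDs and thermistors is typically a calibration polynomial evaluated on the conditioned reading. A generic sketch, with placeholder coefficients rather than real sensor constants:

```python
def linearize(reading, coefficients):
    """Evaluate c0 + c1*x + c2*x**2 + ... on a conditioned sensor
    reading, mapping it to the physical quantity (e.g. temperature).

    The coefficients come from the sensor's calibration data; the
    values used below are purely illustrative."""
    return sum(c * reading ** i for i, c in enumerate(coefficients))

# with placeholder coefficients [1.0, 3.0, 0.5], a reading of 2.0 maps to 9.0
value = linearize(2.0, [1.0, 3.0, 0.5])
```

Real thermocouple tables, for example, publish such polynomial coefficients per wire type and temperature range; the conditioning hardware or software simply applies them.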
Differential measurement systems are similar to floating signal sources in that the
measurement is made with respect to a floating ground that is different from the
measurement system ground. Neither of the inputs of a differential measurement system is
tied to a fixed reference, such as the earth or a building ground. Analog multiplexers in the
signal path are used in order to increase the number of measurement channels when only
one instrumentation amplifier exists. In Figure 2.7, the AIGND (analog input ground) pin is
the measurement system ground.
Figure 2.7 - Differential measurement system (National Instruments Inc.)
An ideal differential measurement system responds only to the potential difference
between its two terminals - the positive (+) and negative (-) inputs. A common-mode voltage
is any voltage measured with respect to the instrumentation amplifier ground present at both
amplifier inputs. An ideal differential measurement system completely rejects, or does not
measure, common-mode voltage. Rejecting common-mode voltage is useful because
unwanted noise often is introduced as common-mode voltage in the circuit that makes up the
cabling system of a measurement system. However, several factors, such as the common-
mode voltage range and the common-mode rejection ratio (CMRR) parameters, limit the
ability of practical, real-world differential measurement systems to reject the common-mode
voltage.
Referenced and non-referenced single-ended measurement systems are similar to
grounded sources in that the measurement is made with respect to a ground. A referenced
single-ended measurement system measures voltage with respect to the ground, AIGND,
which is directly connected to the measurement system ground. Figure 2.8 shows a 16-
channel referenced single-ended measurement system.
Figure 2.8 - Referenced single-ended measurement system (National Instruments Inc.)
DAQ devices often use a non-referenced single-ended (NRSE) measurement technique, or pseudo-differential measurement, which is a variant of the referenced single-ended technique. Figure 2.9 shows an NRSE system.
Figure 2.9 - NRSE measurement system (National Instruments Inc.)
In a NRSE measurement system, all measurements are still made with respect to a
single-node analog input sense (AISENSE), but the potential at this node can vary with
respect to the measurement system ground (AIGND). A single-channel NRSE measurement
system works as a single-channel differential measurement system. Figure 2.10 summarizes
ways to connect a signal source to a measurement system.
Figure 2.10 - Connecting a Signal Source to a Measurement System (National Instruments
Inc.)
The sampling rate determines how often conversions can take place. A faster
sampling rate acquires more data in a given time and can therefore often form a better
representation of the original signal. Data can be sampled simultaneously, with multiple
converters, or it can be multiplexed, where the analog-to-digital converter (ADC) samples
one channel, switches to the next channel, samples it, switches to the next channel, and so
on. Multiplexing is a common technique for measuring several signals with a single ADC.
The range, resolution, and gain available on a DAQ device determine the smallest detectable change in voltage. This change in voltage represents 1 least significant bit (LSB) of the digital value and is often called the code width. The ideal code width is found by dividing the voltage range by the product of the gain and two raised to the number of bits of resolution.
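As a quick numeric sketch of this code-width calculation (Python is used here only for illustration):

```python
def code_width(voltage_range, gain, bits):
    # Ideal code width (1 LSB): range / (gain * 2**bits)
    return voltage_range / (gain * 2 ** bits)

# A 12-bit board on a 0-10 V range with gain 1:
lsb_v = code_width(10.0, 1.0, 12)   # ~2.44 mV
```

Increasing the gain or the resolution shrinks the code width, i.e., makes smaller voltage changes detectable.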
It is also important to consider the differential nonlinearity (DNL), relative accuracy, settling
time of the instrumentation amplifier and noise. As the level of voltage applied to a DAQ
device is increased, the digital codes from the ADC should also increase linearly (Application
Note 007 - National Instruments Inc.). If voltage versus the output code from an ideal ADC is
plotted, the result would be a straight line. Deviations from this ideal straight line are
specified as nonlinearity. DNL is a measure in LSB (Least Significant Bit) of the worst-case
deviation of code widths from their ideal value of 1 LSB. An ideal DAQ device has a DNL of 0
LSB. In practical terms, a good DAQ device will have a DNL of ±0.5 LSB. Poor DNL reduces
the resolution of the device.
Relative accuracy is a measure in LSBs of the worst-case deviation from the ideal DAQ device transfer function, a straight line. Relative accuracy is determined on a DAQ device by connecting a voltage at negative full scale, digitizing the voltage, increasing the voltage, and repeating the steps until the input range of the device has been covered. When the digitized points are plotted, the result should ideally be a straight line. Obtaining good relative accuracy requires that both the Analog-to-Digital Converter (ADC) and the surrounding analog circuitry be properly designed.
Settling time is the time required for an amplifier, relays, or other circuits to reach a stable mode of operation. This parameter has to be as low as possible, even when working with high gains and sampling rates, in order to avoid delays.
Any unwanted signal that appears in the digitized signal of the DAQ device is noise.
Because the PC is a noisy digital environment, acquiring data on a plug-in device takes a
careful layout on multiple-layer DAQ boards by skilled analog designers. Simply placing an
ADC, instrumentation amplifier, and bus interface circuitry on a one or two-layer board will
likely result in a very noisy device. Designers can use metal shielding on a DAQ device to
help reduce noise. Proper shielding not only should be added around sensitive analog
sections on a DAQ device, but also must be built into the layers of the device with ground
planes.
Analog output circuitry is often required to provide stimuli for a DAQ system. The settling time, slew rate and output resolution determine the quality of the output signal produced by the Digital-to-Analog Converter (DAC). The slew rate is the maximum rate of change that the DAC can produce on the output signal. Settling time and slew rate work
together in determining how quickly the DAC changes the output signal level. Therefore, a
DAC with a small settling time and a high slew rate can generate high-frequency signals
because little time is needed to accurately change the output to a new voltage level.
An example of an application that requires high performance in these parameters is
the generation of audio signals. The DAC requires a high slew rate and small settling time to
generate the high frequencies necessary to cover the audio range. In contrast, an example of
an application that does not require fast D/A conversion is a voltage source that controls a
heater. Because the heater cannot respond quickly to a voltage change, fast D/A conversion
is unnecessary.
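The slew-rate limit on output frequency can be illustrated with the standard full-power-bandwidth relation for a sine wave; the numbers below are illustrative examples, not values from this text:

```python
import math

def max_sine_freq(slew_rate_v_per_s, peak_volts):
    # The steepest slope of A*sin(2*pi*f*t) is 2*pi*f*A; keeping that slope
    # below the DAC slew rate gives f_max = SR / (2*pi*A)
    return slew_rate_v_per_s / (2 * math.pi * peak_volts)

# Illustrative DAC: 10 V/us slew rate (1e7 V/s) driving a 10 V peak sine
f_max = max_sine_freq(1e7, 10.0)   # ~159 kHz, comfortably above the audio range
```

A heater-control output, by contrast, changes so slowly that slew rate is irrelevant.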
Digital I/O (DIO) interfaces are often used on PC DAQ systems to control processes,
generate patterns for testing, and communicate with peripheral equipment. In each case, the
important parameters include the number of digital lines available, the rate at which the
system can accept and source digital data on these lines, and the drive capability of the
lines. The number of digital lines, of course, should match the number of processes to be
controlled. The amount of current required to turn the devices on and off must be less than
the available drive current from the device. DIO can also be used in industrial applications, to
verify that a switch is open or closed and to check the voltage levels as high or low. It can
also be used for high-speed handshaking or simple communication methods. In addition,
some devices with digital capabilities will have handshaking circuitry for communication-
synchronization purposes. The number of channels, data rate, and handshaking capabilities
are all important specifications that should be understood and matched to the application
needs.
Counter/timer circuitry is useful for many applications, including counting the
occurrences of a digital event, digital pulse timing, and generating square waves and pulses.
It is possible to implement all these applications using three counter/timer signals – gate,
source, and output. The gate is a digital input used for enabling or disabling the function of
the counter. The source is a digital input that causes the counter to increment each time it
toggles, and therefore provides the time base for the operation of the counter. The output
generates digital square waves and pulses at the output line. The most significant
specifications for operation of a counter/timer are the resolution and clock frequency. The
resolution is the number of bits the counter uses. A higher resolution simply means that the
counter can count higher. The clock frequency determines how fast the digital source input can be toggled. With a higher frequency, the counter increments faster and can therefore detect higher-frequency signals on the input and generate higher-frequency pulses and square waves on the output.
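A small illustration of these two specifications; the 24-bit / 20 MHz figures are typical examples chosen here, not taken from the text:

```python
def max_count(bits):
    # An n-bit counter rolls over after 2**n - 1 counts
    return 2 ** bits - 1

def tick_period_s(clock_hz):
    # The time-base period: the finest timing step the counter can resolve
    return 1.0 / clock_hz

# Illustrative values: a 24-bit counter on a 20 MHz time base
top = max_count(24)           # 16,777,215 counts before rollover
tick = tick_period_s(20e6)    # 50 ns per source edge
```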
2.3.2 Systems
Due to the different types of work environment, data acquisition systems for welding should be portable. According to Yapp (2004), many of them have been developed and applied over the last twenty years, ranging from relatively simple systems which provide a printout of average values to microprocessor-based systems which incorporate sophisticated data analysis. Nowadays, it is very common to use a laptop connected to the data acquisition system for data processing. Since a laptop can easily be replaced if it becomes obsolete (insufficient hard disc capacity, for instance), this configuration makes the system flexible. Moreover, all the acquired data and analysis can be stored in a single device, the laptop, making it easy to change work environments. The data can also be transmitted over a network, making remote control possible. Figure 2.11 shows the “ArcSentry”, a system which provides measurements at multiple cells, transferred to the server computer over either a wired or RF-based Ethernet LAN.
Figure 2.11 - ArcSentry System (N.A. Technologies Inc.)
This is a specific solution for welding. There are many other general-purpose ones, which only acquire, store and make the data available for processing. In this case, software must be developed to analyse the acquired data. Figure 2.12 shows the OMB-DAQBOOK portable data acquisition systems for laptop and desktop PCs, which offer 12 or 16-bit resolution and a 100 kHz acquisition rate. They also provide more than 700 kbyte/s bidirectional data communication to the PC via an enhanced parallel port (EPP) or PCMCIA link interface. A Windows-based data logging application for setting up acquisitions and saving acquired data directly to disk is also provided.
Figure 2.12 - OMB-DAQBOOK series (Omega Inc.)
Portable data acquisition systems can also have a briefcase shape. Figure 2.13 shows the DEWE-5000, an industrial-PC-integrated version, and Figure 2.14 shows the DEWE-BOOK series, a notebook-integrated version, both from DEWETRON. They are very powerful data acquisition systems which record several types of signals, such as video, audio, CAN bus data, and position, speed and distance information from GPS satellites, simultaneously and in synch. Moreover, A/D cards with 24-bit resolution, simultaneous sampling on all channels (up to 25 MS/s sample rate per channel), robust anti-aliasing filtering, and plug-in signal conditioning modules are also offered. Data can also be transmitted by Ethernet or wireless LAN.
The SCC DAQ system is another concept for data acquisition systems, proposed by National Instruments (Signal Conditioning Overview). It consists of a shielded carrier (Figure 2.15), signal conditioning modules (SCC modules - Figure 2.16), a DAQ device (Figure 2.17), and a cable (Figure 2.18), providing conditioned analog signals passed directly to the inputs of the DAQ device. The SCC modules can be connected according to the application, offering flexibility, customization, and ease of use in a single, low-profile package. They are either single or dual-channel modules that condition analog or digital signals and are available for thermocouples, RTDs, strain gauges, force/load/torque/pressure sensors, accelerometers, voltage and current input, isolated voltage and current output, frequency-to-voltage conversion, low-pass filtering, isolated digital I/O, relay switching and bread-boarding for custom circuitry.
Figure 2.13 - DEWE-5000 series (DEWETRON Inc.)
Figure 2.14 - DEWE-BOOK series (DEWETRON Inc.)
They can also provide up to 300 V of working isolation to voltage and current input/output signals from the DAQ device. Optically isolated digital I/O modules can condition digital lines from the DAQ device or be accessed directly using the 42-pin screw terminal mounted inside the box. Relay modules add switching to the SCC DAQ system, and feedthrough modules make it possible to access analog input, analog output, digital I/O and counter/timer signals, as well as timing and triggering signals, from the DAQ device. It is also possible to cascade two SCC analog input modules on a single analog input channel, passing an analog input signal through both an attenuator module and a filter module, for instance.
Figure 2.15 - Shielded Carriers (National Instruments Inc.)
Figure 2.16 - SCC Modules (National Instruments Inc.)
Figure 2.17 - DAQ Device (National Instruments Inc.)
Figure 2.18 - Shielded Cable (National Instruments Inc.)
2.4 LabView
LabVIEW is a graphical development environment by National Instruments for
creating flexible and scalable test, measurement and control applications rapidly and at
minimal cost. With LabVIEW, engineers and scientists interface with real-world signals,
analyze data for meaningful information and share results and applications. Regardless of
experience, LabVIEW makes development fast and easy for all users.
In Figure 2.19, the first panel is called Block Diagram. It is where the programming is
done. The blocks, which are functions or controls, are connected resulting in a graphical
code for the application. The second panel, called front panel, is where the controls are
placed. They can be graphs, displays, buttons, etc, composing the application interface.
Figure 2.19 - LabVIEW (National Instruments Inc.)
Data acquisition is simpler to carry out with LabVIEW. National Instruments offers the NI-DAQmx tool, a programming package installed with LabVIEW 7.0 and above for data acquisition. With this package, it is possible to set the parameters of the physical channels of the DAQ devices by creating virtual channels. This means that each physical channel can have many different configurations; thus, the same physical channel of the DAQ device can be used by different applications. A virtual channel can be created by using the DAQmx blocks or through the Measurement & Automation Explorer (MAX), an application included in the LabVIEW package for device configuration. Besides configuration, it is also possible to calibrate the DAQ board for different environment temperatures, which must be done every time the work environment changes.
On the other hand, programming with LabVIEW is sometimes hard. The programmer is limited to its built-in functions; if he or she needs a different one, it is necessary to write the code in another programming environment and use it as a DLL or in another format. Moreover, the programmer spends a long time learning this new concept. The LabVIEW documentation is still poor, sometimes making programming a nightmare. National Instruments offers many training courses, but they are quite expensive. In spite of this, LabVIEW is highly recommended for research development because the code can be easily modified at any time.
CHAPTER 3
METHODOLOGY AND DEVELOPMENT
As mentioned at the end of the Introduction (Chapter 1), the objective of this work was the development of a data acquisition system designed for the study of several types of welding processes. This system, named here SMART, had to be able to carry out synchronous acquisition of electrical signals and frames from a USB camera, and to trigger a high-speed (HS) camera that would start to acquire frames independently. The acquired data would be stored for a frame-by-frame post-analysis, i.e., correlating each frame of the USB or HS camera images with the acquired signals (voltage, current, temperature, sound, etc.) on a time base, characterizing a visual analysis of the welding arc.
Besides that, SMART had to offer portability, too. For this purpose, a PCMCIA acquisition board and a laptop would be responsible for acquiring and storing data, respectively. To make the system easy to transport, a briefcase format had to be adopted, in which the cables, the signal conditioning module and the laptop could be kept.
Flexibility had to be offered as well. Therefore, for each different type of signal to be acquired, a suitable signal conditioning module would be plugged into a signal conditioning unit. This makes possible the acquisition of any kind of signal, provided the adequate module is used.
From the methodological point of view, the development of this system had to be based on National Instruments (NI) resources (hardware and software). Cranfield University had already bought most of the necessary equipment, which was NI brand, to develop this system. Moreover, they also had an academic licence of LabVIEW, the programming language used for developing the software. Besides that, the NI resources offer flexibility: there are many types of signal conditioning modules which can easily be plugged into a signal conditioning unit, all of them offered by National Instruments. In addition, future modifications to LabVIEW-based software are usually simple to make.
The system had to be designed for up to the limits of the dual tandem welding process, which means 4 voltages and 4 currents. Temperature measurement at a minimum of 2 different points had to be available as well, along with an external trigger for the high-speed camera. Using a laptop, these signals and the frames of the USB and HS cameras had to be synchronously acquired. The hardware and software projects developed to compose this system are described below.
3.1 Hardware Project
The standard configuration established for the hardware has connectors for 4 voltages, 4 currents, 2 temperatures and 1 trigger output. Connectors for digital I/Os, analog outputs and additional analog inputs are also available. Since the development of the system is based on National Instruments (NI) resources connected to a laptop, an ideal acquisition board is the DAQCard-6062E, for the PCMCIA bus. It has 16 single-ended analog inputs, an input resolution of 12 bits and an input range from ±0.05 to ±10 V. From these parameters, it was possible to characterize the necessary signal conditioning modules for the intended system configuration.
In order to use the whole scale range, it is necessary to amplify the input signals so that their maximum value equals 10 V. Since the welding signals can be DC (direct current) or AC (alternating current), the minimum value taken for the scale range would be -10 V; for the standard configuration, however, 0 V was taken, for DC signals only.
Equation 3.1 represents the gain for carrying out this conditioning. Equation 3.2 represents the minimum detectable value of the input signal after being conditioned, where scale_gain is given by Equation 3.3. If the input signal fits the whole scale range, scale_gain is equal to 1, since conditioned_signal_range is equal to scale_range.

conditioning_gain = scale_range / signal_range              (Equation 3.1)

LSB = signal_range / (scale_gain × 2^12)                    (Equation 3.2)

scale_gain = conditioned_signal_range / scale_range         (Equation 3.3)

Afterwards, the necessary conditioning for the standard signals was established. Since NI-DAQmx coerces the input limits to fit a suitable scale range (see Table 3.1), a different scale_range parameter was considered for each signal.
Voltage

For a signal_range from 0 V to 100 V, the nearest scale_range goes from 0 V to 10 V:

conditioning_gain = (10 - 0)/(100 - 0) = 0.1

LSB = (100 - 0)/2^12 ≈ 25 mV

This means that a signal attenuator with gain 0.1 is necessary and the lowest detectable amplitude is 25 mV. In this case, the NI SCC-A10 Voltage Attenuator Module is the most adequate one.
Current

If signal_range goes from 0 to 1000 A and the current sensor sensitivity is 1 mV/A, then signal_range goes from 0 to 1 V and the nearest scale_range goes from 0 to 1 V:

conditioning_gain = (1 - 0)/(1 - 0) = 1

LSB = (1000 - 0)/2^12 ≈ 245 mA

This means that amplification is not necessary and the lowest detectable amplitude is 245 mA. Since no conditioning module is necessary and there is no suitable signal input for it in the NI SCC-2345 device (the signal conditioning unit chosen for this system), devices for making these signal inputs possible were built, as Figure 3.1 shows.
Figure 3.1: Input modules for current signals
Temperature

If signal_range goes from 0 °C to 1600 °C (considering that welding temperatures are always positive) and the R/S thermocouple sensitivity is 10 µV/°C, then signal_range goes from 0 to 16 mV and the nearest scale_range would go from 0 to 100 mV. However, thermocouple signals must be amplified in order to increase the SNR (signal-to-noise ratio). Setting conditioning_gain to 100, signal_range goes from 0 to 1.6 V and the nearest scale_range goes from 0 to 2 V:

scale_gain = 1.6/2 = 0.8

LSB = (1600 - 0)/(0.8 × 2^12) ≈ 0.5 °C

This means that a signal amplifier with gain 100 is necessary and the lowest detectable amplitude is 0.5 °C. In this case, the NI SCC-TC02 Thermocouple Input Module is the most adequate one.
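Equations 3.1-3.3 and the three worked cases above can be reproduced in a few lines; a sketch in Python, with function names mirroring the variables in the text:

```python
def conditioning_gain(scale_range, signal_range):
    # Equation 3.1: gain that fits the signal into the board's input scale
    return scale_range / signal_range

def scale_gain(conditioned_signal_range, scale_range):
    # Equation 3.3: fraction of the scale actually used by the conditioned signal
    return conditioned_signal_range / scale_range

def lsb(signal_range, s_gain, bits=12):
    # Equation 3.2: smallest detectable change of the original signal
    return signal_range / (s_gain * 2 ** bits)

# Voltage: 0-100 V signal attenuated into the 0-10 V scale
g_v = conditioning_gain(10.0, 100.0)   # 0.1 -> attenuator needed
lsb_v = lsb(100.0, 1.0)                # ~24.4 mV (quoted as ~25 mV)

# Current: 1 mV/A sensor, 0-1000 A -> 0-1 V into the 0-1 V scale
lsb_i = lsb(1000.0, 1.0)               # ~244 mA (quoted as ~245 mA)

# Temperature: 0-1600 degC amplified x100 -> 0-1.6 V into the 0-2 V scale
sg_t = scale_gain(1.6, 2.0)            # 0.8
lsb_t = lsb(1600.0, sg_t)              # ~0.49 degC (quoted as 0.5 degC)
```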
Since the welding environment is very noisy, it is necessary to use a low-pass filter on every analog input signal in order to avoid aliasing. For this purpose, an RC low-pass filter (1 pole), as shown by Figure 3.2, was attached to each analog input connector. The cut-off frequency is given by Equation 3.4.
Figure 3.2: Butterworth low-pass filter
Table 3.1: Input Ranges for NI 6020E, NI 6040E, NI 6052E, NI 6062E, and NI 6070E/6071E acquisition boards

| Range Configuration | Gain | Actual Input Range |
| 0 to +10 V | 1.0 | 0 to +10 V |
| | 2.0 | 0 to +5 V |
| | 5.0 | 0 to +2 V |
| | 10.0 | 0 to +1 V |
| | 20.0 | 0 to +500 mV |
| | 50.0 | 0 to +200 mV |
| | 100.0 | 0 to +100 mV |
| -5 to +5 V | 0.5 | -10 to +10 V |
| | 1.0 | -5 to +5 V |
| | 2.0 | -2.5 to +2.5 V |
| | 5.0 | -1 to +1 V |
| | 10.0 | -500 to +500 mV |
| | 20.0 | -250 to +250 mV |
| | 50.0 | -100 to +100 mV |
| | 100.0 | -50 to +50 mV |
fc = 1/(2πRC)                                               (Equation 3.4)

For fc = 10 kHz and C = 10 nF, R ≈ 1.5 kΩ.
Actually, the ideal resistor would be R = 1.6 kΩ, but the nearest value available in the laboratory was 1.5 kΩ, which produces fc = 10.61 kHz. This cut-off frequency was chosen in order to permit high-frequency acquisition, for instance of audio signals (in this case, an amplifier module has to be used). The Nyquist theorem says that the sampling rate has to be at least 2 times the highest signal frequency to be acquired, which means a minimum sampling rate of 20 kS/s per channel. When a narrow frequency range is wanted, a digital filter has to be used. For this purpose, a Butterworth low-pass digital filter was implemented for the system. The cut-off frequency always has to be less than half the sampling rate; so the cut-off frequency range for this filter goes from 1 to 10 kHz, and the filter can have 1, 2, 3, 4 or 5 poles.
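Equation 3.4 and the Nyquist criterion can be checked numerically; a small sketch using the component values from the text:

```python
import math

def cutoff_hz(r_ohm, c_farad):
    # Equation 3.4: cut-off frequency of a single-pole RC low-pass filter
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

# Ideal resistor for fc = 10 kHz with C = 10 nF:
r_ideal = 1.0 / (2 * math.pi * 10e3 * 10e-9)   # ~1.59 kOhm (1.6 k nominal)

# Nearest resistor available in the laboratory (1.5 kOhm):
fc = cutoff_hz(1.5e3, 10e-9)                   # ~10.61 kHz

# Nyquist: sample at no less than twice the highest frequency of interest
min_rate_per_channel = 2 * fc                  # ~21.2 kS/s for this cut-off
```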
The acquisition board of the system is the DAQCard-6062E, which has 16 single-ended analog inputs, 2 analog outputs, 8 digital I/Os, 2 counter/timers and 1 analog trigger. The sampling rate is 500 kS/s. Since only 12 analog inputs are used for the standard configuration, the maximum sampling rate for each channel is around 40 kS/s. The resolution of the analog outputs is also 12-bit, the output range is ±10 V and the output rate is 850 kS/s.
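The per-channel figure follows from dividing the multiplexed aggregate rate among the active channels; a minimal sketch:

```python
def per_channel_rate(aggregate_rate_sps, n_channels):
    # A multiplexed ADC shares its aggregate rate among the scanned channels
    return aggregate_rate_sps / n_channels

# DAQCard-6062E: 500 kS/s aggregate over the 12 standard-configuration inputs
rate = per_channel_rate(500_000, 12)   # ~41.7 kS/s (quoted as around 40 kS/s)
```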
Following the idea of portability, a briefcase was used to house the system, as shown by Figure 3.3. An aluminium panel was built to hold the I/O connectors. It can be opened, and the laptop can be kept inside the briefcase, on the lid of the signal conditioning unit placed on the bottom of the briefcase. Eventually, the briefcase's lid can be used for keeping loose cables.
Figure 3.3: Briefcase used for composing the system
Since most sensors have BNC connectors as standard, 8 female BNC panel connectors were used for the 4 voltage and 4 current input signals. In order to connect the voltage signals to their related inputs, 4 cables (2 metres long each) were built, one end finished with a male BNC connector and the other with two crocodile connectors, according to Figure 3.4.
Figure 3.4: Cable built for voltage signals
The current sensors already have their own cables. For the temperature signals, 2 mini thermocouple panel connectors and 2 extension cables (2 metres long each) were used, as Figure 3.5 shows.
Figure 3.5: Cable for temperature signals
For the trigger of the high-speed camera, 2 female banana jack connectors and 1 cable were used, one end of the cable finished with 2 male banana jack connectors and the other with 1 female BNC connector, according to Figure 3.6. This cable has circuitry, shown by Figure 3.7, which works as an on/off switch.
Figure 3.6: Cable for the trigger of the high-speed camera
Figure 3.7: Circuitry for the trigger cable
A female DB9 connector was also provided for digital I/Os, in case future modifications to the system need it. For additional signals, a hole was left. Figure 3.8 shows the idealized final panel.
Figure 3.8: Panel built for the system’s briefcase
The signal conditioning unit lies on the bottom of the briefcase. A top door gives access to this unit, as Figure 3.9 shows. The laptop can be placed on this door, composing a tidy work environment. Figure 3.10 illustrates how the input and output terminals can be used for connecting signals, such as the trigger output.
Figure 3.9: Signal conditioning unit
In order to provide more details of the welding arc, a lens with an adapter was attached to the USB camera, according to Figure 3.11. This lens is a Helios 44M and its focal distance is 58 mm.
Filters are necessary to decrease the incident arc brightness and give definition to the arc image. They are Cokin brand and have the following specifications:
Ultra-violet (U.V) P231
Neutral Grey ND4 P153
Neutral Grey ND2 P152
Neutral Grey ND8 P154
Dark
The computer used for running this system is a DELL with an Intel® Xeon™ processor, 2.80 GHz, 1 GB RAM and USB 2.0 ports for plugging in the webcam.
Figure 3.10: Detail that schemes the input and output terminals of the signal conditioning unit
and shows the trigger output
Figure 3.11: Webcam with magnifying lens and filters attached
3.2 Software Project
The software was developed in LabVIEW 7.1 for Windows XP because it is an easy programming language and the standard in the laboratory, since the university has an academic licence provided by National Instruments. Programming in LabVIEW is relatively simple. The objects which belong to a function are placed on the “Front Panel”. This function can be configured to work as a window and will show up when executed. The code is built on the “Block Diagram”, where these objects are linked to other functions. The software development is described below. The following figures can be better visualized on the computer, where zoom tools are available, than in the printed version of this thesis.
Figure 3.12 shows the main window. All the signals selected using the button “Channels” are plotted on the graph “Signal”. The three buttons on the left, below this graph, make it possible to move the cursor and zoom the plots. The boxes on the right show the current position of the cursor. The three other buttons can be used for configuring the plots and moving the cursor.
The webcam frames are placed on “USB Camera” and the high-speed camera
frames on “HS Camera”. There are five buttons under these image graphs: play, backward,
forward, pause and stop. Pressing play, the webcam frames play until pause or stop is
pressed, or the last frame is reached. Pressing pause, the current frame is shown and it is
the reference for next executions. Pressing stop, the current frame is also shown, but the first
frame (frame zero) will be the next one, when a new execution is done. Pressing the
backward and the forward buttons, the previous and the next frames are shown, respectively.
The box “Frame” displays the number of the frame.
Pressing “Start” the acquisition starts and the red LED is turned on. Pressing “Stop”
the acquisition stops and this LED is turned off. The box “Project” contains the path of the
folder where the project file is saved.
Figure 3.13 shows the code that builds the menus which appear on the top of the
main window when the software is executed. They are File (items: New File, Open File, Save
as xls), Acquisition (items: Setup Signal, Setup Image, Filter) and Camera (item: Settings).
After building the menus, the software becomes a main sequence divided into three frames. The first one initializes all variables, the second one calibrates the system, and the third one, more complex, is the core of the system.
Figure 3.14 shows the first frame, which contains the default values of all variables used by the software. If there is a USB camera connected to the computer, a session for its image is created. If there is no USB camera connected, the session is not created and the window that sets the parameters of the USB camera cannot be opened.
Figure 3.12: Illustration of the main window, referred in the project as function “Main”
Figure 3.13: Illustration of the code that creates the menu of the main window
The value of the variable “Delay” is set to 0.0026 seconds. This is a constant value used for calculating the frame of the HS camera that should be plotted when a frame-by-frame analysis is executed. This value was chosen after some trials, as Chapter 4 explains.
The value of the variable “dt USB” is 0.1 seconds. This is the constant interval of time used to acquire each frame of the USB camera and to read the corresponding buffer of data coming from the acquisition board.
The other variables and objects receive default values, which can be modified during
the execution of the system.
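As a rough sketch of how these constants could map a signal sample time to a camera frame (the exact mapping used by SMART is not given at this point in the text, so the formulas and the frame-rate parameter below are illustrative assumptions):

```python
def usb_frame_index(sample_time_s, dt_usb=0.1):
    # One USB frame is grabbed every dt_usb seconds (0.1 s in the text)
    return int(sample_time_s / dt_usb)

def hs_frame_index(sample_time_s, frame_rate_hz, delay_s=0.0026):
    # Hypothetical mapping: subtract the 0.0026 s trigger delay, then scale
    # by the HS camera frame rate (frame_rate_hz is an assumed parameter)
    return max(0, int((sample_time_s - delay_s) * frame_rate_hz))
```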
Figure 3.14: Illustration of the code that initializes the variables and objects of the function
“Main”
Afterwards, the system is calibrated. First, it is verified whether any acquisition board is connected to the computer. If not, the warning message “The system cannot be calibrated because there is no acquisition board and/or USB camera connected” appears, as Figure 3.15 shows. Otherwise, the acquisition board is calibrated and reset, as Figure 3.16 illustrates. It is necessary to reset the acquisition board in order to invalidate every task associated with it. Then, it becomes immediately available for any new task.
Figure 3.15: Illustration of the code that shows a warning message when no acquisition
board is plugged to the computer
Figure 3.16: Illustration of the code that calibrates and resets the acquisition board
After calibrating and resetting the acquisition board, it is also verified whether a USB camera is connected to the computer. If not, another warning message appears, “The system cannot be calibrated because there is no USB camera connected”, according to Figure 3.17. Otherwise, a short acquisition (one second) is done, as Figure 3.18 shows. This was necessary because the system presented a delay before starting to acquire data when it was run for the first time, probably due to its dependence on the webcam.
Figure 3.17: Illustration of the code that shows a warning message when no USB camera is
connected
Figure 3.18: Illustration of the code that calibrates the system
Figure 3.19 represents the function “Image Test”, which carries out this short acquisition. First, all the initializations are done, according to Figure 3.20. Afterwards, the data acquisition is carried out, although no samples are saved, as Figure 3.21 illustrates.
Figure 3.19: Illustration of the front panel of the function “ImageTest”
Figure 3.20: Illustration of the code that initializes the variables and objects of the function
“ImageTest”
Figure 3.21: Illustration of the code of the function “ImageTest” that does a short acquisition
for the calibration of the system
The third frame of the sequence is more complex. It is composed of event frames.
The first one is “Menu Selection”, which is a sequence of frames, one per item of the menu.
Figure 3.22 shows the first one, “New File…”: nothing happens if a new project file is
not chosen; otherwise the variables are reset and a new project file is created. This file
receives the extension *.prj and is where the values of the variables used during the data
acquisition are saved. Thus, it is not necessary to reconfigure the system every time it runs.
Figure 3.23 illustrates the function responsible for creating new files. A window asking
for the name of the new project file shows up. If it is not cancelled and the chosen name already
exists, a dialog box asks the question “Do you want to replace <file>?”. If “Yes”, this
file is deleted and a new one is created. If “No”, nothing happens. If the file does not exist, a
new one is created directly. Figure 3.24 and Figure 3.25 illustrate how it works.
The function “Create”, represented by Figure 3.26, is responsible for creating three
new blank files with the extensions *.txt (for signal samples), *.xls (for converting the txt file to
an Excel file) and *.avi (for videos), with the same name as the project file. If these files
already exist, they are deleted. Figure 3.27 shows how it works.
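The naming rule behind “Create” can be sketched in Python. This is an illustration of the described logic, not the LabVIEW code; the function name is illustrative:

```python
from pathlib import Path

def create_companions(project_file):
    """Sketch of the "Create" function: derive the *.txt, *.xls and
    *.avi paths from the project file name, and delete any files left
    over from a previous acquisition with the same name."""
    prj = Path(project_file)
    companions = {ext: prj.with_suffix(ext) for ext in (".txt", ".xls", ".avi")}
    for path in companions.values():
        path.unlink(missing_ok=True)  # existing files are deleted first
    return companions
```

Because every companion shares the project file's stem, opening a project later only requires searching its folder for files with the same name, as the function “Read” does.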
Figure 3.22: Illustration of the code that shows the item “New File…” of the “Menu Selection”
Figure 3.23: Illustration of the front panel of the function “New File”
Figure 3.24: Illustration of the code that shows the function “New File” asking for a name for
the files to be created
Figure 3.25: Illustration of the code that shows the function “New File” asking for a name for
the file to be created directly
Figure 3.26: Illustration of the front panel of the function “Create”
Figure 3.27: Illustration of the code of the function “Create” that creates three new files with
the extensions txt, xls and avi
The next item of the “Menu Selection” is “Open File…”, as Figure 3.34 shows. The
function responsible for opening the project is illustrated by Figure 3.28. A window asking for its
file name shows up. If it is not cancelled and the file exists, the function “Read” (Figure 3.31)
is called and returns the TXT and AVI file paths, as Figure 3.29 shows. Otherwise, these
paths are empty strings, as Figure 3.30 illustrates. The function “Read” reads the project file, as
Figure 3.32 shows, and searches for the *.txt and *.avi files in the project file’s folder. If they
exist, their paths are also read, as Figure 3.33 shows.
Figure 3.28: Illustration of the front panel of the function “Open Project”
Figure 3.29: Illustration of the code of the function “Open Project” that asks the name of the
project to be opened and returns txt and avi paths
Figure 3.30: Illustration of the code of the function “Open Project” that asks the name of the
project to be opened and returns empty strings for txt and avi paths
Figure 3.31: Illustration of the front panel of the function "Read"
Figure 3.32: Illustration of the code of the function “Read” that opens the project and reads its
data
Figure 3.33: Illustration of the code of the function “Read” that looks for the txt and avi files of
the read project
Figure 3.34: Illustration of the code that shows the item “Open File…” of the “Menu Selection”
After opening the files and reading their contents, the variables and objects of the
system are set. If the txt path is not empty, the number of samples is read by the function “NF
Signal” (Figure 3.37 and Figure 3.38) and the signals are plotted, as Figure 3.35 shows.
Otherwise, only the variable “dt Signal” is set, as shown by Figure 3.36.
Figure 3.35: Illustration of the code that calls the function “NF Signal” and plots the signals
Figure 3.36: Illustration of the code that sets a default value for the variable “dt Signal”
Figure 3.37: Illustration of the front panel of the function "NF Signal"
Figure 3.38: Illustration of the code of the function “NF Signal” that reads the txt file and its
number of samples
When a project file is opened, a window shows up asking for the corresponding HS
camera file. If it is not cancelled and the avi path is not empty, the function “NF HS” (Figure
3.42 and Figure 3.43) reads its number of frames. If the chosen file is the same as the USB
camera file, the error message “Error! Choose a different file for the High Speed Camera”
shows up. Figure 3.39, Figure 3.40 and Figure 3.41 illustrate how it works.
If the avi path of the USB camera file is not empty, the function “NF USB” (Figure
3.46 and Figure 3.47) reads its number of frames. Figure 3.44 and Figure 3.45 show how it
works.
Afterwards, some ratios are calculated. The variable “dt HS” is divided by “dt Signal”
and “dt USB” in order to find out how many signal samples and USB frames, respectively,
correspond to an HS frame. “Delay” is divided by “dt HS” in order to find out how many HS
frames have to be shifted. And “dt USB” is divided by “dt Signal” and “dt HS” in order to find
out how many signal samples and HS frames, respectively, correspond to a USB frame.
This is illustrated by Figure 3.48. If any of these ratios is less than 1, it is inverted, as shown
by Figure 3.49. Then, depending on the case (direct or inverse), a different type of execution
is set. It can be 1 (direct), 0 (inverse) or -1 (no camera file is opened), as Figure 3.50 illustrates
this case.
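The ratio calculation can be sketched as below. The Python is only an illustration of the arithmetic described above; the dictionary keys and the split into a ratio table plus a direct/inverse flag table are illustrative, not the LabVIEW variable names:

```python
def sync_ratios(dt_signal, dt_hs, dt_usb, delay):
    """Sketch of the ratios that map the three streams (signal samples,
    HS camera frames, USB camera frames) onto each other."""
    ratios = {
        "samples_per_hs_frame":    dt_hs / dt_signal,
        "usb_frames_per_hs_frame": dt_hs / dt_usb,
        "samples_per_usb_frame":   dt_usb / dt_signal,
        "hs_frames_per_usb_frame": dt_usb / dt_hs,
    }
    # A ratio below 1 is inverted; the direct/inverse flag records which
    # form is in use, as the "type of execution" does in the real code.
    direct = {k: v >= 1 for k, v in ratios.items()}
    ratios = {k: (v if v >= 1 else 1 / v) for k, v in ratios.items()}
    ratios["hs_frame_shift"] = delay / dt_hs  # HS frames shifted by the trigger delay
    return ratios, direct
```

For example, with a signal sampled every 0.2 ms, an HS frame every 2 ms and a USB frame every 100 ms, one HS frame spans 10 samples and one USB frame spans 50 HS frames.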
Figure 3.39: Illustration of the code that asks for the HS camera file to be opened and calls the
function “NF HS”
Figure 3.40: Illustration of the code that shows an error message if the HS camera file is the
same as the USB camera file
Figure 3.41: Illustration of the code that sets default values for the variables “NF HS” and “dt
HS”, just in case no HS camera file is opened
Figure 3.42: Illustration of the front panel of the function "NF HS"
Figure 3.43: Illustration of the code of the function “NF HS” that reads the number of frames
of the HS camera file
Figure 3.44: Illustration of the code that calls the function “NF USB”
Figure 3.45: Illustration of the code that sets a default value for the variable “NF USB”
Figure 3.46: Illustration of the code of the function “NF USB”
Figure 3.47: Illustration of the code of the function “NF USB” that reads the number of frames
of the USB camera file
Figure 3.48: Illustration of the code that calculates the ratios between the frames of each
camera and the signal samples
Figure 3.49: Illustration of the code that calculates the inverse ratios between the frames of
each camera and the signal samples
Figure 3.50: Illustration of the code that sets the variables to be used during the frame-by-
frame analysis if no camera file is opened
The next “Menu Selection” frame is “Save as xls…”. This item converts a txt file into
an xls file. First, the message “Saving .xls File…” appears on the toolbar of the main window
and the progress bar is activated, as Figure 3.51 shows. Then, the function “Save XLS”
(Figure 3.56, Figure 3.57 and Figure 3.58) asks for the txt file to be converted and the function
“Read txt File” (Figure 3.59, Figure 3.60 and Figure 3.61) creates an xls path. If it is not
cancelled, the progress bar is set to zero, as Figure 3.52 shows. If a file with the created xls
path already exists, it is deleted by the function “Delete” (Figure 3.62 and Figure 3.63), as Figure
3.53 illustrates.
Afterwards, the conversion is done by the function “xls” (Figure 3.64 and Figure 3.65), as
Figure 3.54 shows. First, Excel is opened by the function “Open Excel and Make Visible” (Figure
3.66 and Figure 3.67). Then, a workbook and a worksheet are created by the functions “Open
Specific Workbook” (Figure 3.68 and Figure 3.69) and “Open Specific Worksheet” (Figure
3.70 and Figure 3.71), respectively. After that, the function “Row Col To Range Format” (Figure
3.75 and Figure 3.76) converts the column and the row, which are numeric, into Excel
range format, which identifies the cell where a value is placed. The function “Set Cell Value with
Range” (Figure 3.72, Figure 3.73 and Figure 3.74) places the values into the cells.
Finally, the toolbar is cleared and the progress bar is deactivated, as Figure 3.55
shows.
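The cell-addressing step performed by “Row Col To Range Format” can be sketched as follows; the Python below illustrates the numeric-to-A1 conversion only, not the LabVIEW implementation:

```python
def row_col_to_range(row, col):
    """Sketch of "Row Col To Range Format": convert 1-based numeric
    row/column indices into Excel A1-style range notation."""
    letters = ""
    while col > 0:
        col, rem = divmod(col - 1, 26)   # base-26 with letters A..Z
        letters = chr(ord("A") + rem) + letters
    return f"{letters}{row}"
```

Column 2, row 3 becomes "B3", and column 27 wraps into the two-letter form "AA", which is the addressing scheme Excel expects when a single cell value is written.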
Figure 3.51: Illustration of the code that shows a message and a progress bar on the toolbar
of the main window, before starting to convert a txt file into an xls file
Figure 3.52: Illustration of the code that sets a progress bar on the toolbar of the main
window
Figure 3.53: Illustration of the code that calls the function “Delete XLS”
Figure 3.54: Illustration of the code that calls the function “XLS”
Figure 3.55: Illustration of the code that deactivates the progress bar and clears the toolbar of
the main window
Figure 3.56: Illustration of the front panel of the function "Save XLS"
Figure 3.57: Illustration of the code of the function “Save XLS” that asks the txt file to be
converted and calls the function “Read txt File”
Figure 3.58: Illustration of the code of the function “Save XLS” that returns an empty string
for the xls and txt files, in case no txt file is opened
Figure 3.59: Illustration of the front panel of the function "Read txt File"
Figure 3.60: Illustration of the code of the function “Read txt File” that returns the paths of the
xls and txt files
Figure 3.61: Illustration of the code of the function “Read txt File” that returns an empty string
if no txt file exists
Figure 3.62: Illustration of the front panel of the function "Delete"
Figure 3.63: Illustration of the code of the function “Delete” that deletes an existing xls file
Figure 3.64: Illustration of the front panel of the function "xls"
Figure 3.65: Illustration of the code of the function "xls" that converts a txt file into an xls file
Figure 3.66: Illustration of the front panel of the function "Open Excel and Make Visible"
Figure 3.67: Illustration of the code of the function "Open Excel and Make Visible" that opens
an Excel application
Figure 3.68: Illustration of the front panel of the function "Open Specific WorkBook"
Figure 3.69: Illustration of the code of the function "Open Specific WorkBook" that opens a
specific workbook of an Excel application
Figure 3.70: Illustration of the front panel of the function "Open Specific WorkSheet"
Figure 3.71: Illustration of the code of the function "Open Specific WorkSheet" that opens a
specific worksheet of a workbook of an Excel application
Figure 3.72: Illustration of the front panel of the function "Set Cell Value with Range"
Figure 3.73: Illustration of the code of the function "Set Cell Value with Range" that writes a
value into an Excel cell
Figure 3.74: Illustration of the code of the function "Set Cell Value with Range" that does not
write a value into an Excel cell, if an error occurs
Figure 3.75: Illustration of the front panel of the function "Row Col To Range Format"
Figure 3.76: Illustration of the code of the function "Row Col To Range Format" that converts
the column and the row, which are numeric, into Excel range format
Figure 3.77 illustrates the item “Exit” of the “Menu Selection”. If the variable “Camera”
is true, it means that a USB camera session was previously opened and must be closed.
Otherwise, this is not necessary. Then, the software quits.
Figure 3.77: Illustration of the code that shows the item “Exit…” of the “Menu Selection” that
closes a USB camera session and quits the software
The item “Setup Signal…” of the “Menu Selection” is shown by Figure 3.78. The
window “Setup Signal” (Figure 3.79) shows up for setting the acquisition parameters:
Channel, Clock Settings, Acquisition Mode and Advanced Clock Settings (actually, the last
ones are not used by the software). If the OK button is pressed, the global variables are
updated. If the Cancel button is pressed, they are not. Pressing either of these buttons closes
the window. Figure 3.80, Figure 3.81, Figure 3.82, Figure 3.83 and Figure 3.84
show how it works.
Figure 3.78: Illustration of the code that shows the item “Setup Signal…” of the “Menu
Selection” that calls the function “Setup Signal”
Figure 3.79: Illustration of the window "Acquisition Setup", referred to in the project as function
"Setup Signal"
Figure 3.80: Illustration of the code of the function "Setup Signal" that sets the values of the
objects, which appear on this window, with the values kept by global variables
Figure 3.81: Illustration of the code of the function "Setup Signal" that disables the objects
“Clock Source” and “Active Edge”, in case the object “Clock Type” is internal (value 0)
Figure 3.82: Illustration of the code of the function "Setup Signal" that updates the global
variables “Clock Source” and “Active Edge”, in case the object “Clock Type” is internal (value 0)
Figure 3.83: Illustration of the code of the function "Setup Signal" that enables the objects
“Clock Source” and “Active Edge”, in case the object “Clock Type” is external (value 1)
Figure 3.84: Illustration of the code of the function "Setup Signal" that updates the global
variables “Clock Source” and “Active Edge”, in case the object “Clock Type” is external (value
1)
Figure 3.85 shows the item “Setup Image…” of the “Menu Selection”. The window
“Setup Image” (Figure 3.86) is used for selecting the trigger channel and the USB camera.
The box “fps” means frames per second acquired by the USB camera. Webcams can
normally acquire from 15 to 30 fps. In order to avoid repeated frames, the chosen value
for this parameter is 10 fps. Another reason is the quantity of information to be plotted
during the acquisition: a larger time interval is better because the plotting function is
called fewer times. Figure 3.87, Figure 3.88, Figure 3.89, Figure 3.90 and Figure 3.91 show
how this setting of parameters is done.
Figure 3.85: Illustration of the code that shows the item “Setup Image…” of the “Menu
Selection” that calls the function “Setup Image”
Figure 3.86: Illustration of the window "Setup Image", referred to in the project as function
"Setup Image"
Figure 3.87: Illustration of the code of the function "Setup Image" that sets the values of the
objects, which appear on this window, with the values kept by global variables, and verifies whether
any USB camera is plugged into the computer and whether any USB camera session was started
(true case)
Figure 3.88: Illustration of the code of the function "Setup Image" that verifies whether any USB
camera is plugged into the computer and whether any USB camera session was started (false case)
Figure 3.89: Illustration of the code of the function "Setup Image" that verifies whether USB cameras
are plugged into the computer and lists them
Figure 3.90: Illustration of the code of the function "Setup Image" that updates the global
variables, if the button OK is pressed
Figure 3.91: Illustration of the code of the function "Setup Image" that does not update any
global variable, if the button Cancel is pressed
Figure 3.92 shows the item “Filter…” of the “Menu Selection”. The window “Filter”
(Figure 3.93) asks for the values of the cut-off frequency and the number of poles of the
Butterworth low-pass digital filter. Figure 3.94, Figure 3.95 and Figure 3.96 show how it
works.
Figure 3.92: Illustration of the code that shows the item “Filter…” of the “Menu Selection” that
calls the function “Filter”
Figure 3.93: Illustration of the window "Low-pass filter", referred to in the project as function
"Filter"
Figure 3.94: Illustration of the code of the function "Filter" that sets the limit values for the
objects that appear on this window
Figure 3.95: Illustration of the code of the function "Filter" that sets the values of the objects
that appear on this window
Figure 3.96: Illustration of the code of the function "Filter" that updates the global variables
if the button OK is pressed, and does not update anything if the button Cancel is pressed
The item “Settings…” of the “Menu Selection” (Figure 3.97) calls the window
“Camera” (Figure 3.98), responsible for setting the USB camera parameters. Each webcam
has a different set of configurable parameters. Thus, pushing the button “Image”, it is
possible to set the image resolution, for instance; and pushing the button “Video”, it is
possible to configure brightness, saturation, sharpness, gamma, mode and other
parameters.
Figure 3.97: Illustration of the code that shows the item “Settings…” of the “Menu Selection”
that calls the function “Camera”
Figure 3.98: Illustration of the window "Camera Settings", referred to in the project as function
"Camera"
After doing this, those parameters are kept and the image acquisition of the USB
camera will be done with the last configuration. Figure 3.99, Figure 3.100, Figure 3.101,
Figure 3.102, Figure 3.103, Figure 3.104, Figure 3.105 and Figure 3.106 show how it works.
Figure 3.99: Illustration of the code of the function “Camera” that enables the buttons “Video”
and “Image”, and shows the name of the USB camera indicated by the global variable
“Camera”
Figure 3.100: Illustration of the code of the function “Camera” that initializes a new USB
camera session and shows the USB camera frames, if none of the objects on the window
“Camera Settings” is changed within 10 ms
Figure 3.101: Illustration of the code of the function “Camera” that initializes a new USB
camera session if the object “USB Cameras” changes, meaning that a new USB camera was
chosen
Figure 3.102: Illustration of the code of the function “Camera” that stops the current USB
camera session, opens the “Image Settings” window of that camera and starts to play the
frames again using the new configuration
Figure 3.103: Illustration of the code of the function “Camera” that stops the current USB
camera session, opens the “Video Settings” window of that camera and starts to play the
frames again using the new configuration
Figure 3.104: Illustration of the code of the function “Camera” that updates the value of the
global variable “Camera” if the button OK is pressed
Figure 3.105: Illustration of the code of the function “Camera” that does not update anything
if the button Cancel is pressed
Figure 3.106: Illustration of the code of the function “Camera” that does not execute anything
if there is no USB camera plugged into the computer
If none of the items of the “Menu Selection” is selected, a default value equal to
“false” is used for keeping the software waiting for an event, without executing any function,
as Figure 3.107 shows.
Figure 3.107: Illustration of the code that shows the item “Default” of the “Menu Selection”
that does not do anything if no other event happens
The next frame of the sequence of events is “Channels Button”, as Figure 3.108
shows. The window “Channel” (Figure 3.109) is used for selecting the channels whose
signals will be plotted during the acquisition. If it is cancelled and there is no signal already
acquired, nothing happens. Otherwise, the signals related to the selected channels (variable
“User Channels”) are extracted from the buffer of signals (variable “Signal”) and plotted.
Figure 3.108: Illustration of the code that shows the event “Channels Button” that calls the
function “Channel” and sets the colour of the plots on the main window
Figure 3.109: Illustration of the window "Channels", referred to in the project as function
"Channels"
When the window “Channel” is opened, the last configuration has to be shown. For
this purpose, the window “Channel” is first cleared and only the names of the channels are
kept, as shown by Figure 3.110. Then, the selected channels receive a “check” symbol
indicating that they are selected. A sequence of colours (of size equal to the quantity of
available channels) is built. Each channel receives a colour, which is the same one used for
plotting its acquired signal, as Figure 3.111 shows.
Figure 3.110: Illustration of the code of the function “Channel” that clears the object “Local
User Channels”, responsible for listing the names of the set channels
Figure 3.111: Illustration of the code of the function “Channel” that sets the object “Local
User Channels”
Then, the currently selected channels are saved, in case the selection of channels
is cancelled later, as Figure 3.112 illustrates. After that, an event is expected. If an item is
selected, a “check” symbol is used. If the “OK” button is pushed, the current “User Channels”
are updated. Otherwise, if the “Cancel” button is pushed, the previous configuration is
recovered. Figure 3.113, Figure 3.114 and Figure 3.115 show how it works.
Figure 3.112: Illustration of the code of the function “Channel” that saves the currently selected
channels in case the selection of channels is cancelled
Figure 3.113: Illustration of the code of the function “Channel” that updates the global
variable “User Channels”, if the button OK is pressed
Figure 3.114: Illustration of the code of the function “Channel” that rewrites the original
selection of channels on the object “Local User Channels”, if the button Cancel is pressed
Figure 3.115: Illustration of the code of the function “Channel” that puts a check mark in front
of a selected channel, if the object “Local User Channels” is clicked
The next event is “Start”, responsible for doing the data acquisition. First, the current
project is deleted, as shown by Figure 3.116. Then the function “Create” (Figure 3.117)
checks whether txt, xls or avi files belonging to the current project already exist. If so,
they are also deleted. Finally, paths for these files are created, as Figure 3.118
shows.
Then, the items of the menu on the window “Main” are disabled, as well as the “Start”
button and the buttons related to the cameras. The other objects are reset and the variables
related to the acquisition configuration are saved into the project file following the order
below:
order: the order of the low-pass digital filter;
fl: the cut-off frequency of the low-pass digital filter;
Camera: the position of the webcam in the list of USB cameras plugged into the
computer;
Trigger: the name of the channel used for triggering the high-speed camera;
Pulses: the number of pulses necessary to trigger the high-speed camera (always
equal to 1);
Acquisition Mode: continuous or finite, assuming the values 10123 or 10178,
respectively;
Clock Type: internal or external, assuming the values 0 or 1, respectively;
Active Edge: rising or falling, assuming the values 0 or 1, respectively;
Sample Time: the acquisition time in seconds, in case the acquisition mode is finite; it
assumes discrete values between 1 and 60 seconds;
Rate: the sampling rate per channel;
Clock Source: the channel for the sample clock, if an external clock is
used;
Channels: the channels used for doing the acquisition;
User Channels: the selected channels to be plotted;
Color Graph: the colours used for the selected channels.
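The positional saving scheme above can be sketched as follows. The plain-text, one-value-per-line layout is an assumption made for illustration; only the field names and their order come from the list above:

```python
# Field order of the configuration block saved into the project file.
PRJ_FIELDS = [
    "order", "fl", "Camera", "Trigger", "Pulses", "Acquisition Mode",
    "Clock Type", "Active Edge", "Sample Time", "Rate", "Clock Source",
    "Channels", "User Channels", "Color Graph",
]

def save_project(path, config):
    """Write one value per line in the fixed order above, so the
    reader can restore them positionally without stored keys."""
    with open(path, "w") as f:
        for key in PRJ_FIELDS:
            f.write(f"{config[key]}\n")

def load_project(path):
    """Read the values back in the same fixed order."""
    with open(path) as f:
        return dict(zip(PRJ_FIELDS, (line.rstrip("\n") for line in f)))
```

Because the order is fixed, the reading side never has to parse field names, which matches why the dissertation emphasizes the exact sequence of the saved variables.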
Figure 3.116: Illustration of the first frame of the event “Start” that deletes the current project
Figure 3.117: Illustration of the front panel of the function "Create"
Figure 3.118: Illustration of the code of the function “Create” that creates paths for txt, xls and
avi files of the current project
In order to avoid any kind of interference during the acquisition, all the possible
combinations of types of execution had to be handled in advance. Thus, the variable “Execution”
was created. If there is no webcam plugged in, the trigger for the high-speed camera is not solicited
and there is at least one selected channel, this variable becomes equal to 3, as Figure 3.119
shows. If there is a webcam plugged in, “Execution” becomes equal to 4, as Figure 3.120
illustrates. If the USB camera is not plugged in but the trigger is solicited, this variable assumes
value 1, as shown by Figure 3.121. If the webcam is plugged in, the trigger is solicited and
there is at least one selected channel, this variable assumes value 2, as Figure 3.122 shows.
The default value for this variable is 0.
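The decision table for “Execution” can be sketched as a single function. One assumption is made for illustration: every non-zero case requires at least one selected channel, which the text states explicitly only for cases 2 and 3:

```python
def execution_type(webcam, trigger, channels_selected):
    """Sketch of the "Execution" decision table:
    1: trigger, no webcam      2: webcam and trigger
    3: channels only           4: webcam, no trigger
    0: nothing to acquire (the default value)."""
    if not channels_selected:
        return 0  # assumption: no selected channel means no acquisition
    if trigger:
        return 2 if webcam else 1
    return 4 if webcam else 3
```

Precomputing the combination this way is what lets the acquisition loop itself run without conditional branching, which is the interference-avoidance motivation given above.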
Figure 3.119: Illustration of the second frame of the event “Start” that sets all objects on the
main window, saves all global variables into the project file and sets the variable “Execution”
with value 3
Figure 3.120: Illustration of the second frame of the event “Start” that sets the variable
“Execution” with value 4
Figure 3.121: Illustration of the second frame of the event “Start” that sets the variable
“Execution” with value 1
Figure 3.122: Illustration of the second frame of the event “Start” that sets the variable
“Execution” with value 2
After setting the variable “Execution”, if its value is equal to 0, nothing happens, as
Figure 3.123 shows. Otherwise, the type of execution is selected and the acquisition starts.
The program is divided into 3 threads: one responsible for the webcam image acquisition,
another responsible for reading data from the acquisition board, and a main one
responsible for synchronizing the first two threads and plotting the signals and images on the
window “Main”. The communication among them is done by notifiers. This procedure is
shown by Figure 3.124.
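The three-thread arrangement can be sketched as below. Python queues stand in for LabVIEW notifiers (an assumption made for illustration), and the string payloads are placeholders for signal buffers and webcam frames:

```python
import queue
import threading

def acquisition_sketch(n_buffers=3):
    """Sketch of the thread layout: a signal-reader thread, a webcam
    grabber thread, and the main loop that pairs their outputs for
    plotting. queue.Queue plays the role of the notifiers."""
    signals, frames = queue.Queue(), queue.Queue()
    paired = []

    def read_signals():
        for i in range(n_buffers):
            signals.put(f"buffer {i}")   # one signal buffer per 0.1 s in the real system

    def grab_frames():
        for i in range(n_buffers):
            frames.put(f"frame {i}")     # one webcam frame per 0.1 s

    t1 = threading.Thread(target=read_signals)
    t2 = threading.Thread(target=grab_frames)
    t1.start(); t2.start()
    for _ in range(n_buffers):           # main thread: synchronize and "plot"
        paired.append((signals.get(), frames.get()))
    t1.join(); t2.join()
    return paired
```

The blocking `get` calls are what keep the plotting loop in step with both producers, which is the synchronizing role the main thread plays in the LabVIEW design.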
Figure 3.123: Illustration of the third frame of the event “Start” that does not do anything if the
variable “Execution” is equal to 0
Figure 3.124: Illustration of the third frame of the event “Start” that calls the function “Image
1” and plots the acquired data
Since a webcam frame is acquired every 0.1 second (10 fps), a buffer of signals has
to be read within the same interval of time. The function “Image1” (Figure 3.125) is responsible for
doing the acquisition when the variable “Execution” is equal to 1. After setting the necessary
variables (Figure 3.126) and checking the acquisition mode, which can be finite or
continuous, the data acquisition starts, as Figure 3.127, Figure 3.128 and Figure 3.129 show.
If the acquisition is continuous, the button “Stop” has to be pushed in order to stop the
acquisition. If the acquisition is finite, it stops when the button “Stop” is pushed or the “Sample
Time” is reached.
Figure 3.125: Illustration of the front panel of the function "Image1"
Figure 3.126: Illustration of the code of the function "Image1" that sets the inputs to be used
by the functions during the data acquisition
Figure 3.127: Illustration of the code of the function "Image1" that calls functions “Start2” and
“Butterworth” and acquires data during a determined interval of time
Figure 3.128: Illustration of the code of the function "Image1" that calls functions “Start2” and
“Butterworth” and acquires data until the button Stop of the main window is pressed
Figure 3.129: Illustration of the code of the function "Image1" that waits for the fifth USB
camera frame to start the data acquisition
The function “Start2” (Figure 3.130) starts acquiring the signals and fires the trigger in
order to start the HS camera image acquisition, as shown by Figure 3.131 and Figure 3.132.
Since the delay between these two moments is deterministic, it is possible to calculate the
correct frame to be plotted during a frame-by-frame analysis.
The function “Butterworth” (Figure 3.133) is a Butterworth low-pass digital filter which
filters the acquired data. LabVIEW can filter only one signal at a time. Thus, 16 filters were
put in parallel, so that up to 16 channels can be filtered at the same time, as shown by Figure 3.134.
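The per-channel filtering can be sketched in plain Python. The LabVIEW VI accepts an arbitrary number of poles; this sketch fixes the order at 2 (one Butterworth biquad via the bilinear transform) purely for illustration:

```python
import math

def butter2_lowpass(cutoff_hz, fs_hz):
    """Second-order Butterworth low-pass coefficients (bilinear
    transform, Q = 1/sqrt(2)); a stand-in for the LabVIEW filter VI."""
    w0 = 2.0 * math.pi * cutoff_hz / fs_hz
    alpha = math.sin(w0) * math.sqrt(0.5)          # sin(w0)/(2*Q), Q = 1/sqrt(2)
    b = [(1 - math.cos(w0)) / 2, 1 - math.cos(w0), (1 - math.cos(w0)) / 2]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filter_channels(channels, cutoff_hz, fs_hz):
    """Run the same filter over every channel, the role played by the
    16 parallel filter instances in the LabVIEW diagram."""
    b, a = butter2_lowpass(cutoff_hz, fs_hz)
    out = []
    for x in channels:
        x1 = x2 = y1 = y2 = 0.0
        y = []
        for s in x:  # direct-form I difference equation
            v = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
            x2, x1 = x1, s
            y2, y1 = y1, v
            y.append(v)
        out.append(y)
    return out
```

Since the filter has unity DC gain, a constant input settles to the same constant at the output, which is a quick sanity check for any low-pass implementation.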
Figure 3.130: Illustration of the front panel of the function "Start2"
Figure 3.131: Illustration of the code of the function "Start2" that starts reading the inputs of
the acquisition board
Figure 3.132: Illustration of the code of the function "Start2" that fires the trigger of the HS
camera
Figure 3.133: Illustration of the front panel of the function “Butterworth"
Figure 3.134: Illustration of the code of the function “Butterworth" that filters a data buffer of
each input channel
If the variable “Execution” is equal to 2 (Figure 3.135), the function “Image2” (Figure
3.136) does the acquisition. In this case, the USB camera frames are also acquired. Since
the webcam presented an irregular behaviour when acquiring the first frames, the acquisition
actually starts after discarding the first five, as shown by Figure 3.137, Figure 3.138,
Figure 3.139 and Figure 3.140. This assures a consistent brightness and a rate of 10 fps
for all frames. This procedure is done even when no webcam image is acquired, which
provides a similar behaviour for all types of execution.
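The warm-up step can be sketched as a generator that discards the first frames before yielding any; the function name and the callable-based interface are illustrative:

```python
def stable_frames(grab_frame, n_discard=5):
    """Sketch of the webcam warm-up: throw away the first n_discard
    frames before yielding, so exposure/brightness has settled and
    the 10 fps cadence is regular from the first frame used."""
    for _ in range(n_discard):
        grab_frame()  # discarded: the camera is still stabilizing
    while True:
        yield grab_frame()
```

With a frame source that numbers its frames from 0, the first yielded frame is frame 5, matching the five discarded warm-up frames described above.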
If the variable “Execution” is equal to 3 (Figure 3.141), the acquisition is done by the
function “Image3” (Figure 3.142). In this case, only the signals are acquired and the HS
camera is not triggered, as Figure 3.143, Figure 3.144, Figure 3.145 and Figure 3.146 show.
Thus, the function “Start1” (Figure 3.147) is used for starting the data acquisition, as shown
by Figure 3.148 and Figure 3.149.
If the variable “Execution” is equal to 4 (Figure 3.150), the acquisition is done by the
function “Image4” (Figure 3.151). In this case, the signals are acquired as well as the USB
camera frames, but the HS camera is not triggered, as Figure 3.152, Figure 3.153, Figure
3.154 and Figure 3.155 show.
Figure 3.135: Illustration of the third frame of the event “Start” that calls the function “Image2”
and plots the acquired data
Figure 3.136: Illustration of the front panel of the function "Image2"
Figure 3.137: Illustration of the code of the function "Image2" that sets the inputs to be used
by the functions during the data acquisition
Figure 3.138: Illustration of the code of the function "Image2" that calls functions “Start2” and
“Butterworth” and acquires data during a determined interval of time
Figure 3.139: Illustration of the code of the function "Image2" that calls functions “Start2” and
“Butterworth” and acquires data until the button Stop of the main window is pressed
Figure 3.140: Illustration of the code of the function "Image2" that waits for the fifth USB
camera frame to start the data acquisition
Figure 3.141: Illustration of the fourth frame of the event “Start” that calls the function “Image3”
and plots the acquired data
Figure 3.142: Illustration of the front panel of the function "Image3"
Figure 3.143: Illustration of the code of the function "Image3" that sets the inputs to be used
by the functions during the data acquisition
Figure 3.144: Illustration of the code of the function "Image3" that calls functions “Start1” and
“Butterworth” and acquires data during a determined interval of time
Figure 3.145: Illustration of the code of the function "Image3" that calls functions “Start1” and
“Butterworth” and acquires data until the button Stop of the main window is pressed
Figure 3.146: Illustration of the code of the function "Image3" that waits for the fifth USB
camera frame to start the data acquisition
Figure 3.147: Illustration of the front panel of the function "Start1"
Figure 3.148: Illustration of the code of the function "Start1" that starts reading the inputs of
the acquisition board (internal clock case)
Figure 3.149: Illustration of the code of the function "Start1" that starts reading the inputs of
the acquisition board (external clock case)
Figure 3.150: Illustration of the fifth frame of the event “Start” that calls the function “Image4”
and plots the acquired data
Figure 3.151: Illustration of the front panel of the function "Image4"
Figure 3.152: Illustration of the code of the function "Image4" that sets the inputs to be used
by the functions during the data acquisition
Figure 3.153: Illustration of the code of the function "Image4" that calls functions “Start1” and
“Butterworth” and acquires data during a determined interval of time
Figure 3.154: Illustration of the code of the function "Image4" that calls functions “Start1” and
“Butterworth” and acquires data until the button Stop of the main window is pressed
Figure 3.155: Illustration of the code of the function "Image4" that waits for the fifth USB
camera frame to start the data acquisition
After the data acquisition is done, the button “Stop” is disabled. The system is then prepared to convert the acquired data, which is saved into a txt file, to an xls file. For this purpose, a dialog box asks whether this conversion is desired. If so, a toolbar on the main window shows a progress bar indicating how the conversion is going. This procedure is shown by Figure 3.156. If any xls file already exists, it is deleted by the function “Delete”, as Figure 3.157 shows. Then the conversion starts, as Figure 3.158 illustrates. At the end, the items of the menu and the other objects on the window “Main” are set to their previous values, as shown by Figure 3.159.
After acquiring the signals and images and opening the correlated files, a frame-by-frame analysis can be done. Pressing the button “forward HS” adds 1 frame to the variable “f HS”, as Figure 3.160 shows. If this value is greater than the quantity of acquired frames, nothing happens. Otherwise, the next HS camera frame is shown, together with the corresponding signal buffer and USB camera frame.
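The bounds check behind the “forward HS” and “backward HS” buttons can be sketched in Python (illustrative only: the actual implementation is a LabVIEW block diagram, and the function name is hypothetical):

```python
def step_frame(f, delta, n_frames):
    """Bounds-checked frame stepping for the 'forward'/'backward' buttons:
    a request that would leave the valid range keeps the current index
    (zero-based indexing is an assumption of this sketch)."""
    new_f = f + delta
    if new_f <= -1 or new_f >= n_frames:
        return f          # out of range: nothing happens
    return new_f

# stepping forward from the last of 10 frames changes nothing
assert step_frame(9, +1, 10) == 9
# stepping backward from frame 0 changes nothing
assert step_frame(0, -1, 10) == 0
```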
Figure 3.156: Illustration of the sixth frame of the event “Start” that sets objects on the main
window
Figure 3.157: Illustration of the sixth frame of the event “Start” that calls the function “Delete”
Figure 3.158: Illustration of the sixth frame of the event “Start” that asks whether it is desirable to create an xls file; if so, the function “xls” is called
Figure 3.159: Illustration of the sixth frame of the event “Start” that sets the main window with
the previous values
Figure 3.160: Illustration of the code of the event “forward HS” that adds 1 frame to the
variable “f HS” (case -1)
The variables “dF HS Signal” and “dF HS USB” hold the quantity of samples and USB camera frames, respectively, that have to be considered for each HS camera frame. The variables “P HS Signal” and “P HS USB” determine how to calculate which signal buffer and USB camera frame have to be plotted, respectively. If these variables are equal to -1 (Figure 3.160), nothing happens. If they are equal to 0 (Figure 3.161), the sampling rate of the HS camera frames is lower than the sampling rate of the signals and the USB camera images; in this case, the variable “f HS” is divided by the variables “dF HS Signal” and “dF HS USB”. If they are equal to 1 (Figure 3.164), the sampling rate of the HS camera frames is higher than the sampling rate of the signals and the USB camera images; in this case, the variable “f HS” is multiplied by the variables “dF HS Signal” and “dF HS USB”.
The variable “DSignal” is always equal to zero, because the signal samples are not delayed. The function “Camera USB” (Figure 3.162) is responsible for plotting the calculated USB camera frame if it is greater than or equal to zero and less than the quantity of acquired frames, as Figure 3.163 shows. Finally, the function “Camera HS” (Figure 3.166 and Figure 3.167) plots the requested HS camera frame, as shown by Figure 3.165.
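The divide-or-multiply rule above can be condensed into a small Python sketch (the names mirror the text's variables but the code is illustrative; the orientation of “dF” follows the text's convention):

```python
def map_index(f_hs, p, dF):
    """Map an HS camera frame index to the corresponding signal-buffer
    (or USB-frame) index. Following the text: p = -1 disables the mapping,
    p = 0 (HS rate lower) divides by dF, p = 1 (HS rate higher) multiplies."""
    if p == -1:
        return None       # nothing happens
    if p == 0:
        return f_hs // dF
    return f_hs * dF

assert map_index(10, 0, 2) == 5    # HS rate lower: divide
assert map_index(3, 1, 4) == 12    # HS rate higher: multiply
```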
Figure 3.161: Illustration of the code of the event “forward HS” that adds 1 frame to the
variable “f HS” (case 0)
Figure 3.162: Illustration of the front panel of the function "Camera USB"
Figure 3.163: Illustration of the code of the function "Camera USB" that plots a USB camera frame
Figure 3.164: Illustration of the code of the event “forward HS” that adds 1 frame to the
variable “f HS” (case 1)
Figure 3.165: Illustration of the code of the event “forward HS” that calls the function “Camera HS”
Figure 3.166: Illustration of the front panel of the function "Camera HS"
Figure 3.167: Illustration of the code of the function "Camera HS" that plots a HS camera
frame
Pressing the button “backward HS” subtracts 1 frame from the variable “f HS”. If this value is less than or equal to -1, nothing happens. Otherwise, the previous HS camera frame is shown, together with the corresponding signal buffer and USB camera frame. For this purpose, the same procedure explained above is executed, as illustrated by Figure 3.168 and Figure 3.169.
Figure 3.168: Illustration of the code of the event “backward HS” that subtracts 1 frame from the variable “f HS” (case 0)
Figure 3.169: Illustration of the code of the event “backward HS” that calls the function
“Camera HS”
Pressing the button “forward USB” adds 1 frame to the variable “f USB”. If this value is greater than the quantity of acquired frames, nothing happens. Otherwise, the next USB camera frame is shown, together with the corresponding signal buffer and HS camera frame.
The variables “dF USB Signal” and “dF USB HS” hold the quantity of samples and HS camera frames, respectively, that have to be considered for each USB camera frame. The variables “P USB Signal” and “P USB HS” determine how to calculate which signal buffer and HS camera frame have to be plotted, respectively. If these variables are equal to -1, nothing happens. If they are equal to 0 (Figure 3.170), the sampling rate of the USB camera images is lower than the sampling rate of the signals and the HS camera images; in this case, the variable “f USB” is divided by the variables “dF USB Signal” and “dF USB HS”. If they are equal to 1 (Figure 3.171), the sampling rate of the USB camera images is higher than the sampling rate of the signals and the HS camera images; in this case, the variable “f USB” is multiplied by the variables “dF USB Signal” and “dF USB HS”.
The variable “DHS” represents the delay mentioned before, in terms of frames, and it is used to shift the HS camera frames. The function “Camera HS” is responsible for plotting the calculated HS camera frame if it is greater than or equal to zero and less than the quantity of acquired frames. Finally, the function “Camera USB” plots the USB camera frame.
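Combining the mapping with the “DHS” shift and the validity check gives a sketch like the following (Python, illustrative; the sign of the shift is an assumption of this sketch):

```python
def hs_frame_for_usb(f_usb, p, dF_usb_hs, d_hs, n_hs_frames):
    """Which HS frame to plot for a given USB frame: apply the
    divide/multiply rule, shift by the trigger delay DHS (in frames),
    and plot only if the result lies in [0, n_hs_frames)."""
    if p == 0:
        idx = f_usb // dF_usb_hs
    elif p == 1:
        idx = f_usb * dF_usb_hs
    else:
        return None
    idx -= d_hs               # compensate the HS camera trigger delay
    return idx if 0 <= idx < n_hs_frames else None

assert hs_frame_for_usb(10, 1, 2, 3, 100) == 17   # 10*2 - 3
```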
Figure 3.170: Illustration of the code of the event “forward USB” that adds 1 frame to the
variable “f USB” (case 0)
Figure 3.171: Illustration of the code of the event “forward USB” that adds 1 frame to the
variable “f USB” (case 1)
Pressing the button “backward USB” subtracts 1 frame from the variable “f USB”. If the result is less than or equal to -1, nothing happens. Otherwise, the previous USB camera frame is shown, together with the corresponding signal buffer and HS camera frame. For this purpose, the same procedure explained above is executed, as illustrated by Figure 3.172.
When the desired USB frame is typed in the box “f USB”, the system checks whether the typed value is within the acceptable limits. If it is not, the previous value is shown in the box “f USB”, as Figure 3.173 illustrates. Otherwise, the same procedure explained for the button “forward USB” is executed, as shown by Figure 3.174.
Figure 3.172: Illustration of the code of the event “backward USB” that subtracts 1 frame from the variable “f USB” (case 0)
Figure 3.173: Illustration of the code of the event “f USB” that keeps the same USB camera frame if the typed frame is outside the acceptable limits
Figure 3.174: Illustration of the code of the event “f USB” that plots a USB camera frame if the typed frame is within the acceptable limits (case 0)
When the desired HS frame is typed in the box “f HS”, the system checks whether the typed value is within the acceptable limits. If it is not, the previous value is shown in the box “f HS”, as Figure 3.175 illustrates. Otherwise, the same procedure explained for the button “forward HS” is executed, as shown by Figure 3.176.
When the button “play HS” is pressed, the current images are disposed and the other camera-related buttons are disabled, except for the buttons “pause HS” and “stop HS”, as Figure 3.177 shows. These buttons stop the frame playback. If the execution is paused, the variables “f HS” and “f USB” receive the last played frames. If it is stopped, these variables receive the value -1; in this case, zero will be the first frame of the next execution. This procedure is shown by Figure 3.183.
The function “Play Camera HS” (Figure 3.181) plays the HS camera frames at the playback rate defined when the files are opened for analysis, as Figure 3.182 shows. The corresponding signal buffer and USB camera frame are calculated and plotted for each played HS camera frame. To avoid any delay, the USB camera file is opened only once, at the beginning, and closed when the execution is paused or stopped. This procedure is shown by Figure 3.178, Figure 3.179 and Figure 3.180.
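The pause/stop semantics described here can be sketched as a simple playback loop (Python; `get_command` and `plot_frame` are hypothetical stand-ins for the LabVIEW user-interface polling and display calls):

```python
import time

def play_frames(n_frames, playback_rate_hz, get_command, plot_frame):
    """Minimal playback loop: 'pause' returns the last played index,
    'stop' returns -1 so the next execution starts at frame zero."""
    f = -1
    period = 1.0 / playback_rate_hz
    while f + 1 < n_frames:
        cmd = get_command()        # poll the UI buttons
        if cmd == "pause":
            return f               # keep the last played frame
        if cmd == "stop":
            return -1              # next execution starts from frame 0
        f += 1
        plot_frame(f)              # display frame f (and its companions)
        time.sleep(period)
    return -1                      # end reached: behave like stop
```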
Figure 3.175: Illustration of the code of the event “f HS” that keeps the same HS camera frame if the typed frame is outside the acceptable limits
Figure 3.176: Illustration of the code of the event “f HS” that plots an HS camera frame if the typed frame is within the acceptable limits (case 0)
Figure 3.177: Illustration of the code of the event “play HS” that sets the buttons on the main
window
Figure 3.178: Illustration of the code of the event “play HS” that opens the USB camera file
Figure 3.179: Illustration of the code of the event “play HS” that calls the function “Play
Camera HS”
Figure 3.180: Illustration of the code of the event “play HS” that calls the function “Camera
loop”
Figure 3.181: Illustration of the front panel of the function "Play Camera HS"
Figure 3.182: Illustration of the code of the function “Play Camera HS” that plays the HS
camera frames
Figure 3.183: Illustration of the code of the event “play HS” that resets the objects on the
main window if the acceptable limits are reached
The function “Camera loop” (Figure 3.184) is responsible for playing the USB camera frames, as Figure 3.185 shows. Finally, all the buttons are enabled again, as illustrated by Figure 3.186.
Figure 3.184: Illustration of the front panel of the function "Camera loop"
Figure 3.185: Illustration of the code of the function "Camera loop" that plays the USB
camera frames
When the button “play USB” is pressed, the current images are disposed and the other camera-related buttons are disabled, except for the buttons “pause USB” and “stop USB”, as shown by Figure 3.187. These buttons stop the frame playback. If the execution is paused, the variables “f USB” and “f HS” receive the last played frames. If it is stopped, these variables receive the value -1; in this case, zero will be the first frame of the next execution. Figure 3.193 illustrates this procedure.
The function “Play Camera USB” (Figure 3.191) plays the USB camera frames at a rate of 10 fps, as Figure 3.192 shows. The corresponding signal buffer and HS camera frame are calculated and plotted for each USB camera frame. To avoid any delay, the HS camera file is opened only once, at the beginning, and closed when the execution is paused or stopped. Figure 3.188, Figure 3.189 and Figure 3.190 show how it works. The function “Camera loop” (Figure 3.184) is responsible for playing the HS camera frames, as illustrated by Figure 3.185. Finally, all the buttons are enabled again, as Figure 3.194 shows.
Figure 3.186: Illustration of the code of the event “play HS” that resets the buttons on the
main window
Figure 3.187: Illustration of the code of the event “play USB” that sets the buttons on the
main window
Figure 3.188: Illustration of the code of the event “play USB” that opens the HS camera file
Figure 3.189: Illustration of the code of the event “play USB” that calls the function “Play
Camera USB”
Figure 3.190: Illustration of the code of the event “play USB” that calls the function “Camera
loop”
Figure 3.191: Illustration of the front panel of the function "Play Camera USB"
Figure 3.192: Illustration of the code of the function “Play Camera USB” that plays the USB
camera frames
Figure 3.193: Illustration of the code of the event “play USB” that resets the objects on the
main window if the acceptable limits are reached
Figure 3.194: Illustration of the code of the event “play USB” that resets the buttons on the
main window
Moving the cursor and clicking on the graph also shows the camera frames corresponding to the clicked signal value. The x-axis value at the cursor is divided by the variables “dt USB” and “dt HS” (the inverse of each camera's sampling rate) in order to find the correct frames to be plotted. This procedure is shown by Figure 3.195.
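The cursor-to-frame conversion amounts to one division per camera, sketched below in Python (rounding to the nearest frame is an assumption of this illustration):

```python
def frames_at_cursor(x_seconds, dt_usb, dt_hs):
    """Convert the clicked x-axis value (time in seconds) into USB and HS
    frame indices by dividing by each camera's sampling period dt = 1/rate."""
    return round(x_seconds / dt_usb), round(x_seconds / dt_hs)

# e.g. webcam at 10 fps (dt = 0.1 s), HS camera at 5 kHz (dt = 0.0002 s)
assert frames_at_cursor(0.5, 0.1, 0.0002) == (5, 2500)
```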
If the window “Main” is closed and an image session had been previously initialized, the session is closed as well, as shown by Figure 3.196. All the available images are also disposed, freeing the memory they occupied.
All the global variables used in the program are shown in Figure 3.197. They are necessary because more than one function reads and modifies them.
Figure 3.195: Illustration of the code of the event “Graph Plots” that plots the corresponding frames of the cameras on the main window
Figure 3.196: Illustration of the code of the event “Panel Close?” that closes a USB camera session if the main window is closed
Figure 3.197: Illustration of the front panel that shows all the global variables
Each function created is a thread, that is, a sequence of code executed according to a priority, which can be: background (lowest), normal, above normal or time critical (highest). The window “Main” and the functions “Image1”, “Image2”, “Image3”, “Image4”, “Start1” and “Start2” have time-critical priority. These functions are responsible for the data acquisition and need more processor time in order to provide a deterministic execution. The other functions have normal priority.
LabVIEW also allows setting the “Preferred Execution System”, which can be: user interface, standard, instrument I/O, data acquisition, other 1, other 2 or same as caller. This tool helps to organize the software structure. All the created windows are set as “user interface”, the functions cited above are set as “data acquisition”, and all the others are set as “same as caller”.
Figure 3.198 shows how these parameters can be set through the window “VI Properties” of LabVIEW.
Figure 3.198: Illustration of the window "VI Properties" that sets the priority and type of
execution of the functions created in the software
The diagram shown in Figure 3.199 summarizes the software. It is divided into three parts (Menu, Acquisition and Analysis) and contains the created functions and their hierarchy. Solid lines mean that the following function belongs to the previous one; dashed lines mean that the following function does not belong to the previous one but is executed in that order. Figure 3.200 presents the software's data flow.
After the code was finished, it was necessary to build a stand-alone application (the executable version). LabVIEW offers this option and also permits creating an installer. For this purpose, it is necessary to add the files “ImaqDirectShowDLL.dll” and “NIVisSvc.dll”, which can be found in the folder C:\Windows\system32.
Before installing the created application, it is necessary to install National Instruments Vision Assistant 7.1, which has a separate license and contains the video functions used by the application. Besides that, it is also necessary to install NI-IMAQ for USB Cameras (which can be downloaded for free from www.ni.com), which contains the functions used for acquiring frames from USB cameras.
This final version was obtained after many trial versions. In order to have synchronism between signals and images, the ideal situation would be to provide an instantaneous trigger for the high-speed camera; however, this is not possible by software. Thus, a suitable solution was to produce a deterministic delay when the trigger is generated, so that the correct high-speed camera frame could be calculated.
A real-time version was also considered. For this purpose, LabVIEW Real-Time 7.1 (RTX) was installed. This version of LabVIEW runs the applications using a real-time extension for Windows XP. This extension is another operating system (that is, another kernel) called RTX, provided by Venturcom. An academic license of LabVIEW does not include RTX, so it has to be purchased separately and is quite expensive. The solution found was to contact Venturcom and ask for a trial version of RTX 5.5 (later versions are not compatible with LabVIEW Real-Time 7.1). It is very important to uninstall Service Pack 2 before installing RTX 5.5; otherwise nothing works. It is also necessary to install DAQmx Base 1.5.1 (which can be downloaded from www.ni.com). This is the package used for programming data acquisition in a real-time environment. The following steps have to be executed in order to make LabVIEW Real-Time (RTX) work:
Install RTX 5.5 (if the LabVIEW license does not include it)
Install LabVIEW 7.1 (if it was not previously installed)
Install LabVIEW 7.1 Real-Time (RTX)
Install DAQmx Base 1.5.1 (it contains the functions for data acquisition in real time)
Update the device driver (the driver of the acquisition board has to be changed to an RTX-supported one)
Figure 3.199: Illustration of a diagram that summarizes how the software works
Figure 3.200: Illustration of the software's data flow
The problems in using this configuration are many. First of all, DAQmx Base does not have all the functions of DAQmx, and the entire application has to run on the same kernel, which makes it impossible to maintain all the previous code resources. There are no drivers available for PCMCIA acquisition boards, which means that a laptop could not be used for the acquisition. Finally, there are no drivers available for USB cameras, so the webcam could not be integrated into the system. These are the main reasons for not using the real-time version of LabVIEW.
Another possibility studied was LabVIEW Real-Time (ETS). This version executes the application on hardware targets running the real-time operating system of the Venturcom Phar Lap Embedded Tool Suite (ETS). It provides a real-time operating system that runs on NI RT Series hardware to meet the requirements of embedded applications that need to behave deterministically or have extended reliability requirements. This solution was not suitable because one of the objectives of this work was to build a flexible application to run on laptops, not an embedded one.
3.3 Calibration
Calibration is necessary in order to achieve higher measurement accuracy when the work environment changes. There are two types of calibration: device calibration and channel calibration. Device calibration consists of verifying the measurement accuracy of a device and adjusting for any measurement error, that is, measuring the performance of the device and comparing these measurements to the published specifications. During calibration, voltage levels or other signals are supplied and read using external standards, and then the device calibration constants are adjusted. The new calibration constants are stored in memory (EEPROM) and are loaded from it as needed to adjust for the error in the measurements taken by the device.
One type of device calibration is external calibration, which is typically performed by a metrology lab using a high-precision voltage source to verify and adjust the calibration constants. This procedure replaces all calibration constants in the EEPROM and is equivalent to a factory calibration. Because the external calibration procedure changes all EEPROM constants, it invalidates the original calibration certificate. If an external calibration is done with a National Institute of Standards and Technology (NIST) certified voltage source, a new NIST traceability certificate can be issued. It should be done at a regular interval defined by the measurement accuracy requirements of the application. National Instruments recommends a complete calibration at least once a year; this interval can be shortened to six months or 90 days.
Another type of device calibration is self-calibration, which does not require any external connections. It adjusts the calibration constants with respect to an onboard reference stored on the device. The new calibration constants are defined with respect to the constants created during the external calibration, to ensure that the measurements remain traceable to those external standards. The new constants do not affect the constants created during an external calibration because they are stored in a different area of the device memory. A self-calibration can be performed at any time to adjust the device for use in environments other than those in which it was externally calibrated.
Channel calibration is necessary in applications where the highest degree of accuracy is critical, but it does not replace device calibration. Channel calibration compensates for various errors, including those introduced by cabling, wiring and sensors.
SMART performs a self-calibration every time the system is initialized. The other types of calibration only have to be done if high measurement accuracy is necessary. For measuring welding parameters, the self-calibration is usually enough.
3.4 System configuration
The first step in configuring the system is to create virtual channels through the Measurement & Automation Explorer (MAX), which can be downloaded for free from www.ni.com. This application also contains the drivers for the NI acquisition boards; thus, it is advisable to download its latest version, which is supposed to be the most complete. Figure 3.201 illustrates how to create the virtual channels.
Figure 3.201: Illustration of the panel “Measurement and Automation Explorer” that creates a
Virtual Channel
The measurement type has to be selected as Figure 3.202 shows. Voltage and current signals are both analog inputs. Figure 3.203 shows the standard configuration for voltage signals. Since the output of the current sensor is a voltage signal, current signals also have to be configured as voltage inputs; in this case, a scale according to the specifications of the sensor has to be built. Figure 3.204 and Figure 3.205 show how to create a scale. Figure 3.206 shows the standard configuration for current signals.
The voltage and current terminals are configured as non-referenced single-ended (NRSE) because these signals are measured with respect to the same node, whose potential can vary with respect to the measurement system ground (since different types of sensors are used). This type of configuration makes it possible to use a greater number of inputs.
In order to measure temperature, virtual channels were configured as thermocouples of R/S type, as Figure 3.207 illustrates. In this case, the terminal configuration is differential because the thermocouple outputs are differential as well.
The virtual channel for the external trigger is actually a pulse output, as Figure 3.208 shows. Figure 3.209 illustrates its standard configuration.
When the software runs for the first time, it takes a while to calibrate the system and reset the acquisition board. After that, the next step is to configure the software. First, it is necessary to create a new file or open an existing one, as Figure 3.210 illustrates. After doing this, the name of the current file appears in the box “Project”.
Figure 3.202: Illustration of the panel “Measurement and Automation Explorer” that chooses
the measurement type for voltage and current signals
Figure 3.203: Illustration of the panel “Measurement and Automation Explorer” that
configures the voltage signals
Figure 3.204: Illustration of the panel “Measurement and Automation Explorer” that creates a
NI-DAQmx Scale
Figure 3.205: Illustration of the panel “Measurement and Automation Explorer” that selects a
type of scale
Figure 3.206: Illustration of the panel “Measurement and Automation Explorer” that
configures a scale to be used for current signals
Figure 3.207: Illustration of the panel “Measurement and Automation Explorer” that
configures the measurement type of the temperature signals
Figure 3.208: Illustration of the panel “Measurement and Automation Explorer” that
configures the measurement type of the external trigger
If a new file is created after opening an existing one, the previous parameters are kept; otherwise the default ones apply. It is possible to change signal, image and filter parameters, as Figure 3.211 shows. In order to change the signal parameters, the desired channels have to be selected among the created virtual channels. The sampling rate per channel also has to be defined (20 kHz or above is recommended, since there is a low-pass filter with a cut-off frequency of 10 kHz attached to each input connector). If the acquisition mode is Finite Samples, the Sample Time has to be set (the minimum value is 1 second and the maximum is 60 seconds). If the acquisition mode is Continuous, the space available on the hard disk of the computer must be taken into account, since it limits the acquisition time.
Figure 3.209: Illustration of the panel “Measurement and Automation Explorer” that
configures the external trigger
Figure 3.210: Illustration of the menu used for creating or opening a file
In order to set the image parameters, it is necessary to select the channel for the trigger of the high-speed camera among the created virtual channels, as well as the desired webcam. Finally, it is necessary to set the cut-off frequency (which has to be less than half of the sampling rate) and the number of poles of the digital filter.
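The limits stated above can be collected into a small validation sketch (Python, illustrative only, not the software's actual code):

```python
def validate_config(sampling_rate_hz, sample_time_s, cutoff_hz,
                    finite_samples=True):
    """Check acquisition settings against the limits given in the text.
    Returns a list of problems; an empty list means the settings pass."""
    problems = []
    if sampling_rate_hz < 20_000:
        # recommendation only: a 10 kHz low-pass filter sits on each input
        problems.append("sampling rate below the recommended 20 kHz")
    if finite_samples and not (1 <= sample_time_s <= 60):
        problems.append("sample time must be between 1 and 60 seconds")
    if cutoff_hz >= sampling_rate_hz / 2:
        problems.append("cut-off must be below half the sampling rate")
    return problems

assert validate_config(20_000, 5, 5_000) == []
```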
Figure 3.211: Illustration of the menu used for configuring the data acquisition
After configuring the acquisition parameters, the next step is to configure the webcam, as Figure 3.212 shows. Different webcams have different types of parameters. The webcam used for performing the final tests is a Logitech model and was configured as follows:
Brightness = 50%
Saturation = 0%
Sharpness = 50%
Gamma = 0%
Black and White mode
The resolution must always be 640x480 and the diaphragm of the magnifying lens must be closed.
Before starting the acquisition, it is necessary to select the channels whose signals will be shown on the signal graph. This is done by pressing the “Channels” button and selecting the desired channels. After that, pressing the “Start” button starts the data acquisition.
Figure 3.212: Illustration of the menu used for configuring the USB camera
CHAPTER 4
EVALUATION OF THE SYSTEM “SMART”
Two sets of experimental trials were performed using the GMAW process. The first one was carried out with a single wire and the second with double wire (tandem). A third set of experimental trials was performed with the Resistance Spot Welding (RSW) process. In all of them, current and voltage signals were collected, as well as video camera images.
4.1 First set of experimental trials – single wire GMAW
The objective of these trials was to verify the possibility of visualizing the welding arc using a webcam. Besides that, the efficiency of the synchronism among the electrical signals (welding current and arc voltage), the USB camera frames and the HS camera frames also had to be verified. HS cameras are usually used by researchers to obtain more detailed images of the welding arc.
Figure 4.1 and Figure 4.2 show the rig configuration used for the first set of trials. Two plain carbon steel plates were positioned (as illustrated by Figure 4.3) on the carriage, which moves at a constant speed, set by a controller, once it is turned on before the data acquisition starts. The torch head was driven by a step motor, in such a way that it could be oscillated horizontally when desired. The only possible travel angle of this torch head was the right angle; thus, the first trial of this set was performed just to check for any possible image distortion.
The travel angle refers to the angle at which welding takes place. It may be a right angle, a push angle or a drag angle, depending on the position of the torch. When the torch points back towards the weld direction, it is known as pulling (or dragging) the weld. When the torch points forward in the weld direction, it is referred to as pushing the metal. At the right angle, the torch is perpendicular to the weld direction. Since Figure 4.3 (a) does not illustrate any of these cases, the denomination “plate angle” will be used to refer to the angle in that illustration.
The high-speed camera was set with a sampling rate of 5 kHz, a resolution of 256x128 pixels and a playback rate of 5 Hz. As explained in section 3.2, the USB camera frames were acquired at 10 fps, with a resolution of 640x480, by a Trust webcam. The image parameters were set as follows:
Brightness = 50%
Saturation = 0%
Sharpness = 50%
Gamma = 0%
Black and White mode
The diaphragm of the magnifying lens was closed and filters were used, as explained in section 3.1. Concerning the electrical signals, different sampling rates were adopted in order to verify whether they would affect the synchronism between signal samples and HS camera frames. Table 4.1 summarizes the signal sampling rate, plate angle and torch oscillation values used in the six experimental trials of this set.
The distance between the torch and the webcam was about 700 mm, while the high-speed camera was positioned about 2000 mm away from the torch, both aligned with the longitudinal centreline of the weld bead. No digital filter, as mentioned in section 3.2, was employed at first.
Figure 4.1: Illustration of the rig configuration used for the first set of trials (single wire GMAW)
Figure 4.2: (a) side view of the test plate, (b) test plate carriage, (c) torch head and (d) webcam, in detail
Figure 4.3: Schematic arrangement of the plates (3 mm thick) for welding in relation to the torch position: (a) at a plate angle of 135°; (b) at a plate angle of 90°; (c) a side view of the test plate
Table 4.1: Parameter setting for the first set of trials
Trial Signal Sampling Rate (kHz) Plate Angle (º) Torch Oscillation (Hz)
test1 5 135 0
test2 5 90 0
test3 5 90 5
test4 13 90 0
test5 4 90 4
test6 5 90 5
Results:
After setting up the system, bead-on-plate welds were carried out. In order to evaluate the level of similarity between the waveforms, the signal samples were also acquired by a digital oscilloscope at 5 kHz. The waveforms provided by this device and by the system were very similar.
Good results were obtained for the arc images acquired by the webcam. Since there was only one wire, the arc length could be detected even when the torch oscillated. However, by comparing Figure 4.4 and Figure 4.5, it is possible to see that the detection of the arc length would be more difficult for a “plate angle” of 135°.
Figure 4.4: Illustration of a shot of the first set of trials - Test1 (“plate angle” equal to 135°) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Figure 4.5: Illustration of a shot of the first set of trials - Test2 (“plate angle” equal to 90°) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Synchronism between the signals and the webcam frames was observed. When the welding arc is ignited, the webcam frame that shows this moment and the buffer of signal samples that represents its arc voltage value are plotted, as shown by Figure 4.6 and Figure 4.7. Similarly, when the welding arc is extinguished, the webcam frame that shows this moment and its buffer of signal samples, which are equal to zero, are plotted, as shown by Figure 4.8 and Figure 4.9.
Nevertheless, no synchronism between the signals and the high-speed camera frames was observed. This is due to a delay between the moment the acquisition board starts the signal acquisition and the moment it triggers the high-speed camera. This delay was not even constant, making it impossible to synchronize those signals. It was then necessary to modify the program in order to guarantee a more deterministic delay, according to section 3.2.
Figure 4.6: Illustration of a shot of the second set of trials - Test 2 (there is still no welding arc) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Figure 4.7: Illustration of a shot of the second set of trials - Test 2 (the welding arc is already established) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Figure 4.8: Illustration of a shot of the second set of trials - Test 2 (the welding arc is being extinguished) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Figure 4.9: Illustration of a shot of the second set of trials - Test 2 (the welding arc is already extinguished) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
In order to analyse the effect of using different sampling rates for the signal samples and the high-speed camera frames, the synchronization of these data was forced by taking into account the delay observed in each trial. When the sampling rate used for acquiring the signals is not a multiple of the sampling rate used for acquiring the high-speed camera frames, the accuracy of the frame-by-frame analysis decreases. This is because the system calculates the number of signal measurements that should be considered for each frame by dividing the signal sampling rate by the sampling rates used for acquiring frames of the USB and HS cameras. If the results are not integer values, there is an error that propagates frame by frame.
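This rounding error can be made concrete with a small sketch, written here in Python since the system itself is implemented in LabVIEW; the function name and interface are purely illustrative:

```python
def frame_alignment_error(signal_rate_hz, frame_rate_hz, n_frames):
    """Samples of misalignment accumulated after n_frames when the integer
    part of the samples-per-frame ratio is used for a frame-by-frame walk."""
    exact = signal_rate_hz / frame_rate_hz   # true samples per frame
    used = int(exact)                        # integer value the system can use
    return (exact - used) * n_frames         # the residue accumulates linearly

# 5 kHz signals with 5 kHz HS frames: the ratio is an integer, so no drift;
# 13 kHz signals with 5 kHz HS frames: 0.6 sample of drift per frame
```

With the 13 kHz setting of test4, for example, the analysis drifts by more than half a sample at every high-speed frame, which is why only multiple rates keep the frame-by-frame analysis accurate.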
4.2 Second set of experimental trials – double wire GMAW
Figure 4.10 shows the rig configuration for the second set of trials, which had the same objective as the first one, although it was performed with double-wire GMAW. As it is
possible to see, only one plain carbon steel plate (20 mm thick) was positioned on the carriage (which gives a "plate angle" always equal to 90°); the carriage, driven by a controller, moves at a constant speed once it is turned on, before the data acquisition starts. The torch head could not oscillate.
The high-speed camera settings did not change, that is, a sampling rate of 5 kHz, a resolution of 256x128 pixels and a playback rate of 5 Hz. A Logitech webcam was used for these trials and it was configured in the same way as the webcam used in the first set of trials. Its frames were also acquired at 10 fps, with a resolution of 640x480. The diaphragm of the magnifying lens was also closed and the same filters were used, as explained in section 3.1.
Concerning the electrical signals, a sampling rate of 25 kHz was used for all trials. The low-pass digital filter had already been implemented (as item 3.2 explains) and its parameters were set to a cut-off frequency of 1 kHz and a single pole.
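A single-pole low-pass filter of this kind can be sketched as follows, in Python for illustration only; the coefficient formula below is one common choice and the actual LabVIEW implementation may differ:

```python
import math

def single_pole_lowpass(samples, cutoff_hz, sample_rate_hz):
    """One-pole IIR low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
    filtered, y = [], 0.0
    for x in samples:
        y += a * (x - y)   # each output moves a fraction 'a' towards the input
        filtered.append(y)
    return filtered
```

A single pole gives a gentle 6 dB/octave roll-off, which smooths high-frequency noise on the voltage and current traces without distorting the pulse shape too much.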
The cameras were in the same position but not at the same alignment, the distance between the torch and the cameras being about 700 mm.
Figure 4.10: (a) test plate, (b) test plate carriage, (c) torch head, (d) high-speed camera and (e) the webcam, in detail, illustrating the rig configuration used for the second set of trials (tandem GMAW)
Results:
Four trials were performed for this set. In the first one, the arc image acquired by the webcam was not satisfactory. Due to the presence of two wires under an out-of-phase pulsed current, when one arc becomes brighter (pulse current) the other one becomes dimmer (base current) and vice-versa, so the arc image is not well defined, as Figure 4.11 shows. Thus, this first trial was discarded and the webcam was not used in the following ones.
Figure 4.11: Illustration of a shot of the second set of trials - Test 1 (discarded trial) - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
The modifications implemented in order to make the delay explained in section 3.2 constant were positive. Table 4.2 shows the results obtained by connecting the output that triggers the high-speed camera to one of the input channels of the acquisition board. By acquiring this signal, it was possible to determine the delay before running a real trial, as shown in Figure 4.12. This simulation of trials found a delay of around 0.002650 seconds. After that, bead-on-plate welds were carried out. Table 4.3 shows the results obtained; the determinism of the delay values can be verified. Since the sampling rate normally used for acquiring high-speed camera frames in the welding area is around 5 kHz, a variation of 0.0002 seconds is acceptable: at this sampling rate, 0.0002 seconds represents one frame. Thus, a frame-by-frame analysis may be at most one frame ahead or behind.
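The correction enabled by a deterministic delay can be illustrated with a short Python sketch (a hypothetical helper, not part of the LabVIEW program): the measured delay is converted into a whole number of high-speed frames by which the frame sequence is shifted during the offline analysis.

```python
def hs_frame_offset(trigger_delay_s, hs_frame_rate_hz):
    """High-speed frames to shift so frames line up with the signal buffers."""
    return round(trigger_delay_s * hs_frame_rate_hz)

# a delay of ~2.65 ms at 5 kHz corresponds to about 13 frames, and the
# observed 0.0002 s variation corresponds to exactly 1 frame
```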
Table 4.2: Trigger delay - Simulation of trials

Trial   Continuous Samples Delay (s)   Finite Samples Delay (s)
1       0.002499                       0.002650
2       0.002550                       0.002650
3       0.002552                       0.002649
4       0.002599                       0.002652
5       0.002650                       0.002798
6       0.002450                       0.002599
Table 4.3: Trigger delay - Final tests

Trial   Delay (s)
Test2   0.002509
Test3   0.002600
Test4   0.002597
Figure 4.12: Illustration of a shot of the second set of trials - Test 3 (determination of the
trigger delay, by simulation) - trigger pulse (top), disabled USB frames (left bottom) and
disabled HS frames (right bottom)
A frame-by-frame analysis of the high-speed camera frames shows that these frames are adequately synchronized with the electrical signals. When the arc of the torch on the right is on (Figure 4.13), the cursor on the graph points to a pulse peak of the voltage signal. When it is off (Figure 4.14), a pulse base of that signal is shown. Figure 4.15 and Figure 4.16 show that the same happens for the next peak.
Figure 4.13: Illustration of a shot of the second set of trials - Test 1 (the arc is on) - voltage traces (top), disabled USB frames (left bottom) and HS frames (right bottom)
Figure 4.14: Illustration of a shot of the second set of trials - Test 1 (the arc is off) - voltage traces (top), disabled USB frames (left bottom) and HS frames (right bottom)
144
Figure 4.15: Illustration of a shot of the second set of trials - Test 1 (the arc is on again) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
Figure 4.16: Illustration of a shot of the second set of trials - Test 1 (the arc is off again) - voltage traces (top), USB frames (left bottom) and HS frames (right bottom)
4.3 Third set of experimental trials – RSW
The objective of these trials was to verify the applicability of the system to a different type of welding process. In the RSW process, two metal sheets are clamped together and a spot weld is carried out; no welding arc is present. Thus, besides the synchronization between the electrical signals (voltage and welding current) and the USB camera frames, these trials aimed to capture the movement of the welding clamp and the occasional presence of spatter.
The high-speed camera was not used in this set of trials; a Labtec webcam, however, was used for acquiring frames at 10 fps, with a resolution of 640x480. Since there was no interest in the arc length this time, the default image settings of the webcam were kept and no magnifying lens was attached to it.
Concerning the electrical signals, a sampling rate of 1 kHz was used for all trials. The parameters set for the low-pass digital filter were a cut-off frequency of 500 Hz and a single pole.
The webcam was about 500 mm from the welding rig, positioned in such a way that it was possible to frame the welding clamp and the metal sheets.
Eight trials were performed for this set, all with an applied pressure of 86 kgf (2.3 kN). Table 4.4 shows the duration in cycles of each welding function and the current used for each trial. Higher current values were used in the last two trials in order to provoke the presence of spatter.
Table 4.4: Setting welding parameters for the third set of trials

Trial   Irms (A)   Pre-Pressure (cycles)   Pressure (cycles)   Weld 1 (cycles)   Pulse (cycles)   Weld 2 (cycles)
test1   3500       30                      30                  20                0                0
test2   3500       30                      30                  40                0                0
test3   3500       15                      15                  40                0                0
test4   3500       15                      15                  40                10               40
test5   3500       15                      30                  40                20               40
test6   3500       15                      30                  40                20               40
test7   6500       10                      5                   40                0                0
test8   7500       10                      5                   30                0                0
Results:
A frame-by-frame analysis of the USB camera frames shows that these frames are quite well synchronized with the current (in red) and voltage (in blue) signals. In the beginning, before the pre-programmed functions are executed, the welding clamp is still open (Figure 4.17). After that, the clamp closes (Figure 4.18). Then, when the metal sheets are clamped, the weld cycles start and it is possible to notice the presence of spatter (Figure 4.19). The clamp is kept closed during this time (Figure 4.20). After these cycles finish, the clamp starts to open again (Figure 4.21) and returns to its initial state (Figure 4.22). When the software is running, these shots are better defined and the sequence can be followed more easily.
Figure 4.17: Test 8 - shot that shows the clamp still open - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
Figure 4.18: Test 8 - shot that shows the clamp closing - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
Figure 4.19: Test 8 - shot that shows the metal sheets being clamped and the presence of spatter - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
Figure 4.20: Test 8 - shot that shows the metal sheets clamped during the weld cycles and the presence of spatter - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
Figure 4.21: Test 8 - shot that shows the clamp opening after the weld cycles - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
Figure 4.22: Test 8 - shot that shows the clamp back in its initial position - voltage traces (top), USB frames (left bottom) and disabled HS frames (right bottom)
CHAPTER 5
DISCUSSION
The focus of this work was on the synchronism between electrical signals and images, which is not a trivial task. These signals are acquired by different devices that work on different principles; thus, finding a solution able to integrate all of them was not simple. This work is believed to be a first step for future developments.
In this work, the synchronism consists of acquiring a buffer of electrical signals, a USB camera frame and a buffer of high-speed camera frames at regular intervals of time. Thus, a system was designed that acquires signals and USB camera frames and, in addition, triggers the high-speed camera, which acquires frames independently.
A visual analysis of the signals and the USB camera frames can be done while the data acquisition takes place, that is, an online analysis. After it finishes, the high-speed camera frames can be downloaded to the computer. Then, these frames, the acquired signal samples and the acquired USB camera frames can be used to perform an offline analysis that relates all these data.
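The one-to-one relation used in the offline analysis can be sketched as follows, in Python with hypothetical names; it merely shows how one "shot" (one USB frame) indexes into the three data streams under the rates used in this work:

```python
def shot_record(shot_index, signal_rate_hz, usb_fps, hs_fps):
    """Index ranges of the data belonging to one shot (one USB frame)."""
    samples_per_shot = signal_rate_hz // usb_fps   # signal samples per shot
    hs_per_shot = hs_fps // usb_fps                # HS frames per shot
    s0 = shot_index * samples_per_shot
    h0 = shot_index * hs_per_shot
    return {"signal_samples": (s0, s0 + samples_per_shot),
            "usb_frame": shot_index,
            "hs_frames": (h0, h0 + hs_per_shot)}
```

Stepping through shot_record(0), shot_record(1), ... reproduces the shot-by-shot navigation offered by the software.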
The ideal behaviour of the system is illustrated in Figure 5.1. The acquisition of frames and signal samples should start at the same time (0 ms), which means no delay among the acquired data. Besides, each buffer of signal samples and each USB camera frame should be read immediately after the frame becomes available. A webcam is the type of USB camera used in this work. It usually provides a sampling rate of up to 30 fps, which means that, in theory, a different frame would be available for reading every 33.33 ms (1/30); this rate, however, is not very accurate. Depending on the image resolution of the frames, for instance 640x480, this rate decreases to 15 fps.
For each frame read and plotted, a buffer of signal samples must also be read and plotted. Plotting usually takes considerable time and the more frequently it has to be done, the slower the system becomes. Thus, in order to avoid poor performance of the system, and considering a desirable image resolution of 640x480, a sampling rate of 10 fps was adopted. This means that every 100 ms (1/10) a different webcam frame is read, which characterizes the real behaviour of the system, also illustrated in Figure 5.1.
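The consequence of reading only the latest available frame every 100 ms can be simulated with a short Python sketch (a hypothetical function; the real system runs in LabVIEW): at a nominal 30 fps, two out of every three frames are skipped.

```python
def frames_read(frame_period_ms, read_period_ms, duration_ms):
    """Indices of the webcam frames actually read when the program polls the
    latest complete frame at fixed intervals; frames in between are skipped."""
    reads = []
    t = read_period_ms
    while t <= duration_ms:
        reads.append(int(t // frame_period_ms))   # latest complete frame
        t += read_period_ms
    return reads

# a 33.33 ms frame period polled every 100 ms reads frames 3, 6, 9, ...
```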
Figure 5.1: Illustration of the behaviour of the system over time
According to the real behaviour of the system, the high-speed camera is triggered after the signal acquisition starts. However, there is a delay (dHS) between these two moments. Moreover, when a USB camera frame becomes available, it cannot be read immediately, and another delay appears (dUSB), as mentioned before.
If dHS is deterministic, it is possible to shift the HS camera frames and provide synchronism when doing the offline analysis. In this case, determinism means that dHS has the same value every time the acquisition runs, under any circumstances. This is not simple to achieve, since a multitasking operating system does not offer determinism. However, by establishing an acceptable limit for the variation of the delay and triggering the high-speed camera immediately after starting the signal acquisition, it does not become a serious problem.
On the other hand, the effect of the delay dUSB cannot be treated. Since the webcam adjusts its exposure when it starts to acquire the first frames, the sampling rate is very irregular during this time and these first frames are very bright and not
good for analysis. In order to avoid this situation, the system starts the webcam, waits for a while, then starts the signal acquisition and triggers the high-speed camera. After that, every 100 ms, the last available webcam frame and a buffer of signal samples are read and plotted. Thus, dUSB does not have the same value every time the signal acquisition starts, and a smaller irregularity also persists along the acquisition of the subsequent frames. For this reason, the acquired webcam frame cannot be related to only one signal sample, but to the entire buffer of signal samples. A solution to increase the accuracy of the webcam's sampling rate would be to transfer the responsibility for grabbing each frame from the webcam to the computer.
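This start-up order can be expressed as a small Python sketch; the device callables are placeholders for the corresponding LabVIEW calls, and warmup_s is an assumed settling time for the webcam exposure:

```python
import time

def start_acquisition(start_webcam, start_daq, trigger_hs, warmup_s=2.0):
    """Start the webcam first, let its exposure settle, then start the signal
    acquisition and immediately trigger the high-speed camera."""
    order = []
    start_webcam(); order.append("webcam")
    time.sleep(warmup_s)          # skip the over-bright first frames
    start_daq(); order.append("daq")
    trigger_hs(); order.append("hs")
    return order
```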
The way the system currently works therefore makes it possible to do an offline analysis of signals and images. This can be done by visualizing buffers of signal samples, USB camera frames and HS camera frames, all of them related one by one. However, further improvements can be made in order to widen the range of applications of this work.
CHAPTER 6
FUTURE DEVELOPMENTS
The arc length is directly related to the arc voltage; hence, arc length control is usually done by correcting variations in the arc voltage value. For this purpose, an arc voltage reference that provides the desirable arc length must be set, which is done by visualizing the arc length. An algorithm could be developed to detect the arc length by processing the USB camera frames. With the corresponding arc voltage available for each frame, it would be possible to set the reference value automatically.
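One naive form such an algorithm could take is sketched below in Python; this is an assumption of this text, not an implemented part of the system. The vertical extent of the bright region in a grayscale frame is taken as a pixel-wise arc length:

```python
def arc_length_pixels(frame, threshold=200):
    """Vertical extent (in pixel rows) of the region brighter than threshold,
    for a grayscale frame given as a list of rows of 0-255 values."""
    bright = [r for r, row in enumerate(frame) if max(row) >= threshold]
    return bright[-1] - bright[0] + 1 if bright else 0
```

A calibration factor (millimetres per pixel) would then convert this count into an arc length, which could be paired with the arc voltage of the same shot.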
If data transmission were implemented, the adjustment of these welding parameters could also be done remotely. In this case, even if the arc length could not be detected by an algorithm, the user could remotely see the welding arc and adjust its length by changing the arc voltage, also remotely.
Another improvement to the system would be to increase the sampling rate of the webcam frames. As explained in Chapter 5, increasing this rate makes the system slower, since the data plotting becomes more frequent. However, a compromise could be established: during the acquisition, the data could be plotted at the current rate of 10 fps but stored at a higher frequency, 30 fps for example. Thus, it would be possible to do an online analysis in order to monitor a weld and immediately act on the system remotely (moving the camera, for example), while more details could be seen during an offline analysis.
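This compromise can be sketched in Python (hypothetical names; the rates are those mentioned above): every frame is stored, but only every third one is plotted.

```python
def plan_frames(store_fps=30, plot_fps=10, duration_s=1):
    """Stored frame indices and the subset of them that is also plotted."""
    step = store_fps // plot_fps                  # plot every 'step'-th frame
    stored = list(range(store_fps * duration_s))  # all frames go to disk
    plotted = stored[::step]                      # only these reach the screen
    return stored, plotted
```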
Different types of USB camera could also be employed. An infra-red camera, for example, could be used for detecting spatter during a weld. Besides that, more than one USB camera could be integrated into the system, offering a better visual analysis of the welding phenomena.
The sound emitted by the arc would be another interesting aspect for remote monitoring. Listening to this sound increases the feeling of presence for the professional working remotely. Besides that, the metal transfer mode could be identified by audio signal analysis. The acquisition of this signal would be possible if a suitable amplifier were built and connected to the acquisition board of the system. The audio board of the computer could not be used for acquiring this signal: since this device is controlled by the operating system, its synchronization with the other devices (acquisition board, USB camera, HS camera) would not be possible and would decrease the performance of the system. Moreover, the sampling rates provided by this device might not be suitable for audio signal analysis, since they are fixed at a few possible values.
The sampling rate of the signals also affects the performance of the system. Higher rates mean more points being plotted at each interval of time and more computer memory to be allocated for this purpose. Thus, even if the acquisition board manages to provide a high sampling rate, if the computer cannot read the data and plot it on the graph quickly enough, the performance of the system decreases. More powerful computers are therefore necessary for higher sampling rates.
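The linear growth of the plotting and memory load with the sampling rate can be made concrete with a rough Python estimate; the figures are illustrative, and 8 bytes per double-precision sample is an assumption:

```python
def plot_load(sample_rate_hz, channels, update_period_s, bytes_per_sample=8):
    """Points redrawn per graph update and bytes buffered per second."""
    points_per_update = int(sample_rate_hz * update_period_s) * channels
    bytes_per_second = sample_rate_hz * channels * bytes_per_sample
    return points_per_update, bytes_per_second

# 25 kHz on 2 channels redrawn every 100 ms: 5000 points per update
```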
In order to provide a more consistent synchronism among the signals, a master clock could be used for all devices. Since National Instruments acquisition boards have very accurate clocks, this clock could be used for the high-speed camera and the webcam as well. However, it would be necessary to use a clock divider for the webcam and find a way to take control of it. The three devices would then acquire under the same clock source.
A last concern is about LabVIEW, the programming language used for developing the system. In spite of being quite easy to program in this language, the documentation about its functions is still poor. National Instruments has offices around the world to help developers but, most of the time, this is not enough for non-standard developments. These circumstances lengthen the development of the system. In order to save time when developing LabVIEW applications, it is advisable to attend the appropriate training courses offered by National Instruments. In some (basic) cases, this can be done on the internet for free; in other cases it has to be paid for, and it is expensive.
CHAPTER 7
CONCLUSIONS
This work shows that it is possible to acquire electrical signals and images synchronously in an integrated system. The introduction of low-cost cameras was feasible, providing images suitable for image analysis. Using the existing resources in the laboratory, it was possible to build a hardware prototype with enough flexibility, suitable for different types of welding processes. Eventually, a portable format for the system was developed, which is more appropriate for the different types of work environment.
The system was tried in single-wire GMA welding with reasonable success. However, the system is still incomplete and limited. Concerning the incompleteness, the arc length still needs to be determined by an algorithm to be developed for this purpose. When tried with double-wire GMA welding the results were not so good, showing some limitations: due to the arc energy alternation, the acquisition rate of the USB camera is not high enough to focus on only one wire.
When tried in resistance spot welding, the system was quite successful too. It was possible to visualize the movement of the welding clamp and the presence of spatter during the weld. Data transmission still needs to be integrated into the system in order to provide remote monitoring of such events; the monitoring could then be centralized and the presence of a specialist in the plant would not be necessary.
A list of possible improvements was also drawn up, pointing towards the design of a more comprehensive system that can be used to enlarge the field of applications. In order to integrate this work into an automated welding system for pipeline repair, for example, it is still necessary to develop the control of the welding parameters that would take the data provided by this work.
Therefore, the system developed in this work is believed to be a first step for future developments. From this point, it is possible to integrate other resources and implement different applications in the welding area.