
Applied Signal Processing: A MATLAB-Based Proof of Concept with CD-ROM Containing MATLAB Code Files



Applied Signal Processing: A MATLAB-Based Proof of Concept draws on the teaching experience of experts in various applied signal processing fields and presents their material in a project-oriented framework. Unlike many other MATLAB-based textbooks, which use MATLAB only to illustrate theoretical aspects, this book provides fully commented MATLAB code for working proofs of concept. The MATLAB code provided in the accompanying online files is the very heart of the material. In addition, each chapter offers a functional introduction to the theory required to understand the code, as well as a formatted presentation of the contents and outputs of the MATLAB code.








Each chapter shows how digital signal processing is applied to solve a real engineering problem found in a consumer product. Each chapter opens with a description of the problem in its applicative context and a functional review of the theory related to its solution; equations are used only for a precise description of the problem and its final solution. A step-by-step MATLAB-based proof of concept then follows, with full code, graphs, and comments. The solutions are simple enough for readers with a general signal processing background to understand, yet they use state-of-the-art signal processing principles. Applied Signal Processing: A MATLAB-Based Proof of Concept is an ideal companion for most signal processing course books and can be used for preparing student labs and projects.


Note that the approach demonstrated here as a proof of concept is based on one of the simplest array signal processing techniques22,23,24,41 applied to DAS recordings, and therefore there is room for further improvement, especially in complex acoustic wave propagation scenarios like the present one. Indeed, simpler propagation conditions would naturally lead to better processing performance than reported here, owing to the potentially higher coherence among DAS channels. In addition, the methods for signal enhancement and source position estimation can be implemented iteratively to improve the TDOA estimations and potentially achieve better results. Supplementary Fig. 7 shows that, despite the existing errors in the source location estimation, the TDOAs calculated using the estimated source position are quite close to those non-blindly calculated from the actual source location, indicating that further improvements could be achieved by an iterative process.
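The TDOA estimates on which the discussion above rests are classically obtained from the peak of a cross-correlation between channels. A minimal sketch of that idea (this is not the paper's estimator; the function name, signals, and sampling rate are invented for illustration):

```python
import numpy as np

def estimate_tdoa(reference, channel, fs):
    """Estimate the time difference of arrival (TDOA) of `channel`
    relative to `reference` from the peak of their cross-correlation."""
    corr = np.correlate(channel, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)  # lag in samples
    return lag / fs  # delay in seconds

# Synthetic check: a Gaussian pulse delayed by 25 samples at fs = 1 kHz.
fs = 1000.0
n = np.arange(256)
pulse = np.exp(-((n - 50) ** 2) / 50.0)
delayed = np.roll(pulse, 25)
print(estimate_tdoa(pulse, delayed, fs))  # → 0.025
```

Real DAS data would call for pre-filtering and sub-sample peak interpolation, but the lag-of-the-correlation-peak principle is the same.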


It is worth mentioning that in the present proof of concept, sparse beamforming spatial filtering has been exploited to improve the representation of the waveform emitted by a single point seismic acoustic source, reducing distortions caused by reflections and reverberations arriving at the sensing fibre from different angles. However, the spatial filtering capabilities of the method can be further explored in many applications to clearly discriminate the acoustic waveforms emitted simultaneously by several acoustic sources, which, combined with hyperbolic triangulation, can allow us to identify their actual 2D or even 3D coordinates without specially designed optical fibre installation geometries, provided there exists good angular diversity in the optical fibre orientation. Indeed, it must be noted that in linear (straight) fibre installations, ambiguities will arise from the geometric symmetry of the fibre, preventing the array signal processing from identifying the side of the optical fibre on which the acoustic source is located. However, straight linear fibres may also be inadequate for real-field DAS applications because of the directional response of a DAS sensor: in particular, if the acoustic signal arrives broadside to the cable, no strain will be measured. This is an issue affecting all DAS sensors, not only the method proposed here. Therefore, robust DAS measurements might require an optical fibre installation with different orientations, making the DAS monitoring robust against random directions of acoustic wave arrival (although some local measurements will still have null or poor response). Incidentally, the use of different fibre orientations also helps the method proposed here to better identify the actual source position without triangulation ambiguity.
Nevertheless, note that the performance of the method depends strongly on the acoustic properties of the propagation medium and on the positioning of the acoustic sources and sensing optical fibre, which define the acoustic attenuation and reflections, among other phenomena occurring during propagation. In addition, instead of the blind TDOA estimation approach applied here to localise the source, TDOAs can be exhaustively scanned using simple geometry over an entire 2D or 3D region, allowing the total acoustic field emitted by several sources in a large area or volume to be mapped. This way, the spatial filtering capabilities demonstrated here with a sparse DAS array configuration could potentially be exploited to implement, for instance, acoustic cameras based on DAS technology, with an optical fibre that does not need to be installed over the analysed zone.
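The exhaustive geometric scan of TDOAs mentioned above can be illustrated with a toy grid search for a source position: for every candidate grid point, compute the geometric TDOAs and keep the point that best matches the measured ones. The sensor layout, speed of sound, and grid are invented for the example; this is not the paper's localisation code:

```python
import numpy as np

def locate_source(sensors, measured_tdoas, grid, c=343.0):
    """Grid-search localisation: return the grid point whose geometric
    TDOAs (relative to sensor 0) best match the measured TDOAs."""
    best, best_err = None, np.inf
    for p in grid:
        dists = np.linalg.norm(sensors - p, axis=1)
        tdoas = (dists - dists[0]) / c  # delays relative to sensor 0
        err = np.sum((tdoas[1:] - measured_tdoas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Synthetic check: 4 sensors, source at (3, 4), TDOAs taken from geometry.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 4.0])
d = np.linalg.norm(sensors - src, axis=1)
tdoas = (d[1:] - d[0]) / 343.0
xs = np.linspace(0, 10, 21)  # 0.5 m grid spacing
grid = np.array([[x, y] for x in xs for y in xs])
print(locate_source(sensors, tdoas, grid))  # → [3. 4.]
```

Evaluating the match (or the delay-and-sum output power) over the whole grid, instead of keeping only the best point, yields exactly the acoustic-field map described in the text.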


It is important to note that in this proof of concept the beam characteristics of the spatial filter are neither designed nor steered to a predefined position, and therefore no spatial information about the sensing fibre is required in this approach. Here, only the estimated TDOAs are used to enhance the best measured acoustic signal, represented by the pilot trace. Considering the large number of sophisticated array processing techniques in the literature22,23,24,44,45, we are confident that novel approaches will soon emerge to improve the directivity and performance of near- and far-field beamforming techniques for DAS applications. For instance, making use of the optical fibre positioning and local fibre orientations, complex weights can be designed for advanced beamformers to control the directivity pattern and increase the selectivity of the spatial filtering. Note, however, that the uneven response and low coherence of DAS acoustic channels would affect any standard beamforming method, which normally assumes identical sensor responses; the blind ranking strategy demonstrated here, based on the reliability of DAS channels, can therefore be used to adapt standard beamforming techniques for DAS applications. In addition, further improvements could be obtained in some scenarios by pre-processing DAS measurements with deconvolution and dereverberation techniques to remove acoustic reflections19,20,21. We believe that the proposed technique is a starting point opening a new class of acoustic processing strategies that enhance the capabilities of distributed acoustic sensors, and also of other technologies such as arrays of fibre Bragg gratings46,47 and multiplexed interferometric fibre sensors48,49, to measure the acoustic field existing outside the optical fibre.
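A blind channel-ranking step of the kind described above can be illustrated with a simple reliability score, e.g. the peak normalised cross-correlation of each channel with the pilot trace. This is a hypothetical stand-in for the paper's metric, with synthetic signals:

```python
import numpy as np

def rank_channels(channels, pilot):
    """Rank DAS channels by the peak normalised cross-correlation with
    the pilot trace, a simple proxy for channel reliability."""
    p = pilot - pilot.mean()
    scores = []
    for ch in channels:
        x = ch - ch.mean()
        c = np.correlate(x, p, mode="full")
        norm = np.linalg.norm(x) * np.linalg.norm(p)
        scores.append(np.max(np.abs(c)) / norm if norm > 0 else 0.0)
    return np.argsort(scores)[::-1]  # most reliable channel first

# Synthetic check: a delayed, lightly noisy copy of the pilot should
# outrank a channel containing only noise.
rng = np.random.default_rng(0)
pilot = np.sin(2 * np.pi * 0.05 * np.arange(200))
good = np.roll(pilot, 7) + 0.1 * rng.standard_normal(200)
bad = rng.standard_normal(200)
print(rank_channels([bad, good], pilot))  # → [1 0]
```

Discarding (or down-weighting) the lowest-ranked channels before beamforming is one straightforward way to adapt standard, identical-sensor beamformers to the uneven response of DAS channels.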


Delay-and-sum23,41 is one of the best-known and simplest beamforming techniques and is used here as a proof of concept of the proposed method. The beamforming signal \(\mathrm{BF}\left(t\right)\) generated by the delay-and-sum method is described as23,41:

\[\mathrm{BF}\left(t\right)=\frac{1}{N}\sum_{i=1}^{N}x_{i}\left(t+\tau_{i}\right)\]

where \(x_{i}\left(t\right)\) is the signal measured by the \(i\)-th selected DAS channel, \(\tau_{i}\) is its estimated TDOA with respect to the pilot trace, and \(N\) is the number of combined channels.
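Since the accompanying MATLAB files are not reproduced on this page, the delay-and-sum idea can be sketched in a few lines (Python here; the pulse, delays, and sampling rate are synthetic): each channel is shifted back by its delay so the wavefronts align, and the aligned channels are averaged, reinforcing the coherent signal while averaging down uncorrelated noise.

```python
import numpy as np

def delay_and_sum(channels, delays, fs):
    """Align each channel by its (integer-sample) delay and average:
    a discrete form of BF(t) = (1/N) * sum_i x_i(t + tau_i)."""
    shifted = [np.roll(x, -int(round(tau * fs))) for x, tau in zip(channels, delays)]
    return np.mean(shifted, axis=0)

# Synthetic check: three delayed, noisy copies of a pulse re-align coherently.
fs = 1000.0
n = np.arange(512)
pulse = np.exp(-((n - 200) ** 2) / 200.0)
rng = np.random.default_rng(1)
delays = [0.0, 0.012, 0.030]  # seconds
channels = [np.roll(pulse, int(round(d * fs))) + 0.2 * rng.standard_normal(512)
            for d in delays]
bf = delay_and_sum(channels, delays, fs)
print(np.argmax(bf))  # peak restored near sample 200
```

A production implementation would use fractional-delay (e.g. frequency-domain) shifts rather than integer-sample rolls, but the averaging principle is identical.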


TECHNOLOGY AREA(S): Sensors

OBJECTIVE: Develop robust, automated super-resolution image processing techniques for use on low-resolution video of objects under a changing pose.

DESCRIPTION: Super resolution for image processing has been around since the 1980s, and multiple techniques exist consisting of various registration, interpolation, and restoration methods, with the majority of research and development focused on rigid or static scene applications. Some research is available on using super resolution for moving objects [1][2][3], including an approach that addresses translation, rotation, and zooming motion [4], but the techniques are not available as automated algorithms to run against real-world data, nor have they been tested on representative simulated datasets for our area of interest. Currently available super-resolution image processing techniques for low-resolution IR sensor data apply bias subtraction and image registration; use Drizzle algorithms to recover sampling losses in point-structure targets; and then use generic deconvolution algorithms to compensate for effects from Drizzle itself, focal plane charge diffusion, and optical aberrations and diffraction. While the Drizzle algorithm is simple and fast, it requires image registration that is accurate to a small fraction of a pixel and requires a relatively large number of frames [5]. Additionally, only the translation transform from the image registration algorithm is currently applied in the Drizzle algorithm. When run against non-static scenes, these super-resolution algorithms create image artifacts that limit imagery analysis. We are looking for an open technology development [6] that implements non-proprietary algorithms that are less sensitive to image registration errors and account for translation, scale, and rotational transforms of the imagery; the algorithms can build on existing techniques or use alternative techniques.

The offeror should consider and place higher priority on implementations that run in a cloud processing environment. The desired end result is an automated algorithm that can run in our large-scale, near-real-time offline processing environment. The expected output is a super-resolved image of 4-10X improved resolution over the original low-resolution data that allows for improved measurements of the object dimensions. The proposal should specify datasets to be used for algorithm verification and validation. The government can provide MATLAB code of the existing technique.

PHASE I: The expected product of Phase I is a super-resolution algorithm that takes as input an image sequence containing an object against a uniform background with varying pose and outputs a super-resolved image of 4-10X improved resolution over the original low-resolution data, allowing for measurement of object dimensions along with an estimated accuracy of the product, documented in a final report and implemented in a proof-of-concept software deliverable.

PHASE II: The expected output of Phase II is a prototype automated implementation of the Phase I proof-of-concept algorithm, tested against real sensor data and enhanced to correct any deficiencies identified in Phase I, with documentation of the test results and updated algorithm.

PHASE III: Military Application: Surveillance, Technical Intelligence. Commercial Application: Security and police surveillance, Medical imaging.

REFERENCES: 1: Antoine Letienne, Frederic Champagnat, Caroline Kulcsar, Guy Le Besnerais, Patrick Viaris De Lesegno, "Fast Super-Resolution on Moving Objects in Video Sequences," 16th European Signal Processing Conference, August 2008.
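For orientation, the shift-and-add principle underlying Drizzle-style processing can be sketched as a toy reconstruction: each low-resolution pixel is deposited onto a finer grid at its registered sub-pixel position, and overlapping deposits are averaged. This is an invented illustration (not the government's MATLAB code), and it assumes the shifts are already known, whereas in practice sub-pixel registration is the hard part:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Toy shift-and-add super resolution: deposit each low-res pixel
    onto a `scale`-times finer grid at its registered (dy, dx) offset,
    then normalise by the per-pixel hit count."""
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + int(round(dx * scale))) % (w * scale)
        hi[ys, xs] += frame
        hits[ys, xs] += 1
    return np.divide(hi, hits, out=hi, where=hits > 0)

# Synthetic check: 2x2 frames sampled from a 4x4 truth image at four
# half-pixel shifts exactly recover the truth at 2X resolution.
rng = np.random.default_rng(2)
hr = rng.random((4, 4))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [hr[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, 2)
print(np.allclose(sr, hr))  # → True
```

The topic's limitations follow directly from this sketch: the result is only as good as the sub-pixel registration, and the deposit step models pure translation, which is why rotation, scale, and changing pose produce artifacts unless the registration transform is richer.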

