FUNDAMENTALS OF TOMOGRAPHY AND RADAR

H.D. Griffiths and C.J. Baker
University College London, UK

Abstract

Radar, and in particular imaging radar, has many and varied applications to security. Radar is a day/night all-weather sensor, and imaging radars carried by aircraft or satellites are routinely able to achieve high-resolution images of target scenes, and to detect and classify stationary and moving targets at operational ranges. Different frequency bands may be used: for example, high frequencies (X-band) may be used to support high bandwidths to give high range resolution, while low frequencies (HF or VHF) are used for foliage penetration to detect targets hidden in forests, or for ground penetration to detect buried targets.

The techniques of tomographic imaging were originally developed in the context of medical imaging, and have been used with a number of different kinds of radiation, both electromagnetic and acoustic. The purpose of this presentation is to explore the application of tomographic imaging techniques at RF frequencies to a number of different applications in security, ranging from air defence to the detection of concealed weapons. Of particular interest is the use of ultra narrow band (UNB) transmissions with geometric diversity in a multistatic configuration to image moving targets. In the limit such transmissions could be CW, which would be particularly attractive for operation in a spectrally-congested environment. This arrangement effectively trades angular-domain bandwidth for frequency-domain bandwidth to achieve spatial resolution. Also of interest is the improvement in target classification performance afforded by multi-aspect imaging.

The presentation will review the theory of tomographic imaging, then discuss a range of applications to the overall security problem, the relevant system configurations in each case, the achievable performance and critical factors, and identify promising areas for future research.
Keywords: radar; radar imaging; tomography; high resolution; synthetic aperture radar; interferometry; polarimetry; Radon transform; projection slice theorem; backprojection.

1. Introduction

Radar, and in particular imaging radar, has many and varied applications to security. Radar is a day/night all-weather sensor, and imaging radars carried by aircraft or satellites are routinely able to achieve high-resolution images of target scenes, and to detect and classify stationary and moving targets at operational ranges. Short-range radar techniques may be used to identify small targets, even buried in the ground or hidden behind building walls. Different frequency bands may be used: for example, high frequencies (X-band) may be used to support high bandwidths to give high range resolution, while low frequencies (HF or VHF) are used for foliage penetration to detect targets hidden in forests, or for ground penetration to detect buried targets.

In the notes that follow we consider the formation of high-quality radar imagery, and the means by which it is possible to extract useful target information from such imagery.

2. Imaging and Resolution

Firstly we can establish some of the fundamental relations for the resolution of an imaging system. In the down-range dimension the resolution Δr is related to the signal bandwidth B, thus

    Δr = c / (2B).    (1)

High resolution may be obtained either with a short-duration pulse or by a coded wide-bandwidth signal, such as a linear FM chirp or a step-frequency sequence, with the appropriate pulse compression processing. A short-duration pulse requires a high peak transmit power and instantaneously-broadband operation; these requirements can be relaxed in the case of pulse compression.

In the first instance cross-range resolution is determined by the product of the range and the beamwidth θ_B. The beamwidth is determined by the size of the aperture d, and thus cross-range resolution is given by

    Δx = r θ_B ≈ rλ / d.    (2)

As most antenna sizes are limited by practical aspects (such as fitting to an aircraft), the cross-range resolution is invariably much inferior to that in the down-range dimension. However, there are a number of techniques that can improve upon this. All of these are ultimately a function of the change in viewing or aspect angle. Thus in the azimuth (cross-range) dimension the resolution Δx is related to the change in aspect angle Δθ as follows:

    Δx = λ / (4 sin(Δθ/2)).    (3)

For a linear, stripmap-mode synthetic aperture, equation (3) reduces to Δx = d/2, which is independent of both range and frequency. Even higher resolution can be obtained with a spotlight-mode synthetic aperture, steering the real-aperture beam to keep the target scene in view for a longer period, and hence forming a longer synthetic aperture.

Realistic limits to resolution may be derived by assuming a maximum fractional bandwidth B/f0 of 100%, and a maximum change in aspect angle of Δθ = 30° (higher values than these are possible, but at the expense of complications in hardware and processing). These lead to Δr = Δx = λ/2.

In the last year or so results have appeared in the open literature which approach this limit. Figures 1 and 2 show two examples from a recent conference of, respectively, an urban target scene and of aircraft targets. Critical to the ability to produce such imagery is the ability to characterise and compensate for motion errors of the platform, which can be done by autofocus processing [6]. Of course, motion compensation becomes most critical at the highest resolutions.

Figure 1. High-resolution SAR image of a part of the university campus in Karlsruhe (Germany). The white arrow refers to a lattice in the left courtyard, which is shown in more detail in the small picture on the bottom left. The corresponding optical image is shown on the top left (after Brenner and Ender [4]).
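The three resolution relations can be collected into a short numerical sketch (the function names and the worked numbers are ours for illustration; the formulas are equations (1) to (3) above):

```python
import numpy as np

C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Equation (1): down-range resolution c/(2B) from the signal bandwidth."""
    return C / (2.0 * bandwidth_hz)

def real_beam_cross_range(range_m, wavelength_m, aperture_m):
    """Equation (2): real-aperture cross-range resolution r*lambda/d."""
    return range_m * wavelength_m / aperture_m

def aspect_cross_range(wavelength_m, delta_theta_rad):
    """Equation (3): cross-range resolution from a change in aspect angle."""
    return wavelength_m / (4.0 * np.sin(delta_theta_rad / 2.0))

# An X-band example: 600 MHz of bandwidth gives 25 cm down-range resolution,
# while a 1 m real aperture at 10 km range and 3 cm wavelength gives only
# 300 m cross-range resolution, hence the need for synthetic apertures.
print(range_resolution(600e6))                  # 0.25 (m)
print(real_beam_cross_range(10e3, 0.03, 1.0))   # 300.0 (m)
```

Note how equation (3) recovers a cross-range resolution comparable to Δr only for a substantial change in aspect angle, which is exactly what a synthetic aperture provides.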
Figure 2. Example of 3-look image yielding 10 cm resolution (after Cantalloube and Dubois-Fernandez [5]).

3. Tomographic Imaging

The techniques of tomography were developed originally for medical imaging, to provide 2D cross-sectional images of a 3D object from a set of narrow X-ray views of the object over the full 360° of direction. The received signals measured from the various angles are then integrated to form the image, by means of the Projection Slice Theorem. The Radon Transform is an equation derived from this theorem which is used by various techniques to generate tomographic images. Two examples of these techniques are Filtered Backprojection (FBP) and Time Domain Correlation (TDC). Further descriptions of these techniques may be found in [20].

In radar tomography the observation of an object from a single radar location can be mapped into Fourier space. Coherently integrating the mappings from multiple viewing angles enables a three-dimensional projection in Fourier space. This enables a three-dimensional image of an object to be constructed using conventional tomography techniques such as wavefront reconstruction theory and backprojection, where the imaging parameters are determined by the occupancy in Fourier space.

Complications can arise when target surfaces are hidden or masked at any stage in the detection process. This shows that the intervisibility characteristics of the target scattering function are partly responsible for determining the imaging properties of moving target tomography. In other words, if a scatterer on an object is masked it cannot contribute to the imaging process and thus no resolution improvement is gained. However, if a higher number of viewing angles is employed then this effect can be minimised.

Figure 3. Tomographic reconstruction: the Projection Slice Theorem.
Further complications may arise if (a) the point scatterer assumption used is unrealistic (as in the case of large scatterers introducing translational motion effects), (b) the small-angle imaging assumption does not apply, and (c) targets with unknown motions (such as non-uniform rotational motions) create cross-product terms that cannot be resolved.

4. The Projection Slice Theorem

The Tomographic Reconstruction (TR) algorithm makes use of the Projection-Slice theorem of the Fourier transform to compute the image. The Projection-Slice theorem states that the 1D Fourier transform of the projection of a 2D function g(x,y), made at an angle w, is equal to a slice of the 2D Fourier transform of the function at the same angle w (see Figure 3). Whereas some algorithms convert the outputs from many radars simultaneously into a reflectivity image using a 2D Fourier transform, TR generates an image by projecting the 1D Fourier transform of each radar projection individually back onto a 2D grid of image pixels. This operation gives rise to the term Backprojection. The image can be reconstructed from the projections using the Radon transform, as the equation below shows:

    g(x,y) = ∫[0,π] ∫[−∞,∞] P(f) · |f| · e^{j2πf(x cos w + y sin w)} df dw    (4)

where w is the projection angle and P(f) is the Fourier transform of the 1D projection p(t).

The Filtered Backprojection (FBP) method reconstructs the original image from its projections in two steps: filtering and backprojection.

Filtering the projection: The first step of FBP reconstruction is to perform the frequency integration (the inner integration) of the above equation. This entails filtering each of the projections using a filter with a frequency response of magnitude |f|. The filtering operation may be implemented by ascertaining the required filter impulse response and then performing convolution, or an FFT/IFFT combination, to correlate p(t) against the impulse response.
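A minimal numerical sketch of the whole FBP recipe, covering the |f| filtering just described and the backprojection step described next, is given below. The function name, the uniform angle grid and the linear interpolation are our assumptions rather than details from the source:

```python
import numpy as np

def filtered_backprojection(sinogram, angles_rad, n):
    """Reconstruct an n x n image from projections (one sinogram row per angle).

    Step 1 filters each projection with the |f| ramp response via an
    FFT/IFFT pair; step 2 backprojects the filtered projection onto a
    pixel grid with linear interpolation, weighted by the angle spacing.
    """
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))      # |f| filter frequency response

    xs = np.arange(n) - (n - 1) / 2.0         # pixel grid centred on the image
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n, n))
    d_w = angles_rad[1] - angles_rad[0]       # angle spacing

    for p, w in zip(sinogram, angles_rad):
        filtered = np.real(np.fft.ifft(np.fft.fft(p) * ramp))
        # detector position sampled by each pixel at this projection angle
        t = X * np.cos(w) + Y * np.sin(w) + (n_det - 1) / 2.0
        image += np.interp(t, np.arange(n_det), filtered) * d_w
    return image
```

Feeding it the sinogram of a single point scatterer (a spike at the same detector bin in every projection) reproduces a peak at the corresponding image pixel, which is a convenient sanity check.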
Backprojection: The second step of FBP reconstruction is to perform the angle integration (the outer integration) of the above equation. This projects the 1D filtered projection p(t) onto the 2D image by following these steps: place a pixel-by-pixel rectangular grid over the XY plane; place the 1D filtered projection p(t) in position at angle w; for each pixel, obtain the position of the required sample from the projection angle and pixel position; interpolate the filtered projection to obtain the sample; and add this backprojection value multiplied by the angle spacing. Repeat the whole process for each successive projection.

5. Tomography of Moving Targets

A development of these concepts has been the idea of imaging of moving targets using measurements from a series of multistatic CW or quasi-CW transmissions, giving rise to the term 'ultra narrow band' (UNB) radar. This may be attractive in situations of spectral congestion, in which the bandwidth necessary to achieve high resolution by conventional means (equation (1)) may not be available. Narrow band CW radar is also attractive as peak powers are reduced to a minimum, sidelobes are easier to control, noise is reduced and transmitters are generally low cost. Applications may range from surveillance of a wide region, to the detection of aircraft targets, to the detection of concealed weapons carried by moving persons.

In general the target trajectory projected back to a given radar location will determine resolution. A random trajectory of constant velocity will typically generate differing resolutions in the three separate dimensions. However, even if there is no resolution improvement there will be an integration gain due to the time series of radar observations. A Hamming window or similar may be required to reduce any cross-range sidelobe distortions. The treatment which follows is taken from that of Bonneau, Bascom, Clancy and Wicks [3].
Figure 4. Relationship between bistatic sensor geometry and representation in Fourier space (after [3]).

Figure 4 shows the relationship between the bistatic sensor geometry and the representation in Fourier space. The bistatic angle is B and the bistatic bisector is the unit vector u_B. The corresponding vector F in Fourier space is given by

    F = (4πf/c) cos(B/2) u_B.    (5)

Figure 5 shows the equivalent relationship for a monostatic geometry. The resolutions are inversely proportional to the sampled extents Δu and Δv in Fourier space, thus

    Δr = 2π/Δu,    Δx = 2π/Δv,    (6)

which should be compared to equations (1), (2) and (3).

Figure 5. Fourier-space sampling and scene resolution for a monostatic SAR (after [3]).

In a UNB radar the finite bandwidth of the radar signal limits the range resolution. However, this resolution can be recovered by multistatic measurements over a range of angles. Figure 6 shows four examples, and the Fourier-space sampling corresponding to each.

Figure 6. Fourier-space sampling and scene resolution for four examples: (i) stationary tx/rx, wideband waveform; (ii) stationary tx, moving rx, CW waveform; (iii) stationary tx, moving rx, wideband waveform; (iv) monostatic tx/rx, wideband waveform (after [3]).

6. Applications

The applications of high-resolution radar imagery are hugely varied and numerous. Invariably, high resolution is used as a tool to improve the information quality resident in an electromagnetic backscattered signal. The resulting imagery may be used to gain information over extremely wide areas such as the earth's oceans, where data pertaining to sea state, current movements, etc. can be derived. Over the land, imagery is used for crop monitoring, evaluation of rain forest clearings, biomass estimation and many other tasks. At the highest resolutions, information on single objects is possible and it is here that the security applications are more likely.
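Equations (5) and (6) translate directly into a few lines of code (the names are ours). A useful consistency check is that the monostatic case recovers equation (1): a sweep over bandwidth B spans an extent Δu = 4πB/c in Fourier space, so the range resolution is 2π/Δu = c/(2B).

```python
import numpy as np

C = 3e8  # speed of light, m/s

def fourier_space_vector(f_hz, bistatic_angle_rad, u_b):
    """Equation (5): Fourier-space vector (4*pi*f/c) cos(B/2) u_B for a
    bistatic pair with bisector unit vector u_b."""
    return (4.0 * np.pi * f_hz / C) * np.cos(bistatic_angle_rad / 2.0) * np.asarray(u_b)

def resolution_from_extent(delta_k):
    """Equation (6): resolution 2*pi / (sampled extent in Fourier space);
    the same relation gives both the down-range and cross-range values."""
    return 2.0 * np.pi / delta_k

# Monostatic sweep (bistatic angle 0) over B = 600 MHz: the Fourier-space
# extent is 4*pi*B/c, and the recovered range resolution is c/(2B) = 0.25 m.
B = 600e6
delta_u = np.linalg.norm(fourier_space_vector(B, 0.0, [1.0, 0.0]))
print(resolution_from_extent(delta_u))   # 0.25 (m)
```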
In particular, improved detection and classification of objects such as vehicles, aircraft, ships and personnel, and at the very highest resolutions, concealed weapons, are potentially possible. We consider a small sample here.

7. Automatic Target Recognition

These examples are illustrative of the potential of synthetic aperture imaging. However, it should be appreciated that the challenge is to extract useful information on the desired targets from such imagery.

The problem of determining the class to which a target belongs relies directly upon the amount of information available. ATRs are systems that contain an input sub-system that accepts pattern vectors from the feature space, and a decision-maker sub-system whose function is to decide the class to which the sensed attributes belong. Here we refer to this process interchangeably using the terms classification and recognition.

Pre-processing of raw data is necessary in order to increase the quality of the radar signatures. The principal discriminating factors for classification purposes are range resolution, side-lobe level (SLL) and noise level. Higher resolution means better separation of point scatterers, but the compromise over how much resolution is needed for cost-effective recognition is difficult to resolve. Generally, low SLLs mean clearer range profiles, although achieving them also implies a deterioration in resolution. Finally, low noise levels mean high-quality range profiles for classification.

In this chapter we concentrate on the particular situation in which a single non-cooperative target has been previously detected and tracked by the system. The improvement in performance due to the available multiplicity of perspectives is investigated by examining one-dimensional signatures, and the classification is performed on raw data with the noise floor offset removed by target normalization.
After generating a target mask in the range profile, the noise level is measured in the non-target zone and then subtracted. The result is a more prominent target signature in the range window.

Real ISAR turntable data have been used to produce HRR range profiles and images. In view of the fact that the range from the target is approximately constant, no alignment is needed. Three vehicle targets, classed as A, B and C, form the sub-population problem. Each class is described by a set of one-dimensional signatures covering 360 degrees of rotation on a turntable. After noise normalisation, a 28 dB SNR is achieved. Single chirp returns are compressed, giving 30 cm of range resolution. The grazing angle of the radar is 8 degrees, and the angular interval between two consecutive range profiles is approximately 0.036 degrees; therefore, 10000 range profiles are extracted from each data file over the complete rotation of 360 degrees. The training set of representative vectors for each class is made up of 18 range profiles, taken approximately every 20 degrees of rotation of the target. The testing set of each class consists of the remaining range profiles, excluding the templates.

Three algorithms have been implemented in both single- and multi-perspective environments. In this way any bias introduced by a single algorithm should be removed. The first is the statistical Naïve Bayesian Classifier. It reduces the decision-making problem to simple calculations of feature probabilities.
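The noise-floor normalisation just described can be sketched as follows; the 10 dB masking threshold is our assumption for illustration, not a value from the trials:

```python
import numpy as np

def normalise_profile(profile_db, guard_db=10.0):
    """Remove the noise-floor offset from a range profile (values in dB).

    A crude target mask keeps bins within guard_db of the peak; the noise
    level is then measured over the remaining (non-target) zone and
    subtracted, leaving a more prominent target signature.
    """
    mask = profile_db > profile_db.max() - guard_db   # target mask
    noise_level = profile_db[~mask].mean()            # non-target zone
    return profile_db - noise_level
```

On a synthetic profile with a flat 5 dB noise floor and a 30 dB peak, the non-target zone is shifted to 0 dB while the peak stands 25 dB proud of it.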
It is based on Bayes' theorem and calculates the posterior probability of the classes conditioned on the given unknown feature vector. The second is a rule-based method for classification: the K-Nearest Neighbours (K-NN) algorithm. The rule consists of measuring the distances from the object to the elements of the training set and selecting the K nearest. The last approach involves Artificial Neural Networks (ANN), where the information contained in the training samples is used to set the internal parameters of the network. In this work, feed-forward ANNs (FANNs) supervised by a back-propagation strategy have been investigated and implemented.

Figure 7. Multi-perspective classifier accuracies.

We first consider classification based upon a multiplicity of viewing angles, rather than using this multiplicity to form a single tomographic image.
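As an illustration, the K-NN rule above reduces to a few lines once the range profiles are treated as vectors; this is a hypothetical sketch using Euclidean distance and majority voting, not the implementation used in the trials:

```python
import numpy as np

def knn_classify(profile, templates, labels, k=3):
    """Classify a range profile by majority vote among the k training
    templates at minimum Euclidean distance from it."""
    dists = np.linalg.norm(templates - profile, axis=1)   # distance to each template
    nearest = np.argsort(dists)[:k]                       # indices of k closest
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)               # majority vote
```

For example, a query profile lying close to the class-A templates is assigned label 'A' even when one of its k neighbours belongs to class B.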
The combination of views of a target from a number of different aspects would be expected intuitively to provide an improvement in classification performance, as the information content should clearly increase. Three different ways of combining the aspects are used here to illustrate the possible performance improvements: the Naïve Bayesian Classifier, K-Nearest Neighbours (K-NN), and Feed-forward Artificial Neural Networks (FANN). Details of these algorithms are provided in reference [21].

Figure 7 shows the improvement in classifier performance as a function of the number of perspectives: the classification performances of the three implemented classifiers are compared versus the number of perspectives used. As anticipated, because of the nature of the data and the small number of available targets, the classifiers start from a high level of performance when using only a single aspect angle. It can be seen that there is a significant benefit in going from 1 to 2 perspectives, and a small additional benefit from 2 to 3, but rather less from further additional perspectives.
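One simple way to exploit the multiplicity of perspectives is to fuse the single-aspect decisions by majority vote. This is a hypothetical fusion rule for illustration only; the classifiers of reference [21] combine the perspectives internally rather than at the decision level:

```python
from collections import Counter

def fuse_perspectives(decisions):
    """Fuse single-aspect class decisions (e.g. 'A', 'B', 'C') taken from
    several viewing angles by a simple majority vote."""
    return Counter(decisions).most_common(1)[0][0]

# Two of the three aspects agree, so the fused decision is 'A'.
print(fuse_perspectives(['A', 'B', 'A']))
```

Even this crude rule shows why two or three perspectives help: a single mistaken aspect can be outvoted by the others.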