January 25, 2017 1:49 paper

arXiv:1701.06795v1 [astro-ph.IM] 24 Jan 2017

Development of Data Acquisition Methods for an FPGA-Based Photon Counting Detector

S. Ambily†, Mayuresh Sarpotdar, Joice Mathew, A. G. Sreejith, K. Nirmal, Ajin Prakash, Margarita Safonova and Jayant Murthy
Indian Institute of Astrophysics, Bangalore, India

MCP-based detectors are widely used in the ultraviolet (UV) region due to their low noise levels, high sensitivity and good spatial and temporal resolution. We have developed a compact near-UV (NUV) detector for high-altitude balloon and space flights, using off-the-shelf MCP, CMOS sensor, and optics. The detector is designed to be capable of working in the direct frame transfer mode as well as in the photon-counting mode for single photon event detection. The identification and centroiding of each photon event are done using an FPGA-based data acquisition and real-time processing system. In this paper, we discuss various algorithms and methods used in both operating modes, as well as their implementation on the hardware.

1. Introduction

Access to space has become easier and cheaper with the advent of small satellites such as CubeSats (e.g. Brosch et al., 2014). We are developing compact scientific payloads using off-the-shelf components with tight constraints on weight, size and power, for atmospheric and astronomical studies in the NUV range (200–350 nm). We test and qualify these payloads using high-altitude balloons, with the expectation of space flights when the opportunity arises. In particular, we have designed and developed the MCP-based photon counting detector which we plan to fly in the near future.

Photon-counting detectors using MCPs have been the standard for UV payloads because of their low readout noise, large detector area, radiation tolerance and long-wavelength rejection (Kimble et al., 2003). There are variations in the number of the MCP stacks, types of filters and photocathodes, and in the anode and readout electronics (Joseph, 1995).
For complex payloads with high levels of customization, all-electronic readout mechanisms, such as Cross Strip (XS) anodes, Wedge and Strip anodes, Multi-Anode Microchannel Arrays (MAMA), or Delay-Line anodes are used (Vallerga et al., 2009). The charge clouds from the MCP electrodes in these systems are read out by a custom electronics board which converts the amplified electron charge cloud into a corresponding voltage level (Vallerga & Siegmund, 2000). These readouts are often expensive and, while their cost is justified for high-impact missions such as GALEX (Siegmund et al., 2004) or FUSE (Sahnow et al., 1996), they may not be needed in less ambitious missions. We have achieved significant savings in cost by using a phosphor-screen anode as the readout (e.g. Bonanno et al., 2001), and by focusing the image of the anode onto a CMOS sensor using a lens. We have chosen a photocathode (S20) with sensitivity in the range 200–900 nm for our initial development, so that we can use it in both the NUV and the visible. The detector works as a CMOS camera in bright light, transferring each frame serially to the data processing system, and as a photon-counting detector in faint light conditions. In both photon counting and continuous frame-readout modes, the data acquisition and processing unit is the same field-programmable gate array (FPGA) board. Therefore, the modes can be changed by simply switching from one code to another on the FPGA, without the necessity of making changes to the hardware.

In this paper, we describe the various aspects of the design and implementation of the detector, with a focus on the data acquisition algorithms that run on the FPGA. We also discuss the hardware implementation of these algorithms, as well as the results of a few calibration and characterization tests on the whole sensor.

†Corresponding author. E-mail: [email protected]

2. Detector Overview

The detector (Fig.
1) is an intensified CMOS camera comprising two parts: (1) an image intensifier with an MCP and its associated high-voltage power supply (HVPS); (2) a readout and processing unit, which includes a relay lens, a CMOS sensor and a digital readout card using an FPGA.

Fig. 1. Detector Block Diagram

2.1. Image Intensifier

We are using an MCP340 assembly from Photek1. It is a 40 mm, Z-stack MCP with an S20 photocathode deposited on a quartz input window and P46 phosphor deposited on the anode. The distance from the photocathode to the MCP front surface is 100 microns and the distance from the MCP output surface to the phosphor is 700 microns. The MCP is powered by the FP632 micro-HVPS from Photek, which weighs only 100 g, ideal for our lightweight (< 3 kg) balloon payloads. The HVPS provides several outputs with voltages from −200 V to 6,900 V. The gain of the MCP is set by the voltage at the MCP output, which automatically adjusts the screen voltage, maintaining a constant MCP-out-to-anode voltage difference. The resulting electron cloud from the MCP output electrode is accelerated towards the phosphor screen anode (with a peak response at 530 nm) and the resulting photons are focused on the CMOS surface by a relay lens.

2.2. Relay Optics

Electrons from the MCP induce fluorescence in the P46 phosphor screen (40-mm diameter), which is imaged onto the CMOS readout (3.4-mm diameter) using relay optics. Similar commercial systems often use a fiber-optic coupling, but these are more difficult to physically implement with high precision. We are now using the TVF-3.6X2812IR-BCDN-MD, a board-level varifocal lens from Senko with high-precision zoom and focus adjustment, which was provided with the CMOS board. The lens system has an F-ratio of F1.2 with an adjustable focus from 3 mm to 10 mm.

This system introduces a significant amount of aberration because of the large demagnification factor from the MCP to the CMOS (17:1), and we are designing a new lens system to better couple the two.
1http://www.photek.com/

2.3. CMOS Sensor

We have chosen a 1-megapixel video image sensor, the OV9215 from OmniVision Technologies2, for the CMOS camera. It is a 1/4-inch sensor that provides full-frame, sub-sampled or windowed 8-bit/10-bit images. The advantages of CMOS sensors in easier readout and circuit integration, as well as their low power requirements, make them an obvious choice over CCDs for our applications. The disadvantage of CMOS sensors is that they have an uncertain photometric flat field, but this is not relevant in our photon counting context (Uslenghi et al., 2003). The CMOS chip sits on an off-the-shelf headboard, originally designed for use in CCTV applications3, which provides the bias and clock for the sensor. The headboard reads the digital image values and sends them to the FPGA for further processing. It also incorporates a synchronous serial port, which may be used to modify the internal camera registers when needed. The pixel stream and the three sync signals for the pixel, line, and frame clocks are generated by the CMOS sensor and are connected directly to the FPGA inputs. The master input clock for the CMOS chip is generated by the crystal oscillator and PLL on the FPGA board. The oscillator frequency is fixed at 12 MHz, but the input clock frequency for the CMOS chip may be varied from 6 to 27 MHz using the programmable PLLs on the FPGA. The successive pixel values are read out as a 2D image matrix and stored in an array in the FPGA memory. The limiting factor in the frame rate is the CMOS clock frequency, and we are in the process of obtaining and testing faster chips.

2.4. Data Acquisition Board

We have chosen the XuLA2-LX25 FPGA prototyping board from XESS Corp.4 for the design of the main processor, since the commonly used commercial development boards require too many resources (power, weight, complexity).
This board meets our minimum requirements in a compact form factor. The heart of the FPGA board is a Spartan®-6 series XC6SLX25 chip5, which controls and reads out the CMOS sensor, processes the data, and stores and transmits the output. The use of FPGAs allows us to rapidly prototype different algorithms without changing the hardware, which is invaluable in a development environment.

The FPGA modules were written in Verilog and synthesized and programmed using the Xilinx® ISE (Integrated Synthesis Environment) Design Suite. In addition, we have used standard VHDL modules6 for interfacing the SDRAM (synchronous dynamic random-access memory), PLL (phase-locked loop), and micro-SD (Secure Digital) card to the Spartan®-6 chip on the FPGA board.

2http://www.ovt.com/
3Soliton Technologies: http://www.solitontech.com/
4http://www.xess.com/shop/product/xula2-lx25/
5Xilinx, Inc.: http://www.xilinx.com/
6XESS Corp.

3. Implementation

The detector is designed to operate in two modes:

• Continuous frame transfer mode.
• Photon counting mode.

We have designed the readout and associated electronics of the detector so that we can switch between both modes with no changes in the hardware. Incoming photons eject electrons from the S20 photocathode, which are multiplied by the MCP into a charge pulse that falls on a green phosphor screen. The emitted photons are read by the CMOS sensor, with further processing done by the FPGA (Bellis et al., 1991; Fidouh et al., 1993).

The overall implementation of the detector has been broken down into (a) development of the front-end photonics system, and (b) interfacing of the digital back-end electronics. The first task involves coupling the light from the MCP to the CMOS sensor through a relay lens system with minimum distortion and light loss. The second step involves the clocking and synchronization of the CMOS chip, as well as the programming and interfacing of the FPGA with the host PC.
This part is further divided into four steps, which are described in detail in the following subsections.

3.1. Centroid calculation

Any photon event on the photocathode creates a charge cloud that is Gaussian in shape. This cloud is spread over multiple pixels on the sensor, which we centroid with sub-pixel accuracy using the FPGA. The centroiding algorithm was implemented using a 3×3 pixel sliding window during our tests, which we later expanded to a 5×5 pixel window (Hutchings et al., 2007). This yielded better accuracy within our computational limits.

If we operate in the continuous frame transfer mode, all the pixel values will be stored on the SDRAM module, which will rapidly exceed our on-board storage. Centroiding on the chip is space-efficient but can be computationally expensive. We implemented a scheme in which we store only the most recent five rows of the current frame on the FPGA chip (5×1280 pixels). This is illustrated in Fig. 2 (Left), where we have simulated the sample image array in MATLAB®7. The image consists of 25×25 pixels and the pixels were read out serially into a 5×25 array. The initial values of the array were assigned to be zero. As each pixel row from the sample image was read, the last row of the array on the simulated FPGA was updated, while the older rows were overwritten with the latest values. This sequence is illustrated in Fig. 3. The flow chart for the centroid calculation steps is given in Appendix A.

Fig. 2. Left: Simulated image array. The values retained after a threshold level are highlighted and the rest of the values are considered to be zero. Right: Reconstructed image.

3.1.1. Thresholding and detecting the local intensity maxima

The first part of our centroiding procedure is to eliminate the dark noise arising from the S20 photocathode, which was measured in the lab (see Sec. 5) to be 1000 cps at 23°C. We dynamically estimate the local background level for each photon event, and use this value as the event threshold.
This is done by taking the lowest corner pixel value in the centroiding window (Hutchings et al., 2007). The 24 pixels surrounding the last read-out pixel are read into a separate 5×5 window array, and the window is checked for the presence of any local maxima. We flag the hot pixels and multiple events before calculating the centroids. Hot pixels are removed by eliminating any event where only the central pixel is higher than the threshold. Multiple events are marked by adding the maximum-minus-minimum corner value to the output centroid packets (Hutchings et al., 2007).

Fig. 3. The implemented readout: the latest rows are shifted into the FPGA array pixel by pixel. As each pixel is read out, the array containing the last five rows is updated and the oldest row values are overwritten.

3.1.2. Calculating the centroid around the maximum points

Once we have identified the hot pixels and multiple events, we calculate the centroids for the actual photon events in the current row. We save the coordinates of the central pixel as the integer parts of the centroid. We compute the sub-pixel values in parallel by calculating the difference in intensities between the neighboring top and bottom pixels and the left and right pixels of the event (Fig. 4). Once calculated, each centroid value consists of the x and y coordinates with their integer and fractional parts, and a sign bit to indicate the direction of the actual centroid coordinate (top/bottom and left/right) from the central-pixel coordinate.

7MathWorks, Inc.: http://in.mathworks.com/products/matlab/

3.2. Data Storage and Transmission

3.2.1. Photon Counting Mode

In the centroiding mode, the output data for telemetry is in the form of packets. Each packet contains a frame ID, an event ID, the integer and fractional values of the centroid coordinates, and the intensity value of the central pixel (Fig. 5).
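The window-processing steps of Secs. 3.1.1 and 3.1.2 (corner-pixel thresholding, hot-pixel rejection, and neighbour-difference sub-pixel estimation) can be sketched in software. This is a hedged Python model of the logic, not the Verilog implementation; the exact sub-pixel weighting used on the FPGA is not specified here, so the division by the window sum is our own illustrative choice.

```python
import numpy as np

def centroid_window(win):
    """Process one 5x5 event window: threshold at the lowest corner pixel,
    reject isolated hot pixels, and estimate a window-relative centroid.
    The fractional part is modelled as the left/right and top/bottom
    neighbour difference over the window sum; the FPGA's exact weighting
    may differ."""
    win = np.asarray(win, dtype=float)
    thr = min(win[0, 0], win[0, -1], win[-1, 0], win[-1, -1])
    w = np.where(win > thr, win - thr, 0.0)
    if w[2, 2] == 0 or w[2, 2] < w.max():
        return None                      # central pixel is not a local maximum
    if np.count_nonzero(w) == 1:
        return None                      # isolated hot pixel: reject
    total = w.sum()
    dx = (w[2, 3] - w[2, 1]) / total     # left/right imbalance -> sign + fraction
    dy = (w[3, 2] - w[1, 2]) / total     # top/bottom imbalance
    return 2 + dx, 2 + dy                # integer part is the central pixel (2, 2)
```

A symmetric event returns the central-pixel coordinates exactly; any asymmetry shifts the result by a fraction of a pixel in the corresponding direction.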
These packets are saved on the FPGA by creating a block of registers on the chip. This results in a faster operation: the packets from one frame can be sent over the serial port while the next frame is read. However, the number of photon events that can be saved per frame is limited by the number of logic cells available on the chip. To get around this, we save each packet to the SDRAM as and when it is generated.

Fig. 4. Centroid computation from each of the windows. The values obtained here are directly stored as a packet, and fractional pixel levels of up to 8-bit accuracy are calculated at a later stage.

Fig. 5. Data packet for each photon event. The final output packet is optimized to contain 7 or 8 bytes depending upon the sub-pixel accuracy required.

Division is expensive to realize in FPGA hardware: a generic divider is typically implemented by repeated subtraction, which adds to the time and resource cost. Therefore, when transmitting the packet from the centroiding to the telemetry module, the sub-pixel values are not computed but are simply sent as numerator and denominator values. Before transmission, the sub-pixel values can be calculated to an accuracy of 4 bits each by a separate divider module. Because the RS232 transmission runs at a much slower speed than the FPGA clock, the telemetry unit runs the divisions in parallel and transmits the 4-bit fractional values for each packet. It is possible to improve the accuracy to 6 or 8 bits, but at the expense of additional clock cycles and hardware resources.

We have implemented an SD card interface for in-flight experiments, where instead of sending the data packets to the PC over the serial port, we use a much faster Serial Peripheral Interface (SPI) protocol to store the data packets on the micro-SD card on the FPGA board.
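The deferred-division scheme can be illustrated with a small sketch (Python rather than Verilog; the function name and truncating rounding are our own choices for illustration): a (numerator, denominator) pair carried in the packet is reduced to a sign bit plus a 4-bit fraction, i.e. 1/16-pixel resolution.

```python
def frac4(num, den):
    """Convert a deferred (numerator, denominator) sub-pixel value into a
    sign bit and a 4-bit fraction (units of 1/16 pixel), as a divider
    module downstream of the centroiding stage would."""
    sign = 1 if num < 0 else 0
    frac = (abs(num) * 16) // den   # truncating division, 0..16
    return sign, min(frac, 15)      # clamp to the 4-bit range

# frac4(1, 2)  -> (0, 8)   i.e. +0.5 pixel
# frac4(-3, 4) -> (1, 12)  i.e. -0.75 pixel
```

Extending the multiplier to 64 or 256 gives the 6- or 8-bit accuracy mentioned above, at the cost of extra clock cycles.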
The values from the SD card can be read out as a text file, or through the FPGA serial port at a later stage.

3.2.2. Continuous Frame Transfer Mode

In this mode, the pixel values are directly transferred from the SDRAM to the on-board SD card after each frame is read. Since the SDRAM data width is 16 bits and our images are 8 or 10 bits, the remaining bits are used to store the frame and row numbers, which help in basic sanity checks of the data (such as checks for missing frames, incomplete rows, etc.).

3.3. Image Reconstruction

Image reconstruction is intended to derive a meaningful image from the data packets and is done on the host PC (a single-board computer in the case of a balloon flight), which receives the serial data from the FPGA board. In continuous imaging mode, the pixel values that are read from the SD card are stored into an image array after basic sanity checks. In the case of photon counting mode, the data stream itself contains a time stamp/frame ID, as well as the x, y coordinates and intensity information, which makes the image reconstruction easier. For example, in Fig. 2 (Right) we display the image reconstructed from the centroids calculated from the simulated image (Fig. 2, Left, in Section 3.1). The image reconstruction flow chart is given in Appendix B.

4. Simulations

We have tested the algorithm in MATLAB® using a sample image array of 256×256 pixels, with simulated Gaussian photon events of random dimensions, positions, and intensities. Poisson noise was added to the image, where the output pixel intensity was generated from a Poisson distribution with mean equal to the input pixel value. In addition, salt-and-pepper noise was added to this image with a density of 2%. The image values were normalized to a range between 0 and 1 to simplify the programming.
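A frame of this kind can be generated along the following lines, in Python rather than MATLAB; the event widths and amplitudes are illustrative assumptions, since the paper does not state them.

```python
import numpy as np

def simulate_frame(size=256, n_events=50, seed=0):
    """Build a test frame as in Section 4: Gaussian photon events at random
    positions with random widths and amplitudes, Poisson noise with mean
    equal to the signal, 2% salt-and-pepper noise, then normalisation to
    [0, 1]. All parameter ranges are illustrative."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for _ in range(n_events):
        x0, y0 = rng.uniform(5, size - 5, 2)
        sigma = rng.uniform(0.8, 1.6)                    # event width (pixels)
        amp = rng.uniform(50, 200)                       # event amplitude
        img += amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    img = rng.poisson(img).astype(float)                 # Poisson noise
    img[rng.random(img.shape) < 0.01] = img.max()        # salt (1%)
    img[rng.random(img.shape) < 0.01] = 0.0              # pepper (1%)
    return img / img.max()                               # normalise to [0, 1]
```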
Pixel values from the image array were read serially in the same way as in the CMOS chip readout. We have run a number of simulations and found that the mean number of photon events in the simulated array was 47.3, with an SNR of 14.6. The centroiding algorithm with a 5×5 window and different levels of thresholding was applied to the sample image to see how the threshold value affects the detection efficiency and noise rejection in the output image. We have illustrated this for one case, which had 49 photon events, in Fig. 6 (Left), with the actual photon events circled in red and the isolated hot pixels in green. The blue circles show the points that are actual events but were undetected in the reconstructed image due to their proximity to the edges of the image (numbered 1 and 2). The reconstructed image with the threshold value equal to the mean noise level of the image (0.0025 in this case) is shown in Fig. 6 (Right). In Table 1 we show the differences in the number of detected events as the threshold values are changed. We see that the optimum threshold value has to be at, or slightly higher than, the mean noise level of the original image to detect the maximum number of photon events with a higher signal-to-noise ratio (SNR).

Fig. 6. Left: Original image with the detected centroids highlighted. Right: Image after centroiding and reconstruction with a threshold value of 0.0025. The photon events are shown by circles; detected centroids are circled in red. Green circles represent the isolated hot pixels, and the blue circles are the events that were undetected (numbered 1 and 2).

The second step was to implement and test the algorithm in the FPGA hardware. The intensity values of the same sample image (Fig. 6, Left) were approximated to 8-bit values (0–255) due to the space constraints in the FPGA memory. The values were serially read by the centroiding module with the frame and line valid signals from the CMOS chip, with the mean noise level as the threshold.
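The host-side reconstruction of Sec. 3.3 amounts to accumulating event centroids into an image array. A minimal Python sketch, using only the integer parts of the coordinates as in these tests (the packet fields are simplified for illustration):

```python
import numpy as np

def reconstruct(packets, shape):
    """Accumulate photon-event centroids into an image. Each packet is
    modelled as a (frame_id, x, y) tuple; the real packets also carry an
    event ID and the central-pixel intensity."""
    img = np.zeros(shape, dtype=np.uint32)
    for _frame_id, x, y in packets:
        xi, yi = int(x), int(y)          # integer parts only
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            img[yi, xi] += 1
    return img
```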
The centroid coordinates were transferred to a PC through a serial-to-USB converter. The reconstructed image (in the same case as discussed above) has an SNR of 15.72 and 47 detected photon events, of which 45 are actual events and 2 are hot pixels. The smaller number of detected events (47 vs. 49) is because of the 8-bit approximation applied to the original image. The reconstructed image from the FPGA is shown in Fig. 7, with detected events highlighted in red and hot pixels in green.

Table 1. Comparison of simulated images after reconstruction. The first column displays the threshold value applied on the original image, the second column shows the number of detected photon events after reconstruction, and the last column displays the SNR after reconstruction.

Threshold value | Number of detected events | SNR
0.0 | 66 | 14.9868
0.001 | 58 | 15.3433
0.002 | 54 | 15.5892
0.0025 (mean signal value) | 51 | 15.6330
0.004 | 51 | 15.6330

Fig. 7. Results from the FPGA implementation: image after centroiding and reconstruction. The photon events are shown by circles, detected events are circled in red, and green circles are the hot pixels.

5. Hardware Implementation and Characterization

We are operating the MCP at a gain of 10⁶ for the typical voltage values of −200, 2200, and 6000 V at the cathode, MCP-out, and anode terminals, respectively. The whole detector is powered by an in-house power supply board, which houses the FP632 power supply for the MCP, a 5 V LiPo battery, and a voltage regulator. The distance between the phosphor screen and the front of the relay optics is currently kept at 3.5 cm, with the option of fine adjustment using shims of about 100 µm. This distance was arrived at after a few trials, to provide the best focus for imaging the entire 40 mm diameter of the MCP on the CMOS surface. The CMOS is fixed at the focal plane of the relay lens, at a 10 mm distance.
To perform the characterization of the whole detector, we assembled the detector with the MCP, power supply board, relay optics, and the CMOS readout board on an optical table in a dark room (Fig. 8, Right), at an ambient temperature of 23°C. A faint uncollimated light source was provided in the form of a green LED with variable intensity, kept at a distance of about 10 cm from the MCP.

Fig. 8. Left: Readout assembly of the detector. Right: Detector test setup on the optical table. From left to right: UV light source, collimating lens, and the detector assembly with the MCP and the readout board.

Measurement of dark current

We have done the basic tests in both photon counting and frame transfer modes, with the gain in the frame transfer mode lower than in the photon-counting mode. The dark count test was performed by covering the detector window and reading the image of the MCP surface on the CMOS in photon counting mode. At room temperature, we saw a dark count of 1000 cps, which is expected of an S20 photocathode. The dark count frame is shown in Fig. 9, where the events above the threshold are highlighted.

Fig. 9. Dark count frame at 23°C (34 ms exposure). Highlighted are the detected dark counts.

Estimation of event gain

To estimate the throughput in photon counting mode, we used a faint light from the LED lamp. Though the lamp is not flux calibrated, its intensity can be varied. The CMOS operates at a maximum frame rate of 30 fps, while the storage and transmission units run at 40 MHz. This means that ideally we can have a global event rate of up to 22,000 cps without any significant dead time due to the electronics. However, for our current lab tests we are using an RS232 connection to transfer the data to the PC, which supports count rates of only up to 6000 cps.
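As a back-of-the-envelope check of the serial-link limit: assuming 8-byte packets and 10 bits per byte on the RS232 wire, and an assumed baud rate of 460.8 kbaud (the actual lab setting is not stated here), the sustainable event rate comes out close to the quoted figure.

```python
# Hypothetical numbers: the packet size and baud rate are assumptions,
# not values stated in the paper.
packet_bits = 8 * 10              # 8-byte packet, 10 bits/byte on the wire
baud = 460800                     # assumed RS232 setting
rs232_cps = baud // packet_bits   # -> 5760 events/s, consistent with ~6000 cps
```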
Under gain saturation, we have a total of ∼4×10⁶ photons reaching the CMOS sensor for each event, which translates to 900 ADUs on the CMOS sensor under the Gaussian profile of the event (these values come from the instruments' data sheets). Due to the demagnification of the lens, a single pore is confined to 1/4 of a pixel, but each event can be spread across multiple pores depending upon the front-end optics.

To better understand the gain characteristics of the detector, we tested for gain variations by measuring the total CMOS ADUs under each event, for a number of events produced by a faint LED source. We measured the event gain by taking a histogram of the events for three different voltage settings. We can see from Fig. 10 that the actual event gain is less than the manufacturer's value, and that it varies considerably. For the cathode voltage of 2.2 kV (gain of 10⁶), we see a typical event gain of 550 ADUs instead of 900 ADUs, which may be due to variations in the efficiency of the phosphor screen, the relay optics, or both.

Fig. 10. Left: Histogram of event gains.

Rejection of multiple events

Even with a very faint LED source, we see some double events in the detector. Such events are detected by an additional threshold value, which is the difference between the maximum and minimum corner pixel values. For a 5×5-window centroiding algorithm, we found the number of double events that remain after applying a particular rejection threshold (Fig. 11). In this figure, the different colors mark the different intensity levels of the LED lamp, with blue for the highest and red for the lowest setting. Though we do not have a calibrated flux, we can see that at a higher rate of incoming photons we do see a large number of events marked as multiple events, which are detected by using an appropriate value for the corner threshold. With fainter incoming flux, this number reduces fast, as expected.
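The corner-spread test for multiple events can be sketched as follows (a hedged Python model; the rejection threshold is tuned empirically and its value is not given here):

```python
import numpy as np

def is_multiple_event(win, reject_thr):
    """Flag a 5x5 window as containing overlapping (multiple) events when
    the spread of its four corner pixels, max corner minus min corner,
    exceeds a rejection threshold."""
    win = np.asarray(win)
    corners = np.array([win[0, 0], win[0, -1], win[-1, 0], win[-1, -1]])
    return bool(corners.max() - corners.min() > reject_thr)
```

A single isolated event leaves all four corners near the background level, so the spread stays small; a neighbouring event raises one corner and trips the flag.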
Imaging test

To test the overall performance of the detector, we took images of a USAF target in both centroiding and frame transfer modes. For the frame transfer mode, we set lower voltage values on the HVPS and read the image off the MCP at the CMOS surface. The data rate was kept at 30 Mbps for a 24 MHz input clock rate in full frame, although we transmitted only one frame per second to the host PC to account for the large transmission delays. The image shown in Fig. 5 (Left) is reconstructed from the pixel values read out serially from the sensor by a MATLAB® program. The image in Fig. 5 (Right) is reconstructed from the centroids with a dynamically estimated threshold value. The data packets, shown in Fig. 5, were read through the serial port by another MATLAB® program. The centroids with the same frame ID were added together to form the image. The centroid values are accurate to 1/16 of a pixel, though we have used only the integer parts for the reconstruction. In the current test setup the resulting spatial resolution was rather low, 1.78 lp/mm; therefore, we are not able to resolve within a single MCP pore. One of the reasons we are getting low spatial resolution is the performance of the relay lens (Sec. 2.2). We are in the
