NAVAL POSTGRADUATE SCHOOL
Monterey, California

THESIS

Data Compression Using Artificial Neural Networks

by

Bruce E. Watkins
Lieutenant, USN
B.S., University of California, Santa Barbara, 1984

Thesis Advisor: Murali Tummala

Submitted in partial fulfillment of the requirements for the degree of ELECTRICAL ENGINEER from the NAVAL POSTGRADUATE SCHOOL, September 1991.

Approved for public release; distribution is unlimited.

The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

Subject terms: Neural Networks, Vector Quantization, Image Coding

ABSTRACT

This thesis investigates the application of artificial neural networks to the compression of image data. An algorithm is developed using the competitive learning paradigm, which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-stage, tree-searched, and classification vector quantization codebook design techniques are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational cost and an improvement in performance.

TABLE OF CONTENTS

I. INTRODUCTION
   A. THESIS OBJECTIVE
   B. THESIS OUTLINE
II. VECTOR QUANTIZATION
   A. INTRODUCTION
   B. DETAILS OF THE METHOD
III. NEURAL NETWORKS
   A. INTRODUCTION
   B. NEURAL NETWORK LEARNING
      1. SUPERVISED LEARNING
      2. UNSUPERVISED LEARNING
   C. FREQUENCY SENSITIVE COMPETITIVE LEARNING
IV. ALGORITHM DEVELOPMENT
   A. INTRODUCTION
   B. TREE SEARCHED VECTOR QUANTIZATION
   C. MULTI STAGE VECTOR QUANTIZATION
   D. CLASSIFICATION VECTOR QUANTIZATION
V. CONCLUSIONS
   A. ADDITIONAL WORK
APPENDIX A: PROGRAM DETAILS
REFERENCES
INITIAL DISTRIBUTION LIST
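The abstract describes a competitive-learning implementation of vector quantization, and the table of contents names frequency sensitive competitive learning as the paradigm used. As a rough illustration of that idea, the sketch below trains a small codebook with frequency-sensitive competitive learning (FSCL): each input vector is assigned to the codeword minimizing a win-count-scaled distortion, so frequently winning codewords are penalized and all codewords stay in use. This is a minimal reconstruction of FSCL as generally described in the literature, not the thesis's actual algorithm; the function names, learning rate, and stopping rule are illustrative assumptions.

```python
import numpy as np

def fscl_codebook(vectors, n_codewords=4, epochs=20, lr=0.1, seed=0):
    """Train a VQ codebook with frequency-sensitive competitive learning.

    Illustrative sketch: codewords compete on win-count * squared distance,
    and only the winner is moved toward the input (competitive learning).
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    init = rng.choice(len(vectors), size=n_codewords, replace=False)
    codebook = vectors[init].astype(float).copy()
    counts = np.ones(n_codewords)  # win counts (the "frequency" term)
    for _ in range(epochs):
        for x in vectors:
            d = np.sum((codebook - x) ** 2, axis=1)
            winner = int(np.argmin(counts * d))  # frequency-sensitive match
            codebook[winner] += lr * (x - codebook[winner])
            counts[winner] += 1
    return codebook

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    d = np.sum((vectors[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
    return np.argmin(d, axis=1)

# Toy usage: two well-separated 2-D clusters, two codewords.
data = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
cb = fscl_codebook(data, n_codewords=2)
labels = quantize(data, cb)
```

The count-scaled distortion is what distinguishes FSCL from plain competitive learning: without it, a single codeword can capture every input while the rest never update (the "dead unit" problem the thesis's codebook-design chapters address with tree-searched and multi-stage structures).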
