Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/11310
Full metadata record
DC Field | Value | Language
dc.contributor.author | Manda, Venkateswara Rao | -
dc.date.accessioned | 2014-11-26T08:15:14Z | -
dc.date.available | 2014-11-26T08:15:14Z | -
dc.date.issued | 2009 | -
dc.identifier | M.Tech | en_US
dc.identifier.uri | http://hdl.handle.net/123456789/11310 | -
dc.guide | Anand, R. S. | -
dc.description.abstract | Speech recognition has been a fascinating topic for researchers for many years. For decades, people have tried to make machines hear, understand, and speak our natural language. This arduous task can be divided into three smaller tasks: speech recognition, which allows the machine to capture the words, phrases, and sentences that we speak; natural language processing, which allows the machine to understand what we say; and speech synthesis, which allows the machine to speak. The work described in this dissertation falls under the first category. A speech recognition system comprises two distinct blocks, a feature extractor and a recognizer. The feature extractor uses Mel-frequency cepstral analysis, which translates the incoming speech signal into feature vectors. Once the features are extracted, they are matched against the features of stored words, and the recognizer outputs the recognized word. For this matching, the recognizer block uses the Euclidean distance measure. For real-time applications the system can be implemented on a general-purpose microprocessor or a digital signal processor, but these implementations are sequential and have limited on-chip memory for buffering, so external memory is required. Fetching data from this external memory costs additional clock cycles, which degrades system performance, and such systems also need glue logic for their operation. Glue logic can be reduced and operation sped up using Application Specific Integrated Circuits (ASICs), but the main drawbacks of ASICs are long time to market and high initial investment, and a design must be prototyped before an ASIC is developed. Field Programmable Gate Arrays (FPGAs) prove to be a better solution for rapid prototyping: they are reprogrammable, provide a large number of logic cells suitable for implementing a speech recognition system, and allow the parallelism and pipelining features of the FPGA to be exploited. The objective of this dissertation is the design, modeling, simulation, and synthesis of a speech recognition system; it aims at developing a prototype of a speech recognition processor. The hardware logic is modeled in MATLAB Simulink using the Xilinx System Generator blockset and synthesized on a Spartan-3E xc3s500e-4fg320 FPGA chip. Then, using the hardware co-simulation feature of the Spartan-3E Starter Kit, the results obtained in software simulation and in hardware (i.e., on the FPGA kit) are validated. | en_US
dc.language.iso | en | en_US
dc.subject | ELECTRICAL ENGINEERING | en_US
dc.subject | SPEAKER DEPENDENT SPEECH RECOGNITION SYSTEM | en_US
dc.subject | FPGA | en_US
dc.subject | SPEECH RECOGNITION | en_US
dc.title | DEVELOPMENT OF SPEAKER DEPENDENT SPEECH RECOGNITION SYSTEM ON FPGA | en_US
dc.type | M.Tech Dissertation | en_US
dc.accession.number | G14366 | en_US
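
The abstract above describes a two-block pipeline: Mel-frequency cepstral feature extraction followed by Euclidean-distance matching against stored word templates. Below is a minimal software sketch of that pipeline in Python/NumPy, not the MATLAB Simulink / Xilinx System Generator model used in the dissertation; the sampling rate, frame size, hop, filter count, and the frame-averaging step in recognize() are illustrative assumptions rather than details taken from the thesis.

import numpy as np


def mfcc_features(signal, fs=8000, frame_len=256, hop=128, n_filters=20, n_ceps=12):
    """Convert a 1-D speech signal into a sequence of MFCC feature vectors."""
    # Pre-emphasis to boost high frequencies (0.97 is a common coefficient choice).
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

    # Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames *= np.hamming(frame_len)

    # Power spectrum of each frame.
    n_fft = frame_len
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular Mel filterbank spanning 0 Hz .. fs/2.
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    # Log filterbank energies followed by a DCT give the cepstral coefficients.
    energies = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return energies @ dct.T  # shape: (n_frames, n_ceps)


def recognize(features, templates):
    """Return the stored word whose template is nearest in Euclidean distance."""
    # Average the frame-level vectors so utterances of different lengths compare
    # directly; the dissertation's recognizer may handle time alignment differently.
    query = features.mean(axis=0)
    distances = {word: np.linalg.norm(query - ref) for word, ref in templates.items()}
    return min(distances, key=distances.get)

In use, each vocabulary word would be recorded once by the target speaker and its averaged MFCC vector stored as a template; in the dissertation this same pipeline is realized as hardware blocks and validated through Spartan-3E hardware co-simulation rather than in software.
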
Appears in Collections: MASTERS' THESES (Electrical Engg)

Files in This Item:
File | Description | Size | Format
EEDG14366.pdf | - | 2.42 MB | Adobe PDF

