Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/6577
Authors: Rammohan, Ponnemkunnath
Issue Date: 2011
Abstract: Pattern matching is used in various applications such as network intrusion detection systems (NIDS), virus detectors, spam detectors, search, and bioinformatics. Security applications use pattern matching to detect and thwart threats, whereas bioinformatics uses it to find similarity between sequences. Improving the performance of the pattern matching operation is therefore crucial to the responsiveness of these systems as a whole. Graphics Processing Units (GPUs), with their massively parallel architecture, can be used to accelerate pattern matching. We have proposed an approach for executing regular expressions on the GPU that uses shared memory to cache the input data. This approach allows global memory accesses to be coalesced and also avoids shared memory bank conflicts. Experiments show the proposed method to be faster by a factor of 4 than earlier approaches that store the input data in texture memory, and to provide a speedup of 11.5X over a single-CPU implementation. Various approaches for executing BLAST on the GPU have also been considered. Of the seeding methods considered, the indexing approach is the fastest for large inputs, whereas the brute-force approach is fastest for smaller inputs; the finite-automaton approach is useful for larger word sizes, where the indexing approach becomes infeasible. Of the two extension approaches, the 2-Hit implementation gave a speedup of up to 27X over the CPU implementation, whereas the 1-Hit method gave a speedup of only 7X. The proposed implementations are also scalable: the speedup achieved with two nodes reaches 46X, almost twice that of the single-GPU implementation.
Thus, using GPUs for pattern matching provides good speedups that justify both the overhead of transferring data to and from GPU memory and the additional cost of procuring the hardware.
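The shared-memory caching scheme the abstract describes can be sketched roughly as follows. This is a minimal CUDA illustration, not the dissertation's code: the kernel name, tile size, and the use of a fixed string instead of a full regular expression are all simplifying assumptions. Each thread block stages a tile of the input (plus the overlap needed at the tile boundary) into shared memory with coalesced global loads, so threads scanning overlapping windows touch global memory only once.

```cuda
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define TILE 256          // threads per block (assumed tile size)
#define MAX_PAT 16        // longest pattern this sketch handles

__constant__ char d_pat[MAX_PAT];  // pattern kept in constant memory
__constant__ int  d_patLen;

// Each block caches TILE + MAX_PAT - 1 input bytes in shared memory so
// that threads matching at overlapping offsets reuse the same loads.
__global__ void matchKernel(const char *text, int n, int *hits)
{
    __shared__ char tile[TILE + MAX_PAT - 1];
    int base = blockIdx.x * TILE;
    int tid  = threadIdx.x;

    // Coalesced staging load: consecutive threads read consecutive bytes.
    for (int i = tid; i < TILE + MAX_PAT - 1; i += TILE) {
        int g = base + i;
        tile[i] = (g < n) ? text[g] : '\0';
    }
    __syncthreads();

    // One candidate match position per thread, served from shared memory.
    int pos = base + tid;
    if (pos + d_patLen <= n) {
        bool ok = true;
        for (int k = 0; k < d_patLen; ++k)
            if (tile[tid + k] != d_pat[k]) { ok = false; break; }
        if (ok) atomicAdd(hits, 1);
    }
}

int main()
{
    const char *text = "GPU pattern matching: GPU kernels scan text on the GPU";
    const char *pat  = "GPU";
    int n = (int)strlen(text), plen = (int)strlen(pat);

    char *dText; int *dHits, hits = 0;
    cudaMalloc(&dText, n);
    cudaMalloc(&dHits, sizeof(int));
    cudaMemcpy(dText, text, n, cudaMemcpyHostToDevice);
    cudaMemcpy(dHits, &hits, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpyToSymbol(d_pat, pat, plen);
    cudaMemcpyToSymbol(d_patLen, &plen, sizeof(int));

    matchKernel<<<(n + TILE - 1) / TILE, TILE>>>(dText, n, dHits);
    cudaMemcpy(&hits, dHits, sizeof(int), cudaMemcpyDeviceToHost);
    printf("matches: %d\n", hits);   // 3 occurrences of "GPU"

    cudaFree(dText); cudaFree(dHits);
    return 0;
}
```

Because each input byte is read from global memory once per block rather than once per matching thread, and the staging loads are contiguous, accesses coalesce; the per-thread comparisons then hit shared memory, which is the effect the abstract credits for the speedup over the texture-memory approach.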
Other Identifiers: M.Tech
Research Supervisor/ Guide: Joshi, R. C.
metadata.dc.type: M.Tech Dissertation
Appears in Collections:MASTERS' DISSERTATIONS (E & C)

Files in This Item:
File: ECED G21051.pdf (3.58 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.