<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>DSpace Collection:</title>
  <link rel="alternate" href="http://localhost:8081/jspui/handle/123456789/15073" />
  <subtitle />
  <id>http://localhost:8081/jspui/handle/123456789/15073</id>
  <updated>2025-06-30T09:25:32Z</updated>
  <dc:date>2025-06-30T09:25:32Z</dc:date>
  <entry>
    <title>MOBILE CROWD SENSING IN URBAN SPACES USING SEAMLESS INDOOR-OUTDOOR LOCALIZATION</title>
    <link rel="alternate" href="http://localhost:8081/jspui/handle/123456789/15330" />
    <author>
      <name>Kulshrestha, Tarun Kumar</name>
    </author>
    <id>http://localhost:8081/jspui/handle/123456789/15330</id>
    <updated>2022-03-21T06:52:29Z</updated>
    <published>2019-04-01T00:00:00Z</published>
    <summary type="text">Title: MOBILE CROWD SENSING IN URBAN SPACES USING SEAMLESS INDOOR-OUTDOOR LOCALIZATION
Authors: Kulshrestha, Tarun Kumar
Abstract: A smart city integrates various intelligent devices, infrastructures, and services to monitor and control human activities efficiently. Smart city management requires the aggregation of urban informatics for sustainable cities. Conventional sensing techniques, such as sensor networks, are used to gather real-world data, but sensor network deployment is non-trivial because of high installation cost and insufficient space coverage. To handle these issues, researchers have proposed a new large-scale sensing paradigm, called Mobile Crowd Sensing (MCS), based on the power of user-companioned devices such as smartphones, smart vehicles, and wearable devices. Mobile crowd sensing collects users’ local knowledge, such as local information, ambient context, noise level, and traffic conditions, using sensor-enabled devices in participatory and/or non-participatory modes. The collected information is further aggregated and transferred to the cloud for data processing, where it can be used for mobility-pattern discovery, traffic analysis and planning, public safety, environmental monitoring, and mobile social recommendation.&#xD;
In order to monitor and track the movement patterns of one or more persons in a densely populated area, each person must be uniquely identified. Human location tracking/monitoring can allow the authorities to find and identify a lost person among thousands in a crowd, to evacuate people during emergencies, to manage crowd movements, to predict future crowds, and to plan resources accordingly. Existing wireless-tracking-based systems either use packet analyzer software, such as Tcpdump, Wireshark, or Kismet, or extra hardware, which incurs high cost and makes the system complex. Some systems require RFID/BLE/Bluetooth tags to be provided to each person, which is quite challenging and significantly expensive. Moreover, it is not feasible to distribute tags during emergencies or disasters. Some systems use expensive tag readers, so the number of tag readers that can be deployed in an area is limited. In addition, several commercial tag readers and scanners use proprietary technology and software, which makes it difficult to modify and integrate them with other systems.&#xD;
Recently, researchers have started utilizing sensor-enabled smartphones as tags for large-scale human sensing. As smartphone usage increases worldwide, more persons can be tracked in the future without being provided any tag. Some smartphone-based location tracking systems require an application to be installed on the client’s smartphone. The installed application obtains the location using the smartphone’s GPS sensor and continuously updates it to a remote server over an Internet connection.&#xD;
However, it is rare that users in a large crowd or at remote locations will have an Internet connection all the time. There are many smartphone operating systems and versions, which makes the development and distribution of an application a difficult task. In addition, at many indoor and urban locations, GPS does not work well. Most existing systems use a two-step process for intercepting the MAC id (first capture the probes from the wireless signal, then process them). On the other hand, many research works focus on a single positioning/wireless technology, such as RFID, Wi-Fi enabled devices, BT/BLE tags, or the smartphone's GPS/General Packet Radio Service (GPRS), to track humans in either an indoor or an outdoor environment.&#xD;
There is a need to design a portable, low-cost, and easy-to-deploy system for tracking a large number of individuals using an efficient wireless technology. Such a system should be able to find a person’s current location as accurately as possible, as well as upload the current position of a uniquely identified person with minimum delay, power, and network bandwidth. We observe that human identification and monitoring are critical in many applications, such as surveillance and evacuation planning, and that they are non-trivial tasks in the case of a large and densely populated crowd. However, none of the existing solutions consider seamless identification, tracking, and localization of the crowd in both indoor and outdoor environments with significant accuracy.&#xD;
In this dissertation, we propose an unmodified-smartphone-based, non-participatory human identification, tracking, and monitoring system to monitor the movement patterns of individuals in a densely crowded environment. The proposed system uses the smartphone as a sensing unit, without any hardware modification, to extract the MAC ids from the wireless probe requests emitted by users’ wireless devices. Our system employs a hybrid localization technique (the Google location API), a combination of multiple positioning technologies (GPS, Wi-Fi, and cellular), to track individuals seamlessly across both indoor and outdoor stretches. MAC ids are stored and processed locally for short-term analysis, and the filtered data is then uploaded to a cloud server for extensive analysis and visualization. We also develop a real-time testbed for exploiting location analytics and for identifying, tracking, and finding the mobility patterns and visiting sequences of individuals in data collected at the IIT Roorkee campus and Har-ki-Pauri, Haridwar, India.&#xD;
Further, we develop a fast and scalable human trajectory tracking system, in which we enhance the capability of the sensing units so that they can communicate with each other and retrieve data in real-time. These sensing units can track an individual carrying a smart device and provide a complete analysis of his visited locations, such as stay time and trajectory, in real-time. We use the Redis in-memory database and XMPP at the sensing units for fast data retrieval and exchange, respectively. When an individual moves to a new location, a WebSocket server automatically propagates that person’s new location to all sensing units so that the system’s analysis remains real-time. We have also explored the access point locations in and around the IIT Roorkee campus and use access point data for the localization and trajectory formulation of individuals with smartphones. The IIT campus provided a privileged environment for this research. To assess the usability of our proposed system, we develop and deploy a real prototype testbed on the IIT Roorkee campus.&#xD;
In the next step of our research, we propose a real-time surveillance system which can identify, track, and monitor a suspicious person (i.e., an outlier) in a large-scale crowd, where abnormal activities of individuals are treated as anomalies/outliers. The proposed system handles MAC randomization through the association/authentication frames and discards locally assigned MAC addresses. We further propose an optimal sensing-unit selection algorithm to find the latest trajectory of the detected outlier(s). To validate our proposed system and show its usability, we develop a real prototype testbed and evaluate it extensively on a real-world dataset collected at IIT Roorkee, India. The optimal sensing-unit selection algorithm selects sensing units with an average selection accuracy of 95.3%.&#xD;
Individuals sharing similar location traces and performing the same activities in their daily lives over a long or short period of time may have similar interests and lifestyles. The correlation among users’ locations and activities can further be used to find friends with similar lifestyles. We develop a user-interaction framework based on recurrent neural networks for users having similar lifestyles and daily routines. By learning from historical data on users’ daily routines and preferences, our proposed solution can predict a user’s schedule and suggest friends accordingly. We collected records from 50 users over a period of six months in real-time to train the model. The data is further processed and stored in the cloud for finding users’ working patterns and their location coordinates within a time span. Experimental results show that our prediction module achieves an accuracy of around 92.8%, which is commensurate with the high variation in users’ daily routines.&#xD;
In short, the objective of our research is threefold: first, to design and develop a portable, low-cost, and easy-to-deploy smartphone-based human identification, tracking, and monitoring system; second, to analyze human mobility behavior patterns in real-time (e.g., frequency, order, and periodicity of visits, and suspicious mobility patterns) and location analytics (e.g., number of individuals at a given location, arrivals and departures from a location over time, and stay time at a location); and third, to develop the SmartCST platform for easy prototyping of various MCS-based applications, such as crowd monitoring, human mobility and behavior, modelling human interactions, and pilgrim safety, with extensive analysis and visualization of localization data through the cloud server.</summary>
    <dc:date>2019-04-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>EXPLOITING LOCAL INFORMATION FOR TRAJECTORY CLASSIFICATION UNDER SURVEILLANCE</title>
    <link rel="alternate" href="http://localhost:8081/jspui/handle/123456789/15329" />
    <author>
      <name>Saini, Rajkumar</name>
    </author>
    <id>http://localhost:8081/jspui/handle/123456789/15329</id>
    <updated>2022-05-11T06:33:19Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: EXPLOITING LOCAL INFORMATION FOR TRAJECTORY CLASSIFICATION UNDER SURVEILLANCE
Authors: Saini, Rajkumar
Abstract: Object motion trajectory classification is an important task, especially when we aim to detect abnormal&#xD;
movement patterns in order to take appropriate actions that prevent unwanted events from occurring.&#xD;
Given a set of trajectories recorded over a period of time, they can be clustered to understand the&#xD;
usual flow of movement or to segment unusual flows.&#xD;
With the advancement of low-cost sensors, object trajectories can be recorded efficiently and with&#xD;
ease. These sensors include RGB video cameras, depth cameras such as the Kinect, and the Global Positioning&#xD;
System (GPS). With GPS, the real-world coordinates of objects, along with other related&#xD;
information, are tracked and later processed for the real-time analysis of mass flow, crowd analysis,&#xD;
and anomaly detection in the flow.&#xD;
Video-based trajectory analysis can be on-line or off-line. In the on-line case, objects are tracked in&#xD;
live videos and their motion is analyzed immediately to make higher-order decisions, such as&#xD;
preventing objects from entering restricted areas, unstable areas like fire and floods, or situations&#xD;
involving violence. Video trajectory classification is also done off-line, where object trajectories are&#xD;
first extracted from recorded videos and their motion is then analyzed by&#xD;
classifying the trajectories into different classes.&#xD;
In this thesis, we have focused on off-line analysis for the classification of object trajectories&#xD;
using publicly available datasets. Using local information along with global information is&#xD;
an effective way to improve classification performance. To compute local cues from trajectories,&#xD;
models can be built by partitioning the trajectories into a variable number of segments based on the&#xD;
geometry of the trajectories.&#xD;
A graph-based method for trajectory classification has been proposed. Each trajectory is&#xD;
partitioned into a varying number of segments based on its geometry. A Complete Bipartite Graph&#xD;
(CBG) is formed between each trajectory pair, and the Dynamic Time Warping (DTW) distance&#xD;
is used as the weight of the edge between segments. Local costs are computed from the CBG and then&#xD;
fused, using Particle Swarm Optimization (PSO), with the global cost (computed using&#xD;
the same full-length trajectory pair) to improve the classification performance.&#xD;
We have also proposed a kernel transformation followed by a trajectory classification framework&#xD;
that makes use of information from local segments. The proposed kernel shrinks the&#xD;
trajectories in such a way that their shape is preserved. The modified trajectories are segmented with&#xD;
the help of a segmental HMM, and their local responses are recorded. These local responses,&#xD;
along with global responses (from full-length trajectories), are fused using a genetic algorithm&#xD;
to make the final decision.&#xD;
Surveillance scene segmentation has also been performed based on the results of trajectory classification&#xD;
using HMM. The scene layout is divided into 10 × 10 local non-overlapping grids, and a&#xD;
majority-voting-based scheme is applied to assign each block a label showing the importance of&#xD;
the blocks, with the help of region-association-graph-based features. Such off-line analysis helps to&#xD;
understand the flow of motion within the viewing field of the video camera.</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>BRAIN LESION DELINEATION AND CLASSIFICATION</title>
    <link rel="alternate" href="http://localhost:8081/jspui/handle/123456789/15328" />
    <author>
      <name>Gautam, Anjali</name>
    </author>
    <id>http://localhost:8081/jspui/handle/123456789/15328</id>
    <updated>2022-03-21T07:11:58Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: BRAIN LESION DELINEATION AND CLASSIFICATION
Authors: Gautam, Anjali
Abstract: Brain stroke is a life-threatening medical emergency which requires immediate&#xD;
medical care. It may be caused by the blockage or bursting of a brain blood&#xD;
vessel; based on the cause, it is named ischemic or hemorrhagic stroke. Stroke&#xD;
may occur due to a variety of reasons, including high blood pressure, head trauma,&#xD;
cardiovascular disease, family history, and transient ischemic attack. It is a leading&#xD;
cause of death in India as compared to other western industrialized countries. Therefore,&#xD;
a computer-aided diagnosis (CAD) system can help physicians in the proper and early&#xD;
diagnosis of stroke. The contents of this thesis are categorized into two parts: the first&#xD;
deals with segmentation and the second focuses on the classification of brain strokes. In&#xD;
the first part of the thesis, automatic and semi-automatic systems have been developed&#xD;
for the segmentation of stroke lesions from computed tomography (CT) and magnetic&#xD;
resonance (MR) images. In the second part, new feature extraction methods have been&#xD;
proposed to classify brain CT scan images into two or three categories (e.g. hemorrhagic,&#xD;
ischemic, and normal).&#xD;
In this thesis, Chapter 1 discusses brain strokes, their types, imaging methods like CT&#xD;
and MR, various challenges in their localization and identification, and the motivation for&#xD;
further study to overcome those challenges. Segmentation of the region of&#xD;
interest from CT and MR images and its classification is a challenging task. Therefore,&#xD;
several techniques have been developed for the segmentation and classification of medical&#xD;
images in order to ease the diagnosis process. These techniques are discussed in Chapter 2.&#xD;
In this thesis, clustering, thresholding, and level set methods have been utilized to&#xD;
segment the stroke lesion. In Chapter 3, two methods have been developed for the&#xD;
segmentation of hemorrhagic stroke lesions from CT scan images. The first method is&#xD;
based on fuzzy c-means (FCM) clustering and wavelet-based thresholding techniques,&#xD;
and the second method is based on a newly proposed distance metric for FCM. Chapter 4 is&#xD;
also based on the segmentation of hemorrhagic stroke lesions, where a newly proposed&#xD;
variant of FCM and the distance regularized level set evolution (DRLSE) method have been&#xD;
used to identify the region of interest. The new variant of FCM is used to delineate&#xD;
the stroke, and the DRLSE method is utilized to enhance the segmentation results.&#xD;
Chapter 5 is based on the segmentation of ischemic stroke from MR images. Initially,&#xD;
the MR images are denoised using a wavelet-based image denoising technique. Then, two&#xD;
different segmentation methods, thresholding and random forest, together with the&#xD;
active contour method of ITK-SNAP, have been used to segment the ischemic stroke&#xD;
lesion.&#xD;
Chapters 6 and 7 concern the classification of brain strokes by extracting useful&#xD;
features from CT scan images. Feature extraction is the most important part of image&#xD;
classification. In this thesis, local, global, and deep features have been used to extract&#xD;
meaningful information. In Chapter 6, brain stroke CT images are classified into two&#xD;
categories using two different methods. The first method is based on a convolutional&#xD;
neural network (CNN) framework. First, all the CT images are preprocessed using a&#xD;
quadtree-based image fusion method. Thereafter, the proposed convolutional neural&#xD;
network (P-CNN) model is trained on the preprocessed image dataset, which classifies&#xD;
them into two categories. The second method focuses on extracting both local and&#xD;
global features. The local binary pattern (LBP), completed LBP (CLBP), and gray-level&#xD;
co-occurrence matrix (GLCM) descriptors have been used to extract these useful&#xD;
features, and the images are then classified using different classifiers.&#xD;
Chapter 7 proposes two local feature descriptors which can classify images into three&#xD;
categories. The first descriptor is termed the local neighbourhood pattern (LNP). It&#xD;
is based on the comparison of the diagonal neighbours of the center pixel with the mean&#xD;
of the whole image's intensities. The other neighbours are calculated by comparison with&#xD;
their preceding neighbouring values. Further, the pattern code is calculated for the&#xD;
center pixel. In this way, codes are computed for all the image pixels. Finally, a&#xD;
1D histogram of the obtained image codes is generated as the feature vector. The second&#xD;
method is based on calculating the mean (M) of the whole image's intensities and the double&#xD;
gradients of the local neighbourhoods of a center pixel of the image (I) in both the x and y&#xD;
directions. Then, we generate an image B by comparing neighbours with M in order&#xD;
to compare the double gradient images with this image. Thereafter, histograms of all the&#xD;
images are generated and finally concatenated to form a single feature vector. The&#xD;
proposed method is termed the local gradient of gradient pattern (LG2P) descriptor.&#xD;
The experimental results obtained by the proposed methods are compared with&#xD;
several previous methods and show that the proposed methods are better, with&#xD;
encouraging performance in segmentation and image classification. The overall&#xD;
conclusion of the thesis and its future scope are given in the last chapter.</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>EVENT EXTRACTION FROM DIGITAL MEDIA</title>
    <link rel="alternate" href="http://localhost:8081/jspui/handle/123456789/15327" />
    <author>
      <name>Gupta, Swati</name>
    </author>
    <id>http://localhost:8081/jspui/handle/123456789/15327</id>
    <updated>2025-04-29T12:18:54Z</updated>
    <published>2018-09-01T00:00:00Z</published>
    <summary type="text">Title: EVENT EXTRACTION FROM DIGITAL MEDIA
Authors: Gupta, Swati
Abstract: Nowadays, digital media has become a source of a huge amount of up-to-date information&#xD;
which increases exponentially day by day. This information is concealed within unstructured&#xD;
data, and the end user cannot directly access the desired information from it. The solution to&#xD;
this problem is to collect important facts from the unstructured data and store them in&#xD;
such a way that they can help end users serve their queries. The procedure of specifically organizing&#xD;
and consolidating data that is explicitly expressed or implied in one or more natural&#xD;
language documents is known as Information Extraction (IE). Generally, the information in&#xD;
digital media is reported in the form of events. Events can be represented in several ways,&#xD;
such as by specific names, changes of state, situations, actions, and relations. Standard&#xD;
datasets such as those in the ACE format represent events as a triplet of event mention, event trigger,&#xD;
and event arguments, and divide events among eight categories. Thus, event extraction is&#xD;
an important task of information extraction, as it helps in developing various systems like&#xD;
story-telling, news event exploration, social media information fusion, question answering,&#xD;
etc.&#xD;
To tackle the information overload issue, this thesis focuses on extracting information&#xD;
from news media and social media (Twitter) in terms of events and related key-phrases. In&#xD;
particular, the following problems are addressed:&#xD;
• Named event extraction from a news headlines dataset using a knowledge-driven approach.&#xD;
The knowledge-driven approach uses patterns or templates that encode expert&#xD;
domain-specific knowledge. The named events are enriched with their type,&#xD;
categories, popular durations, and popularity. The system utilizes the syntactic and&#xD;
semantic patterns of headlines to identify the named events. Named events are short&#xD;
phrases that represent the names of events, like 2016 Rio Olympic Games, 2G Case, and&#xD;
Adarsh Society Scam. Named events are categorized into candidate-level and high-level&#xD;
categories using URL information, and popular durations of named events are&#xD;
extracted using the temporal information of news headlines.&#xD;
• Key-phrase extraction from news content, for the purpose of offering the news audience&#xD;
a broad overview of news events, especially when the content volume is high. Given an input&#xD;
query, the system extracts key-phrases and enriches them by tagging, ranking, and&#xD;
finding the roles of frequently associated key-phrases. The system utilizes the syntactic&#xD;
and linguistic features of the text to extract the key-phrases from the news media content&#xD;
(text).&#xD;
• Event extraction from a large-scale Twitter repository using an unsupervised approach.&#xD;
The amount of data acquired from streaming media like Twitter is vast. It&#xD;
contains readily available information regarding important events taking place during&#xD;
a given time span. Hence, it is difficult to deploy supervised learning strategies for&#xD;
analyzing the tweets for meaningful information extraction. On top of that, the tweets&#xD;
are unstructured in nature, given the diversity of the end-users who post them.&#xD;
A self-learning max-margin clustering approach, which deploys the notion of the Support&#xD;
Vector Machine (SVM) in an unsupervised setup, is used to cluster semantically similar&#xD;
tweets.&#xD;
In this thesis, machine learning algorithms and Natural Language Processing (NLP) tools&#xD;
are used to extract data from news media and Twitter. For each of the previously mentioned&#xD;
subjects, the relevant literature is studied thoroughly and the limitations of some existing&#xD;
methods are highlighted. The main motivation for selecting the problems defined in this thesis is&#xD;
to develop methods that address those limitations to the feasible extent. News media data&#xD;
(headlines, articles, meta keywords, etc.) and Twitter data are used to evaluate the performance&#xD;
of the proposed methods with respect to relevant state-of-the-art methods.</summary>
    <dc:date>2018-09-01T00:00:00Z</dc:date>
  </entry>
</feed>

