<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://localhost:8081/jspui/handle/123456789/15073">
    <title>DSpace Collection</title>
    <link>http://localhost:8081/jspui/handle/123456789/15073</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://localhost:8081/jspui/handle/123456789/20451" />
        <rdf:li rdf:resource="http://localhost:8081/jspui/handle/123456789/20447" />
        <rdf:li rdf:resource="http://localhost:8081/jspui/handle/123456789/20331" />
        <rdf:li rdf:resource="http://localhost:8081/jspui/handle/123456789/20308" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-07T20:45:17Z</dc:date>
  </channel>
  <item rdf:about="http://localhost:8081/jspui/handle/123456789/20451">
    <title>A FRAMEWORK FOR METAHEURISTIC BASED ALGORITHMS FOR TEAM FORMATION</title>
    <link>http://localhost:8081/jspui/handle/123456789/20451</link>
    <description>Title: A FRAMEWORK FOR METAHEURISTIC BASED ALGORITHMS FOR TEAM FORMATION
Authors: Tukaram, Shingade Sandip
Abstract: Effective collaboration in networks is crucial for successful team formation. The team formation problem involves selecting a subset of agents, referred to as a team, from a larger pool, ensuring the team meets certain desirable properties. This research focuses on selecting agents with the necessary skills, previous communication, and shared abilities, thereby minimizing communication costs. A practical application of the proposed approach is in team formation for IT projects and other team selection scenarios. In this study, we use real-world datasets, ACM, Academia Stack Exchange, DBLP, and the Players_20 football team dataset, to evaluate our methods.
We suggest a single-objective heuristic approach based on the Grey Wolf Optimizer (GWO) with a modified swap operation to improve upon previous team formation work. This method effectively minimizes communication costs while selecting agents with the required skills. Experimental results show that the Improved GWO significantly outperforms traditional methods in terms of both performance metrics and communication cost reduction. Building on this, we propose a hybrid metaheuristic approach that combines Particle Swarm Optimization (PSO) and the Jaya algorithm with a modified swap operator (PSO-Jaya).
The third approach focuses on improving algorithm efficiency by integrating state space reduction techniques into the metaheuristic framework to address the increasing complexity and computational demands of the previous methods. The Employee Bee Algorithm (EBA) is enhanced with state space reduction, speeding up the computation while maintaining or improving result quality (IEB).
Lastly, we consider a multi-objective optimization context for team formation. For this, we compare several metaheuristic approaches, including NSGA-II, NSGA-II with Simulated Annealing (NSGA-II-SA), NSGA-II with PSO (NSGA-II-PSO), and our approach, Differential Evolution-based NSGA-II (NSGA-II-DE).</description>
    <dc:date>2024-09-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://localhost:8081/jspui/handle/123456789/20447">
    <title>INFORMATION RETRIEVAL AND LOCALIZATION IN DOCUMENT IMAGE</title>
    <link>http://localhost:8081/jspui/handle/123456789/20447</link>
    <description>Title: INFORMATION RETRIEVAL AND LOCALIZATION IN DOCUMENT IMAGE
Authors: Ali, Tofik
Abstract: Information retrieval and localization in document images involve extracting, identifying, and making accessible the relevant information within digital or digitized visual representations of traditional paper-based documents. This process is crucial for managing a vast and diverse range of documents encountered in various sectors such as legal, medical, academic, and corporate environments. Document images, which preserve the content, format, and sometimes the texture of the original documents, play a significant role in maintaining the integrity and authenticity of information. Effective retrieval and localization of information from these images require sophisticated techniques in image processing, machine learning, and deep learning to address challenges such as varying image quality, diverse document formats, and the need for accurate and efficient text recognition and interpretation. The goal is to transform static document images into dynamic, actionable data sources, enhancing their utility and accessibility in real-world applications.
The digital transformation has revolutionized how information is stored, accessed, and managed across various sectors. Digitized documents offer numerous advantages over their physical counterparts, including ease of access, improved storage efficiency, and enhanced security. However, the real challenge lies in making this digitized information accessible and intelligible to users. Advanced technologies are required to bridge the gap between digitized information and its practical utility, necessitating the development of robust models and algorithms for efficient processing and interpretation.
This research addresses the challenges inherent in document image analysis, such as variability in image quality, diverse document formats, and complex layouts. It aims to develop advanced computational models for document image analysis to improve the accuracy and efficiency of character recognition, text segmentation, and image understanding. The study focuses on employing multi-task pre-training strategies to enhance the accuracy and efficiency of these technologies. The research methodology involves breaking down the problem into manageable components and systematically addressing each challenge using convolutional neural networks (CNNs), advanced text segmentation and recognition algorithms, and image understanding techniques.
Key contributions of this research include the development of high-accuracy character recognition systems, particularly for handwritten scripts, leveraging advanced CNNs; the introduction of the Gated Multiscale Input Feature Fusion (GMIF) scheme for scale-invariant text detection; the development of Fast&amp;Focused-Net (FFN) for small object feature encoding using the Volume-wise Dot Product (VDP) layer; and the introduction of a multi-task pre-training approach that combines text, image, and layout information to enhance document information analysis.
The proposed models and techniques have been evaluated on various datasets, demonstrating significant improvements in the accuracy and efficiency of document image analysis tasks. The real-world applications of these advanced technologies are vast and varied, spanning academic institutions, corporate environments, legal industries, and the medical field. This research contributes to transforming static document images into dynamic, actionable data sources, supporting automated workflows, facilitating decision-making, and promoting knowledge discovery.
Keywords: Document Image Analysis, Information Retrieval, Text Localization, Machine Learning, Deep Learning, Convolutional Neural Networks (CNNs), Multi-Task Pre-Training, Image Processing, Text Segmentation, Character Recognition, Gated Multiscale Input Feature Fusion (GMIF), Fast&amp;Focused-Net (FFN), Volume-wise Dot Product (VDP) Layer, Entity Recognition, Relationship Extraction, Layout Analysis.</description>
    <dc:date>2024-07-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://localhost:8081/jspui/handle/123456789/20331">
    <title>SIZE AND PRODUCTIVITY METRICS BASED EFFORT ESTIMATION FOR SOFTWARE DEVELOPMENT</title>
    <link>http://localhost:8081/jspui/handle/123456789/20331</link>
    <description>Title: SIZE AND PRODUCTIVITY METRICS BASED EFFORT ESTIMATION FOR SOFTWARE DEVELOPMENT
Authors: Shukla, Suyash
Abstract: Attaining accuracy in effort estimates can yield notable benefits in project planning and is crucial in facilitating efficient project management, frequently exhibiting a substantial correlation with the success of a software project. The research community has recently become increasingly interested in applying machine learning (ML) techniques for software effort estimation (SEE). Several researchers have employed different ML techniques for successful SEE over different software datasets. However, selecting the appropriate generalized model for a problem has proven challenging.
Obtaining optimal outcomes in SEE by utilizing individual models poses significant challenges. Hence, researchers have proposed an alternate approach wherein multiple models (an ensemble) are employed collectively for SEE prediction. The ensemble models utilized in previous studies have treated the base learner's hyperparameter tuning and weight assignment as separate entities, which may result in a trade-off between bias and variance, impacting the ensemble performance. So, we have proposed a SEE model based on a self-adaptive ensembling approach, integrating hyperparameter tuning and model weighting while considering the bias and variance trade-off in decision-making. Also, the SEE datasets contain heterogeneous projects with highly distributed effort values, which may further degrade the prediction model's performance. So, we have developed a locality-based variant of self-adaptive ensembling to deal with the issues related to data heterogeneity.
Accurate SEE is crucial for successfully implementing software projects, and software size plays a major role in it. However, previous methods for estimating effort were founded on metrics such as software lines of code or function points to estimate the size. The increasing need for additional functionalities and the incorporation of new features, such as software reuse, distributed systems, and iterative development, has necessitated the creation of new methodologies for estimating software size and effort. Additionally, previous metrics for software size and approaches for estimating software effort lack automation. They do not utilize Unified Modelling Language (UML) artifacts to reveal software features pertinent to software size.
The UML diagrams automatically capture attributes pertinent to the computation of software size. The automation of extracting software size attributes from UML diagrams offers a more efficient approach to calculating software size and estimating software development effort. The Use Case Point (UCP) approach, established by Karner, is a widely recognized and significant early-stage SEE strategy founded on the fundamental elements of use case diagrams (actors and use cases). The utilization of UCP-based approaches is highly appropriate for this particular demand due to its advantageous alignment with two prominent industry practices: (1) the object-oriented (OO) development paradigm and (2) the utilization of use case modeling.
Many researchers have conducted investigations utilizing several linear regression-based models to estimate UCP. While the error estimates derived from existing models demonstrate improvement compared to traditional models, they cannot effectively handle nonlinear interactions within the UCP datasets. So, we have developed UCP-based models to estimate software effort by utilizing different solo and ensemble models to handle nonlinear relationships in the SEE datasets. Also, the UCP approach consists of size estimation (in UCP) and effort estimation with the calculated size. The productivity of a project is one of the main components for estimating effort from the UCP. However, productivity prediction is not explored in the UCP literature. So, we have also proposed a model for productivity prediction based on environmental factors.</description>
    <dc:date>2024-04-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://localhost:8081/jspui/handle/123456789/20308">
    <title>DATA AGGREGATION RELIABLE STORAGE AND DECISION SUPPORT SYSTEM IN SMART AGRICULTURE</title>
    <link>http://localhost:8081/jspui/handle/123456789/20308</link>
    <description>Title: DATA AGGREGATION RELIABLE STORAGE AND DECISION SUPPORT SYSTEM IN SMART AGRICULTURE
Authors: Chaudhary, Ajay</description>
    <dc:date>2024-07-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

