dc.description.abstract |
Mobile computing, in contrast to traditional computing paradigms, enables clients to move freely while maintaining network connections. Because of this mobility, location identification becomes a critical attribute, since it determines where a mobile user currently is. The ability to pinpoint a mobile user's location, owing to advances in global positioning technologies such as the Global Positioning System (GPS) together with advances in wireless technology, has motivated the emergence of a new class of mobile services commonly referred to as Location-Dependent Information Services (LDISs). Promising applications include travel and tourist information systems, assistance and emergency systems, nearest-object search systems and local information access systems, to name a few. Users of LDISs face many new challenges inherent to the mobile environment, including limited bandwidth, intermittent connectivity, limited storage, slow CPU speed, low battery power and small user interfaces. Data management in this paradigm also poses new and challenging problems. Thus, sophisticated data management and resource management techniques are needed to enhance the performance of data access in LDISs. The work presented in this thesis addresses these issues by proposing new and efficient cache management schemes for location-dependent data (LDD) in a mobile environment. We use a geometric-model-based location identification technique for mobile clients.
The first part of the thesis addresses cache invalidation issues. A cache invalidation scheme maintains data consistency between the client's cache and the server. Unlike common data, an LDD item usually has multiple values, termed the data instances of that item. Each instance is valid only within a specific region, termed the Valid Scope (VS) of that data instance. To maintain consistency of cached LDD, the LDIS stores the valid scope of a data item along with its value in the client's cache. On the server, the valid scope of a data item is represented and stored as a convex polygon. Downloading the valid scope along with the data consumes substantial bandwidth, and storing all vertices of the polygon in the client's cache imposes a large overhead, so a subset of the valid scope that approximates the original is stored instead. However, storing only a subset reduces the precision of validity: the mobile client may appear to be outside the cached scope even when it is actually within the original valid scope. Therefore, the problem of selecting the best subset (candidate) of the valid
scope, one that balances precision against overhead cost, becomes crucial. We present a Generalized Caching Efficiency Based (CEB_G) algorithm, which selects the most suitable candidate valid scope so as to increase caching efficiency, and compare its performance with the existing Caching-Efficiency-Based (CEB) algorithm. We then introduce a new metric called Future Access (FA), which takes into account the future movement behavior of the client, and based on it propose the Caching Efficiency with Future Access Based (CEFAB) algorithm, which selects the most suitable candidate valid scope using FA. We further propose a generalized CEFAB algorithm (CEFAB_G). A number of simulation experiments are conducted to evaluate the performance of the proposed cache invalidation schemes. The results show that CEB_G, CEFAB, and CEFAB_G outperform CEB under different system settings. Among the proposed algorithms, CEFAB_G gives the best performance, but the computational overhead at the server for CEFAB_G and CEB_G is higher than for CEFAB. Moreover, in CEFAB and CEFAB_G the client has to send additional information to the server, which requires extra computation at the client's end compared to CEB_G. Thus, for resource-constrained clients CEB_G is preferred; depending on the resources available at the server, a choice can be made between CEFAB and CEFAB_G.
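To make the precision-versus-overhead trade-off concrete, the sketch below (in Python, purely illustrative) scores each candidate sub-polygon by the fraction of the original valid scope it retains per byte needed to cache it, and keeps the highest-scoring candidate. The byte costs, the scoring ratio and the helper names are assumptions made for exposition; they are not the exact CEB, CEB_G or CEFAB formulations developed in the thesis.

    # Illustrative sketch: choose the candidate valid-scope polygon that preserves
    # the most area per byte of storage. The cost model (bytes per vertex, item
    # size) and the scoring ratio are assumptions, not the thesis's exact metric.
    import math

    def polygon_area(vertices):
        """Shoelace formula for the area of a simple polygon given as (x, y) tuples."""
        area = 0.0
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    def caching_efficiency(candidate, full_scope, item_size_bytes, bytes_per_vertex=16):
        """Fraction of the original valid scope retained, per byte cached (assumed metric)."""
        retained = polygon_area(candidate) / polygon_area(full_scope)
        overhead = item_size_bytes + bytes_per_vertex * len(candidate)
        return retained / overhead

    def best_candidate(candidates, full_scope, item_size_bytes):
        """Pick the candidate sub-polygon with the highest efficiency score."""
        return max(candidates, key=lambda c: caching_efficiency(c, full_scope, item_size_bytes))

    # Example: a regular octagonal valid scope and two candidates.
    full_scope = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4)) for k in range(8)]
    candidates = [
        full_scope,        # keep every vertex of the original polygon
        full_scope[::2],   # inscribed square: drop every other vertex
    ]
    print(best_candidate(candidates, full_scope, item_size_bytes=64))

Under this assumed cost model the four-vertex candidate wins: it gives up some coverage but saves enough storage per item to come out ahead, which is exactly the kind of balance the candidate-selection algorithms are designed to strike.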
The second part of the thesis presents new cache replacement algorithms for location-dependent information services. Because of the limited cache size, it is impossible to hold all accessed data items in the cache. As a result, cache replacement algorithms are used to find a suitable subset of data items for eviction when the cache is full and a new data item is to be inserted. Several location-dependent cache replacement policies have been proposed for LDISs, but none of them is suitable when the client changes its direction of movement frequently. The impact of the client's anticipated location or region on cache replacement decisions remains unexplored: existing policies consider only the actual data distance (directional or undirectional), not the distance based on the predicted region where the client may be in the near future. When the client's movement pattern is random, retaining data items in the direction of movement and discarding those in the opposite direction does not always improve performance. Our cache replacement policies therefore consider the predicted region of the user's presence in the near future, rather than only the direction of movement, when selecting a data item for replacement. The predicted region is derived from the client's current movement pattern. We propose cache replacement algorithms based on this predicted region.
These algorithms predict an area in the vicinity of the client's current position and give priority to the cached data items that belong to this area, irrespective of the client's movement direction. Based on the predicted region, we propose the Predicted Region based Cache Replacement Policy (PRRP), the Prioritized Predicted Region based Cache Replacement Policy (PPRRP) and the Weighted Predicted Region based Cache Replacement Policy (WPRRP). In PRRP, data distance is calculated such that data items within the predicted region are given higher priority than those outside it. In PPRRP, in addition to giving the highest priority to data items within the predicted region, items nearer to the client's current position are favored over other items in the same predicted region. WPRRP divides the whole area into different sub-regions (in-direction, out-direction, predicted and non-predicted) and associates a different weight with each of them; by changing these weights the scheme can adapt itself to any situation. We compare our cache replacement policies with existing policies for LDIS such as Probability Area Inverse Distance (PAID), Furthest Away Replacement (FAR), and Manhattan. A number of simulation experiments have been conducted to evaluate the performance of the proposed cache replacement schemes. The results show that, under different system settings, the proposed algorithms give much better performance than the other policies.
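The sketch below illustrates, in simplified Python, the idea behind predicted-region replacement: a region is extrapolated from the client's current position and velocity, cached items falling inside that region are protected regardless of movement direction, and the lowest-priority item is evicted. The circular region model, the weights and all identifiers are assumptions for illustration, not the actual PRRP, PPRRP or WPRRP cost functions.

    # Illustrative sketch of predicted-region-based eviction (assumed model).
    import math
    from dataclasses import dataclass

    @dataclass
    class CachedItem:
        item_id: str
        x: float   # location the cached data value refers to
        y: float

    def predicted_region(pos, velocity, horizon=60.0):
        """Assumed model: a circle centred on the extrapolated position after `horizon` seconds."""
        cx = pos[0] + velocity[0] * horizon
        cy = pos[1] + velocity[1] * horizon
        radius = max(1.0, horizon * math.hypot(*velocity) * 0.5)
        return (cx, cy), radius

    def eviction_victim(cache, pos, velocity, region_weight=10.0):
        """Evict the lowest-scoring item: items inside the predicted region score higher,
        and nearer items score higher, regardless of the movement direction."""
        (cx, cy), radius = predicted_region(pos, velocity)
        def score(item):
            dist = math.hypot(item.x - pos[0], item.y - pos[1])
            inside = math.hypot(item.x - cx, item.y - cy) <= radius
            return (region_weight if inside else 1.0) / (1.0 + dist)
        return min(cache, key=score)

    cache = [CachedItem("hotel", 5, 5), CachedItem("fuel", -8, 0), CachedItem("atm", 60, 58)]
    print(eviction_victim(cache, pos=(0, 0), velocity=(1.0, 1.0)).item_id)

In this toy run the nearby item behind the client ("fuel") is evicted, while the distant item that lies inside the predicted region ("atm") is retained, which is the behavior a direction-only policy would not capture.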
The third part of the thesis considers the effect of the recency and frequency of data item accesses on location-dependent cache replacement. We propose a cache replacement policy, CRF Area and Inverse Distance Size (CAIDS), which uses the Combined Recency and Frequency (CRF) value, valid scope area, data distance and data size of a data item to select a victim for replacement. Simulation results show that CAIDS has an edge over existing policies. Earlier cache replacement policies for LDIS consider only the recency of data items; CAIDS, in contrast, considers both recency and frequency when deciding which item to evict when the cache is full and a new data item is to be inserted.
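A minimal Python sketch of the recency-plus-frequency idea follows: each past access contributes a weight that decays with elapsed time, so both how often and how recently an item was accessed shape its CRF value, and the eviction cost then combines that value with the item's valid-scope area, distance and size. The decay function and the way the factors are combined are assumptions for illustration, not the exact CAIDS cost function.

    # Illustrative sketch of a combined recency/frequency eviction cost (assumed form).
    import math

    def crf_value(access_times, now, decay=0.001):
        """Each past access contributes a weight that decays exponentially with
        the time elapsed since that access (assumed decay model)."""
        return sum(math.exp(-decay * (now - t)) for t in access_times)

    def eviction_cost(access_times, now, scope_area, distance, size_bytes):
        """Assumed combination: frequently and recently used items with a large valid
        scope close to the client are costly to evict; large items are cheaper."""
        return crf_value(access_times, now) * scope_area / ((1.0 + distance) * size_bytes)

    # The cached item with the lowest cost would be chosen as the replacement victim.
    print(eviction_cost(access_times=[10, 950, 990], now=1000,
                        scope_area=2500.0, distance=120.0, size_bytes=4096))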
Lastly, the contributions of the thesis in the area of cache management for LDIS are summarized and the scope for future work is outlined. |
en_US |