<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Year-2015</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/276" rel="alternate"/>
<subtitle/>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/276</id>
<updated>2026-04-11T00:24:04Z</updated>
<dc:date>2026-04-11T00:24:04Z</dc:date>
<entry>
<title>Economic incentive-based schemes for improving data availability in mobile-p2p environments</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/282" rel="alternate"/>
<author>
<name>Padhariya, Nilesh</name>
</author>
<author>
<name>Mondal, Anirban (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/282</id>
<updated>2017-07-24T17:16:31Z</updated>
<published>2015-09-02T10:24:24Z</published>
<summary type="text">Economic incentive-based schemes for improving data availability in mobile-p2p environments
Padhariya, Nilesh; Mondal, Anirban (Advisor)
In a Mobile ad hoc Peer-to-Peer (M-P2P) network, mobile peers (MPs) interact with each other in a peer-to-peer (P2P) fashion. The proliferation of mobile devices (e.g., laptops, PDAs, mobile phones), coupled with the ever-increasing popularity of the P2P paradigm (e.g., KaZaa, Gnutella), strongly motivates M-P2P network applications. However, challenges such as free-riding, data accessibility and mobile resource constraints (e.g., energy) need to be addressed for realizing M-P2P applications. In particular, economic incentive schemes become a necessity to entice mobile peers to share their data, given the generally limited resources of mobile devices. Furthermore, in M-P2P networks, data availability is typically low due to rampant free-riding, frequent network partitioning and mobile resource constraints. Hence, this dissertation proposes economic incentive-based schemes for effective data management in M-P2P networks.&#13;
In particular, this dissertation makes the following key research contributions. First, we propose the E-Top economic incentive-based top-k query processing system for M-P2P environments. The system assigns rewards/penalties (payoffs) to MPs for incentivizing their participation and for enabling them to re-evaluate their data item scores for top-k query processing. Furthermore, we extend the system to incorporate the notion of a peer group-based economic incentive scheme. Second, we propose the E-Broker system for improving data availability in M-P2P networks by incentivizing broker MPs to provide a value-added routing service, which includes pro-active search for query results by maintaining an index of the data items (and replicas) stored at other MPs (as opposed to just forwarding queries). Moreover, the system also incentivizes relay peers to act as information brokers for improving data availability and efficient load sharing. Third, we propose the E-VeT system for efficiently managing vehicular traffic in road networks using an economy-based reward/penalty framework with traffic congestion control. In particular, a user is rewarded for following system-assigned paths and penalized for any deviation from them. Finally, we present an economic incentive system for improving rare data availability by means of licensing-based and group-based replication in M-P2P networks.&#13;
We conducted extensive performance evaluations of the aforementioned systems. The results demonstrate significant improvements in the processing of top-k queries in terms of query response times and accuracy at reasonable communication traffic cost, as compared to existing schemes. We also determine the number of brokers beyond which the mobile peers are better off without a broker-based architecture, i.e., they can directly access data from the data-providing peers. Furthermore, our performance study for E-VeT shows that it is indeed effective in managing vehicular traffic in road networks by reducing the average arrival time and fuel consumption. Finally, our results also indicate considerable improvements in query response times and in the availability of rare data items in M-P2P networks.
</summary>
<dc:date>2015-09-02T10:24:24Z</dc:date>
</entry>
<entry>
<title>Harnessing auxiliary information: new methods to improve person identification</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/281" rel="alternate"/>
<author>
<name>Bharadwaj, Samarth</name>
</author>
<author>
<name>Vatsa, Mayank (Advisor)</name>
</author>
<author>
<name>Singh, Richa (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/281</id>
<updated>2017-07-24T17:15:39Z</updated>
<published>2015-09-02T03:53:21Z</published>
<summary type="text">Harnessing auxiliary information: new methods to improve person identification
Bharadwaj, Samarth; Vatsa, Mayank (Advisor); Singh, Richa (Advisor)
Large scale biometric identification systems still lack the versatility to handle challenging situations such as adverse imaging conditions, missing or corrupt data, and non-conventional operating scenarios. It is well understood that in different operating conditions, evidence of identity obtained from different sources is disparate. In such cases, additional ‘situational’ cues can be utilized to improve the performance and robustness. The primary emphasis of this&#13;
thesis is the formulation of new methods to utilize situational cues such as&#13;
quality of input biometric samples, social cues of co-occurrence, and other background information towards more inclusive biometric systems.&#13;
Biometric sample quality assessment during capture and its integration into&#13;
the recognition system improves performance and reduces failure-to-enroll rates. The first contribution of this thesis is an in-depth survey along with a statistical evaluation of different concepts and interpretations of biometric quality in multiple biometric modalities. The thesis also investigates the effectiveness of holistic representations of faces for classifying them into different quality categories that are derived from matching performance. The experiments on the CAS-PEAL and SCFace databases, containing covariates such as illumination, expression, pose, low resolution, and occlusion, suggest that the representations can efficiently classify input face images into relevant quality categories and be utilized in face recognition systems. An assessment-based quality enhancement framework is also presented that showcases the effectiveness of quality assessment metrics for parameter selection in a denoising method to enhance performance and reduce computational time.&#13;
Multi-modal biometric recognition systems combine evidence from multiple&#13;
sources of information for improving the recognition performance. Existing&#13;
multi-modal biometric recognition techniques are, however, unable to provide&#13;
required levels of accuracy in uncontrolled noisy capture environments. Such algorithms do not adequately scale to variations in data distribution that occur&#13;
due to changing deployment conditions. The second contribution of this thesis is an adaptive context switching algorithm coupled with online learning to address both these challenges of multimodal biometrics. The proposed framework uses the quality of input images to dynamically select the best biometric matcher or fusion algorithm to verify the identity of an individual. The proposed algorithm continuously updates the selection process using online learning to address scalability and accommodate variations in data distribution. The results on the WVU multimodal database and a large&#13;
real-world multimodal database obtained from a law enforcement agency show the efficacy of the proposed framework.&#13;
Humans are efficient at recognizing familiar faces even in challenging conditions by deducing social context between individuals in group photos. The&#13;
identity of the person in a photo, in such cases, is inferred based on other individuals present in the same photo, using the known or deduced social context&#13;
between them. The third contribution of the thesis is a novel algorithm to&#13;
utilize co-occurrence of individuals as the social context to improve face recognition.&#13;
Association rule mining is utilized to infer multi-level social context&#13;
among subjects from a large repository of social transactions. The results are&#13;
demonstrated on the G-album and on the real-world SN-collection pertaining&#13;
to 4,675 identities that was prepared for this research from a social&#13;
networking website. An anonymized version of the dataset with match scores from a commercial system is also made available. The results of the proposed approach show that association rules extracted from social context can be used to augment face recognition and improve the identification performance.&#13;
The availability of a large number of unlabelled images from various sources&#13;
facilitates semi-supervised approaches to improve the performance and robustness&#13;
of recognition systems. As the fourth contribution, this thesis introduces&#13;
a novel learning-based approach to face recognition towards an affordable and&#13;
friendly biometric for newborns. Biometric recognition of newborns is an opportunity&#13;
for the realization of several useful applications such as improved security against swapping and abduction, accurate census, and effective drug delivery. The proposed approach couples a learning-based encoding method via deep neural networks with a one-shot similarity distance metric formulated with an online SVM to match effective features with a low semantic gap. To evaluate the approach, the largest publicly available database of 96 newborns&#13;
is collected from various hospitals to study face recognition and is also made&#13;
available to other researchers. Several existing face recognition approaches&#13;
and commercial systems are also evaluated on a common benchmark protocol.&#13;
The proposed approach provides state-of-the-art identification and verification&#13;
performance on the newborns database.
</summary>
<dc:date>2015-09-02T03:53:21Z</dc:date>
</entry>
<entry>
<title>Designing and evaluating techniques to mitigate misinformation spread on microblogging web services</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/277" rel="alternate"/>
<author>
<name>Gupta, Aditi</name>
</author>
<author>
<name>Kumaraguru, Ponnurangam (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/277</id>
<updated>2017-07-24T17:15:25Z</updated>
<published>2015-07-08T10:13:17Z</published>
<summary type="text">Designing and evaluating techniques to mitigate misinformation spread on microblogging web services
Gupta, Aditi; Kumaraguru, Ponnurangam (Advisor)
Online social media is a powerful platform for dissemination of information during important&#13;
real-world events. Beyond the challenges of volume, variety and velocity of content generated on online&#13;
social media, veracity poses a much greater challenge for effective utilization of this content by&#13;
citizens, organizations, and authorities. Veracity of information refers to the trustworthiness /&#13;
credibility / accuracy / completeness of the content. Over the last few years, social media has also&#13;
been used to disseminate misinformation in the form of rumors, hoaxes, fake images, and videos.&#13;
We aim to address this challenge of veracity or trustworthiness of content posted on social media.&#13;
The spread of such untrustworthy content online has caused loss of money and infrastructure, and&#13;
posed threats to human lives. We focus our work on Twitter, which is one of the most&#13;
popular microblogging web services today.&#13;
We provide an in-depth analysis of misinformation spread on Twitter during real-world events. We&#13;
propose and evaluate automated techniques to mitigate misinformation spread in real-time.&#13;
The main contributions of this work are: (i) we analyzed how true versus false content is propagated&#13;
through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source during real-world events; (ii) we showed the effectiveness of automated techniques to&#13;
detect misinformation on Twitter using a combination of content, meta-data, network, user profile,&#13;
and temporal features; (iii) we developed and deployed a novel framework for providing an indication&#13;
of trustworthiness / credibility of tweets posted during events. We evaluated the effectiveness of&#13;
this real-time system with a live deployment used by real Twitter users.&#13;
First, we analyzed Twitter data for 25+ global events from 2011-2014 for the spread of fake images,&#13;
rumors, and untrustworthy content. Some of the prominent events analyzed by us are: Mumbai&#13;
blasts (2011), England Riots (2011), Hurricane Sandy (2012), Boston Marathon Blasts (2013),&#13;
and the Polar Vortex (2014). We identified tens of thousands of tweets containing fake images, rumors, fake&#13;
websites, and posts by malicious user profiles for these events. We performed an in-depth characterization&#13;
study of how false versus true data is introduced and disseminated in the Twitter network.&#13;
Second, we showed how meta-data, network, event, and temporal features from user-generated&#13;
content can be used effectively to detect misinformation and predict its propagation during&#13;
real-world events. Third, we proposed and evaluated an automated methodology for assessing the credibility&#13;
of information in tweets using a supervised machine learning and relevance feedback approach. We&#13;
developed and deployed a real-time version, TweetCred, a system that assigns a credibility score&#13;
to tweets. TweetCred, available as a browser plug-in, has been installed and used by 1,808 real&#13;
Twitter users. During ten months of its deployment, the credibility score for about 12 million tweets&#13;
was computed, allowing us to evaluate TweetCred in terms of accuracy, performance, effectiveness&#13;
and usability.&#13;
The system TweetCred built as part of this thesis work is used effectively by emergency responders,&#13;
firefighters, journalists and general users to obtain credible content from Twitter. This thesis work&#13;
has shown that measuring the credibility of Twitter content is possible using semi-automated&#13;
techniques, and the results can be valuable to real-world users. The insights obtained from&#13;
this research and deployment provide a basis for building more sophisticated technology to tackle&#13;
similar problems on different social media.
</summary>
<dc:date>2015-07-08T10:13:17Z</dc:date>
</entry>
</feed>
