Year 2013 (http://repository.iiitd.edu.in/xmlui/handle/123456789/78)

Label constrained shortest path estimation on large graphs
http://repository.iiitd.edu.in/xmlui/handle/123456789/275 (2015-06-18)
Likhyani, Ankita; Bedathur, Srikanta (Advisor)
In applications arising in massive online social networks, biological networks, and knowledge graphs, it is often required to find the shortest path between two given nodes. Recent results have addressed the problem of efficiently computing either exact or good approximate shortest-path distances. Some of these techniques also quickly return the path corresponding to the estimated shortest-path distance.
Many real-world graphs are edge-labeled graphs, i.e., each edge has a label that denotes the relationship between the two vertices it connects. However, none of the existing techniques for estimating shortest paths work well when there are additional constraints on the labels of the edges that constitute the path.
In this work, we define the problem of retrieving the shortest path between two given nodes that also satisfies user-provided constraints on the set of edge labels involved in the path. We have developed the SkIt index structure, which supports a wide range of label constraints on paths and returns an accurate estimate of the shortest path that satisfies them. Experiments over graphs such as social networks and knowledge graphs containing millions of nodes and edges show that the SkIt index is fast, accurate in its distance estimates, and achieves high recall for paths that satisfy the constraints.
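To make the constrained variant of the problem concrete, a minimal exact baseline (not the SkIt index itself, which is an approximate index structure) can be written as a breadth-first search that only traverses edges whose labels belong to a user-supplied allowed set:

```python
from collections import deque

def constrained_shortest_path(graph, source, target, allowed_labels):
    """Exact BFS baseline: shortest path from source to target using only
    edges whose label is in allowed_labels. `graph` maps a node to a list
    of (neighbor, label) pairs. Returns the path as a list of nodes, or
    None if no constrained path exists."""
    if source == target:
        return [source]
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor, label in graph.get(node, []):
            if label in allowed_labels and neighbor not in parent:
                parent[neighbor] = node
                if neighbor == target:
                    # Reconstruct the path by walking parent pointers back.
                    path = [target]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                queue.append(neighbor)
    return None
```

This baseline is exact but visits the whole reachable subgraph in the worst case, which is precisely why an index such as SkIt is needed at the scale of millions of nodes and edges.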

MIMANSA : process mining software repositories from student projects in an undergraduate software engineering course
http://repository.iiitd.edu.in/xmlui/handle/123456789/115 (2014-01-24)
Mittal, Megha; Sureka, Ashish (Advisor)
An undergraduate-level Software Engineering course generally consists of a team-based, semester-long project and emphasizes both technical and managerial skills. Software Engineering is a practice-oriented and applied discipline, and hence there is an emphasis on hands-on development, process, and the usage of tools in addition to theory and basic concepts. We present an approach for mining process data (process mining) from software repositories archiving data generated by student teams constructing software in an educational setting. We present an application of mining three software repositories: the team wiki (used during requirements engineering), the version control system (development and maintenance), and the issue tracking system (corrective and adaptive maintenance) in the context of an undergraduate Software Engineering course. We propose visualizations, metrics, and algorithms to provide insight into the practices and procedures followed during the various phases of a software development life-cycle. The proposed visualizations and metrics (learning analytics) provide a multi-faceted view to the instructor, serving as a feedback tool on the development process and on the quality of students' work. We mine the event logs produced by the software repositories and derive insights such as the degree of individual contributions in a team, the quality of commit messages, the intensity and consistency of commit activities, bug-fixing process trend and quality, component and developer entropy, and process compliance and verification. We present our empirical analysis on a software repository dataset consisting of 19 teams of 5 members each and discuss challenges, limitations, and recommendations.
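One of the metrics listed above, developer entropy, can be sketched as the Shannon entropy of the distribution of commits across team members. This is a plausible reading of the metric, not necessarily the exact formulation used in MIMANSA:

```python
import math

def developer_entropy(commit_counts):
    """Shannon entropy (in bits) of the distribution of commits across
    team members. Higher values indicate more evenly spread contributions;
    0 means a single member made every commit.
    `commit_counts` is a list of per-member commit totals."""
    total = sum(commit_counts)
    entropy = 0.0
    for count in commit_counts:
        if count > 0:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy
```

For a 5-member team, a perfectly even split yields log2(5) ≈ 2.32 bits, while a team where one member makes all commits yields 0, so the instructor can read the score directly as a balance-of-contribution signal.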

OCEAN: open-source collation of eGovernment data and networks
http://repository.iiitd.edu.in/xmlui/handle/123456789/113 (2013-12-03)
Gupta, Srishti; Kumaraguru, Ponnurangam (Advisor)
The awareness and sense of privacy has increased in the minds of people over the past few years. Earlier, people were not very restrictive in sharing their personal information, but now they are more cautious in sharing it with strangers, either in person or online. Given such privacy expectations and attitudes, it is difficult to embrace the fact that a lot of information is publicly available on the web. Information portals in the form of e-governance websites run by the Delhi Government in India provide access to such PII without any anonymization. Several databases, e.g., voter rolls, driving licence numbers, the MTNL phone directory, and PAN cards, serve as repositories of personal information of Delhi residents. This large amount of available personal information can be exploited due to the absence of a proper written law on privacy in India. PII can also be collected from various social networking sites such as Facebook, Twitter, and GooglePlus, where users share some information about themselves. Since users themselves post this information, it may not be considered a privacy breach, but if the information is aggregated, it may give out much more information, resulting in a bigger threat. For example, data from social networks and open government databases can be combined to connect an online identity to a real-world identity. Even though awareness about privacy has increased, the threats possible due to the availability of this large amount of personal data are still unknown. To bring such issues to public notice, we developed Open-source Collation of eGovernment data And Networks (OCEAN), a system where the user enters a little information (e.g., a name) about a person and gets a large amount of personal information about him or her, such as name, age, address, date of birth, mother's name, father's name, voter ID, driving licence number, and PAN. By aggregating information within the voter ID database, OCEAN also creates a family tree of the person, giving out the details of his or her family members as well. We also calculate a privacy score, which quantifies the risk associated with an individual in terms of how much of that person's PII is revealed from open government data sources. 1,693 users had the highest privacy score, making them the most vulnerable to risks. Using OCEAN, we could collect 8,195,053 voter-roll records; 224,982 driving licence records; 53,419 PAN card numbers; and 1,557,715 Twitter, 3,377,102 Facebook, 29,393 Foursquare, 186,798 LinkedIn, and 28,900 GooglePlus records. There exist several websites, such as Yasni, PeekYou, and Pipl, which help in searching for a person on the Internet, but they are not focused on people living in Delhi. We performed a user evaluation of OCEAN in a survey study to evaluate its usability, effectiveness, and impact, and showed that users like it and find it convenient to use in the real world. We received 661 total hits (657 unique visitors) from the day we released the system, January 21, 2013, until October 10, 2013. To the best of our knowledge, this is the first real-world deployed tool which provides personal information about residents of Delhi to everyone free of cost.

Geographical visualization approach to perceive spatial scan statistics : an analysis of dengue fever outbreaks in Delhi
http://repository.iiitd.edu.in/xmlui/handle/123456789/112 (2013-11-25)
Mala, Shuchi; Sengupta, Raja (Advisor)
In India, there is a strong need for a nation-wide disease surveillance system. As of now, there are very few surveillance systems in India to detect disease outbreaks. IDSP (Integrated Disease Surveillance Project) was launched by the Government of India with the assistance of the World Bank to detect and respond to disease outbreaks quickly. Still, efforts are needed to strengthen the disease surveillance and response system for early detection of disease outbreaks. The strongest pillar of an accurate disease surveillance system is data related to cases and the various risk factors. After data collection, the next important step is the transformation of the collected data into meaningful information. Precise statistical methods are then required to analyse the information at hand. Disease outbreaks are detected using statistical analysis tools, but for effective disease control a visualization approach is required; without appropriate visualization it is very difficult to interpret the results of the analysis. In the work presented here, a statistical analysis is performed to detect space-time disease clusters, and the developed visualization approach is then used to visualize the disease outbreaks. The SaTScan software is integrated with the visualization approach to detect the locations of disease clusters and to test whether the detected clusters are statistically significant. Without the developed visualization approach, users would have to run the SaTScan software for each disease per data source. Hence, the presented work provides an efficient and accurate technique for early detection of disease outbreaks in the region covered by the surveillance system.
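SaTScan's cluster test is based on Kulldorff's scan statistic. For a single candidate cluster under the Poisson model, the log-likelihood ratio it evaluates can be sketched as below; the full software additionally scans over many candidate windows and assesses significance by Monte Carlo simulation, which this sketch omits:

```python
import math

def poisson_llr(cases_in, expected_in, total_cases):
    """Log-likelihood ratio of the Kulldorff Poisson scan statistic for a
    single candidate cluster: `cases_in` observed and `expected_in`
    expected cases inside the cluster, out of `total_cases` overall.
    Returns 0.0 when the cluster shows no excess risk (observed <= expected)."""
    c, e, total = cases_in, expected_in, total_cases
    if c <= e:
        return 0.0
    # LLR = c*ln(c/e) + (C-c)*ln((C-c)/(C-e)), evaluated only for excess risk.
    llr = c * math.log(c / e)
    if total > c:
        llr += (total - c) * math.log((total - c) / (total - e))
    return llr
```

A candidate window with 20 observed versus 10 expected cases out of 100 scores a clearly positive ratio, while a window whose observed count matches its expectation scores 0, which is what makes the statistic usable for ranking potential outbreak clusters before significance testing.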