
Information Retrieval on Content-Based Searching Using the Hidden Markov Model

 

Khyati Sethi


Kanika Sharma

Jayant Gulati

Parul Yadav

Student,
Department of Information Technology, Bharati Vidyapeeth College of
Engineering, New Delhi, India

Student,
Department of Information Technology, Bharati Vidyapeeth College of
Engineering, New Delhi, India

Student,
Department of Information Technology, Bharati Vidyapeeth College of
Engineering, New Delhi, India

Assistant
professor, Department of Information Technology, Bharati Vidyapeeth College
of Engineering, New Delhi, India

Email Id: [email protected]

Email Id: [email protected]

Email Id: [email protected]

Email Id: [email protected]

Abstract: Data Analysis is used to model data with the aim of discovering useful information. Information is retrieved, and the Hidden Markov Model is incorporated to identify the relevance of a document. Relevance, evaluation, and information needs are the key issues associated with the analysis of data and the retrieval of information. Relevance is the relational value, within a dataset, of the input a user gives in the form of a query. This relational value of a document is normally based on a document ranking algorithm. Such algorithms explicitly define how applicable a document is to a user's query by defining and using functions that relate the query provided to the documents indexed. A data access mechanism is needed that works in a manner convenient to, and appreciated by, the user: retrieving a large amount of information can be inconvenient in certain systems, while in other systems failing to return all relevant information may be unacceptable. After ascertaining the relevance of the recovered data using the Hidden Markov Model, we employ precision and recall to estimate and analyse the model.

Keywords: Hidden Markov Model, information retrieval, relevance, precision, recall

I. INTRODUCTION

Data Analysis is the process of inspecting, transforming, and shaping data with the goal of discovering useful information and drawing conclusions; it supports decision making. Information retrieval is carried out by various IR methods, and the retrieved data is then analysed.

An IR system usually evaluates relevance by comparing some representation of each document with a representation of the query. There are various models for representing documents and queries, and each model has its pros and cons.

Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names in different business, science, and social science domains. Data analysis is closely associated with the visualization and dissemination of data, and the term is often used interchangeably with data modelling.

Information retrieval is the task of extracting, from a collection of information resources, those resources relevant to an information need. Searches can usually be performed on metadata or on full-text (or other content-based) indexing.

Hidden Markov models have been successfully designed and implemented over the last two decades for a wide variety of speech- and language-related recognition problems, including speech recognition, named entity finding, optical character recognition, topic identification, and more [1]. In the present work, we describe an application of this technology to the problem of ad hoc IR [2]. In all HMM applications, the observed data is modelled as the output produced by passing an unknown key through a noisy channel. In the ad hoc retrieval problem, the observed data is the query, and the unknown key is a desired relevant document. For each document we can thus compute the probability that it was the relevant document in the user's mind, given the query, and we rank the documents by this measure.

II. EXISTING MODEL

Data mining is a distinct data analysis technique that concentrates not on purely descriptive purposes but on modelling and knowledge discovery for predictive purposes. Data analysis that relies heavily on aggregation and aims at business information comes under business intelligence. Customer data and IT tools form the foundation on which a successful CRM strategy is built. Moreover, the rapid expansion of the web and related technologies has substantially increased the number of marketing opportunities and altered the way relationships between companies and their customers are managed [3].

Predictive analytics focuses on applying statistical models for estimation or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual resources.

Retrieving information from the web involves handling both the abstractness and the volume of the data the internet contains. Factors such as word ambiguity and a large number of typographical errors make the task harder still: it has been estimated that, on average, one word in every two hundred on a web site contains a textual error. The key pitfalls of IR are relevance, evaluation, and information needs.

This is not, however, the complete set of issues in IR; common information retrieval problems also include scalability and the frequency of page updates. Relevance is the relational value, within a dataset, of the input a user gives in the form of a query, and it is generally based on a document ranking algorithm.

The larger complications of web information retrieval, relevance and evaluation, remain significant subjects that require attention, among others.

In the Boolean model, documents and queries are collections of terms, and every term within a document is indexed; 1 and 0 denote the presence and absence of a term in a text source respectively [4], [5]. An inverted index of every term must be maintained in order to match documents against queries. The Boolean model, however, has some major limitations. Its binary decision criterion admits no notion of a grading scale; another limitation is the overloading of documents. Some researchers have worked to overcome these weaknesses by building improvements on the existing model, while others have approached data analysis with a different search strategy known as the Vector Space model [5].
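Before turning to that model, a minimal sketch of Boolean retrieval over an inverted index (the toy documents and function name are invented for illustration):

from collections import defaultdict

docs = {1: "information retrieval with markov models",
        2: "hidden markov models for speech",
        3: "boolean retrieval and indexing"}

# Build the inverted index: term -> set of document ids containing it.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

# A Boolean AND query is a set intersection over posting sets; a document
# either matches (1) or does not (0) -- there is no grading scale.
def boolean_and(*terms):
    result = inverted[terms[0]].copy()
    for t in terms[1:]:
        result &= inverted[t]
    return result

print(boolean_and("markov", "models"))   # {1, 2}
print(boolean_and("retrieval"))          # {1, 3}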

The Vector Space Model represents documents and queries internally as vectors. In this model, every query and document is represented as a vector in a |V|-dimensional space, where V, the vocabulary, is the set of all distinct terms in the document collection [5].
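A sketch of vector-space scoring using raw term-frequency vectors and cosine similarity (a simplification of our own; practical systems typically weight the vectors by tf.idf):

import math
from collections import Counter

def cosine(query, document):
    # Represent query and document as term-frequency vectors in the
    # |V|-dimensional space spanned by the vocabulary V.
    q, d = Counter(query.split()), Counter(document.split())
    vocab = set(q) | set(d)
    dot = sum(q[t] * d[t] for t in vocab)
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    norm_d = math.sqrt(sum(v * v for v in d.values()))
    return dot / (norm_q * norm_d)

print(cosine("hidden markov", "hidden markov models are hidden"))  # ~0.80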

Markov processes were first proposed by the Russian mathematician Andrei Markov. In probability theory, a Markov model is a stochastic model used to model systems that change randomly. It assumes that future states depend only on the present state, not on the sequence of events that preceded it [1], [2], [6].

There are four kinds of Markov model, used in different situations depending on how much of each sequential state is observable.

III. MODEL TO BE IMPLEMENTED

A hidden Markov model (HMM) is a statistical Markov model in which the system being modelled is assumed to be a Markov process with hidden, i.e. unobserved, states. An HMM can be regarded as the simplest dynamic Bayesian network [7].

To measure the effectiveness of ad hoc information retrieval in the standard way, we require a test collection consisting of three things:

• A collection of documents.

• A test suite of information needs, represented as a set of queries.

• A set of relevance judgements, standardly a binary assessment of relevant or irrelevant for every query-document pair.

Earlier, various researchers have used the following parameters to evaluate the performance of IR systems:

1. Precision: the fraction of relevant documents among all retrieved documents. In practice, it measures the accuracy of the judgement.

Precision = |Ra| / |A|

2. Recall: the fraction of retrieved relevant documents among all relevant documents. In practice, it measures the coverage of the result.

Recall = |Ra| / |R|

 

where

Ra: set of relevant documents retrieved

A: set of all documents retrieved

R: set of all relevant documents

 

In pattern recognition systems and in IR with binary classification, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance.
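As a small worked illustration of the two measures (the document sets below are invented for the example):

def precision_recall(retrieved, relevant):
    # Ra is the set of relevant documents that were actually retrieved.
    ra = retrieved & relevant
    precision = len(ra) / len(retrieved)   # |Ra| / |A|
    recall = len(ra) / len(relevant)       # |Ra| / |R|
    return precision, recall

retrieved = {"d1", "d2", "d3", "d4"}   # A: all retrieved documents
relevant = {"d2", "d4", "d7"}          # R: all relevant documents
print(precision_recall(retrieved, relevant))   # (0.5, 0.666...)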

A. Language used: Python

Python provides constructs that enable clear programs on both small and large scales. Its features include a dynamic type system and automatic memory management, and it has a large, comprehensive standard library.

Python's large standard library provides tools suited to numerous tasks. It includes modules for creating GUIs, connecting to relational databases, generating pseudorandom numbers, performing decimal arithmetic with arbitrary precision, and manipulating regular expressions, and it is also capable of unit testing.
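As a brief illustration of some of the standard-library modules just mentioned (this snippet is ours and not part of the system described):

import random
import re
from decimal import Decimal, getcontext

getcontext().prec = 30                  # decimal arithmetic with arbitrary precision
print(Decimal(1) / Decimal(7))          # 0.142857142857142857142857142857

print(re.findall(r"model\w*", "Markov models model systems"))  # regular expressions

random.seed(7)                          # pseudorandom number generator
print(random.random())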

B. Dataset used

The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line database of medical information available on the World Wide Web. The fields available in the database are title, MeSH indexing terms, author, source, and abstract.

The existing OHSUMED topics define real information needs. However, the relevance judgements do not have the same coverage as those produced by the TREC pooling process. The information needs are not expressed directly in MeSH, but MeSH terms govern the indexing. The topic statements are provided in the standard TREC format and include only the <title> and <desc> fields.

The relevant-document files described below simulate human judgement; they contain a 0 or 1 for every MeSH term expressed in the filtering of a given topic.

(1) OHSUMED relevance judgments (files: qrels.ohsu.*)

Each query was replicated by four searchers: two physicians experienced in searching and two medical librarians. A separate set of physicians judged the results for relevance on a three-point scale: definitely, possibly, or not relevant. All documents judged either definitely or possibly relevant are considered relevant.

(2) MeSH relevance judgments (files: qrels.mesh.*)

A document is considered relevant to a MeSH topic if the concept is included in its list of MeSH term fields.

C. WHOOSH: Python Library

Whoosh was created by Matt Chaput.

• Whoosh is fast but uses only pure Python, so it runs anywhere Python runs, without requiring a compiler.

• Whoosh uses the Okapi BM25F ranking function by default, but this can easily be changed.

• Whoosh creates fairly small indexes compared with many other search libraries.

• All indexed text in Whoosh must be Unicode.

Whoosh lets you index free-form or structured text and then quickly find documents matching either simple or complex search criteria.

Whoosh provides some predefined field types:

whoosh.fields.TEXT
Used for indexing text and storing term positions; the stored positions allow phrase searching.

whoosh.fields.ID
Indexes the entire value of the field as a single unit rather than breaking it into separate terms.

whoosh.fields.STORED
Neither indexed nor searchable; useful for displaying information to the user in the search results.

whoosh.fields.KEYWORD
Indexed and searchable; designed for comma- or space-separated words.

whoosh.fields.NUMERIC
Stores int, long, or floating-point numbers in a compact, sortable format.

whoosh.fields.BOOLEAN
Indexes boolean values and lets users search for results such as: true, false, 1, 0, t, f, yes, no.

whoosh.fields.DATETIME
Stores datetime objects in a compact, highly sortable format.

A Format object defines what kind of information a field records about each term and how that information is stored on disk. For example, the Existence format stores only the fact that a term occurs in a document, whereas the Positions format also stores the positions at which the term occurs. [The original examples of the stored postings are not reproduced here.]

The indexing code passes the Unicode string for a field to the field's Format object. The Format object calls an analyser, which breaks the string into tokens, and then encodes information about each token.

The inverted index maps terms to the documents in which they appear. It is also sometimes useful to store a term vector, a forward index that maps each document back to the terms that occur in it. [Figure: an example inverted index for a field and the corresponding forward index.]
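As a brief sketch of how these field types combine, a schema for OHSUMED-like references might be declared as follows (the field names here are illustrative assumptions, not necessarily those of the implementation described). The resulting schema object is what the index-creation call in the next subsection expects:

from whoosh.fields import Schema, TEXT, ID, STORED, KEYWORD

# docid: indexed as one unit; title/abstract: indexed text with positions,
# so phrase searching works; mesh: space/comma-separated indexing terms;
# source: stored for display only, neither indexed nor searchable.
schema = Schema(
    docid=ID(stored=True, unique=True),
    title=TEXT(stored=True),
    abstract=TEXT,
    mesh=KEYWORD(commas=True),
    source=STORED,
)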
D. Creating an Index Object

To open an existing index in a directory, use index.open_dir:

import whoosh.index as index
ix = index.open_dir("indexdir")

To create an index in a directory, use index.create_in:

import os, os.path
from whoosh import index

if not os.path.exists("indexdir"):
    os.mkdir("indexdir")
ix = index.create_in("indexdir", schema)

The schema with which the index is created is stored with the index itself. Multiple indexes can be kept in the same directory by using the indexname keyword.

Using the convenience functions:

ix = index.create_in("indexdir", schema=schema, indexname="usages")
ix = index.open_dir("indexdir", indexname="usages")

Using the Storage object:

ix = storage.create_index(schema, indexname="usages")
ix = storage.open_index(indexname="usages")

The relevance of documents under the Hidden Markov Model is compared with the tf.idf approach. Tf.idf is a numerical statistic used in vector-based models that reflects how important a word is to a document in a corpus; it is often used as a weighting factor in IR and data mining.

The tf-idf value is proportional to the frequency with which a word appears in the document, but it is offset by the frequency of the word in the corpus; this compensates for the fact that some words appear more frequently than others in general.

For the implementation, the first step is to design the schema, after which indexing is performed [5] and tf.idf values are calculated using the Whoosh library in Python. For the HMM calculation, the observed data is taken to be the query Q, and the unknown key is taken to be the desired relevant document D. The noisy channel is the mind of the user, who has some precise or rough notion of the documents he requires and transforms that notion into the query text Q. Hence we compute, for each document D, the probability that it was the relevant document in the user's mind given that Q was the query produced, i.e. P(D is R | Q), and we rank the documents by this measure [6]. This can be represented with graphical structures that encode information about an uncertain domain: each node represents a random variable, and the edges denote the probabilistic dependencies between the random variables [8].

The term "hidden" signifies that an observer can see only the output states but does not know the underlying sequence of states and transitions by which the output is generated [9].

P(q | D) is the output distribution of a document D; it is set to the sample distribution of the words appearing in that document. For any document Dk we can explicitly set

P(q | Dk) = (number of times q appears in Dk) / (length of Dk),

the distribution that has the maximum probability of producing Dk itself by repeated sampling. The output distribution of the "General English" state is estimated by

P(q | GE) = (Σk number of times q appears in Dk) / (Σk length of Dk),

where the sums are taken over all documents in the corpus. Using the parameters estimated above, the formula for P(Q | Dk is R) is

P(Q | Dk is R) = ∏ (over q in Q) [ a0 · P(q | GE) + a1 · P(q | Dk) ],

where a0 and a1 are the transition probabilities into the "General English" and document states respectively (a0 + a1 = 1).
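A minimal sketch of this two-state scoring (the function name hmm_score, the toy corpus, and the fixed transition weight a1 are our illustrative assumptions, not values from the original system):

from collections import Counter

def hmm_score(query_terms, doc_terms, corpus_counts, corpus_len, a1=0.3):
    # Two states: the document state Dk and "General English" (GE),
    # mixed with transition probabilities a1 and a0 = 1 - a1.
    doc_counts, doc_len = Counter(doc_terms), len(doc_terms)
    a0 = 1.0 - a1
    score = 1.0
    for q in query_terms:
        p_dk = doc_counts[q] / doc_len        # P(q | Dk): sample distribution of Dk
        p_ge = corpus_counts[q] / corpus_len  # P(q | GE): corpus-wide distribution
        score *= a0 * p_ge + a1 * p_dk        # accumulate P(Q | Dk is R) per term
    return score

docs = {"d1": "hidden markov models for retrieval".split(),
        "d2": "the cat sat on the mat".split()}
corpus_counts = Counter(t for terms in docs.values() for t in terms)
corpus_len = sum(len(terms) for terms in docs.values())
query = "hidden markov".split()

# Rank documents by P(Q | D is R), highest first.
print(sorted(docs, key=lambda d: hmm_score(query, docs[d], corpus_counts, corpus_len),
             reverse=True))   # ['d1', 'd2']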
IV. ADVANTAGES

1. Hidden Markov models (HMMs) provide a formal foundation for building probabilistic models of linear sequence-labelling problems. They supply a conceptual toolkit in which complex models can be built just by drawing an intuitive picture, and they are at the heart of a diverse set of programs, including gene finding, multiple sequence alignment, profile searches, and regulatory-site identification.

2. An HMM is a full probabilistic model: the overall 'scores' generated for sequences and the parameters estimated are all probabilities [6], [9]. Bayesian probability theory can therefore be used to manipulate these numbers in more powerful ways, including optimizing parameters and interpreting the significance of scores [5].

3. HMMs are useful for modelling processes whose stages occur in definite orders [9].

If, for example, you want to model the behaviour of a technical system that first boots, then operates, then enters sleep mode, and later alternates between sleep and operation, you might use three states (boot, operate, sleep) and use this process model to infer what is going on in the system at any given time. The same applies to a human biological system, where the observations can be a person's sequence of symptoms. The Human Genome Project likewise requires HMMs for DNA sequencing and RNA structure analysis [10].
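A small sketch of such a three-state process model (the transition probabilities below are invented purely for illustration):

import random

transitions = {
    "boot":    {"operate": 1.0},
    "operate": {"operate": 0.7, "sleep": 0.3},
    "sleep":   {"sleep": 0.6, "operate": 0.4},
}

def simulate(start="boot", steps=8, seed=1):
    # Markov property: the next state depends only on the current state.
    random.seed(seed)
    state, path = start, [start]
    for _ in range(steps):
        states, probs = zip(*transitions[state].items())
        state = random.choices(states, weights=probs)[0]
        path.append(state)
    return path

print(simulate())   # e.g. ['boot', 'operate', 'operate', 'sleep', ...]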
V. CHALLENGES

Complications such as scalability and the frequency of page updates are familiar IR issues. Ranking algorithms are implemented using methods that elucidate the relations between the given query and the accumulated documents. Another issue in IR is that all feedback given by the information retrieval system has to be evaluated: the system's behaviour may or may not meet the user's expectations, and not all documents returned by the procedure will be relevant to a given query.

The way a user interacts with the IR system is termed the information need. In some systems, retrieving a large amount of information can be disruptive; in others, failing to return the complete set of relevant data may be unacceptable.

In practice, handling voluminous information from the internet can be extremely difficult because of the very large number of documents a server manages.

A simple retrieval query can return thousands of documents, many only loosely related to the original retrieval criteria. To deal with this, an IR system needs efficient query management as well as the ability to give priority to the documents closest in relevance to the user's query.

VI. CONCLUSION

In simple terms, high precision means that an algorithm returns considerably more relevant than irrelevant results, while high recall means that an algorithm returns most of the relevant results.

To compare the HMM with the traditionally used model, indexing and searching were performed, searching was applied to multi-word queries, and successful results were generated.

Following this, tf.idf values were computed and their precision compared with that of the ranked HMM values.

In the analysis comparing the tf.idf model with the HMM, we find that the precision of the HMM is greater than that of the values generated by tf.idf. Thus, the HMM retrieves more relevant data than tf.idf does.

REFERENCES

[1] L. R. Rabiner, "A tutorial on Hidden Markov Models and selected applications in speech recognition," in Readings in Speech Recognition, Morgan Kaufmann Publishers Inc., San Francisco. ISBN 1-55860-124-4.

[2] P. Blunsom, "Hidden Markov Models," August 19, 2004.

[3] E. W. T. Ngai, Li Xiu, and D. C. K. Chau, "Application of data mining techniques in customer relationship management," Expert Systems with Applications: An International Journal, vol. 36, no. 2, March 2009.

[4] C. C. Aggarwal and C. Zhai, "A survey of text classification algorithms," in Mining Text Data.

[5] Y. Gupta, A. Saini, and A. K. Saxena, "A review on important aspects of information retrieval," World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, vol. 7, no. 12, 2013.

[6] M. Stamp, "A revealing introduction to Hidden Markov Models," Department of Computer Science, San Jose State University, September 28, 2012.

[7] Z. Ghahramani, "An introduction to Hidden Markov Models and Bayesian networks," in Hidden Markov Models, World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2002. ISBN 981-02-4564-5.

[8] I. Ben-Gal, "Bayesian networks," in Encyclopedia of Statistics in Quality and Reliability, John Wiley & Sons, Ltd, 2007.

[9] E. Fosler-Lussier, "Markov models and hidden Markov models: a brief tutorial," International Computer Science Institute, TR-98-041, December 1998.

[10] R. Durbin, S. R. Eddy, A. Krogh, and G. J. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press, Cambridge, UK, 1998.
