Your task is to write an information retrieval engine that will be able to index a collection of documents and, in response to a keyword query, retrieve matching documents. The information retrieval model your program will use is the vector-space model.

Describe the risks and assumptions.

What are the programming source files and their functionalities?

Objective

Retrieving information from large document collections can be difficult. Various software programs are used in the current technological world to help users search for and output results, simplifying the task of manually searching for and acquiring information. This test plan has been written to communicate the approach used during the creation of such a program. It covers the source files to be implemented, their functionalities, the objectives, the scope and the approach. In this assignment, I will clearly identify what is considered in and out of scope and the expected test deliverables.

The main objective is to develop an information retrieval engine that can index a collection of documents and retrieve matching documents in response to a keyword query. The program will retrieve information using the vector space model and will be written in Java. The must-have functionalities are treated as the top priority during the design phase of the project.

During program development, all the requirements outlined will be incorporated and tested to ensure that the required output is achieved. Any other requirements included in the program will also be tested and the results analyzed to ensure their accuracy. At the end of the development stage, the program should:

  1. Compile on the server when issued the javac *.java command.
  2. Run from the command line and either send the output to standard output or store it as a file.
  3. Be invoked from the command line using the given parameters.
  4. Perform the appropriate stemming and tokenization on the source documents.
  5. Assign the proper format for all indexed records.
  6. Return a ranked list of documents matching the query when the search parameter is used.
  7. Print each returned document on a single line of output.

In every program development phase there are bound to be certain risks which, unless well mitigated, could cause the whole program to fail. In our case, some of the risks identified are outlined below with their respective mitigation strategies.

Risk identified: Complexity.

Mitigation strategy: Keep the program as simple as possible. Complex designs increase the probability of making an error during the implementation phase (Lewis, 2014).

Risk identified: Accessibility.

Mitigation strategy: On the Java platform, the program can be denied access if the classes are not correctly initialized. To mitigate this, ensure that all the classes in the source files are properly initialized prior to any sensitive operation.

In this section, the assumptions specific to this project are outlined.

  1. The source files will compile successfully on any Java-based platform.

During the development of the program, various functionalities will be included to efficiently execute and retrieve the information. The plan involves creating the modules independently based on their specific function.

The program is called MySearchEngine. This source file contains the main method that initializes and executes the rest of the source files; the searcher, inverted index, tokenizer, indexer and stemmer source files are declared here. To compile the program, the user executes the javac *.java command within the source code directory.
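A minimal sketch of how this entry point could look is given below; the Indexer and Searcher constructors and method names are assumptions based on the source files declared above, not the final signatures.

    import java.util.Arrays;

    public class MySearchEngine {
        public static void main(String[] args) throws Exception {
            if (args.length < 4) {
                System.err.println("Usage: java MySearchEngine index|search <arguments>");
                return;
            }
            if ("index".equals(args[0])) {
                // java MySearchEngine index collection_dir index_dir stopwords.txt
                // Indexer is one of the declared source files; this constructor is assumed
                new Indexer(args[1], args[2], args[3]).buildIndex();
            } else if ("search".equals(args[0])) {
                // java MySearchEngine search index_dir num_docs keyword_list ...
                // Searcher is one of the declared source files; this constructor is assumed
                new Searcher(args[1]).search(Integer.parseInt(args[2]),
                        Arrays.copyOfRange(args, 3, args.length));
            } else {
                System.err.println("Unknown mode: " + args[0]);
            }
        }
    }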

In this source file, I will design the program to tokenize the raw query as instructed and then calculate the cosine similarity for every document that contains at least one query term. The dot product with the query is built up term by term and later divided by the vector norms. Statements to acquire all the documents and their term frequencies for each query term are included. The next step iterates through each such document, forms the tf-idf weight, takes the dot product contribution and adds it to the previous value for that document, initializing the dot-products HashMap where necessary.

A function that adds to the query vector norm is implemented inside the same if statement. Functionalities to build up the cosine similarity score for each document are then written, using a priority queue to automatically keep the documents in descending order of score. Finally, the source file prints out all the documents in order of cosine similarity.
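The sketch below makes this scoring loop concrete under a few assumptions: the inverted index has already been loaded as a map from term to per-document term frequencies, the IDF values and the squared document norms are available, and the query term frequency is taken as one. It is a sketch of the approach rather than the final searcher code.

    import java.util.*;

    class CosineRankerSketch {
        static List<Map.Entry<String, Double>> rank(
                List<String> queryTokens,
                Map<String, Map<String, Integer>> invertedIndex,   // term -> (fileName -> tf)
                Map<String, Double> idfByTerm,
                Map<String, Double> docNormSquared,
                int numDocs) {

            Map<String, Double> dotProducts = new HashMap<>();
            double queryNormSquared = 0.0;

            for (String term : queryTokens) {
                if (!invertedIndex.containsKey(term)) continue;     // term absent from the corpus
                double idf = idfByTerm.getOrDefault(term, 0.0);
                double queryWeight = idf;                           // query tf assumed to be 1
                queryNormSquared += queryWeight * queryWeight;

                // accumulate the tf-idf dot product for every document containing the term
                for (Map.Entry<String, Integer> posting : invertedIndex.get(term).entrySet()) {
                    double docWeight = posting.getValue() * idf;
                    dotProducts.merge(posting.getKey(), docWeight * queryWeight, Double::sum);
                }
            }

            // the priority queue keeps documents in descending order of cosine similarity
            PriorityQueue<Map.Entry<String, Double>> best = new PriorityQueue<>(
                    (a, b) -> Double.compare(b.getValue(), a.getValue()));
            for (Map.Entry<String, Double> e : dotProducts.entrySet()) {
                double cosine = e.getValue()
                        / Math.sqrt(docNormSquared.get(e.getKey()) * queryNormSquared);
                best.add(new AbstractMap.SimpleEntry<>(e.getKey(), cosine));
            }

            List<Map.Entry<String, Double>> ranked = new ArrayList<>();
            while (!best.isEmpty() && ranked.size() < numDocs) {
                ranked.add(best.poll());
            }
            return ranked;
        }
    }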

Scope

This file indexes all the documents stored in the collection directory collection_dir. The process begins with a constructor that scans all the documents. A lineNumber++ counter is kept for debugging, and the statement String line = scan.nextLine(); reads one line of the text into a string; each trimmed line is stored with fileLines.add(line.trim()); and later split by spaces into a string array.
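A small sketch of this scanning step, built around the fragments quoted above (lineNumber++, scan.nextLine() and fileLines.add(line.trim())), might look as follows; everything apart from those statements is an assumption.

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Scanner;

    class DocumentScannerSketch {
        static List<String> readDocument(File doc) throws FileNotFoundException {
            List<String> fileLines = new ArrayList<>();
            int lineNumber = 0;                        // debug counter mentioned above
            try (Scanner scan = new Scanner(doc)) {
                while (scan.hasNextLine()) {
                    lineNumber++;
                    String line = scan.nextLine();     // read one line of the text
                    fileLines.add(line.trim());        // store the trimmed line
                }
            }
            return fileLines;                          // each line can later be split with split("\\s+")
        }
    }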

All the frequencies for all corpus terms are then recorded. For any given term in the corpus, the number of documents it appears in is stored. Each index entry takes the form term, fileName1, termFreq1, fileName2, termFreq2 and so on.

The document filename is acquired and all the tokens in the document are iterated over. A HashMap of term document frequencies is then populated, and a step that adds to the term-frequencies list HashMap is executed to finish building the collections from the corpus. After iterating through all corpus tokens, the inverted index is written out to file with the IDF values appended at the end. The source file finally calculates the IDF, rounds it and builds the final string to write to file; the stopwords file contains one stop word per line.
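Under the comma-separated layout described later in this plan, one line of the index file could be assembled roughly as in the sketch below; the exact separators, ordering and rounding are assumptions.

    import java.util.Map;
    import java.util.TreeMap;

    class IndexWriterSketch {
        // one index line: term, fileName1, termFreq1, fileName2, termFreq2, ..., idf
        static String indexLine(String term, Map<String, Integer> postings, double idf) {
            StringBuilder sb = new StringBuilder(term);
            for (Map.Entry<String, Integer> p : new TreeMap<>(postings).entrySet()) {
                sb.append(", ").append(p.getKey()).append(", ").append(p.getValue());
            }
            sb.append(", ").append(Math.round(idf * 1000) / 1000.0);   // IDF rounded to three decimals
            return sb.toString();
        }
    }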

The inverse document frequencies are calculated using the natural logarithm. For the classical IDF formula, an increment of one is applied so that query terms that are not present in the index can still be handled. A repeating group that must appear at least once would normally be indicated by {}, but this character will not be present in my index. The vector norm value is precomputed for use in the cosine similarity calculation without taking the square root, so it is the norm squared.
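Under these rules, the IDF of a term could be computed as in the short sketch below, where totalDocs is the number of documents in the corpus and docFreq the number of documents containing the term; placing the +1 increment in the denominator is my reading of the description above.

    class IdfSketch {
        // natural-log IDF with the +1 increment, so a query term that appears in no
        // indexed document still receives a finite, well-defined weight
        static double idf(int totalDocs, int docFreq) {
            return Math.log((double) totalDocs / (1 + docFreq));
        }
    }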

The source file generally converts the index file into class data structures holding an inverted index of document term frequencies. It reads one line of the text file into a string and splits the line into a string array. The IDF part at the end is separated out, as are all the term-frequency pairs for each document. A non-duplicate set of corpus file names is then built up and fed into the document vector norm calculation.
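A hedged sketch of loading one index line back into the class data structures is shown below, assuming the comma-separated layout written by the indexer with the IDF as the last field; the norm-squared accumulation mirrors the precomputation described earlier.

    import java.util.HashMap;
    import java.util.Map;

    class IndexReaderSketch {
        static void loadLine(String line,
                             Map<String, Map<String, Integer>> invertedIndex,
                             Map<String, Double> idfByTerm,
                             Map<String, Double> docNormSquared) {
            String[] parts = line.split(",\\s*");
            String term = parts[0];
            double idf = Double.parseDouble(parts[parts.length - 1]);   // last field is the IDF
            idfByTerm.put(term, idf);

            Map<String, Integer> postings = new HashMap<>();
            for (int i = 1; i < parts.length - 1; i += 2) {             // (fileName, termFreq) pairs
                String fileName = parts[i];
                int tf = Integer.parseInt(parts[i + 1]);
                postings.put(fileName, tf);
                double weight = tf * idf;
                docNormSquared.merge(fileName, weight * weight, Double::sum);   // squared vector norm
            }
            invertedIndex.put(term, postings);
        }
    }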

This class will be used to transform a word into its root form. The input word can be provided one character at a time by calling add(), or all at once by calling one of the various stem(...) methods (Schymik, 2012). After a word has been stemmed, the result can be retrieved, or a reference to the internal buffer can be obtained. This is implemented using the Porter stemmer (Porter, 1980). In each indexed file, the fields are separated by commas, the lines are separated by the end-of-line character, and all non-integer quantities are rounded to three decimal places.
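The usage sketch below follows the reference Java implementation of the Porter stemmer linked in the references (add() one character at a time, stem(), then toString() to read the internal buffer); the method names of this project's own Stemmer class are assumed to match.

    class StemmerUsageSketch {
        static String stemWord(String word) {
            Stemmer s = new Stemmer();                 // this project's Porter stemmer class
            for (char c : word.toLowerCase().toCharArray()) {
                s.add(c);                              // feed the word one character at a time
            }
            s.stem();                                  // run the suffix-stripping steps
            return s.toString();                       // read the stemmed word from the buffer
        }
    }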

The main method class, MySearchEngine, gives the program its name. Once compiled, each incorporated source file produces its respective class file alongside the final MySearchEngine.class file. Once the program is developed and compiles successfully, the correct outcome is expected: after the program is invoked from the command line with the parameters java MySearchEngine index collection_dir index_dir stopwords.txt, it indexes all the documents stored in collection_dir. The name of the index file should be index.txt. The stop words are contained in the stopwords.txt file and must not be stemmed into the index terms.

When the command java MySearchEngine search index_dir num_docs keyword_list is invoked, a ranked list of the top num_docs documents matching the keyword_list query is returned. The most relevant document appears at the top of the list, and the output fields are separated by whitespace on the command line.

After tokenization, the final tokens will not contain any hyphens, and any text within single quotation marks will appear as a single token. Any acronym is retained as a single token with its periods removed. All other text is split into tokens using delimiters.
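A rough tokenizer sketch that follows these rules is shown below; the regular expressions and the order of the passes are illustrative assumptions rather than the final tokenizer.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    class TokenizerSketch {
        private static final Pattern QUOTED  = Pattern.compile("'([^']+)'");
        private static final Pattern ACRONYM = Pattern.compile("\\b(?:[A-Za-z]\\.){2,}");

        static List<String> tokenize(String text) {
            List<String> tokens = new ArrayList<>();

            // 1. text inside single quotes becomes a single token
            Matcher q = QUOTED.matcher(text);
            StringBuffer rest = new StringBuffer();
            while (q.find()) {
                tokens.add(q.group(1).replace("-", ""));
                q.appendReplacement(rest, " ");
            }
            q.appendTail(rest);

            // 2. acronyms are kept as one token with the periods removed
            Matcher a = ACRONYM.matcher(rest.toString());
            StringBuffer remainder = new StringBuffer();
            while (a.find()) {
                tokens.add(a.group().replace(".", ""));
                a.appendReplacement(remainder, " ");
            }
            a.appendTail(remainder);

            // 3. everything else is split on delimiters, with hyphens stripped from the tokens
            for (String t : remainder.toString().split("[^\\p{L}\\p{N}'-]+")) {
                String cleaned = t.replace("-", "").replace("'", "");
                if (!cleaned.isEmpty()) tokens.add(cleaned);
            }
            return tokens;
        }
    }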

Conclusion

The program should compile all source files successfully and return the expected results. All the source files are saved with a .java extension, and their respective class files are created after successful compilation. The source files can be built on any Java-based platform or compiled from a command-line interface. During testing, the information retrieval program was found to be effective: it handled all the parameters correctly and produced the correct and expected output. In conclusion, the program development was successful.

References

Lewis, N. (2014). Java-based malware: Mitigating the threat of JRE vulnerabilities. Retrieved from TechTarget Network: https://searchsecurity.techtarget.com/tip/Java-based-malware-Mitigating-the-threat-of-JRE-vulnerabilities

Schymik, G. (2012). The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval. Arizona: Arizona State University.

Porter, M. F. (1980). An algorithm for suffix stripping. Program, 14(3), 130-137. Retrieved from https://www.tartarus.org/~martin/PorterStemmer

