
This project was completed as an internship opportunity with HPCC Systems in 2016. Curious about the projects we are offering for future internships? Take a look at our Ideas List.

Find out about the HPCC Systems Summer Internship Program.

The project proposal application period for 2020 summer internships is now open. Please see our list of Available Projects. Contact the project mentor for more information and to discuss your ideas. You may suggest a project idea of your own but it must leverage HPCC Systems in some way. Contact us for support from an HPCC Systems mentor with experience in your chosen project area.

Project Description

Singular value decomposition (SVD) has many applications. For example, SVD can be applied to natural language processing for latent semantic analysis (LSA). LSA starts with a matrix whose rows represent words, whose columns represent documents, and whose values (elements) are counts of each word in each document. It then applies SVD to the input matrix and uses a subset of the most significant singular vectors, together with their corresponding singular values, to map words and documents into a new space, called 'latent semantic space'. In this space, documents whose words have similar co-occurrence patterns are placed near each other, even if those documents share no words in the training corpus.
LSA's notion of term-document similarity can be applied to information retrieval, creating a system known as Latent Semantic Indexing (LSI). An LSI system measures the similarity between a query and the documents by building a k-dimensional query vector as the sum of the k-dimensional vector representations of the individual query terms, then comparing it to the k-dimensional document vectors.
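The pipeline above (term-document matrix, truncated SVD, query folding) can be sketched as follows. This is an illustrative sketch in Python with a tiny hand-made corpus, not the ECL deliverable; the term list, counts, and k = 2 are assumptions chosen to keep the example small.

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are documents.
# (Hypothetical data; a real corpus would be far larger and much sparser.)
terms = ["ship", "boat", "ocean", "wood", "tree"]
A = np.array([
    [1, 0, 1, 0, 0],   # ship
    [0, 1, 0, 0, 0],   # boat
    [1, 1, 0, 0, 0],   # ocean
    [0, 0, 0, 1, 1],   # wood
    [0, 0, 0, 1, 0],   # tree
], dtype=float)

# SVD, then keep only the k most significant singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Documents in latent semantic space: rows of V_k scaled by the
# singular values, equivalently A.T @ U_k.
doc_vecs = (s_k[:, None] * Vt_k).T          # shape (n_docs, k)

# LSI query: fold the query "ship ocean" into the same k-dim space
# as the sum of the projected term vectors, i.e. q @ U_k.
q = np.array([1, 0, 1, 0, 0], dtype=float)  # raw term counts of the query
q_k = q @ U_k                               # k-dim query vector

# Rank documents by cosine similarity to the query.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [cos(q_k, d) for d in doc_vecs]
```

With this toy data the "sea" documents score well against the query while the "wood" documents do not, which is the behavior the test collection should let you verify at a larger scale.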

The implementation can be done entirely in the ECL language; only knowledge of ECL and distributed computing techniques is required. Knowledge of linear algebra will be helpful.

Completion of this project involves:

  • Selection of the test data. The test data will be a collection of open data text documents. The collection must have an open data license or be completely free of copyright restrictions. The most important aspect of the collection is that you should be familiar with its subjects, so that you can judge the effectiveness of your implementation. The test collection should contain 1,000 to 10,000 documents.
  • Development of the algorithm using ECL.
  • Testing the algorithm for correctness and performance.

By the GSoC mid-term review we would expect you to have:

  • Written the ECL needed to process the text documents into a dataset of term vectors.
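As a rough sketch of what that first step involves, the fragment below builds per-document term vectors from raw text using Python's standard library. The mini-corpus and tokenizer are illustrative assumptions; the actual deliverable is ECL running on an HPCC Systems cluster, where each (document, term, count) record would be a row in a distributed dataset.

```python
import re
from collections import Counter

# Hypothetical mini-corpus; the real project would read 1,000-10,000 documents.
docs = {
    "doc1": "A ship sails the ocean.",
    "doc2": "The boat drifts on the ocean.",
    "doc3": "Wood from the old tree.",
}

def tokenize(text):
    # Lowercase and keep alphabetic runs; a real pipeline would also
    # remove stop words and possibly stem the tokens.
    return re.findall(r"[a-z]+", text.lower())

# One (document, term, count) record per document-term pair, roughly the
# record layout an ECL dataset of term vectors would use.
term_vectors = [
    (doc_id, term, count)
    for doc_id, text in docs.items()
    for term, count in sorted(Counter(tokenize(text)).items())
]
```

The flat (document, term, count) layout maps naturally onto an ECL RECORD definition and distributes well, since counting can be done independently per document before any global aggregation.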
Mentor

John Holt
Contact Details

Backup Mentor: Edin Muharemagic
Contact Details  

Skills needed
  • Knowledge of ECL. Training manuals and online courses are available on the HPCC Systems website.
  • Knowledge of distributed computing techniques
Deliverables
  • Test code demonstrating the correctness and performance of the algorithm.
  • Supporting documentation.
Other resources

