
HPCC Systems Platform 4.x can integrate with Java directly. This page walks through the steps needed to configure Java correctly, in this case on an Ubuntu 13.04 system.

Configuring HPCC for Java Integration

1. Download and Install the HPCC Systems platform with plugins 4.x

You will need the distro beginning with hpccsystems-platform_community-with-plugins-, NOT hpccsystems-platform_community-

You must install the packages that have the plug-ins using the --nodeps option. For example: sudo rpm -Uvh --nodeps <rpm file name>

Follow the installation instructions for downloading and installing HPCC. Specific instructions for installing the package with plugins are on page 17 of http://cdn.hpccsystems.com/releases/CE-Candidate-4.2.0/docs/Installing_and_RunningTheHPCCPlatform-4.2.0-1.pdf
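Since this walkthrough targets Ubuntu, the Debian-package equivalent of the rpm command above would be roughly the following (the package file name is illustrative):

```shell
# Install the with-plugins package on Ubuntu/Debian
sudo dpkg -i hpccsystems-platform_community-with-plugins-4.2.0.deb
# Resolve any missing dependencies reported by dpkg
sudo apt-get -f install
```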

2. Install OpenJDK 1.7 

In some cases you may have to install the default-jdk package as well.
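On Ubuntu this is a single apt-get step (package names as they appear in the Ubuntu repositories):

```shell
# Install OpenJDK 7 (the JDK, not just the JRE)
sudo apt-get install openjdk-7-jdk
# If needed, also install the default-jdk metapackage
sudo apt-get install default-jdk
```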

Check that Java Plugins are Working

  1. If you haven't run Java on the cluster before, verify that the javaembed plugin is functioning correctly on the cluster

    1. Run a sample Java call like the following. The JavaCat example class is installed on the HPCC cluster by default.

IMPORT java;
integer add1(integer val) := IMPORT(java, 'JavaCat.add1:(I)I');
output(add1(10));

If you get output, Java is installed and working correctly.

If you get an "unable to load libjvm.so" error, reinstall Java or try a different Java package.

If you get an error about a missing javaembed plugin, the HPCC install does not include the plugin feature and needs to be reinstalled from a with-plugins distro.
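Before reinstalling, it can help to check from a shell on the node whether the JVM library and the javaembed plugin are actually present (the paths below are the defaults for a package install; the plugin file name is an assumption based on the standard layout):

```shell
# Can the dynamic linker find libjvm.so?
ldconfig -p | grep libjvm
# Is the javaembed plugin installed under the HPCC plugins directory?
ls /opt/HPCCSystems/plugins | grep -i javaembed
```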

Deploy your Java Jar File

  1. Verify that your Java class was not compiled with a more recent version of Java than the one on the cluster.
    1. You can check the cluster's Java version by running "java -version" (or "dpkg -l | grep jdk" on Ubuntu; "rpm -qa | grep java" on RPM-based systems) on one of the cluster nodes.
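If you need to see exactly which class-file version a jar was built for, javap can report it (the jar and class names here are illustrative):

```shell
# Requires a JDK on the PATH; 'major version: 51' means Java 7,
# 50 means Java 6, 52 means Java 8
javap -verbose -classpath myjava.jar com.example.MyClass | grep 'major version'
```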

  2. Copy the Java jar or class file to all of the THOR nodes on a cluster.
    1. The default location for java files is /opt/HPCCSystems/classes.

    2. This can be done manually or by running something like:

      for x in $(seq 8); do scp myjava.jar 10.173.147.$x:/opt/HPCCSystems/classes/; done

       

  3. Set the classpath in the HPCC Systems configuration file   

    1. The JAR file itself can be physically located anywhere. You can add the JAR file to the classpath by adding it to /etc/HPCCSystems/environment.conf, or by adding it to the Java global classpath environment variable.

    2. Edit the environment.conf in your favorite editor and add your java class/jar to the classpath entry

      1. If you are adding a jar file, the jar file itself has to be added to the classpath. For example:

        classpath=/opt/HPCCSystems/classes:/opt/HPCCSystems/classes/myjava.jar

         

  4. Restart the HPCC services (including Thor) for the classpath changes to take effect

sudo service hpcc-init start    => command for starting

sudo service hpcc-init stop     => command for stopping

sudo service hpcc-init restart  => command for restarting

 

Call your Java from ECL

Currently, you can only pass simple scalar types to and from a Java plugin (String, long, etc.)

Define the interface for the Java class you're calling. For example, if you're calling a static method SegmentText in the class Segmenter with the definition

      public static String SegmentText(String input, String config)

you would place the following code in your ECL:

IMPORT java;
STRING segment(STRING input, STRING config) := IMPORT(java, 'org/hpccsystems/Segmenter.SegmentText:(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;');
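For context, a minimal Java-side stub matching this signature might look like the following. The method body is only a placeholder; in a real deployment the class would be declared in package org.hpccsystems so that it matches the org/hpccsystems/Segmenter path in the IMPORT string.

```java
// Minimal stub matching the SegmentText signature used above.
// NOTE: a real deployment would declare this class in package
// org.hpccsystems to match the ECL IMPORT path; the body here is
// only a placeholder for whatever segmentation logic you need.
public class Segmenter {
    public static String SegmentText(String input, String config) {
        // Placeholder: echo the input back unchanged.
        return input;
    }

    public static void main(String[] args) {
        System.out.println(SegmentText("hello world", "default"));
    }
}
```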

 

THE END

Feel free to raise an issue at http://track.hpccsystems.com if it does not work as expected. I assure you that it will be addressed promptly.

A simple Java Integration example

The idea is to create a Java class that acts as a consumer of external data (think Kafka consumer). For sanity's sake, let's create a simple implementation of a class with a static method that returns a string. Making this a true Kafka consumer will be material for another wiki page.

The Java Consumer Class

package org.hpccsystems.streamapi.consumer;

public class DataConsumer {

	// Returns a small XML document that the ECL side will parse
	public static String consume() {
		return "<dataset><rows><row>sample row1</row><row>sample row2</row></rows></dataset>";
	}

}
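Assuming the class above is saved under its package path, one way to compile and package it is the following (the src/build directory layout and jar name are illustrative):

```shell
# Compile the class into build/, preserving the package directory layout
mkdir -p build
javac -d build src/org/hpccsystems/streamapi/consumer/DataConsumer.java
# Package it so the class sits at org/hpccsystems/... inside the jar
jar cf myjava.jar -C build org
# Copy myjava.jar to /opt/HPCCSystems/classes/ on each Thor node and add
# it to the classpath in environment.conf as described above
```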

 

Now, assuming that you have Java configured correctly (if not, read the setting up Java wiki), the sample ECL code to call the Java class will look like:

The ECL Script

IMPORT java;

STRING consume() := IMPORT(java, 
        'org/hpccsystems/streamapi/consumer/DataConsumer.consume:()Ljava/lang/String;');


messages := consume();

OUTPUT(messages);

messagesDS := DATASET([{messages}], {STRING line});

ExtractedRow := RECORD 
  STRING value;
END; 

ExtractedRows := RECORD
  DATASET(ExtractedRow) values;
END;

ExtractedRows RowsTrans := TRANSFORM
  SELF.values := XMLPROJECT('row', TRANSFORM(ExtractedRow, SELF.value := XMLTEXT('')));
END;

parsedData := PARSE(messagesDS, line, RowsTrans, XML('/dataset/rows'));

OUTPUT(parsedData);

 

The call to the Java consume method is accomplished in the first three lines; the rest of the code extracts the XML content into something more meaningful.

Give it a try and see how easy it is to extend HPCC using Java libraries. HPCC provides you the framework to perform Big Data analytics, and this example shows how you can easily extend ECL to perform advanced tasks such as streaming data, text extraction, and sentiment analysis using Java libraries.

 
