Iterate through a Java RDD by row
As mattinbits said in the comments, you want a map instead of a foreach, since you want to return values. What a map does, basically, is transform your data: for each row of your RDD you perform an operation and return one value per row. What you need can be achieved like this:
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
...
SparkConf conf = new SparkConf().setAppName("PCA Example");
SparkContext sc = new SparkContext(conf);

JavaRDD<String> data = sc.textFile("clean-sl-mix-with-labels.txt", 0).toJavaRDD();

// Transform each tab-separated line into an array of doubles
JavaRDD<double[]> whatYouWantRdd = data.map(new Function<String, double[]>() {
    @Override
    public double[] call(String row) throws Exception {
        return splitStringtoDoubles(row);
    }

    private double[] splitStringtoDoubles(String s) {
        String[] splitVals = s.split("\\t");
        double[] vals = new double[splitVals.length];
        for (int i = 0; i < splitVals.length; i++) {
            vals[i] = Double.parseDouble(splitVals[i]);
        }
        return vals;
    }
});
List<double[]> whatYouWant = whatYouWantRdd.collect();
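If you then want to go through the result row by row, you can simply loop over the collected list on the driver. A minimal sketch, reusing the whatYouWant list from above:

// Plain iteration over the collected results, on the driver
for (double[] row : whatYouWant) {
    System.out.println(java.util.Arrays.toString(row));
}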
So that you know how Spark works: you perform actions or transformations on your RDD. For instance, here we are transforming our RDD with a map function. You have to write this function yourself, in this case as an anonymous org.apache.spark.api.java.function.Function, which forces you to override the method call, where you receive a row of your RDD and return a value.
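To make the distinction concrete, here is a small sketch (the variable names lengths, numRows and allLengths are just for illustration): transformations such as map are lazy and only describe the computation, while actions such as count or collect actually trigger it.

// Transformation: lazy, nothing is computed yet
JavaRDD<Integer> lengths = data.map(new Function<String, Integer>() {
    @Override
    public Integer call(String row) {
        return row.length();
    }
});

// Actions: these trigger the actual computation
long numRows = lengths.count();
List<Integer> allLengths = lengths.collect();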
Just because it's interesting to compare the verbosity of the Java and Scala APIs for Spark, here's a Scala version:
import org.apache.spark.{SparkConf, SparkContext}

object Example extends App {
  val conf = new SparkConf().setMaster("local").setAppName("Spark example")
  val sc = new SparkContext(conf)

  val inputData = List(
    "1.2\t2.7\t3.8",
    "4.3\t5.1\t6.3"
  )

  val inputRDD = sc.parallelize(inputData)

  // Split each tab-separated line and parse the fields as doubles
  val arrayOfDoubleRDD = inputRDD.map(_.split("\t").map(_.toDouble))
}
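As a side note, if you can use Java 8, lambdas close much of that gap, since org.apache.spark.api.java.function.Function is a single-method interface. A sketch of the same transformation, assuming the same data RDD as above:

// Java 8 lambda version of the same map (sketch)
JavaRDD<double[]> parsed = data.map(row -> {
    String[] splitVals = row.split("\\t");
    double[] vals = new double[splitVals.length];
    for (int i = 0; i < splitVals.length; i++) {
        vals[i] = Double.parseDouble(splitVals[i]);
    }
    return vals;
});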