distinct
distinct removes duplicates: an RDD we build may contain repeated elements, and calling distinct eliminates them. Note that this method involves a shuffle, so the operation is expensive.
JavaRDD<String> RDD1 = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "bb", "cc", "dd"));
JavaRDD<String> distinctRDD = RDD1.distinct();
List<String> collect = distinctRDD.collect();
System.out.println(String.join(", ", collect));
---------- Output ----------
aa, dd, bb, cc
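If the shuffle cost is a concern, distinct also takes an explicit partition count, which controls the parallelism of the deduplication shuffle. A minimal sketch, reusing RDD1 from above (the partition count 2 is only an illustrative value):

// distinct(numPartitions) sets the number of partitions of the result,
// i.e. the parallelism of the shuffle. 2 is just an example; tune it
// to your data size and cluster.
JavaRDD<String> distinctRDD2 = RDD1.distinct(2);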
union
union merges two RDDs into one; it does not remove duplicates.
JavaRDD<String> RDD1 = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "bb", "cc", "dd"));
JavaRDD<String> RDD2 = javaSparkContext.parallelize(Arrays.asList("aa", "dd", "ff"));
JavaRDD<String> unionRDD = RDD1.union(RDD2);
List<String> collect = unionRDD.collect();
System.out.println(String.join(", ", collect));
---------- Output ----------
aa, aa, bb, cc, dd, aa, dd, ff
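If set-union semantics (no duplicates) are what you actually want, a common pattern is to chain union with distinct, at the price of the shuffle that distinct adds. A minimal sketch, reusing RDD1 and RDD2 from above:

// Merge the two RDDs, then drop duplicates; distinct triggers a shuffle.
JavaRDD<String> setUnionRDD = RDD1.union(RDD2).distinct();
// collect() now returns each of aa, bb, cc, dd, ff exactly once.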
intersection
RDD1.intersection(RDD2) returns the intersection of the two RDDs, with duplicates removed.
intersection shuffles data, so it is also costly in terms of performance.
JavaRDD<String> RDD1 = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "bb", "cc", "dd"));
JavaRDD<String> RDD2 = javaSparkContext.parallelize(Arrays.asList("aa", "dd", "ff"));
JavaRDD<String> intersectionRDD = RDD1.intersection(RDD2);
List<String> collect = intersectionRDD.collect();
System.out.println(String.join(", ", collect));
---------- Output ----------
aa, dd
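Because intersection deduplicates, each common element appears once in the result even when it is repeated in both inputs. A small sketch illustrating this (left and right are hypothetical names):

// "aa" occurs twice in both inputs, but intersection emits it only once.
JavaRDD<String> left = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "bb"));
JavaRDD<String> right = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "cc"));
System.out.println(left.intersection(right).collect()); // [aa]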
subtract
RDD1.subtract(RDD2) returns the elements that appear in RDD1 but not in RDD2, without removing duplicates.
JavaRDD<String> RDD1 = javaSparkContext.parallelize(Arrays.asList("aa", "aa", "bb", "cc", "dd"));
JavaRDD<String> RDD2 = javaSparkContext.parallelize(Arrays.asList("aa", "dd", "ff"));
JavaRDD<String> subtractRDD = RDD1.subtract(RDD2);
List<String> collect = subtractRDD.collect();
System.out.println(String.join(", ", collect));
---------- Output ----------
bb, cc
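Unlike intersection, subtract keeps the duplicates that survive from RDD1. A small sketch illustrating this (left and right are hypothetical names):

// "bb" occurs twice on the left and not at all on the right,
// so both copies survive: subtract does not deduplicate.
JavaRDD<String> left = javaSparkContext.parallelize(Arrays.asList("aa", "bb", "bb"));
JavaRDD<String> right = javaSparkContext.parallelize(Arrays.asList("aa"));
System.out.println(left.subtract(right).collect()); // [bb, bb]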
cartesian
RDD1.cartesian(RDD2) returns the Cartesian product of RDD1 and RDD2. The cost is very high, because the result has |RDD1| × |RDD2| elements.
JavaRDD<String> RDD1 = javaSparkContext.parallelize(Arrays.asList("1", "2", "3"));
JavaRDD<String> RDD2 = javaSparkContext.parallelize(Arrays.asList("a", "b", "c"));
JavaPairRDD<String, String> cartesian = RDD1.cartesian(RDD2);
List<Tuple2<String, String>> collect1 = cartesian.collect();
for (Tuple2<String, String> tp : collect1) {
    System.out.println("(" + tp._1 + " " + tp._2 + ")");
}
---------- Output ----------
(1 a)
(1 b)
(1 c)
(2 a)
(2 b)
(2 c)
(3 a)
(3 b)
(3 c)
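A quick way to see why cartesian is so expensive is to compare counts: the result always has RDD1.count() * RDD2.count() elements, so it grows multiplicatively with the inputs. A minimal sketch, reusing the RDDs from above:

// 3 elements x 3 elements = 9 pairs; for large RDDs this blows up fast.
long pairs = cartesian.count();                  // 9
long expected = RDD1.count() * RDD2.count();     // 3 * 3 = 9
System.out.println(pairs == expected);           // true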