
Submitting Spark Jobs with the Spark Launcher Java API

杨仪军
Published on: June 6, 2020

Before getting started, here is the official Spark Launcher API documentation: http://spark.apache.org/docs/latest/api/java/org/apache/spark/launcher/package-summary.html



Source code on GitHub: https://github.com/yyijun/framework/tree/master/framework-spark

1. Key submission parameters

spark-submit \
--master yarn \
--deploy-mode cluster \
--driver-memory 4g \
--driver-cores 4 \
--num-executors 20 \
--executor-cores 4 \
--executor-memory 10g \
--class com.yyj.train.spark.launcher.TestSparkLauncher \
--conf spark.yarn.jars=hdfs://hadoop01.xxx.xxx.com:8020/trainsparklauncher/jars/*.jar \
--jars $(ls lib/*.jar| tr '\n' ',') \
lib/train-spark-1.0.0.jar

--conf spark.yarn.jars: when submitting to a YARN cluster, the job depends on the jars shipped with the Spark installation; if this is not set, those dependencies are uploaded again on every submission, which is very time-consuming;

--jars: the jars your job depends on; used in both Spark standalone and YARN modes; separate multiple jars with commas ",";

2. Submitting to the YARN cluster from IDEA

2.1. Entry-point configuration



val spark = SparkSession
  .builder
  .appName("TestSparkLauncher")
  .master("yarn")
  // Note: the key must be "spark.submit.deployMode"; a key such as "deploy.mode" is silently ignored.
  // When launching straight from the IDE the driver runs locally, so client mode is used here.
  .config("spark.submit.deployMode", "client")
  .config("spark.yarn.jars", "hdfs://hadoop01.xxx.xxx.com:8020/trainsparklauncher/jars/*.jar")
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
  .enableHiveSupport()
  .getOrCreate()

2.2. pom.xml configuration

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-yarn_2.11</artifactId>
  <version>2.1.0</version>
</dependency>

3. Submission preparation



1. Download the Hadoop-related XML configuration files from the big data platform:
core-site.xml: required;
hdfs-site.xml: required;
hive-site.xml: required when the job uses Spark on Hive;
yarn-site.xml: required when submitting to YARN;
2. Prepare your own job packages (replace these with your own jars):
train-spark-1.0.0.jar and train-common-1.0.0.jar
3. Upload the jars under the Spark installation's jars directory to HDFS (a programmatic alternative is sketched below): hadoop fs -put -f /opt/cloudera/parcels/SPARK2/lib/spark2/jars /<hdfs directory>
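
The hadoop fs -put command above is enough for a one-off upload. If you would rather do it from code, here is a minimal sketch using the Hadoop FileSystem API; the NameNode address and directories are the example values used throughout this article and are assumptions to adjust for your environment.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadSparkJars {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address, matching the examples in this article
        conf.set("fs.defaultFS", "hdfs://hadoop01.xxx.xxx.com:8020");
        FileSystem fs = FileSystem.get(conf);
        // Local Spark jars directory and the target HDFS directory (adjust to your environment)
        Path localJars = new Path("/opt/cloudera/parcels/SPARK2/lib/spark2/jars");
        Path hdfsDir = new Path("/trainsparklauncher/jars");
        // Equivalent to: hadoop fs -put -f <local> <hdfs>; keeps the source, overwrites the target
        fs.copyFromLocalFile(false, true, localJars, hdfsDir);
        fs.close();
        System.out.println("Upload finished.");
    }
}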

Test submission entry class

package com.yyj.framework.spark.launcher;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

/**
 * Created by yangyijun on 2019/5/20.
 * Entry class for submitting a Spark job.
 */
public class SparkLauncherMain {

    public static void main(String[] args) {
        System.out.println("starting...");
        String confPath = "/Users/yyj/workspace/alg/src/main/resources";
        System.out.println("confPath=" + confPath);
        // Build the comma-separated list of jars the job depends on
        String rootPath = "/Users/yyj/workspace/alg/lib/";
        File file = new File(rootPath);
        StringBuilder sb = new StringBuilder();
        String[] files = file.list();
        for (String s : files) {
            if (s.endsWith(".jar")) {
                sb.append("hdfs://hadoop01.xxx.xxx.com:8020/user/alg/jars/");
                sb.append(s);
                sb.append(",");
            }
        }
        String jars = sb.toString();
        jars = jars.substring(0, jars.length() - 1);
        Map<String, String> conf = new HashMap<>();
        conf.put(SparkConfig.DEBUG, "false");
        conf.put(SparkConfig.APP_RESOURCE, "hdfs://hadoop01.xxx.xxx.com:8020/user/alg/jars/alg-gs-offline-1.0.0.jar");
        conf.put(SparkConfig.MAIN_CLASS, "com.yyj.alg.gs.offline.StartGraphSearchTest");
        conf.put(SparkConfig.MASTER, "yarn");
        // Use the following master instead when submitting to a Spark standalone cluster
        //conf.put(SparkConfig.MASTER, "spark://hadoop01.xxx.xxx.com:7077");
        conf.put(SparkConfig.APP_NAME, "offline-graph-search");
        conf.put(SparkConfig.DEPLOY_MODE, "client");
        conf.put(SparkConfig.JARS, jars);
        conf.put(SparkConfig.HADOOP_CONF_DIR, confPath);
        conf.put(SparkConfig.YARN_CONF_DIR, confPath);
        conf.put(SparkConfig.SPARK_HOME, "/Users/yyj/spark2");
        conf.put(SparkConfig.DRIVER_MEMORY, "2g");
        conf.put(SparkConfig.EXECUTOR_CORES, "2");
        conf.put(SparkConfig.EXECUTOR_MEMORY, "2g");
        conf.put(SparkConfig.SPARK_YARN_JARS, "hdfs://hadoop01.xxx.xxx.com:8020/user/alg/jars/*.jar");
        conf.put(SparkConfig.APP_ARGS, "params");
        SparkActionLauncher launcher = new SparkActionLauncher(conf);
        boolean result = launcher.waitForCompletion();
        System.out.println("============result=" + result);
    }
}
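
The SparkConfig class referenced above is a plain constants holder from the companion repo, not part of the Spark API. A minimal sketch of what it might look like follows; the constant values are assumptions, except where the usage above forces them to match spark-submit argument names, environment variable names, or Spark configuration keys.

package com.yyj.framework.spark.launcher;

// Hypothetical sketch of the configuration-key constants used by SparkLauncherMain
// and SparkActionLauncher; values marked as free-form are assumptions.
public class SparkConfig {
    // Free-form map keys (any unique strings would do)
    public static final String DEBUG = "spark.launcher.debug";
    public static final String APP_RESOURCE = "spark.launcher.app.resource";
    public static final String MAIN_CLASS = "spark.launcher.main.class";
    public static final String MASTER = "spark.launcher.master";
    public static final String DEPLOY_MODE = "spark.launcher.deploy.mode";
    public static final String APP_NAME = "spark.launcher.app.name";
    public static final String JARS = "spark.launcher.jars";
    public static final String APP_ARGS = "spark.launcher.app.args";
    public static final String SPARK_HOME = "spark.launcher.spark.home";
    // Must match the environment variable names read by Spark/Hadoop,
    // since they are passed as the child-process environment
    public static final String HADOOP_CONF_DIR = "HADOOP_CONF_DIR";
    public static final String YARN_CONF_DIR = "YARN_CONF_DIR";
    // Must match spark-submit argument names, since they are passed to addSparkArg(name, value)
    public static final String DRIVER_MEMORY = "--driver-memory";
    public static final String DRIVER_CORES = "--driver-cores";
    public static final String NUM_EXECUTOR = "--num-executors";
    public static final String EXECUTOR_CORES = "--executor-cores";
    public static final String EXECUTOR_MEMORY = "--executor-memory";
    // Must be the real Spark configuration key, since it is passed to setConf(...)
    public static final String SPARK_YARN_JARS = "spark.yarn.jars";
}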



Construct the SparkLauncher object and configure the submission parameters (with explanations):

private SparkLauncher createSparkLauncher() {
    logger.info("actionConfig:\n" + JSON.toJSONString(conf, true));
    this.debug = Boolean.parseBoolean(conf.get(SparkConfig.DEBUG));
    Map<String, String> env = new HashMap<>();
    // Local path of the directory holding the Hadoop XML configuration files
    env.put(SparkConfig.HADOOP_CONF_DIR, conf.get(SparkConfig.HADOOP_CONF_DIR));
    // Local path of the directory holding the YARN XML configuration files
    env.put(SparkConfig.YARN_CONF_DIR, conf.get(SparkConfig.HADOOP_CONF_DIR));
    SparkLauncher launcher = new SparkLauncher(env);
    // Path of the jar containing the job's entry class
    launcher.setAppResource(conf.get(SparkConfig.APP_RESOURCE));
    // Fully qualified name of the entry class, e.g. com.yyj.train.spark.launcher.TestSparkLauncher
    launcher.setMainClass(conf.get(SparkConfig.MAIN_CLASS));
    // Cluster master: "yarn", or the standalone master address, e.g. spark://hadoop01.xxx.xxx.com:7077
    launcher.setMaster(conf.get(SparkConfig.MASTER));
    // Deploy mode: cluster or client
    launcher.setDeployMode(conf.get(SparkConfig.DEPLOY_MODE));
    // Local paths of the jars the job depends on, separated by commas. For Spark on YARN only the
    // core job jars need to go here; the Spark dependencies can be uploaded to HDFS in advance and
    // referenced via spark.yarn.jars. For Spark standalone, all dependency jars must be listed here.
    launcher.addJar(conf.get(SparkConfig.JARS));
    // Application name
    launcher.setAppName(conf.get(SparkConfig.APP_NAME));
    // Home directory of the Spark client installation; submission relies on bin/spark-submit under it
    launcher.setSparkHome(conf.get(SparkConfig.SPARK_HOME));
    // Driver memory
    launcher.addSparkArg(SparkConfig.DRIVER_MEMORY, conf.getOrDefault(SparkConfig.DRIVER_MEMORY, "4g"));
    // Driver CPU cores
    launcher.addSparkArg(SparkConfig.DRIVER_CORES, conf.getOrDefault(SparkConfig.DRIVER_CORES, "2"));
    // Number of executors
    launcher.addSparkArg(SparkConfig.NUM_EXECUTOR, conf.getOrDefault(SparkConfig.NUM_EXECUTOR, "30"));
    // CPU cores per executor
    launcher.addSparkArg(SparkConfig.EXECUTOR_CORES, conf.getOrDefault(SparkConfig.EXECUTOR_CORES, "4"));
    // Memory per executor
    launcher.addSparkArg(SparkConfig.EXECUTOR_MEMORY, conf.getOrDefault(SparkConfig.EXECUTOR_MEMORY, "4g"));
    String sparkYarnJars = conf.get(SparkConfig.SPARK_YARN_JARS);
    if (StringUtils.isNotBlank(sparkYarnJars)) {
        // For YARN cluster mode, this must point to the HDFS location of all the jars the job depends on
        launcher.setConf(SparkConfig.SPARK_YARN_JARS, conf.get(SparkConfig.SPARK_YARN_JARS));
    }
    // Arguments passed to the job's entry class
    launcher.addAppArgs(new String[]{conf.get(SparkConfig.APP_ARGS)});
    return launcher;
}
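
For context, createSparkLauncher(), waitForCompletion(), and applicationMonitor() below are all methods of the SparkActionLauncher wrapper class from the companion repo. A rough sketch of its skeleton, with the fields and constructor assumed from how the method bodies use them:

package com.yyj.framework.spark.launcher;

import java.util.Map;
import org.apache.spark.launcher.SparkAppHandle;

// Hypothetical skeleton of the wrapper class; field names are assumptions inferred
// from the method bodies quoted in this article. The "logger" and JSON helper it
// uses come from the repo's own utilities and are omitted here.
public class SparkActionLauncher {

    private final Map<String, String> conf;  // submission parameters keyed by SparkConfig constants
    private boolean debug;                   // launch as a raw child process for debugging?
    private SparkAppHandle appMonitor;       // handle returned by startApplication()

    public SparkActionLauncher(Map<String, String> conf) {
        this.conf = conf;
    }

    // createSparkLauncher(), waitForCompletion(), applicationMonitor() and isCompleted()
    // are implemented as shown in the surrounding snippets.
}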

Prepare a Spark installation package on the client machine used to submit jobs, because submission relies on the bin/spark-submit script under the Spark home directory.

Rename the spark-env.sh script under the conf directory, otherwise an error will be reported: spark-env.sh contains paths configured for the big data platform that do not exist on the client machine submitting the job.

Submitting in debug mode or non-debug mode:

/**
 * Submit the Spark application to the Hadoop cluster and wait for completion.
 *
 * @return true if the application completed successfully
 */
public boolean waitForCompletion() {
    boolean success = false;
    try {
        SparkLauncher launcher = this.createSparkLauncher();
        if (debug) {
            Process process = launcher.launch();
            // Forward the Spark driver log (stdout/stderr of the child process)
            new Thread(new ISRRunnable(process.getErrorStream())).start();
            new Thread(new ISRRunnable(process.getInputStream())).start();
            int exitCode = process.waitFor();
            System.out.println(exitCode);
            success = exitCode == 0;
        } else {
            appMonitor = launcher.setVerbose(true).startApplication();
            success = applicationMonitor();
        }
    } catch (Exception e) {
        logger.error(e);
    }
    return success;
}
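
ISRRunnable above is a small helper from the companion repo that drains a child-process stream so the launcher is not blocked and the driver log stays visible. A minimal sketch of such a helper (the class name comes from the snippet; the implementation details are assumptions, not the repo's exact code):

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

// Reads a process stream line by line and echoes it to the console.
public class ISRRunnable implements Runnable {

    private final InputStream in;

    public ISRRunnable(InputStream in) {
        this.in = in;
    }

    @Override
    public void run() {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}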



When submitting in non-debug mode, progress and the final result are obtained from the application handle and logged to the console:

///////////////////////
// private functions
///////////////////////
private boolean applicationMonitor() {
    appMonitor.addListener(new SparkAppHandle.Listener() {
        @Override
        public void stateChanged(SparkAppHandle handle) {
            logger.info("****************************");
            logger.info("State Changed [state={0}]", handle.getState());
            logger.info("AppId={0}", handle.getAppId());
        }

        @Override
        public void infoChanged(SparkAppHandle handle) {
        }
    });
    // Poll until the application reaches a terminal state
    while (!isCompleted(appMonitor.getState())) {
        try {
            Thread.sleep(3000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    return appMonitor.getState() == SparkAppHandle.State.FINISHED;
}

private boolean isCompleted(SparkAppHandle.State state) {
    switch (state) {
        case FINISHED:
        case FAILED:
        case KILLED:
        case LOST:
            return true;
        default:
            return false;
    }
}



The application ID can be read from the handle (see the listener above) and used later to kill the YARN job if needed.

4. Job details

//Request URL format:
http://<rm http address:port>/ws/v1/cluster/apps/{appId}
//Example:
http://localhost:8088/ws/v1/cluster/apps/application_1561706480554_2301

Requesting the details URL returns data in the following format:

Response field descriptions:

"id": "application15617064805542301",--任务ID
"user": "haizhi",--提交任务的用户名称
"name": "TestSparkLauncher",--应用名称
"queue": "root.users.haizhi",--提交队列
"state": "FINISHED",--任务状态
"finalStatus": "SUCCEEDED",--最终状态
"progress": 100,--任务进度
"trackingUI": "History",
"trackingUrl": "http://hadoop01.xx.xxx.com:18088/proxy/application15617064805542301/A",
"diagnostics":"",--任务出错时的主要错误信息
"clusterId": 1561706480554,
"applicationType": "SPARK",--任务类型
"startedTime": 1562808570464,--任务开始时间,单位毫秒
"finishedTime": 1562808621348,--任务结束时间,单位毫秒
"elapsedTime": 50884,--任务耗时,毫秒
"amContainerLogs": "http://hadoop01.xx.xxx.com:8042/node/containerlogs/container15617064805542301_01_000001/haizhi",--任务详细日志
"amHostHttpAddress": "hadoop01.xx.xxx.com:8042",
"memorySeconds": 198648,--任务分配到的内存数,单位MB
"vcoreSeconds": 145,--任务分配到的CPU核数
"logAggregationStatus": "SUCCEEDED"



5. REST API request format for killing a job:

  • Request URL: http://<rm http address:port>/ws/v1/cluster/apps/{appid}/state

  • Request method: PUT

  • Request body: { "state": "KILLED" }

Example:

Request URL: http://192.168.1.3:18088/ws/v1/cluster/apps/application_1561706480554_2302/state
Request method: PUT
Request body: { "state": "KILLED" }
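
A matching sketch of the kill request in Java, again using plain HttpURLConnection; the ResourceManager address and application ID are the example values from above, and a secured cluster would additionally require authentication.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class KillYarnApp {
    public static void main(String[] args) throws Exception {
        // Example ResourceManager address and application ID from this article
        String appId = "application_1561706480554_2302";
        URL url = new URL("http://192.168.1.3:18088/ws/v1/cluster/apps/" + appId + "/state");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        byte[] body = "{\"state\":\"KILLED\"}".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        // 200/202 indicates the state change was accepted by the ResourceManager
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}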


