- Applicable scenario: a Linux system with a Kubernetes 1.4+ cluster already set up, no CA authentication configured, and DNS already deployed; other setups should treat this as a reference only.
- If you have not set up DNS yet, see "Deploying DNS in Kubernetes".
- The related yaml files have been uploaded to my GitHub, and the foreign images they need have all been replaced with Aliyun mirrors, so they can be downloaded and used directly.
Follow the steps below to build the Spark cluster step by step.
1. Create the Spark namespace
a. Introduction:
Kubernetes uses namespaces to divide the underlying physical resources into a number of logical "partitions", and every application and container you deploy afterwards lives in one specific namespace. Each namespace can be given its own resource quota, which keeps applications in different namespaces from grabbing each other's resources. Namespaces also isolate the naming domain, so applications in two different namespaces may use the same name.
Contents of the file namespace-spark-cluster.yaml:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: "spark-cluster"
  labels:
    name: "spark-cluster"
```
This defines a namespace named "spark-cluster".
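As mentioned in the introduction, each namespace can carry its own resource quota. Below is a minimal sketch of such a quota file; it is not part of this article's setup, and the name and limits are made-up placeholders:

```yaml
# quota-spark-cluster.yaml -- hypothetical example, not used in this tutorial
apiVersion: v1
kind: ResourceQuota
metadata:
  name: spark-quota
  namespace: spark-cluster
spec:
  hard:
    # Cap the total requested CPU and the number of pods in this namespace;
    # the values here are placeholders, not recommendations.
    requests.cpu: "4"
    pods: "10"
```

Such a file would be applied with kubectl create -f, just like the namespace file below.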
b. Create
$ kubectl create -f namespace-spark-cluster.yaml
c. Use this namespace (${CLUSTER_NAME} and ${USER_NAME} can be found in your kubeconfig file):
```
$ kubectl config set-context spark --namespace=spark-cluster --cluster=${CLUSTER_NAME} --user=${USER_NAME}
$ kubectl config use-context spark
```
- From now on, any Pod and Service (or any other resource) you create will live in this namespace (spark-cluster).
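To confirm the context switch took effect, a quick check like the following may help (a small sketch; the exact output depends on your kubeconfig):

```
# The current context should now be "spark"
$ kubectl config current-context
spark
# And its default namespace should be spark-cluster
$ kubectl config view --minify | grep namespace
    namespace: spark-cluster
```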
2. Create the spark-master RC
a. Contents of the file spark-master-controller.yaml:
```yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-master-controller
spec:
  replicas: 1
  selector:
    component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
        - name: spark-master
          image: registry.cn-hangzhou.aliyuncs.com/sjq-study/spark:1.5.2_v1
          command: ["/start-master"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
```
b. Create
$ kubectl create -f spark-master-controller.yaml
c. Check and verify
```
$ kubectl get pods | grep spark-master
spark-master-controller-rz1hd   1/1       Running   0          5h
```
- It is Running!
- Next, check the master's log to see whether anything went wrong:
```
$ kubectl logs spark-master-controller-rz1hd -n spark-cluster
17/12/20 07:30:36 INFO Master: Registered signal handlers for [TERM, HUP, INT]
17/12/20 07:30:37 INFO SecurityManager: Changing view acls to: root
17/12/20 07:30:37 INFO SecurityManager: Changing modify acls to: root
17/12/20 07:30:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/12/20 07:30:38 INFO Slf4jLogger: Slf4jLogger started
17/12/20 07:30:38 INFO Remoting: Starting remoting
17/12/20 07:30:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark-master:7077]
17/12/20 07:30:38 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
17/12/20 07:30:38 INFO Master: Starting Spark master at spark://spark-master:7077
17/12/20 07:30:38 INFO Master: Running Spark version 1.5.2
17/12/20 07:30:39 INFO Utils: Successfully started service 'MasterUI' on port 8080.
17/12/20 07:30:39 INFO MasterWebUI: Started MasterWebUI at http://10.1.24.4:8080
17/12/20 07:30:39 INFO Utils: Successfully started service on port 6066.
17/12/20 07:30:39 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
17/12/20 07:30:39 INFO Master: I have been elected leader! New state: ALIVE
```
The log shows that the Spark master was created successfully, was elected leader, and opened port 8080 for the master UI.
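If the pod were stuck in a non-Running state instead, or the log showed errors, kubectl describe is a standard first stop (a generic troubleshooting sketch, not a step from the original walkthrough; the pod name is the one from the listing above):

```
# Print the pod's spec and, at the bottom, its recent Events
# (image pull failures, scheduling problems, restarts, ...)
$ kubectl describe pod spark-master-controller-rz1hd -n spark-cluster
```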
3. Create the spark-master Service
a. Contents of the file spark-master-service.yaml:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: spark-master
spec:
  ports:
    - port: 7077
      targetPort: 7077
      name: spark
    - port: 8080
      targetPort: 8080
      name: http
  selector:
    component: spark-master
```
b. Create
$ kubectl create -f spark-master-service.yaml
c. Check and verify
```
$ kubectl get svc | grep spark-master
spark-master   192.168.3.239   <none>   7077/TCP,8080/TCP   5h
```
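Since the workers will locate the master through the spark-master service name, it can be worth confirming that cluster DNS resolves it. One possible check, assuming a busybox image can be pulled in your environment and your kubectl supports --rm:

```
# Start a throwaway pod in the namespace and resolve the service name
$ kubectl run dns-test --rm -it --image=busybox --restart=Never \
    -n spark-cluster -- nslookup spark-master
```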
4. Create the spark-worker RC
a. Contents of the file spark-worker-controller.yaml:
```yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-worker-controller
spec:
  replicas: 3
  selector:
    component: spark-worker
  template:
    metadata:
      labels:
        component: spark-worker
    spec:
      containers:
        - name: spark-worker
          image: registry.cn-hangzhou.aliyuncs.com/sjq-study/spark:1.5.2_v1
          command: ["/start-worker"]
          ports:
            - containerPort: 8081
          resources:
            requests:
              cpu: 100m
```
- The image here has already been replaced with an Aliyun mirror, so it can be pulled directly.
- Three worker nodes are defined; edit replicas: to however many you actually need (or scale the RC afterwards, as sketched after this list).
- The cpu and memory requests can likewise be adjusted to your actual needs.
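If you want a different number of workers later without editing the file, scaling the RC in place should also work; for example (the target of 5 replicas is arbitrary):

```
# Resize the worker ReplicationController to 5 replicas
$ kubectl scale rc spark-worker-controller --replicas=5 -n spark-cluster
```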
b. Create
$ kubectl create -f spark-worker-controller.yaml
c. Check and verify
```
$ kubectl get pods | grep spark-work
spark-worker-controller-djk50   1/1       Running   0          2h
spark-worker-controller-qf1p3   1/1       Running   0          3h
spark-worker-controller-w0kzw   1/1       Running   0          3h
```
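You can also confirm from the master's log that each worker has registered with it; a grep along these lines should surface the registration messages (pod name as above; the exact wording can differ between Spark versions):

```
$ kubectl logs spark-master-controller-rz1hd -n spark-cluster | grep -i "registering worker"
```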
At this point, the Spark cluster itself has been built successfully!
You can open the master UI via the master pod's IP + port, or via the spark-master service's IP + port.

You can open each worker's UI via that worker pod's IP + port.

At this point, however, the master UI and worker UIs are all separate: there is no single UI in which to view the whole cluster, and clicking ==back to master== in a worker UI will not take you back to the master UI. The cluster is also not yet reachable from outside Kubernetes.
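As a stopgap until then, kubectl port-forward can tunnel a single UI to the machine where kubectl runs (pod name as listed earlier; this does not fix the ==back to master== links):

```
# Forward local port 8080 to the master pod's UI,
# then browse http://localhost:8080
$ kubectl port-forward spark-master-controller-rz1hd 8080:8080 -n spark-cluster
```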
How to merge the UIs into one and expose the cluster externally is covered in "Building a Spark cluster on Kubernetes (Part 2)".
Disclaimer: everything above is my own original work! If you repost it, please credit the source, thank you!
If this article helped you, I'd appreciate a like. Corrections are welcome! Any resemblance to other articles is pure coincidence!