
Kubernetes Deep Dive, Part 2: Building and Deploying Images (api-server)

Author: 程序员欣宸
  • August 29, 2022
    Guangdong
  • Word count: 11,333

    Estimated reading time: about 37 minutes


Welcome to my GitHub

All of Xinchen's original articles (with companion source code) are categorized and indexed here: https://github.com/zq2599/blog_demos

Overview

  • This is the second article in the Kubernetes Deep Dive series. In the previous article we downloaded the Kubernetes 1.13 source code, modified the kubectl source, then rebuilt and ran it to verify the change. Besides standalone executables such as kubectl, the source tree also produces Docker images such as api-server and controller-manager. In this installment we modify the source behind those images, deploy the rebuilt images, and verify that our changes take effect;

Environment

  • To verify that the modified code works in a Kubernetes environment, you need a Kubernetes 1.13 cluster ready. The applications and versions used in this article are:


  1. Operating system: CentOS 7.6.1810

  2. Go: 1.12

  3. Docker: 17.03.2-ce

  4. Kubernetes: 1.13

Downloading the dependency images

  • The build uses the following three images, which docker pull cannot fetch directly in some network environments:


  1. k8s.gcr.io/kube-cross:v1.11.5-1

  2. k8s.gcr.io/debian-iptables-amd64:v11.0

  3. k8s.gcr.io/debian-base-amd64:0.4.0


  • If your environment cannot download these three images, you can obtain them as follows:

  • Run the following command to download the three copies I uploaded:


docker pull bolingcavalry/kube-cross:v1.11.5-1 \
&& docker pull bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker pull bolingcavalry/debian-base-amd64:0.4.0


  • Once the download finishes, the docker images command shows all three:


[root@hedy kubernetes]# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
bolingcavalry/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
bolingcavalry/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
bolingcavalry/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB


  • Run the following command to retag the downloaded images and then delete the ones no longer needed:


docker tag b16987a9b305 k8s.gcr.io/kube-cross:v1.11.5-1 \
&& docker tag 48319fdf4d25 k8s.gcr.io/debian-iptables-amd64:v11.0 \
&& docker tag 8021d54711e6 k8s.gcr.io/debian-base-amd64:0.4.0 \
&& docker rmi bolingcavalry/kube-cross:v1.11.5-1 \
&& docker rmi bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker rmi bolingcavalry/debian-base-amd64:0.4.0


  • Running docker images again now shows exactly the three images the build requires:


[root@hedy kubernetes]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
k8s.gcr.io/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
k8s.gcr.io/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB


  • Open the file build/lib/release.sh, find the line below, and delete the --pull option so the build no longer re-downloads the base images from the remote registry:


"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null


  • The exact location of this code is shown in the green box in the screenshot below; delete the highlighted content:



  • That completes the preparation; next comes the actual source modification;

Modifying the source

  • This exercise modifies the api-server source by adding some log statements; in the verification step, seeing those log lines in the output is enough to prove that the modified code is running;

  • The file to modify is create.go, at the path below; it is the entry point for handling resource-creation requests:


$GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/create.go


  • Add logging code where create.go handles the request, as shown below; all of the fmt.Println calls are newly added:


func createHandler(r rest.NamedCreater, scope RequestScope, admit admission.Interface, includeName bool) http.HandlerFunc {
    return func(w http.ResponseWriter, req *http.Request) {
        fmt.Println("***********************************************************************************************")
        fmt.Println("start create", req)
        fmt.Println("-----------------------------------------------------------------------------------------------")
        fmt.Printf("%s\n", debug.Stack())
        fmt.Println("***********************************************************************************************")


  • The code above prints a log entry whenever api-server receives a resource-creation request: the content of the HTTP request, followed by the call stack of the current method;

Building the images

  • Go to the directory $GOPATH/src/k8s.io/kubernetes and run the following command to start building the images:


KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images


  • According to build/root/Makefile, KUBE_BUILD_CONFORMANCE controls whether the conformance-test image is built, and KUBE_BUILD_HYPERKUBE controls whether the hyperkube image (all the tools bundled together) is built. Neither is needed here, so both are set to "n" to skip them;

  • After roughly ten minutes the images are built, with console output like this:


[root@hedy kubernetes]# KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images
+++ [0316 19:11:40] Verifying Prerequisites....
+++ [0316 19:11:40] Building Docker image kube-build:build-b58720d1c7-5-v1.11.5-1
+++ [0316 19:15:46] Creating data container kube-build-data-b58720d1c7-5-v1.11.5-1
+++ [0316 19:17:02] Syncing sources to container
+++ [0316 19:17:11] Running build command...
+++ [0316 19:17:21] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0316 19:17:28] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0316 19:17:34] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0316 19:17:43] Building go targets for linux/amd64:
    ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
2019/03/16 19:17:51 Code for OpenAPI definitions generated
+++ [0316 19:17:52] Building go targets for linux/amd64:
    ./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0316 19:17:53] Building go targets for linux/amd64:
    cmd/cloud-controller-manager
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kube-scheduler
    cmd/kube-proxy
+++ [0316 19:20:41] Syncing out of container
+++ [0316 19:20:55] Building images: linux-amd64
+++ [0316 19:20:56] Starting docker build for image: cloud-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-apiserver-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-scheduler-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-proxy-amd64
+++ [0316 19:21:37] Deleting docker image k8s.gcr.io/kube-proxy:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:41] Deleting docker image k8s.gcr.io/kube-scheduler:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:44] Deleting docker image k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:48] Docker builds done


  • The directory below contains the resulting tar files, which can be loaded into a local image repository with the docker load command:


[root@hedy amd64]# cd $GOPATH/src/k8s.io/kubernetes/_output/release-images/amd64
[root@hedy amd64]# ls
cloud-controller-manager.tar  kube-apiserver.tar  kube-controller-manager.tar  kube-proxy.tar  kube-scheduler.tar


  • Upload the newly built kube-apiserver.tar to the master node of the Kubernetes environment;

  • Run docker load < kube-apiserver.tar to import the file into the local image repository;

  • Run docker images; as shown below, the local repository now contains an extra kube-apiserver image with the TAG v1.13.5-beta.0.7_6c1e64b94a3e11-dirty:


[root@master 16]# docker load < kube-apiserver.tar
efd6f8f1a8c2: Loading layer [==================================================>]  138.5MB/138.5MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
[root@master 16]# docker images
REPOSITORY                           TAG                                     IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.5-beta.0.7_6c1e64b94a3e11-dirty   c9482a699ba7        About an hour ago   181MB
quay.io/coreos/flannel               v0.11.0-amd64                           ff281650a721        6 weeks ago         52.6MB
k8s.gcr.io/kube-proxy                v1.13.0                                 8fa56d18961f        3 months ago        80.2MB
k8s.gcr.io/kube-scheduler            v1.13.0                                 9508b7d8008d        3 months ago        79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.0                                 d82530ead066        3 months ago        146MB
k8s.gcr.io/kube-apiserver            v1.13.0                                 f1ff9b7e3d6e        3 months ago        181MB
k8s.gcr.io/coredns                   1.2.6                                   f59dcacceff4        4 months ago        40MB
k8s.gcr.io/etcd                      3.2.24                                  3cab8e1b9802        5 months ago        220MB
k8s.gcr.io/pause                     3.1                                     da86e6ba6ca1        15 months ago       742kB


  • First check the current state of the api-server Pod with kubectl describe pod kube-apiserver-master -n kube-system; as shown below, the current image is k8s.gcr.io/kube-apiserver:v1.13.0:


[root@master 16]# kubectl describe pod kube-apiserver-master -n kube-system
Name:               kube-apiserver-master
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               master/192.168.182.130
Start Time:         Sat, 16 Mar 2019 21:53:22 +0800
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.mirror: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.seen: 2019-02-23T13:46:43.135821321+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod:
Status:             Running
IP:                 192.168.182.130
Containers:
  kube-apiserver:
    Container ID:  docker://cb0234269ee2fbef23078cc1bbf6a2d6edd4b248cb733f793853dbfec2f0d814
    Image:         k8s.gcr.io/kube-apiserver:v1.13.0


  • Edit the file /etc/kubernetes/manifests/kube-apiserver.yaml so the Pod uses the newly imported image; once done, run kubectl apply -f kube-apiserver.yaml to make the change take effect;
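
  • The edit boils down to replacing the image field of the static Pod manifest. Below is a minimal sketch of the relevant fragment, not the full file: the metadata fields and container layout are the standard kubeadm static-pod shape (an assumption here), and only the image line is the actual change:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    # was: k8s.gcr.io/kube-apiserver:v1.13.0
    image: k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
```

The tag v1.13.5-beta.0.7_6c1e64b94a3e11-dirty is the one produced by the build and imported with docker load above.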

Verifying that the source change took effect

  • Run kubectl logs -f kube-apiserver-master -n kube-system to watch the Pod's log. The output below shows the details of an incoming request, proving that the modified code is live; this particular request creates a system Event object:


***********************************************************************************************
start create &{POST /api/v1/namespaces/kube-system/events HTTP/2.0 2 0 map[Accept:[application/vnd.kubernetes.protobuf, */*] Content-Type:[application/vnd.kubernetes.protobuf] User-Agent:[kubelet/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[359] Accept-Encoding:[gzip]] 0xc00ccd0870 <nil> 359 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.131:58558 /api/v1/namespaces/kube-system/events 0xc00908cf20 <nil> <nil> 0xc00ccd0990}
-----------------------------------------------------------------------------------------------
goroutine 49344 [running]:
runtime/debug.Stack(0xc007076760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc00b83ce88, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005f2ccc0, 0xc008f1c2e0, 0x5db4f80, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0
***********************************************************************************************


  • Next, let's create an rc resource of our own. Open a new console window connected to the Kubernetes master and run the following command to create a file named nginx-rc.yaml containing an rc for nginx:


tee nginx-rc.yaml <<-'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
EOF


  • Run kubectl apply -f nginx-rc.yaml in the directory containing nginx-rc.yaml to create the resource;

  • The window tailing the api-server log then shows output like the following, which corresponds to the rc resource we just created:


***********************************************************************************************
start create &{POST /api/v1/namespaces/default/replicationcontrollers HTTP/2.0 2 0 map[Accept:[application/json] Content-Type:[application/json] User-Agent:[kubectl/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[818] Accept-Encoding:[gzip]] 0xc004b4dfb0 <nil> 818 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.130:57856 /api/v1/namespaces/default/replicationcontrollers 0xc007b83600 <nil> <nil> 0xc004bc40f0}
-----------------------------------------------------------------------------------------------
goroutine 133183 [running]:
runtime/debug.Stack(0xc00a08c760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc006e07e58, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a28ae40, 0xc008f1c2e0, 0x5db4f80, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0
***********************************************************************************************


  • This completes the whole exercise of modifying, building, and running the source code behind a Kubernetes image. If you come across interesting or puzzling code while studying the source, feel free to try the same approach yourself;

Follow me on InfoQ: 程序员欣宸

You are not alone on the learning journey; Xinchen's original articles will keep you company...

