# kube-client

Crystal client library for the Kubernetes (1.11+) API.
## Installation

1. Add the dependency to your `shard.yml`:

   ```yaml
   dependencies:
     kube-client:
       github: spoved/kube-client.cr
   ```

2. Run `shards install`
## Usage

Specify the Kubernetes API version of the client to use:

```crystal
require "kube-client/v1.20"

client = Kube::Client.autoconfig
```

Or you can specify the Kubernetes API version at compile time via the `-Dk8s_v{major}.{minor}` flag:

```crystal
require "kube-client"

client = Kube::Client.autoconfig
```

```sh
$ crystal build -Dk8s_v1.20 kube-client.cr
```
## Overview

The top-level `Kube::Client` provides access to separate `APIClient` instances for each Kubernetes API group (`v1`, `apps/v1`, etc.), which in turn provide access to separate `ResourceClient` instances for each API resource type (`nodes`, `pods`, `deployments`, etc.).

Individual resources are returned as `K8S::Kubernetes::Resource` instances, which provide attribute access (`resource.metadata.name`). Resource instances are returned by methods such as `client.api("v1").resource("nodes").get("foo")` and passed as arguments to `client.api("v1").resource("nodes").create_resource(res)`. Resources can also be loaded from disk using `Kube::Resource.from_files(path)` and passed to top-level methods such as `client.create_resource(res)`, which look up the correct API/resource client from the resource's `apiVersion` and `kind`.

The different `Kube::Error::API` subclasses represent different HTTP response codes, such as `Kube::Error::NotFound` or `Kube::Error::Conflict`.
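Because each response code maps to its own subclass, specific failures can be rescued separately from generic API errors. A minimal sketch (the node name here is hypothetical):

```crystal
require "kube-client/v1.20"

client = Kube::Client.autoconfig

begin
  node = client.api("v1").resource("nodes").get("does-not-exist")
rescue ex : Kube::Error::NotFound
  # 404 from the API server: the resource does not exist
  puts "node not found: #{ex.message}"
rescue ex : Kube::Error::API
  # any other non-success HTTP response code
  puts "API error: #{ex.message}"
end
```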
## Creating a client

### Unauthenticated client

```crystal
client = Kube.client("https://localhost:6443", ssl_verify_peer: false)
```

The keyword options are `Kube::Transport::Options` options.
Client from kubeconfig
client = Kube::Client.config(
Kube::Config.load_file(
File.expand_path "~/.kube/config"
)
)
#### Supported kubeconfig options

Not all kubeconfig options are supported; only the following work:

- `current_context`
- `context.cluster`
- `context.user`
- `cluster.server`
- `cluster.insecure_skip_tls_verify`
- `cluster.certificate_authority`
- `cluster.certificate_authority_data`
- `user.client_certificate` + `user.client_key`
- `user.client_certificate_data` + `user.client_key_data`
- `user.token`
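For reference, a minimal kubeconfig using only supported options might look like the following. Note that in the kubeconfig file itself these fields appear in their usual kebab-case form; all names and paths below are placeholders:

```yaml
apiVersion: v1
kind: Config
current-context: default
contexts:
  - name: default
    context:
      cluster: my-cluster
      user: my-user
clusters:
  - name: my-cluster
    cluster:
      server: https://kube-apiserver.example.com:6443
      certificate-authority: /path/to/ca.pem
users:
  - name: my-user
    user:
      client-certificate: /path/to/client.pem
      client-key: /path/to/client-key.pem
```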
#### With overrides

```crystal
client = Kube::Client.config(Kube::Config.load_file("~/.kube/config"),
  server: "http://localhost:8001",
)
```
### In-cluster client from pod envs/secrets

```crystal
client = Kube::Client.in_cluster_config
```
## API Resources

Resources are subclasses of `::K8S::Kubernetes::Resource`, which is generated and defined in the k8s.cr sub-shard.

Please note that custom resources are not supported at this time.
### Prefetching API resources

Operations like mapping a resource `kind` to an API resource URL require knowledge of the API resource lists for the API group. Mapping resources for multiple API groups would require fetching the API resource lists for each API group in turn, adding request latency. This can be optimized using resource prefetching:

```crystal
client.apis(prefetch_resources: true)
```

This fetches the API resource lists for all API groups in a single pipelined request.
### Listing resources

```crystal
client.api("v1").resource("pods", namespace: "default").list(label_selector: {"role" => "test"}).each do |pod|
  pod = pod.as(K8S::Api::Core::V1::Pod)
  puts "namespace=#{pod.metadata!.namespace} pod: #{pod.metadata!.name} node=#{pod.spec.try &.node_name}"
end
```
### Updating resources

```crystal
node = client.api("v1").resource("nodes").get("test-node")
node.as(K8S::Api::Core::V1::Node).spec.not_nil!.unschedulable = true
client.api("v1").resource("nodes").update_resource(node)
```
### Deleting resources

```crystal
pod = client.api("v1").resource("pods", namespace: "default").delete("test-pod")

pods = client.api("v1").resource("pods", namespace: "default").delete_collection(label_selector: {"role" => "test"})
```
### Creating resources

#### Programmatically defined resources

```crystal
pod = K8S::Api::Core::V1::Pod.new(
  metadata: {
    name:      "pod-name",
    namespace: "default",
    labels:    {
      "app" => "kube-client-test",
    },
  },
  spec: {
    containers: [
      {
        name:  "test",
        image: "test",
      },
    ],
  }
)

logger.info "Create pod=#{pod.metadata!.name} in namespace=#{pod.metadata!.namespace}"

pod = client.api("v1").resource("pods").create_resource(pod)
```
#### From file(s)

```crystal
resources = K8S::Kubernetes::Resource.from_file("./test.yaml")
resources = client.create_resources(resources)
```
### Patching resources

```crystal
client.api("apps/v1").resource("deployments", namespace: "default").merge_patch("test", {
  spec: {replicas: 3},
})
```
### Watching resources

Watching resources spawns a background fiber that pushes `K8S::Kubernetes::WatchEvent`s for the resource onto the returned `Channel`. A `Kube::Error::WatchClosed` error will be returned if the watch stream has been closed, and a `Kube::Error::API` error will be returned if an API error is encountered.
```crystal
resource_client = client.api("v1").resource("pods")
channel = resource_client.watch(resource_version: "4651")

while !channel.closed?
  event = channel.receive
  if event.is_a?(Kube::Error::WatchClosed)
    # Handle the watch stream being closed
  elsif event.is_a?(Kube::Error::API)
    # Handle the API error
  else
    pp event # => K8S::Kubernetes::WatchEvent(K8S::Api::Core::V1::Pod)
  end
end
```
The returned `Kube::Error::WatchClosed` error will contain the `resource_version` of the last event received before the watch stream was closed. This can be used to resume watching from the last known resource version:
```crystal
resource_client = client.api("v1").resource("pods")
channel = resource_client.watch

while !channel.closed?
  event = channel.receive
  if event.is_a?(Kube::Error::WatchClosed)
    # Restart the watch from the last known resource version
    channel = resource_client.watch(resource_version: event.resource_version)
  else
    pp event # => K8S::Kubernetes::WatchEvent(K8S::Api::Core::V1::Pod)
  end
end
```
You can also invoke the `watch` method with a block, which can automatically restart the watch from the last known resource version:

```crystal
resource_client.watch(auto_resume: true) do |event|
  obj = event.object.as(K8S::Api::Core::V1::Pod)
  Log.info { "#{event.type} #{obj.metadata.try &.name}" }
end
```
## Contributing

1. Fork it (<https://github.com/spoved/kube-client.cr/fork>)
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am "Add some feature"`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
## Contributors

- Holden Omans - creator and maintainer
- k8s-client - the Ruby client this library was heavily sourced from