Using an MCCS cluster
So you’ve followed the instructions on the Set up your deployment environment page, and you now have a working MCCS cluster. What can you do with it? Here are some options:
run our functional (BDD) tests
develop and test your code against the cluster (not recommended)
run an interactive Jupytango session
monitor and control devices with Taranta
Running functional tests
To run the functional (BDD) tests:
make k8s-test
Develop and test code
Because of the time it takes to build images, and deploy and delete the cluster, it is unwise to develop code against a real cluster unless absolutely necessary. If doing so, the basic workflow is:
Edit code and/or tests
Build the image to be deployed:
IMPORTANT: If you are developing locally with docker as the minikube driver, your shell must use minikube’s Docker daemon. This command must be run in each new terminal:
eval $(minikube docker-env)
make oci-build
Re-deploy with the newly built image
make k8s-bounce
Wait for the cluster to be fully deployed:
make k8s-watch  # to manually watch the cluster come up
# or
make k8s-wait   # to block until the cluster is up
Problems with the deployment? To get an overview of container logs, try the following (warning: this will produce a lot of output):
make k8s-podlogs
To view the logs of a specific pod, try
kubectl -n ska-low-mccs logs PODNAME
Note: This can also be done in k9s, which provides a UI for browsing pod logs.
Once the cluster is fully deployed, run the tests:
make k8s-test
Test failed? Edit the code, then
make oci-build
make k8s-bounce
make k8s-watch  # until pods have come back up
make k8s-test
Rinse, repeat. Developing against the cluster is very slow.
Run an interactive session with JupyTango
JupyTango provides an easy-to-use platform for working with an MCCS deployment. First, find the cluster’s IP address:
me@local:~$ minikube ip
192.168.49.2
me@local:~$
For local development, navigate to http://192.168.49.2/jupyterhub/user/mccs/lab/workspaces/auto-s (for non-local development, use the address of your persistent JupyterHub deployment).
import tango
db = tango.Database()
station_device_strings = db.get_device_exported("low-mccs/station/*")
stations = []
for device_str in station_device_strings:
    device = tango.DeviceProxy(device_str)
    device.adminMode = 0
    stations.append(device)
# Do stuff with your stations here
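In the snippet above, setting adminMode to 0 puts each device online. As a minimal sketch, assuming the standard SKA control-model numbering, the value 0 corresponds to the ONLINE member of the AdminMode enumeration (in real code, import AdminMode from ska_control_model rather than redefining it):

```python
from enum import IntEnum


class AdminMode(IntEnum):
    """Illustrative copy of the SKA control-model AdminMode values."""

    ONLINE = 0       # device is monitored and controlled normally
    OFFLINE = 1      # device is not to be monitored or controlled
    MAINTENANCE = 2  # device is under maintenance
    NOT_FITTED = 3   # device is not physically present
    RESERVED = 4     # device is held in reserve


# device.adminMode = 0 is equivalent to device.adminMode = AdminMode.ONLINE
```

Using the named enum member rather than the bare integer makes the intent of the assignment clearer.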
MCCS has a selection of prebuilt notebooks (in /notebooks). These can be loaded into your JupyTango session by clicking the ‘Upload’ button; to export your work, click ‘Download’.
Taranta
Taranta provides a Web UI for monitoring and controlling devices in the cluster. The MCCS charts have Taranta enabled by default, so once MCCS is deployed, you should be able to see Taranta in your web browser, at the cluster’s IP address.
Find out the IP address of the cluster:
me@local:~$ minikube ip
192.168.49.2
me@local:~$
Open a Web browser (Taranta works best with Chrome) and navigate to http://192.168.49.2/ska-low-mccs/taranta/devices.
Log in with credentials found here: https://developer.skao.int/projects/ska-tango-taranta-suite/en/latest/taranta_users.html
Select the Dashboards tab on the left-hand side of the window.
You may now build your own dashboard, or import a dashboard from file. MCCS dashboards are available at ska-low-mccs/dashboards/.
Resource Usage
Resource usage for the MCCS pods is defined in ska-low-mccs/values.yaml. Following a review of the resources required by the MCCS system, these are currently set to 20m CPU and 50Mi memory, where 1000m is equivalent to 1 vCPU/core on cloud providers, or 1 hyperthread on a bare-metal Intel processor. Should these need to be increased in future, values.yaml will need updating. More information on resource management in Kubernetes can be found here: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
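For illustration, a stanza carrying those values would look like the following standard Kubernetes resource-request block (a sketch only; the exact key names and their location within ska-low-mccs/values.yaml may differ):

```yaml
# Illustrative only: a standard Kubernetes resource stanza using the
# values quoted above; check ska-low-mccs/values.yaml for the real keys.
resources:
  requests:
    cpu: 20m       # 20 millicores = 0.02 vCPU
    memory: 50Mi
```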
Charts
To view the charts available in ska-low-mccs:
Navigate to https://artefact.skao.int/#browse/search=keyword%3Dska-low-mccs.
Open the helm release for the version of interest.
There are some optional tools available for use:
- To deploy JupyterHub, add the following to the values file you are configuring the deployment with:
deploy-jupyterhub: true
- To deploy Taranta, add the following to the values file you are configuring the deployment with:
deploy-taranta: true
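For example, a values file enabling both optional tools at once might contain (assuming, as above, that these flags sit at the top level of the values file):

```yaml
# Enable both optional tools in one values file
deploy-jupyterhub: true
deploy-taranta: true
```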
An example helm values file (found in /helmfile.d/mccs/values/stfc-ci.yaml) is defined as follows:
overrides:
  array:
    station_clusters:
      "ci":
        stations:
          "1":
            id: 1
            sps:
              subracks:
                "1":
                  simulated: true
                  srmb_host: srmb-1
                  srmb_port: 8081
              tpms:
                "10":
                  simulated: true
                  host: 10.0.10.201
                  port: 10000
                  version: tpm_v1_6
                  subrack: 1
                  subrack_slot: 1
            pasd:
              fndh:
                gateway:
                  simulated: true
                  host: whatever
                  port: 9502
                  timeout: 10.0
                controller:
                  modbus_id: 101
              smartboxes:
                "1":
                  fndh_port: 1
                  modbus_id: 1
                "2":
                  fndh_port: 2
                  modbus_id: 2
            antennas:
              "100":
                location_offset:
                  east: -3.25
                  north: 11.478
                  up: 0.023
                eep: 100
                smartbox: "1"
                smartbox_port: 5
                tpm: "1"
                tpm_input: 5
              "113":
                location_offset:
                  east: -0.746
                  north: 12.648
                  up: 0.019
                eep: 113
                smartbox: "1"
                smartbox_port: 7
                tpm: "1"
                tpm_input: 7
          "2":
            ...
    defaults:
      logging_level_default: 5
    subarrays:
      "1":
        enabled: true
        logging_level_default: 4
    subarraybeams:
      "1": {}
      "2": {}
      "3": {}
      "4": {}
The helm templates filter this environment-specific deployment YAML into the following form (found in /charts/ska-low-mccs/values.yaml):
deviceServers:
  subarrays:                      # Tango device server type
    subarray-01:                  # Tango device server instances
      low-mccs/subarray/01:       # Tango device instance TRL
        subarray_id: 1
        skuid_url: http://127.0.0.1:8000/
        logging_level_default: 5
  stationbeams:                   # Tango device server type
    beam-001:                     # Tango device server instances
      low-mccs/beam/001:          # Tango device instance TRL
        beam_id: 1
        logging_level_default: 5
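The shape of that transformation can be sketched in Python. This is a hypothetical illustration of the mapping from per-subarray overrides to a deviceServers-style structure, not the actual helm template logic; the function name flatten_subarrays and its defaults are invented for this example:

```python
def flatten_subarrays(
    overrides: dict, skuid_url: str = "http://127.0.0.1:8000/"
) -> dict:
    """Illustrative stand-in for the helm templating step: expand the
    'subarrays' overrides into a server-type -> instance -> TRL mapping."""
    device_servers: dict = {"subarrays": {}}
    defaults = overrides.get("defaults", {})
    for subarray_id, config in overrides.get("subarrays", {}).items():
        if not config.get("enabled", False):
            continue  # disabled subarrays get no device server instance
        instance = f"subarray-{int(subarray_id):02d}"
        trl = f"low-mccs/subarray/{int(subarray_id):02d}"
        device_servers["subarrays"][instance] = {
            trl: {
                "subarray_id": int(subarray_id),
                "skuid_url": skuid_url,
                # per-subarray value wins over the array-wide default
                "logging_level_default": config.get(
                    "logging_level_default",
                    defaults.get("logging_level_default", 5),
                ),
            }
        }
    return device_servers
```

For example, flatten_subarrays({"subarrays": {"1": {"enabled": True}}}) produces a single subarray-01 instance keyed by the TRL low-mccs/subarray/01.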