interLink is an open-source service that enables transparent access to heterogeneous computing providers. It provides an abstraction for executing a Kubernetes pod on any remote resource capable of managing a container execution lifecycle. The interLink component extends the Kubernetes Virtual Kubelet solution with a generic API layer that delegates pod execution to any remote backend: Kubernetes pod requests are translated by the API layer (e.g. deployed on an HPC edge node) into batch job executions of a container.
The API layer adopts a plugin structure to accommodate any possible backend integration.
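As an illustration of the kind of translation a backend plugin performs, the sketch below converts a Kubernetes pod spec into a Slurm batch script. This is not the actual interLink plugin API; the function name, the use of `singularity` as the container runtime, and the script layout are all assumptions made for the example.

```python
# Illustrative sketch only: translating a Kubernetes pod spec (as a dict)
# into a Slurm batch script, roughly the job an interLink plugin does for
# an HPC backend. Names and runtime choice (singularity) are assumptions.

def pod_to_slurm_script(pod: dict) -> str:
    """Render a minimal Slurm batch script that runs each container of
    the pod through a container runtime on the remote system."""
    meta = pod.get("metadata", {})
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={meta.get('name', 'interlink-pod')}",
    ]
    for container in pod.get("spec", {}).get("containers", []):
        image = container["image"]
        command = " ".join(container.get("command", []))
        # Each container becomes one container-runtime invocation.
        lines.append(f"singularity exec docker://{image} {command}".rstrip())
    return "\n".join(lines)

# Minimal pod spec for demonstration.
pod = {
    "metadata": {"name": "demo"},
    "spec": {"containers": [
        {"image": "busybox:latest", "command": ["echo", "hello"]}
    ]},
}
print(pod_to_slurm_script(pod))
```

A real plugin would additionally handle volumes, environment variables, resource requests, and status/log retrieval, exposed through the API layer rather than called directly.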
interLink targets use cases such as:
- executing payloads in response to an external trigger, like a storage event or a web server call;
- running frameworks for DAG workflow management, which are usually well integrated with the Kubernetes APIs.
Users can exploit interLink either via a self-provisioned deployment (i.e. through already integrated high-level services) or via a standalone Kubernetes deployment, creating and deploying a simple container that can be scheduled on a remote system, such as a Slurm batch job at an HPC center. High-level services being integrated include Airflow, Kubeflow Pipelines, Argo Workflows, MLflow, and Jupyter notebooks.
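In the standalone Kubernetes deployment scenario, a pod is directed to the remote backend by scheduling it onto the virtual node that interLink registers. The manifest below is a hedged sketch: the node name and the toleration key are illustrative placeholders, not values defined by interLink.

```yaml
# Illustrative pod manifest: pin the pod to an interLink virtual node.
# The hostname and toleration key below are example values only.
apiVersion: v1
kind: Pod
metadata:
  name: interlink-demo
spec:
  nodeSelector:
    kubernetes.io/hostname: my-virtual-node   # assumed virtual node name
  tolerations:
    - key: virtual-node.interlink/no-schedule # assumed taint key
      operator: Exists
  containers:
    - name: main
      image: busybox:latest
      command: ["echo", "hello from the remote backend"]
```

From the user's point of view this is an ordinary `kubectl apply`; the API layer and backend plugin take care of turning it into, for example, a Slurm batch job.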