Learn how to manage and process data in your CI workflow.
December 4, 2023
In general, the developer workflow for HPE ML Data Management involves adding data to versioned data repositories, creating pipelines to read from those repositories, executing the pipeline’s code, and writing the pipeline’s output to other data repositories. Both the data and pipeline can be iterated on independently, with HPE ML Data Management handling the code execution according to the pipeline specification. The workflow steps are shown below.
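This read/execute/write cycle is driven by a pipeline specification. The following is a minimal sketch of one, assuming a hypothetical input repository named `images` and a hypothetical Docker image `example/edge-detector`; the pipeline’s output is written to an output repository that shares the pipeline’s name:

```json
{
  "pipeline": { "name": "edges" },
  "input": {
    "pfs": { "repo": "images", "glob": "/*" }
  },
  "transform": {
    "image": "example/edge-detector:latest",
    "cmd": ["python3", "/edges.py"]
  }
}
```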
Data Workflow #
Adding data to HPE ML Data Management is the first step towards building data-driven pipelines. There are multiple ways to add data to an HPE ML Data Management repository:
- By using the `pachctl put file` command
- By using a special type of pipeline, such as a spout or cron
- By using one of the HPE ML Data Management language clients
- By using a compatible S3 client
For more information, see Load Your Data Into HPE ML Data Management.
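For example, the `pachctl put file` route might look like the following against a running cluster; the repository and file names here are hypothetical:

```shell
# Create a versioned data repository
pachctl create repo images

# Add a local file to the master branch of the repo
pachctl put file images@master:/photo.png -f ./photo.png

# Verify that the file was committed
pachctl list file images@master
```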
Pipeline Workflow #
The fundamental concepts of HPE ML Data Management are very powerful, but the manual build steps mentioned in the pipeline workflow can become cumbersome during rapid-iteration development cycles. We’ve created a few helpful developer workflows and tools to automate steps that are error-prone or repetitive:
- The `--push-images` flag is an optional flag that can be passed to the `pachctl update pipeline` command. It is most useful when you need to customize your Docker image, or when you are iterating on the Docker image and code together, since it tags and pushes the image before updating the pipeline.
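Assuming a pipeline specification saved as `pipeline.json` (a hypothetical file name), an iteration step with this flag might look like:

```shell
# Tag and push the locally built Docker image, then update the pipeline to use it
pachctl update pipeline -f pipeline.json --push-images
```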
- CI/CD Integration provides a way to incorporate HPE ML Data Management functions into the CI process. This is most useful when working with a complex project or for code collaboration.
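As a sketch, a CI job might rebuild the pipeline’s image and redeploy the pipeline once tests pass. The registry, image name, spec file, and `CI_COMMIT_SHA` variable below are assumptions standing in for your CI system’s equivalents:

```shell
# Build and publish the pipeline image, tagged with the commit SHA
docker build -t registry.example.com/edge-detector:"$CI_COMMIT_SHA" .
docker push registry.example.com/edge-detector:"$CI_COMMIT_SHA"

# Point the pipeline at the new image and redeploy it
pachctl update pipeline -f pipeline.json
```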