Integration of Kedro with additional Python packages via Kedro DataSets, Hooks (callbacks), and wrappers.¶
Supplements to kedro.extras: pipelinex.extras provides features not available in kedro.extras. Contributors willing to help prepare the test code and send pull requests to Kedro following Kedro's CONTRIBUTING.md are welcome.
Additional Kedro datasets (data interface sets)¶
pipelinex.extras.datasets provides the following Kedro Datasets (data interface sets) mainly for Computer Vision applications using PyTorch/torchvision, OpenCV, and Scikit-image.
- pipelinex.ImagesLocalDataSet: loads/saves multiple numpy arrays (RGB, BGR, or monochrome images) from/to a folder in local storage using the pillow package, working like kedro.extras.datasets.pillow.ImageDataSet and kedro.io.PartitionedDataSet with conversion between numpy arrays and Pillow images. An example project is at pipelinex_image_processing.
- pipelinex.APIDataSet: modified version of kedro.extras.APIDataSet with more flexible options, including downloading multiple contents (such as images and JSON) by HTTP requests to multiple URLs using the requests package. An example project is at pipelinex_image_processing.
- pipelinex.AsyncAPIDataSet: downloads multiple contents (such as images and JSON) by asynchronous HTTP requests to multiple URLs using the httpx package. An example project is at pipelinex_image_processing.
- pipelinex.IterableImagesDataSet: wrapper of torchvision.datasets.ImageFolder that loads images in a folder as an iterable data loader to use with PyTorch.
- pipelinex.PandasProfilingDataSet: generates a pandas dataframe summary report using pandas-profiling.
More data interface sets for pandas dataframe summarization/visualization are provided by PipelineX.
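The partitioned, lazy-loading behavior these datasets borrow from kedro.io.PartitionedDataSet can be sketched framework-free: loading returns a dict that maps each file name to a no-argument callable, so a file is only read when its loader is invoked (a hypothetical minimal version, not the PipelineX API):

```python
import tempfile
from pathlib import Path

def load_partitioned(folder, suffix=".txt"):
    """Map each matching file name to a lazy, no-argument loader."""
    return {
        p.name: (lambda p=p: p.read_text())  # bind p per iteration
        for p in sorted(Path(folder).glob(f"*{suffix}"))
    }

# Demo: write two partitions, then load them lazily.
with tempfile.TemporaryDirectory() as d:
    Path(d, "a.txt").write_text("first")
    Path(d, "b.txt").write_text("second")
    partitions = load_partitioned(d)
    print(sorted(partitions))     # ['a.txt', 'b.txt']
    print(partitions["a.txt"]())  # first (file is read only at this call)
```

ImagesLocalDataSet applies the same idea to image files, with Pillow handling the numpy-array-to-image conversion for each partition.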
Additional function decorators for benchmarking¶
pipelinex.extras.decorators provides Python decorators for benchmarking.
- log_time: logs the duration of a function (the difference of timestamps before and after running it). Slightly modified version of Kedro's log_time.
- mem_profile: logs the peak memory usage while running the function. The memory_profiler package needs to be installed. Slightly modified version of Kedro's mem_profile.
- nvml_profile: logs the difference in NVIDIA GPU usage before and after running the function. The pynvml or py3nvml package needs to be installed.
from pipelinex import log_time
from pipelinex import mem_profile # Need to install memory_profiler for memory profiling
from pipelinex import nvml_profile # Need to install pynvml for NVIDIA GPU profiling
from time import sleep
import logging
logging.basicConfig(level=logging.INFO)
@nvml_profile
@mem_profile
@log_time
def foo_func(i=1):
sleep(0.5) # Needed to avoid the bug reported at https://github.com/pythonprofilers/memory_profiler/issues/216
return "a" * i
output = foo_func(100_000_000)
INFO:pipelinex.decorators.decorators:Running 'foo_func' took 549ms [0.549s]
INFO:pipelinex.decorators.memory_profiler:Running 'foo_func' consumed 579.02MiB memory at peak time
INFO:pipelinex.decorators.nvml_profiler:Ran: 'foo_func', NVML returned: {'_Driver_Version': '418.67', '_NVML_Version': '10.418.67', 'Device_Count': 1, 'Devices': [{'_Name': 'Tesla P100-PCIE-16GB', 'Total_Memory': 17071734784, 'Free_Memory': 17071669248, 'Used_Memory': 65536, 'GPU_Utilization_Rate': 0, 'Memory_Utilization_Rate': 0}]}, Used memory diff: [0]
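The pattern behind log_time — record a timestamp before and after the call and log the difference — can be sketched with the standard library alone (a hypothetical minimal version, not PipelineX's actual implementation):

```python
import logging
import time
from functools import wraps

def log_time(func):
    """Log how long the wrapped function took, preserving its metadata."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        logging.getLogger(__name__).info(
            "Running '%s' took %.0fms", func.__name__, elapsed * 1000
        )
        return result
    return wrapper

@log_time
def slow_upper(text):
    time.sleep(0.1)  # stand-in for real work
    return text.upper()

print(slow_upper("abc"))  # ABC (plus an INFO log line with the duration)
```

Because the decorator only wraps the call, it stacks cleanly under other decorators, which is why the mem_profile/nvml_profile/log_time combination above works.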
Use with PyTorch¶
To develop a simple neural network, it is convenient to use a Sequential API (e.g. torch.nn.Sequential, tf.keras.Sequential).
Hardcoded:
from torch.nn import Sequential, Conv2d, ReLU
model = Sequential(
Conv2d(in_channels=3, out_channels=16, kernel_size=[3, 3]),
ReLU(),
)
print("### model object by hard-coding ###")
print(model)
### model object by hard-coding ###
Sequential(
(0): Conv2d(3, 16, kernel_size=[3, 3], stride=(1, 1))
(1): ReLU()
)
Using import-less Python object feature:
from pipelinex import HatchDict
import yaml
from pprint import pprint # pretty-print for clearer look
# Read parameters dict from a YAML file in actual use
params_yaml="""
model:
=: torch.nn.Sequential
_:
- {=: torch.nn.Conv2d, in_channels: 3, out_channels: 16, kernel_size: [3, 3]}
- {=: torch.nn.ReLU, _: }
"""
parameters = yaml.safe_load(params_yaml)
model_dict = parameters.get("model")
print("### Before ###")
pprint(model_dict)
model = HatchDict(parameters).get("model")
print("\n### After ###")
print(model)
### Before ###
{'=': 'torch.nn.Sequential',
'_': [{'=': 'torch.nn.Conv2d',
'in_channels': 3,
'kernel_size': [3, 3],
'out_channels': 16},
{'=': 'torch.nn.ReLU', '_': None}]}
### After ###
Sequential(
(0): Conv2d(3, 16, kernel_size=[3, 3], stride=(1, 1))
(1): ReLU()
)
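The `=:` convention can be read as a recursive "resolve the dotted path, then call it with the remaining keys" rule. A hypothetical minimal resolver (far less capable than HatchDict, but the same idea) can be written with only importlib; the demo uses a standard-library class in place of torch modules:

```python
from importlib import import_module

def hatch(node):
    """Recursively turn {'=': 'pkg.Obj', ...kwargs} dicts into objects."""
    if isinstance(node, dict) and "=" in node:
        module_path, _, attr = node["="].rpartition(".")
        obj = getattr(import_module(module_path), attr)
        kwargs = {k: hatch(v) for k, v in node.items() if k != "="}
        args = kwargs.pop("_", None)  # '_' holds the positional argument(s)
        if args is None:  # covers {'=': ..., '_': } (YAML null) as well
            return obj(**kwargs)
        args = args if isinstance(args, list) else [args]
        return obj(*[hatch(a) for a in args], **kwargs)
    if isinstance(node, list):
        return [hatch(v) for v in node]
    return node

spec = {"=": "collections.Counter", "_": "abracadabra"}
counter = hatch(spec)
print(counter.most_common(1))  # [('a', 5)]
```

With this rule, the YAML above resolves torch.nn.Sequential and instantiates it with the hatched list under `_` as positional arguments, which is exactly the shape of the printed model.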
In addition to Sequential, TensorFlow/Keras provides modules to merge branches, such as tf.keras.layers.Concatenate, but PyTorch provides only a functional interface such as torch.cat. PipelineX provides modules to merge branches: ModuleConcat, ModuleSum, and ModuleAvg.
Hardcoded:
from torch.nn import Sequential, Conv2d, AvgPool2d, ReLU
from pipelinex import ModuleConcat
model = Sequential(
ModuleConcat(
Conv2d(in_channels=3, out_channels=16, kernel_size=[3, 3], stride=[2, 2], padding=[1, 1]),
AvgPool2d(kernel_size=[3, 3], stride=[2, 2], padding=[1, 1]),
),
ReLU(),
)
print("### model object by hard-coding ###")
print(model)
### model object by hard-coding ###
Sequential(
(0): ModuleConcat(
(0): Conv2d(3, 16, kernel_size=[3, 3], stride=[2, 2], padding=[1, 1])
(1): AvgPool2d(kernel_size=[3, 3], stride=[2, 2], padding=[1, 1])
)
(1): ReLU()
)
Using import-less Python object feature:
from pipelinex import HatchDict
import yaml
from pprint import pprint # pretty-print for clearer look
# Read parameters dict from a YAML file in actual use
params_yaml="""
model:
=: torch.nn.Sequential
_:
- =: pipelinex.ModuleConcat
_:
- {=: torch.nn.Conv2d, in_channels: 3, out_channels: 16, kernel_size: [3, 3], stride: [2, 2], padding: [1, 1]}
- {=: torch.nn.AvgPool2d, kernel_size: [3, 3], stride: [2, 2], padding: [1, 1]}
- {=: torch.nn.ReLU, _: }
"""
parameters = yaml.safe_load(params_yaml)
model_dict = parameters.get("model")
print("### Before ###")
pprint(model_dict)
model = HatchDict(parameters).get("model")
print("\n### After ###")
print(model)
### Before ###
{'=': 'torch.nn.Sequential',
'_': [{'=': 'pipelinex.ModuleConcat',
'_': [{'=': 'torch.nn.Conv2d',
'in_channels': 3,
'kernel_size': [3, 3],
'out_channels': 16,
'padding': [1, 1],
'stride': [2, 2]},
{'=': 'torch.nn.AvgPool2d',
'kernel_size': [3, 3],
'padding': [1, 1],
'stride': [2, 2]}]},
{'=': 'torch.nn.ReLU', '_': None}]}
### After ###
Sequential(
(0): ModuleConcat(
(0): Conv2d(3, 16, kernel_size=[3, 3], stride=[2, 2], padding=[1, 1])
(1): AvgPool2d(kernel_size=[3, 3], stride=[2, 2], padding=[1, 1])
)
(1): ReLU()
)
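The branch-merge idea behind ModuleConcat — feed the same input to several sub-modules and concatenate their outputs — can be sketched without a framework (a hypothetical illustration, with plain callables and lists standing in for modules and tensors):

```python
class CallableConcat:
    """Apply each branch to the same input and concatenate the results."""
    def __init__(self, *branches):
        self.branches = branches

    def __call__(self, x):
        out = []
        for branch in self.branches:
            out.extend(branch(x))  # concatenation along the "channel" axis
        return out

def double(xs):
    return [v * 2 for v in xs]

def negate(xs):
    return [-v for v in xs]

merge = CallableConcat(double, negate)
print(merge([1, 2]))  # [2, 4, -1, -2]
```

ModuleConcat does the same with torch.cat on tensors, and ModuleSum/ModuleAvg replace the concatenation with element-wise sum or average.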
Use with PyTorch Ignite¶
Wrappers of PyTorch Ignite provide most of the features available in Ignite, including integration with MLflow, in an easy declarative way.
In addition, the following optional features are available in PipelineX.
- Use only partial samples in the dataset (useful for a quick preliminary check before using the whole dataset)
- Time limit for training (useful for code-only (Kernel-only) Kaggle competitions with a time limit)
Here are the arguments for NetworkTrain:
loss_fn (callable): Loss function used to train.
Accepts an instance of loss functions at https://pytorch.org/docs/stable/nn.html#loss-functions
epochs (int, optional): Max epochs to train
seed (int, optional): Random seed for training.
optimizer (torch.optim, optional): Optimizer used to train.
Accepts optimizers at https://pytorch.org/docs/stable/optim.html
optimizer_params (dict, optional): Parameters for optimizer.
train_data_loader_params (dict, optional): Parameters for data loader for training.
Accepts args at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
val_data_loader_params (dict, optional): Parameters for data loader for validation.
Accepts args at https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
evaluation_metrics (dict, optional): Metrics to compute for evaluation.
Accepts dict of metrics at https://pytorch.org/ignite/metrics.html
evaluate_train_data (str, optional): When to compute evaluation_metrics using training dataset.
Accepts events at https://pytorch.org/ignite/engine.html#ignite.engine.Events
evaluate_val_data (str, optional): When to compute evaluation_metrics using validation dataset.
Accepts events at https://pytorch.org/ignite/engine.html#ignite.engine.Events
progress_update (bool, optional): Whether to show a progress bar using the tqdm package
scheduler (ignite.contrib.handlers.param_scheduler.ParamScheduler, optional): Param scheduler.
Accepts a ParamScheduler at
https://pytorch.org/ignite/contrib/handlers.html#module-ignite.contrib.handlers.param_scheduler
scheduler_params (dict, optional): Parameters for scheduler
model_checkpoint (ignite.handlers.ModelCheckpoint, optional): Model Checkpoint.
Accepts a ModelCheckpoint at https://pytorch.org/ignite/handlers.html#ignite.handlers.ModelCheckpoint
model_checkpoint_params (dict, optional): Parameters for ModelCheckpoint at
https://pytorch.org/ignite/handlers.html#ignite.handlers.ModelCheckpoint
early_stopping_params (dict, optional): Parameters for EarlyStopping at
https://pytorch.org/ignite/handlers.html#ignite.handlers.EarlyStopping
time_limit (int, optional): Time limit for training in seconds.
train_dataset_size_limit (int, optional): If specified, only a subset of the training dataset is used.
Useful for a quick preliminary check before using the whole dataset.
val_dataset_size_limit (int, optional): If specified, only a subset of the validation dataset is used.
Useful for a quick preliminary check before using the whole dataset.
cudnn_deterministic (bool, optional): Value for torch.backends.cudnn.deterministic.
See https://pytorch.org/docs/stable/notes/randomness.html for details.
cudnn_benchmark (bool, optional): Value for torch.backends.cudnn.benchmark.
See https://pytorch.org/docs/stable/notes/randomness.html for details.
mlflow_logging (bool, optional): If True and MLflow is installed, MLflow logging is enabled.
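Given the declarative style used elsewhere in PipelineX, a parameters entry wiring several of these arguments together might look like the following (a hypothetical sketch with illustrative values; the exact shape of each value may differ, so consult the MNIST example project for a working configuration):

```yaml
train:
  =: pipelinex.NetworkTrain
  loss_fn: {=: torch.nn.CrossEntropyLoss, _: }
  epochs: 10
  seed: 42
  optimizer_params: {lr: 0.001}
  train_data_loader_params: {batch_size: 64, shuffle: True}
  val_data_loader_params: {batch_size: 64}
  evaluation_metrics:
    accuracy: {=: ignite.metrics.Accuracy, _: }
  evaluate_val_data: EPOCH_COMPLETED
  mlflow_logging: True
```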
Please see the example code using the MNIST dataset, prepared based on the original code.
It is also possible to use:
- FlexibleModelCheckpoint handler, which enables using a timestamp in the model checkpoint file name to clarify which one is the latest.
- CohenKappaScore metric, which can compute the Quadratic Weighted Kappa metric used in some Kaggle competitions. See sklearn.metrics.cohen_kappa_score for details.
It is planned to port some of the code used with PyTorch Ignite to the PyTorch Ignite repository once tests and example code are prepared.
Use with OpenCV¶
A challenge of image processing is that parameters and algorithms that work with one image often do not work with another. You will want to output intermediate images from each image processing pipeline step for visual checks during development, but you will not want to output all the intermediate images in production, to save time and disk space.
Wrappers of OpenCV and ImagesLocalDataSet are the solution. You can concentrate on developing your image processing pipeline for a single image (a 3-D or 2-D numpy array), and it will run for all the images in a folder.
If you are developing an image processing pipeline consisting of 5 steps and you have 10 images, for example, you can check the 10 generated images in each of the 5 folders, 50 images in total, during development.
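The development-time workflow described here — one output folder per step, one file per input image — can be sketched framework-free (a hypothetical helper with pickled nested lists standing in for image files; real use would go through the OpenCV wrappers and ImagesLocalDataSet):

```python
import pickle
import tempfile
from pathlib import Path

def run_pipeline(images, steps, out_dir=None):
    """Apply each named step to every image, optionally dumping intermediates."""
    for step_name, step_fn in steps:
        images = {name: step_fn(img) for name, img in images.items()}
        if out_dir is not None:  # development mode: one folder per step
            step_dir = Path(out_dir, step_name)
            step_dir.mkdir(parents=True, exist_ok=True)
            for name, img in images.items():
                (step_dir / f"{name}.pkl").write_bytes(pickle.dumps(img))
    return images

steps = [
    ("invert", lambda img: [[255 - v for v in row] for row in img]),
    ("binarize", lambda img: [[int(v > 127) for v in row] for row in img]),
]
images = {"img0": [[0, 255], [128, 64]]}

with tempfile.TemporaryDirectory() as d:
    final = run_pipeline(images, steps, out_dir=d)
    print(final["img0"])                              # [[1, 0], [0, 1]]
    print(sorted(p.name for p in Path(d).iterdir()))  # ['binarize', 'invert']
```

Passing out_dir=None skips all intermediate writes, mirroring the production behavior described above.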
Use with PyTorch Lightning¶
(To-do: contributions are welcome.)
Use with TensorFlow/Keras¶
(To-do: contributions are welcome.)