pipelinex.extras.ops package

Subpackages

Submodules

pipelinex.extras.ops.allennlp_ops module

pipelinex.extras.ops.argparse_ops module

pipelinex.extras.ops.numpy_ops module

pipelinex.extras.ops.opencv_ops module
class pipelinex.extras.ops.opencv_ops.CvBGR2Gray(*args, **kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'cvtColor'

class pipelinex.extras.ops.opencv_ops.CvBGR2HSV(*args, **kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'cvtColor'

class pipelinex.extras.ops.opencv_ops.CvBilateralFilter(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'bilateralFilter'

class pipelinex.extras.ops.opencv_ops.CvBlur(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'blur'

class pipelinex.extras.ops.opencv_ops.CvBoxFilter(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'boxFilter'

class pipelinex.extras.ops.opencv_ops.CvCanny(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'Canny'

class pipelinex.extras.ops.opencv_ops.CvCvtColor(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'cvtColor'

class pipelinex.extras.ops.opencv_ops.CvDictToDict(**kwargs)
    Bases: pipelinex.utils.DictToDict

    module = cv2
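The entries above all share one pattern: a `module` attribute naming the wrapped package (here cv2) and an `fn` attribute naming the function to look up in it. A minimal sketch of that delegation mechanism, assuming pipelinex's `DictToDict` behavior and using Python's math module as a stand-in for cv2 (the name `DictToDictSketch` is ours, not pipelinex API):

```python
import math

class DictToDictSketch:
    """Resolve self.fn inside self.module and apply it to every value in a dict.

    Hypothetical re-implementation of the pattern the Cv*/Np* ops share; the
    real pipelinex.utils.DictToDict handles more (nesting, arg binding).
    """
    module = math  # stand-in for cv2 / numpy
    fn = None      # subclasses set the function name

    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def __call__(self, d):
        func = getattr(self.module, self.fn)  # e.g. math.sqrt
        return {key: func(value, **self.kwargs) for key, value in d.items()}

class SqrtOp(DictToDictSketch):
    fn = "sqrt"  # analogous to fn = 'cvtColor' above

result = SqrtOp()({"a": 4.0, "b": 9.0})
# result == {"a": 2.0, "b": 3.0}
```

The subclasses below are then one-liners: they only pin `fn` to a different cv2 function name.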
class pipelinex.extras.ops.opencv_ops.CvDilate(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'dilate'

class pipelinex.extras.ops.opencv_ops.CvEqualizeHist(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'equalizeHist'

class pipelinex.extras.ops.opencv_ops.CvErode(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'erode'

class pipelinex.extras.ops.opencv_ops.CvFilter2d(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'filter2D'

class pipelinex.extras.ops.opencv_ops.CvGaussianBlur(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'GaussianBlur'

class pipelinex.extras.ops.opencv_ops.CvHoughLinesP(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'HoughLinesP'

class pipelinex.extras.ops.opencv_ops.CvLine(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'line'

class pipelinex.extras.ops.opencv_ops.CvMedianBlur(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'medianBlur'

class pipelinex.extras.ops.opencv_ops.CvResize(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    fn = 'resize'

class pipelinex.extras.ops.opencv_ops.CvSobel(ddepth='CV_64F', **kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    __init__(ddepth='CV_64F', **kwargs)
        Initialize self. See help(type(self)) for accurate signature.

    fn = 'Sobel'

class pipelinex.extras.ops.opencv_ops.CvThreshold(type='THRESH_BINARY', **kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.CvDictToDict

    __init__(type='THRESH_BINARY', **kwargs)

    fn = 'threshold'
class pipelinex.extras.ops.opencv_ops.NpAbs(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'abs'

class pipelinex.extras.ops.opencv_ops.NpConcat(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'concatenate'

class pipelinex.extras.ops.opencv_ops.NpDictToDict(**kwargs)
    Bases: pipelinex.utils.DictToDict

    module = numpy

class pipelinex.extras.ops.opencv_ops.NpFullLike(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'full_like'

class pipelinex.extras.ops.opencv_ops.NpMean(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'mean'

class pipelinex.extras.ops.opencv_ops.NpOnesLike(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'ones_like'

class pipelinex.extras.ops.opencv_ops.NpSqrt(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'sqrt'

class pipelinex.extras.ops.opencv_ops.NpSquare(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'square'

class pipelinex.extras.ops.opencv_ops.NpStack(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'stack'

class pipelinex.extras.ops.opencv_ops.NpSum(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'sum'

class pipelinex.extras.ops.opencv_ops.NpZerosLike(**kwargs)
    Bases: pipelinex.extras.ops.opencv_ops.NpDictToDict

    fn = 'zeros_like'
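The Np* wrappers resolve their `fn` names in numpy itself, so each one delegates to a plain numpy call. What those lookups amount to (the `ops` dict here is our own illustration, not pipelinex API):

```python
import numpy as np

# The fn attributes listed above ('sqrt', 'mean', 'zeros_like', ...) are
# resolved by name in the numpy module, exactly like this:
ops = {name: getattr(np, name) for name in ("sqrt", "mean", "zeros_like")}

x = np.array([1.0, 4.0, 9.0])
sqrt_x = ops["sqrt"](x)          # array([1., 2., 3.])
mean_x = ops["mean"](x)          # 14/3
zeros_x = ops["zeros_like"](x)   # array([0., 0., 0.])
```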
pipelinex.extras.ops.pandas_ops module

class pipelinex.extras.ops.pandas_ops.DfAgg(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'agg'

class pipelinex.extras.ops.pandas_ops.DfAggregate(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'aggregate'

class pipelinex.extras.ops.pandas_ops.DfApply(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'apply'

class pipelinex.extras.ops.pandas_ops.DfApplymap(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'applymap'

class pipelinex.extras.ops.pandas_ops.DfAssignColumns(names=None, name_fmt='{:03d}')
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfBaseTask(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: object

    __init__(groupby=None, columns=None, keep_others=False, method=None, **kwargs)

    method = None
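The DfBaseTask subclasses above each pin `method` to a DataFrame method name and share the (groupby, columns, keep_others, method, **kwargs) signature. A rough sketch of that dispatch, under the assumption (not confirmed from source) that pipelinex selects columns, optionally groups, and then calls the named method with the stored kwargs; `DfMethodSketch` and `HeadSketch` are hypothetical names:

```python
import pandas as pd

class DfMethodSketch:
    """Hypothetical stand-in for DfBaseTask: call self.method on a DataFrame."""
    method = None

    def __init__(self, groupby=None, columns=None, **kwargs):
        self.groupby = groupby
        self.columns = columns
        self.kwargs = kwargs  # forwarded to the DataFrame method

    def __call__(self, df):
        if self.columns:
            df = df[self.columns]          # restrict to requested columns
        if self.groupby:
            df = df.groupby(self.groupby)  # dispatch on the GroupBy instead
        return getattr(df, self.method)(**self.kwargs)

class HeadSketch(DfMethodSketch):
    method = "head"  # analogous to DfHead below

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1, 2, 3]})
first_two = HeadSketch(n=2)(df)  # first 2 rows
```

The same instance-as-callable shape is what lets these ops slot directly into a Kedro/PipelineX node as a function.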
class pipelinex.extras.ops.pandas_ops.DfConcat(new_col_name=None, new_col_values=None, col_id=None, sort=False)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfCondReplace(flag, columns, value=nan, replace_if_flag=True, **kwargs)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfDrop(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'drop'

class pipelinex.extras.ops.pandas_ops.DfDropDuplicates(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'drop_duplicates'

class pipelinex.extras.ops.pandas_ops.DfEval(expr, parser='pandas', engine=None, truediv=True)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfEwm(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'ewm'

class pipelinex.extras.ops.pandas_ops.DfExpanding(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'expanding'

class pipelinex.extras.ops.pandas_ops.DfFillna(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'fillna'

class pipelinex.extras.ops.pandas_ops.DfFilter(groupby=None, columns=None, keep_others=False, method=None, **kwargs)

class pipelinex.extras.ops.pandas_ops.DfFilterCols(groupby=None, columns=None, keep_others=False, method=None, **kwargs)

class pipelinex.extras.ops.pandas_ops.DfFocusTransform(focus, columns, groupby=None, keep_others=False, func='max', **kwargs)
    Bases: object
class pipelinex.extras.ops.pandas_ops.DfGroupby(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'groupby'

class pipelinex.extras.ops.pandas_ops.DfHead(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'head'

class pipelinex.extras.ops.pandas_ops.DfMap(arg, prefix='', suffix='', **kwargs)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfNgroup(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'ngroup'

class pipelinex.extras.ops.pandas_ops.DfPipe(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'pipe'

class pipelinex.extras.ops.pandas_ops.DfQuery(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'query'

class pipelinex.extras.ops.pandas_ops.DfRelative(focus, columns, groupby=None)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfRename(index=None, columns=None, copy=True, level=None)
    Bases: object

class pipelinex.extras.ops.pandas_ops.DfResample(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'resample'

class pipelinex.extras.ops.pandas_ops.DfResetIndex(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'reset_index'
class pipelinex.extras.ops.pandas_ops.DfRolling(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'rolling'

class pipelinex.extras.ops.pandas_ops.DfSelectDtypes(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'select_dtypes'

class pipelinex.extras.ops.pandas_ops.DfSetIndex(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'set_index'

class pipelinex.extras.ops.pandas_ops.DfShift(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'shift'

class pipelinex.extras.ops.pandas_ops.DfSortValues(**kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseMethod

    method = 'sort_values'

class pipelinex.extras.ops.pandas_ops.DfSpatialFeatures(output='distance', coo_cols=['X', 'Y'], groupby=None, ord=None, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06, col_name_fmt='feat_{:03d}', keep_others=True, sort=True)
    Bases: object

    __init__(output='distance', coo_cols=['X', 'Y'], groupby=None, ord=None, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06, col_name_fmt='feat_{:03d}', keep_others=True, sort=True)
        Available values for output: 'distance', 'affinity', 'laplacian',
        'eigenvalues', 'eigenvectors', 'n_connected'

class pipelinex.extras.ops.pandas_ops.DfTail(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'tail'

class pipelinex.extras.ops.pandas_ops.DfTransform(groupby=None, columns=None, keep_others=False, method=None, **kwargs)
    Bases: pipelinex.extras.ops.pandas_ops.DfBaseTask

    method = 'transform'

class pipelinex.extras.ops.pandas_ops.NestedDictToDf(row_oriented=True, index_name='index', reset_index=True)
    Bases: object

pipelinex.extras.ops.pandas_ops.affinity_matrix(coo_2darr, ord=None, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06, zero_diag=True)

pipelinex.extras.ops.pandas_ops.distance_to_affinity(dist_2darr, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06)

pipelinex.extras.ops.pandas_ops.eigen(a, return_values=True, values_as_square_matrix=False, return_vectors=False, sort=False)

pipelinex.extras.ops.pandas_ops.laplacian_eigen(coo_2darr, return_values=True, return_vectors=False, ord=None, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06, sort=False)

pipelinex.extras.ops.pandas_ops.laplacian_matrix(coo_2darr, ord=None, unit_distance=1.0, affinity_scale=1.0, binary_affinity=False, min_affinity=1e-06)
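The spatial-feature functions above start from a 2-D coordinate array (`coo_2darr`), turn pairwise distances into an affinity matrix, and build the standard graph Laplacian L = D - A, where D is the diagonal degree matrix. A sketch of that standard construction with numpy; the distance-to-affinity mapping here is a simple binary threshold for illustration, whereas pipelinex exposes finer control (affinity_scale, binary_affinity, min_affinity):

```python
import numpy as np

def laplacian_from_coords(coo_2darr, unit_distance=1.0):
    """Standard graph Laplacian L = D - A built from point coordinates."""
    diff = coo_2darr[:, None, :] - coo_2darr[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)           # pairwise distances
    affinity = (dist <= unit_distance).astype(float)
    np.fill_diagonal(affinity, 0.0)                # no self-loops
    degree = np.diag(affinity.sum(axis=1))
    return degree - affinity

coords = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
L = laplacian_from_coords(coords)
# Rows of a Laplacian sum to zero; the far-away third point has degree 0,
# which is what the 'n_connected' output of DfSpatialFeatures can detect
# via the multiplicity of the zero eigenvalue.
```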
pipelinex.extras.ops.pytorch_ops module

class pipelinex.extras.ops.pytorch_ops.CrossEntropyLoss2d(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
    Bases: torch.nn.modules.loss.CrossEntropyLoss

    forward(input, target)
        Defines the computation performed at every call. Should be overridden
        by all subclasses. Note: although the forward pass must be defined
        within this function, call the Module instance itself afterwards rather
        than forward() directly, since the instance call runs the registered
        hooks while a direct forward() call silently ignores them.

    ignore_index: int
    label_smoothing: float
class pipelinex.extras.ops.pytorch_ops.ModuleAvg(*args: torch.nn.modules.module.Module)
class pipelinex.extras.ops.pytorch_ops.ModuleAvg(arg: OrderedDict[str, Module])
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleListMerge

    forward(input)

class pipelinex.extras.ops.pytorch_ops.ModuleBottleneck2d(in_channels, out_channels, kernel_size=(1, 1), stride=(1, 1), mid_channels=None, batch_norm=None, activation=None, **kwargs)
    Bases: torch.nn.modules.container.Sequential

class pipelinex.extras.ops.pytorch_ops.ModuleConcat(*args: torch.nn.modules.module.Module)
class pipelinex.extras.ops.pytorch_ops.ModuleConcat(arg: OrderedDict[str, Module])
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleListMerge

    forward(input)

class pipelinex.extras.ops.pytorch_ops.ModuleConvWrap(batchnorm=None, activation=None, *args, **kwargs)
    Bases: torch.nn.modules.container.Sequential

    __init__(batchnorm=None, activation=None, *args, **kwargs)
        Initializes internal Module state, shared by both nn.Module and
        ScriptModule.

    core = None

class pipelinex.extras.ops.pytorch_ops.ModuleListMerge(*args: torch.nn.modules.module.Module)
class pipelinex.extras.ops.pytorch_ops.ModuleListMerge(arg: OrderedDict[str, Module])
    Bases: torch.nn.modules.container.Sequential

    forward(input)

class pipelinex.extras.ops.pytorch_ops.ModuleProd(*args: torch.nn.modules.module.Module)
class pipelinex.extras.ops.pytorch_ops.ModuleProd(arg: OrderedDict[str, Module])
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleListMerge

    forward(input)

class pipelinex.extras.ops.pytorch_ops.ModuleSum(*args: torch.nn.modules.module.Module)
class pipelinex.extras.ops.pytorch_ops.ModuleSum(arg: OrderedDict[str, Module])
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleListMerge

    forward(input)
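ModuleConcat, ModuleSum, ModuleAvg, and ModuleProd all inherit ModuleListMerge: the same input is fed to every submodule, and the per-branch outputs are merged (concatenated, summed, averaged, or multiplied). That fan-out/merge pattern, sketched without torch — plain callables and numpy arrays stand in for submodules and tensors, and `merge_branches` is our own name:

```python
import numpy as np

def merge_branches(branches, input_, reduce="concat"):
    """Apply every branch to the same input, then merge the outputs.

    Illustrative stand-in for the ModuleListMerge subclasses; the real
    modules operate on torch tensors (concat along the channel dim).
    """
    outputs = [branch(input_) for branch in branches]
    if reduce == "concat":
        return np.concatenate(outputs)
    if reduce == "sum":
        return np.sum(outputs, axis=0)
    if reduce == "avg":
        return np.mean(outputs, axis=0)
    raise ValueError(reduce)

x = np.array([1.0, 2.0])
branches = [lambda t: t + 1, lambda t: t * 2]
cat = merge_branches(branches, x, "concat")  # [2. 3. 2. 4.]
tot = merge_branches(branches, x, "sum")     # [4. 7.]
```

This is the building block that lets PipelineX express Inception-style parallel branches declaratively in YAML.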
class pipelinex.extras.ops.pytorch_ops.NLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
    Bases: torch.nn.modules.loss.NLLLoss

    The negative likelihood loss. To compute cross-entropy loss, there are
    three options:
    - NLLoss with torch.nn.Softmax
    - torch.nn.NLLLoss with torch.nn.LogSoftmax
    - torch.nn.CrossEntropyLoss

    forward(input, target)

    ignore_index: int
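The three options listed above compute the same quantity. The core identity can be checked with numpy: cross-entropy is the negative log-probability of the target class, and applying NLL to a log-softmax reproduces it exactly (function names here are ours, not pipelinex API):

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax along the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def nll(log_probs, target):
    # Negative log likelihood of the target class, per sample.
    return -log_probs[np.arange(len(target)), target]

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
target = np.array([0, 2])

# Option "NLLLoss with LogSoftmax" ...
ce_via_nll = nll(log_softmax(logits), target)

# ... equals cross-entropy computed directly from softmax probabilities.
softmax = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
ce_direct = -np.log(softmax[np.arange(len(target)), target])
```

pipelinex's NLLoss covers the remaining variant: when the network already ends in a plain Softmax, the loss takes the log itself before applying NLL.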
class pipelinex.extras.ops.pytorch_ops.StatModule(dim, keepdim=False)
    Bases: torch.nn.modules.module.Module

    __init__(dim, keepdim=False)

    training: bool

class pipelinex.extras.ops.pytorch_ops.StepBinary(size, desc=False, compare=None, dtype=None)
    Bases: torch.nn.modules.module.Module

    __init__(size, desc=False, compare=None, dtype=None)

    forward(input)

    training: bool
class pipelinex.extras.ops.pytorch_ops.TensorAvgPool1d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.AvgPool1d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorAvgPool2d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.AvgPool2d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorAvgPool3d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.AvgPool3d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorClamp(min=None, max=None)
    Bases: torch.nn.modules.module.Module

    __init__(min=None, max=None)

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorClampMax(max=None)
    Bases: torch.nn.modules.module.Module

    __init__(max=None)

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorClampMin(min=None)
    Bases: torch.nn.modules.module.Module

    __init__(min=None)

    forward(input)

    training: bool
class pipelinex.extras.ops.pytorch_ops.TensorConstantLinear(weight=1, bias=0)
    Bases: torch.nn.modules.module.Module

    __init__(weight=1, bias=0)

    forward(input)

    training: bool
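The (weight=1, bias=0) signature of TensorConstantLinear suggests a fixed, non-trainable affine map, presumably output = weight * input + bias. A numpy sketch of that reading (an assumption on our part; the actual forward is not shown in this listing):

```python
import numpy as np

def constant_linear(input_, weight=1, bias=0):
    """Fixed affine transform with constant (non-learned) parameters."""
    return weight * input_ + bias

x = np.array([0.0, 1.0, 2.0])
y = constant_linear(x, weight=2, bias=1)  # [1. 3. 5.]
```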
class pipelinex.extras.ops.pytorch_ops.TensorConv1d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.conv.Conv1d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorConv2d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.conv.Conv2d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorConv3d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.conv.Conv3d

    training: bool
class pipelinex.extras.ops.pytorch_ops.TensorCumsum(dim=1)
    Bases: torch.nn.modules.module.Module

    __init__(dim=1)

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorExp(*args, **kwargs)
    Bases: torch.nn.modules.module.Module

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorFlatten(*args, **kwargs)
    Bases: torch.nn.modules.module.Module

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorForward(func=None)
    Bases: torch.nn.modules.module.Module

    __init__(func=None)

    forward(input)

    training: bool
class pipelinex.extras.ops.pytorch_ops.TensorGlobalAvgPool1d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool1dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMean

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalAvgPool2d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool2dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMean

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalAvgPool3d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool3dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMean

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMaxPool1d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool1dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMax

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMaxPool2d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool2dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMax

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMaxPool3d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool3dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMax

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMinPool1d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool1dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMin

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMinPool2d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool2dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMin

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalMinPool3d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool3dMixIn, pipelinex.extras.ops.pytorch_ops.TensorMin

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalRangePool1d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool1dMixIn, pipelinex.extras.ops.pytorch_ops.TensorRange

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalRangePool2d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool2dMixIn, pipelinex.extras.ops.pytorch_ops.TensorRange

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalRangePool3d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool3dMixIn, pipelinex.extras.ops.pytorch_ops.TensorRange

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalSumPool1d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool1dMixIn, pipelinex.extras.ops.pytorch_ops.TensorSum

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalSumPool2d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool2dMixIn, pipelinex.extras.ops.pytorch_ops.TensorSum

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorGlobalSumPool3d(keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.Pool3dMixIn, pipelinex.extras.ops.pytorch_ops.TensorSum

    training: bool
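Each TensorGlobal*Pool class above combines a spatial mix-in (Pool1dMixIn/Pool2dMixIn/Pool3dMixIn, fixing which axes to reduce) with a statistic (TensorMean, TensorMax, TensorMin, TensorRange, TensorSum), so the whole spatial extent collapses in one step. Global average pooling over a 2-d feature map, sketched with numpy (the NCHW axis layout is the usual torch convention and an assumption here, since the mix-ins' definitions are not shown):

```python
import numpy as np

def global_avg_pool_2d(x, keepdim=False):
    """Reduce an (N, C, H, W) array over its spatial axes H and W."""
    return x.mean(axis=(2, 3), keepdims=keepdim)

x = np.arange(24, dtype=float).reshape(1, 2, 3, 4)  # N=1, C=2, H=3, W=4
pooled = global_avg_pool_2d(x)
# Each channel collapses to its spatial mean: shape (1, 2).
```

Swapping the statistic (max, min, sum, max-min range) over the same axes yields the other members of the family.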
class pipelinex.extras.ops.pytorch_ops.TensorIdentity(*args, **kwargs)
    Bases: torch.nn.modules.module.Module

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorLog(*args, **kwargs)
    Bases: torch.nn.modules.module.Module

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorMax(dim, keepdim=False)
    Bases: pipelinex.extras.ops.pytorch_ops.StatModule, torch.nn.modules.module.Module

    forward(input)

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorMaxPool1d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.MaxPool1d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorMaxPool2d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.MaxPool2d

    training: bool

class pipelinex.extras.ops.pytorch_ops.TensorMaxPool3d(batchnorm=None, activation=None, *args, **kwargs)
    Bases: pipelinex.extras.ops.pytorch_ops.ModuleConvWrap

    core
        alias of torch.nn.modules.pooling.MaxPool3d

    training: bool
class
pipelinex.extras.ops.pytorch_ops.
TensorMean
(dim, keepdim=False)[source]¶ Bases:
pipelinex.extras.ops.pytorch_ops.StatModule
-
forward
(input)[source]¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.-
training
: bool¶
-
class
pipelinex.extras.ops.pytorch_ops.
TensorMin
(dim, keepdim=False)[source]¶ Bases:
pipelinex.extras.ops.pytorch_ops.StatModule
,torch.nn.modules.module.Module
-
forward
(input)[source]¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.-
training
: bool¶
-
class
pipelinex.extras.ops.pytorch_ops.
TensorNearestPad
(lower=1, upper=1)[source]¶ Bases:
torch.nn.modules.module.Module
-
__init__
(lower=1, upper=1)[source]¶ Initializes internal Module state, shared by both nn.Module and ScriptModule.
-
forward
(input)[source]¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.-
training
: bool¶
-
class pipelinex.extras.ops.pytorch_ops.TensorProba(dim=1)[source]¶
Bases: torch.nn.modules.module.Module

__init__(dim=1)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
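A hedged guess at what TensorProba(dim) computes: normalizing non-negative scores along dim so they sum to one. The sketch below is an assumption about the op's behavior, not the actual pipelinex code:

```python
import torch

# Hypothetical sketch of TensorProba(dim): divide by the sum along `dim`
# so each slice becomes a probability vector (assumes non-negative input).
class TensorProbaSketch(torch.nn.Module):
    def __init__(self, dim=1):
        super().__init__()
        self.dim = dim

    def forward(self, input):
        return input / input.sum(dim=self.dim, keepdim=True)

scores = torch.tensor([[1.0, 3.0]])
print(TensorProbaSketch()(scores).tolist())  # [[0.25, 0.75]]
```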
-
class pipelinex.extras.ops.pytorch_ops.TensorRange(dim, keepdim=False)[source]¶
Bases: pipelinex.extras.ops.pytorch_ops.StatModule, torch.nn.modules.module.Module

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
-
class pipelinex.extras.ops.pytorch_ops.TensorSkip(*args, **kwargs)[source]¶
Bases: torch.nn.modules.module.Module

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
-
class pipelinex.extras.ops.pytorch_ops.TensorSlice(start=0, end=None, step=1)[source]¶
Bases: torch.nn.modules.module.Module

__init__(start=0, end=None, step=1)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
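TensorSlice(start, end, step) plausibly applies a Python slice along the first dimension of the input. A minimal sketch under that assumption (the class name here is hypothetical):

```python
import torch

# Hypothetical stand-in for TensorSlice: slice the first dimension of the
# input with the stored start/end/step, as plain `tensor[start:end:step]`.
class TensorSliceSketch(torch.nn.Module):
    def __init__(self, start=0, end=None, step=1):
        super().__init__()
        self.start, self.end, self.step = start, end, step

    def forward(self, input):
        return input[self.start:self.end:self.step]

x = torch.arange(10)
print(TensorSliceSketch(start=1, end=8, step=2)(x).tolist())  # [1, 3, 5, 7]
```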
-
class pipelinex.extras.ops.pytorch_ops.TensorSqueeze(dim=None)[source]¶
Bases: torch.nn.modules.module.Module

__init__(dim=None)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
-
class pipelinex.extras.ops.pytorch_ops.TensorSum(dim, keepdim=False)[source]¶
Bases: pipelinex.extras.ops.pytorch_ops.StatModule

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
-
class pipelinex.extras.ops.pytorch_ops.TensorUnsqueeze(dim)[source]¶
Bases: torch.nn.modules.module.Module

__init__(dim)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool¶
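TensorSqueeze(dim) and TensorUnsqueeze(dim) presumably are thin nn.Module wrappers around torch.squeeze/torch.unsqueeze, so the shape changes can sit inside an nn.Sequential. A sketch under that assumption (names hypothetical):

```python
import torch

# Hypothetical stand-in for TensorUnsqueeze: insert a size-1 dimension
# at the configured position, wrapped as a module.
class TensorUnsqueezeSketch(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, input):
        return input.unsqueeze(self.dim)

x = torch.zeros(3, 4)
print(TensorUnsqueezeSketch(dim=0)(x).shape)  # torch.Size([1, 3, 4])
```

A TensorSqueeze counterpart would call input.squeeze(self.dim) in the same way.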
-
pipelinex.extras.ops.pytorch_ops.setup_conv_params(kernel_size=1, dilation=None, padding=None, stride=None, raise_error=False, *args, **kwargs)[source]¶
-
pipelinex.extras.ops.pytorch_ops.step_binary(input, output_size, compare=<built-in method ge of type object>)[source]¶
pipelinex.extras.ops.shap_ops module¶
pipelinex.extras.ops.skimage_ops module¶
-
class pipelinex.extras.ops.skimage_ops.SkimageMarkBoundaries(**kwargs)[source]¶
Bases: pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict
fn = 'mark_boundaries'¶
-
class pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict(**kwargs)[source]¶
Bases: pipelinex.utils.DictToDict
module = <module 'skimage.segmentation'>¶
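These ops follow the DictToDict pattern: the module and fn attributes name a function, and calling the op applies that function to every entry of an input dict, forwarding stored kwargs. A generic, dependency-free sketch of the idea (DictToDictSketch is a hypothetical stand-in for pipelinex.utils.DictToDict, demonstrated with the stdlib math module so it is self-contained):

```python
import math

# Generic sketch of the DictToDict pattern: `module` and `fn` name a
# function; calling the op maps that function over every value of an
# input dict, passing along the kwargs captured at construction time.
class DictToDictSketch:
    def __init__(self, module, fn, **kwargs):
        self.module = module
        self.fn = fn
        self.kwargs = kwargs

    def __call__(self, d):
        f = getattr(self.module, self.fn)
        return {key: f(value, **self.kwargs) for key, value in d.items()}

# Demonstrated with math.sqrt; SkimageSegmentationSlic would correspond
# to module=skimage.segmentation, fn='slic', applied to a dict of images.
op = DictToDictSketch(math, "sqrt")
print(op({"a": 4.0, "b": 9.0}))  # {'a': 2.0, 'b': 3.0}
```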
-
class pipelinex.extras.ops.skimage_ops.SkimageSegmentationFelzenszwalb(**kwargs)[source]¶
Bases: pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict
fn = 'felzenszwalb'¶
-
class pipelinex.extras.ops.skimage_ops.SkimageSegmentationQuickshift(**kwargs)[source]¶
Bases: pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict
fn = 'quickshift'¶
-
class pipelinex.extras.ops.skimage_ops.SkimageSegmentationSlic(**kwargs)[source]¶
Bases: pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict
fn = 'slic'¶
-
class pipelinex.extras.ops.skimage_ops.SkimageSegmentationWatershed(**kwargs)[source]¶
Bases: pipelinex.extras.ops.skimage_ops.SkimageSegmentationDictToDict
fn = 'watershed'¶
pipelinex.extras.ops.sklearn_ops module¶
-
class pipelinex.extras.ops.sklearn_ops.DfBaseTransformer(cols=None, target_col=None, **kwargs)[source]¶
Bases: pipelinex.extras.ops.sklearn_ops.ZeroToZeroTransformer

__init__(cols=None, target_col=None, **kwargs)[source]¶
Initialize self. See help(type(self)) for accurate signature.

fit_transform(df)[source]¶
Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
- X (array-like of shape (n_samples, n_features)) – Input samples.
- y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).
- **fit_params (dict) – Additional fit parameters.
Returns: X_new – Transformed array.
Return type: ndarray array of shape (n_samples, n_features_new)
-
set_fit_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfBaseTransformer¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others. New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Parameters: df (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the df parameter in fit.
Returns: self – The updated object.
Return type: object
-
set_transform_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfBaseTransformer¶
Request metadata passed to the transform method (metadata routing for the df parameter in transform). The options, default, and caveats are the same as for set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object
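DfBaseTransformer's role is to wrap a scikit-learn transformer so it operates on named DataFrame columns. A dependency-free sketch of that column-selection idea, using a dict of column lists in place of a DataFrame and standardization in place of the wrapped estimator (all names below are hypothetical, not the pipelinex implementation):

```python
from statistics import mean, pstdev

# Hypothetical sketch of the DfBaseTransformer idea: fit/transform only
# the columns listed in `cols`, pass the remaining columns through
# unchanged. A dict of column lists stands in for a pandas DataFrame.
class ColsTransformerSketch:
    def __init__(self, cols=None):
        self.cols = cols  # None means "all columns"

    def fit_transform(self, df):
        out = {}
        for name, values in df.items():
            if self.cols is None or name in self.cols:
                m, s = mean(values), pstdev(values)
                out[name] = [(v - m) / s for v in values]  # standardize
            else:
                out[name] = list(values)  # untouched column
        return out

df = {"x": [1.0, 2.0, 3.0], "y": [10.0, 10.0, 10.0]}
scaled = ColsTransformerSketch(cols=["x"]).fit_transform(df)
print(scaled["y"])  # [10.0, 10.0, 10.0]
```

Concrete subclasses such as DfStandardScaler pair this column handling with the corresponding scikit-learn scaler via multiple inheritance.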
-
class pipelinex.extras.ops.sklearn_ops.DfMinMaxScaler(cols=None, target_col=None, **kwargs)[source]¶
Bases: pipelinex.extras.ops.sklearn_ops.DfBaseTransformer, sklearn.preprocessing._data.MinMaxScaler

set_fit_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfMinMaxScaler¶
Request metadata passed to the fit method (metadata routing for the df parameter in fit). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object

set_transform_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfMinMaxScaler¶
Request metadata passed to the transform method (metadata routing for the df parameter in transform). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object
-
class pipelinex.extras.ops.sklearn_ops.DfQuantileTransformer(cols=None, target_col=None, **kwargs)[source]¶
Bases: pipelinex.extras.ops.sklearn_ops.DfBaseTransformer, sklearn.preprocessing._data.QuantileTransformer

set_fit_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfQuantileTransformer¶
Request metadata passed to the fit method (metadata routing for the df parameter in fit). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object

set_transform_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfQuantileTransformer¶
Request metadata passed to the transform method (metadata routing for the df parameter in transform). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object
-
class pipelinex.extras.ops.sklearn_ops.DfStandardScaler(cols=None, target_col=None, **kwargs)[source]¶
Bases: pipelinex.extras.ops.sklearn_ops.DfBaseTransformer, sklearn.preprocessing._data.StandardScaler

set_fit_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfStandardScaler¶
Request metadata passed to the fit method (metadata routing for the df parameter in fit). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object

set_inverse_transform_request(*, copy: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfStandardScaler¶
Request metadata passed to the inverse_transform method (metadata routing for the copy parameter in inverse_transform). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object

set_partial_fit_request(*, sample_weight: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfStandardScaler¶
Request metadata passed to the partial_fit method (metadata routing for the sample_weight parameter in partial_fit). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object

set_transform_request(*, df: Union[bool, None, str] = '$UNCHANGED$') → pipelinex.extras.ops.sklearn_ops.DfStandardScaler¶
Request metadata passed to the transform method (metadata routing for the df parameter in transform). The options, default, and caveats are the same as for DfBaseTransformer.set_fit_request above. New in version 1.3.
Returns: self – The updated object.
Return type: object
-
class pipelinex.extras.ops.sklearn_ops.EstimatorTransformer[source]¶
Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator
-
class pipelinex.extras.ops.sklearn_ops.ZeroToZeroTransformer(zero_to_zero=False, **kwargs)[source]¶
Bases: pipelinex.extras.ops.sklearn_ops.EstimatorTransformer

__init__(zero_to_zero=False, **kwargs)[source]¶
Initialize self. See help(type(self)) for accurate signature.

fit_transform(X)[source]¶
Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
- X (array-like of shape (n_samples, n_features)) – Input samples.
- y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).
- **fit_params (dict) – Additional fit parameters.
Returns: X_new – Transformed array.
Return type: ndarray array of shape (n_samples, n_features_new)