m2cgen (Model 2 Code Generator) is a lightweight library which provides an easy way to transpile trained statistical models into native code (Python, C, Java, Go, JavaScript, Visual Basic, C#, PowerShell, R, PHP, Dart, Haskell, Ruby, F#, Rust, Elixir).
Supported Python version is >= 3.7.
pip install m2cgen
Make sure the following command runs successfully before submitting a PR:
make pre-pr
Alternatively, you can run the Docker version of the same command:
make docker-build docker-pre-pr
- C
- C#
- Dart
- F#
- Go
- Haskell
- Java
- JavaScript
- PHP
- PowerShell
- Python
- R
- Ruby
- Rust
- Visual Basic (VBA-compatible)
- Elixir
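Each supported language is exposed through a corresponding export function in the Python API. Below is a minimal sketch showing a few of them; the remaining languages are covered by analogously named export_to_* functions.

import m2cgen as m2c
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# The same trained estimator can be transpiled into several target languages.
java_code = m2c.export_to_java(clf)
python_code = m2c.export_to_python(clf)
js_code = m2c.export_to_javascript(clf)
c_code = m2c.export_to_c(clf)
go_code = m2c.export_to_go(clf)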
The following model families are supported, for both classification and regression: Linear, SVM, Tree, Random Forest, Boosting.
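Models from the non-linear families are exported in exactly the same way as linear ones. A minimal sketch, using scikit-learn's SVR and DecisionTreeRegressor as representatives of the SVM and Tree families:

import m2cgen as m2c
from sklearn.datasets import load_diabetes
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)

# Non-linear estimators go through the same export_to_* functions.
svm_code = m2c.export_to_java(SVR().fit(X, y))
tree_code = m2c.export_to_java(DecisionTreeRegressor(max_depth=3).fit(X, y))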
You can find the versions of the packages with which compatibility is guaranteed by CI tests here. Other versions may also work, but they are untested.
The classification output of the generated code depends on the model family:

Linear
- Binary: Scalar value; signed distance of the sample to the hyperplane for the second class.
- Multiclass: Vector value; signed distance of the sample to the hyperplane for each class.
- Comment: The output is consistent with the output of LinearClassifierMixin.decision_function.

SVM
- Outlier detection: Scalar value; signed distance of the sample to the separating hyperplane: positive for an inlier and negative for an outlier.
- Binary: Scalar value; signed distance of the sample to the hyperplane for the second class.
- Multiclass: Vector value; one-vs-one score for each class, shape (n_samples, n_classes * (n_classes - 1) / 2).
- Comment: The output is consistent with the output of BaseSVC.decision_function when decision_function_shape is set to ovo.

Tree / Random Forest / Boosting
- Binary: Vector value; class probabilities.
- Multiclass: Vector value; class probabilities.
- Comment: The output is consistent with the output of the predict_proba method of DecisionTreeClassifier / ExtraTreeClassifier / ExtraTreesClassifier / RandomForestClassifier / XGBRFClassifier / XGBClassifier / LGBMClassifier.
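As an illustrative check of the consistency described above, a classifier can be exported to Python and its output compared with predict_proba directly. A minimal sketch, assuming the generated function keeps its default name score:

import m2cgen as m2c
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0).fit(X, y)

# Transpile to pure Python and execute the generated source in this process.
namespace = {}
exec(m2c.export_to_python(clf), namespace)

# For tree ensembles the generated scorer returns class probabilities,
# which should match predict_proba up to floating-point error.
print(namespace["score"](list(X[0])))
print(clf.predict_proba([X[0]])[0])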
Here's a simple example of how a linear model trained in a Python environment can be represented in Java code:
from sklearn.datasets import load_diabetes
from sklearn import linear_model
import m2cgen as m2c
X, y = load_diabetes(return_X_y=True)
estimator = linear_model.LinearRegression()
estimator.fit(X, y)
code = m2c.export_to_java(estimator)
Generated Java code:
public class Model {
    public static double score(double[] input) {
        return ((((((((((152.1334841628965) + ((input[0]) * (-10.012197817470472))) + ((input[1]) * (-239.81908936565458))) + ((input[2]) * (519.8397867901342))) + ((input[3]) * (324.39042768937657))) + ((input[4]) * (-792.1841616283054))) + ((input[5]) * (476.74583782366153))) + ((input[6]) * (101.04457032134408))) + ((input[7]) * (177.06417623225025))) + ((input[8]) * (751.2793210873945))) + ((input[9]) * (67.62538639104406));
    }
}
You can find more examples of generated code for different models/languages here.
m2cgen can be used as a CLI tool to generate code using serialized model objects (pickle protocol):
$ m2cgen <pickle_file> --language <language> [--indent <indent>] [--function_name <function_name>]
[--class_name <class_name>] [--module_name <module_name>] [--package_name <package_name>]
[--namespace <namespace>] [--recursion-limit <recursion_limit>]
Don't forget that, for unpickling serialized model objects, their classes must be defined at the top level of an importable module in the unpickling environment.
Piping is also supported:
$ cat <pickle_file> | m2cgen --language <language>
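The CLI consumes a pickled estimator, so a model trained in Python only needs to be serialized to a file first. A minimal sketch (the model.pickle file name is just an example):

import pickle
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
estimator = LinearRegression().fit(X, y)

# Serialize the trained model so it can be passed to the CLI,
# e.g. `m2cgen model.pickle --language java`.
with open("model.pickle", "wb") as f:
    pickle.dump(estimator, f)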
Q: Generation fails with RecursionError: maximum recursion depth exceeded error.
A: If this error occurs while generating code using an ensemble model, try to reduce the number of trained estimators within that model. Alternatively, you can increase the maximum recursion depth with sys.setrecursionlimit(<new_depth>).
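For example, when exporting from a script rather than the CLI (which exposes the same control via the --recursion-limit option), the limit can be raised before calling the export function. A minimal sketch; the value below is arbitrary and the ensemble variable is a placeholder for your own trained model:

import sys
import m2cgen as m2c

# Raise the interpreter's recursion limit before exporting a large ensemble.
sys.setrecursionlimit(10000)
# code = m2c.export_to_java(ensemble)  # `ensemble` is your trained model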
Q: Generation fails with ImportError: No module named <module_name_here> error while transpiling a model from a serialized model object.
A: This error indicates that the pickle protocol cannot deserialize the model object. For unpickling serialized model objects, their classes must be defined at the top level of an importable module in the unpickling environment. Installing the package that provides the model's class definition should solve the problem.
Q: Code generated by m2cgen provides different results for some inputs compared to the original Python model from which the code was obtained.
A: Some models force input data to a particular type during the prediction phase in their native Python libraries. Currently, m2cgen works only with the float64 (double) data type. You can try to cast your input data to float64 manually and check the results again. Also, small differences can occur due to the specific implementation of floating-point arithmetic in the target language.
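For example, inputs can be cast to float64 with NumPy before comparing the generated code's output with the native model. A minimal sketch; x_raw stands for whatever inputs you are checking:

import numpy as np

# Hypothetical raw inputs that might be int or float32 depending on their source.
x_raw = [[1, 2, 3, 4]]

# Cast to float64 (the type the generated code assumes) before comparison.
x = np.asarray(x_raw, dtype=np.float64)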