This software prepares different datasets as well as ML algorithm implementations and saves the qualitative benchmark results.
It emerged as part of a Computer Science Master's thesis at the University of Applied Sciences and Arts Dortmund.
It is currently in a prerelease state and still subject to change!
A stable release can be expected around 03/2019.
Have a look at the Getting Started section in the documentation for a detailed guide.
Here is a minimal working example to check your installation:
```
git clone https://github.com/Maddosaurus/MLT   # clone the repository
cd MLT
pipenv install                                 # install dependencies
cd MLT/datasets
git clone https://github.com/defcom17/NSL_KDD NSL_KDD   # fetch the NSL-KDD dataset
cd ..
python run.py --pnsl
python run.py --single --nsl --xgb 10 10 0.1
```
Upon completion, you should find information about the test run in your console as well as in the subfolder `results`.
MLT requires:
- Python 3.6
- CUDA 9.1 (optional)
- tensorflow-gpu (optional)
If you plan on using GPU-accelerated learning (strongly recommended), please set up CUDA 9.1 on your system. The current version of TensorFlow relies on CUDA 9.1 (not 10!). Please refer to the TensorFlow Install How To for up-to-date install instructions!
If you are interested in using the GPU-accelerated deep learning portion, make sure to replace `tensorflow` with `tensorflow-gpu` in your installation.
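To verify that the GPU build is actually being used, a quick sanity check along these lines can help (a minimal sketch; it only assumes a working `tensorflow-gpu` install of the 1.x era):

```python
# Minimal sanity check: does TensorFlow (1.x) see a CUDA-capable GPU?
import tensorflow as tf

# Prints True only if tensorflow-gpu is installed and CUDA is set up correctly.
print(tf.test.is_gpu_available())
```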
The use of a virtual environment is strongly advised!
All package requirements can be installed via `pipenv install` (add `--dev` for development dependencies).
Besides these, you will need copies of the NSL-KDD and CICIDS2017 datasets stored in the subfolder `datasets` (`/NSL_KDD` and `/CICIDS2017pub`). The CICIDS2017 dataset can be downloaded from the University of New Brunswick, while NSL-KDD can be obtained on GitHub. Additional datasets can be included analogously to these.
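Assuming the default folder names from the getting-started steps above, the resulting layout should look like this:

```
MLT/                       # repository root
└── MLT/
    └── datasets/
        ├── NSL_KDD/           # NSL-KDD dataset
        └── CICIDS2017pub/     # CICIDS2017 dataset
```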
The current documentation can be found at readthedocs.io.
If you're interested in manually building the API documentation, run `make html` in the `docroot` folder. This command generates the full Sphinx documentation for the project. You can view a local copy of the docs by running `cd docroot/_build/html && python -m http.server` from the project root.
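Put together, a full build-and-preview cycle looks like this (`python -m http.server` serves on port 8000 by default):

```
cd docroot && make html                    # build the Sphinx documentation
cd _build/html && python -m http.server    # view at http://localhost:8000
```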
The general workflow is as follows (see the sketch after this list):
- Dataset Preparation (sanitize and pickle)
- Algorithm definition
- Feature Selection, optional CV splits, and Normalization/Scaling
- Algorithm Training
- Result Collection and Evaluation
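The following is a minimal sketch of that pipeline using scikit-learn. Everything in it (the pickle path, the choice of classifier, the evaluation call) is illustrative and not MLT's actual API; `X` and `y` are assumed to be NumPy arrays.

```python
# Sketch of the general workflow: load a pickled dataset, define an
# algorithm, run scaled CV splits, train, and collect evaluation results.
import pickle

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

# 1. Dataset preparation: load a previously sanitized and pickled dataset.
with open("datasets/example.pkl", "rb") as f:  # placeholder path
    X, y = pickle.load(f)

# 2. Algorithm definition (placeholder model).
clf = RandomForestClassifier(n_estimators=100)

# 3. Optional CV splits and normalization/scaling.
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    scaler = StandardScaler().fit(X[train_idx])  # fit on the training fold only
    X_train = scaler.transform(X[train_idx])
    X_test = scaler.transform(X[test_idx])

    # 4. Algorithm training.
    clf.fit(X_train, y[train_idx])

    # 5. Result collection and evaluation.
    print(classification_report(y[test_idx], clf.predict(X_test)))
```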