aeon is an open-source toolkit for learning from time series. It is compatible with scikit-learn and provides access to the very latest algorithms for time series machine learning, in addition to a range of classical techniques for learning tasks such as forecasting and classification.
We strive to provide a broad library of time series algorithms, including the latest advances, to offer efficient implementations using numba, and to interface with other time series packages, providing a single framework for algorithm comparison.
The latest aeon release is v0.10.0. You can view the full changelog here.
Our webpage and documentation are available at https://aeon-toolkit.org.
The following modules are still considered experimental, and the deprecation policy does not apply: anomaly_detection, benchmarking, segmentation, similarity_search, testing, transformations/series, visualisation.
aeon requires a Python version of 3.9 or greater. Our full installation guide is available in our documentation.
The easiest way to install aeon is via pip:
pip install aeon
Some estimators require additional packages to be installed. If you want to install the full package with all optional dependencies, you can use:
pip install aeon[all_extras]
Instructions for installation from the GitHub source can be found here.
The best place to get started for all aeon packages is our getting started guide.
Below we provide a quick example of how to use aeon for classification and clustering.

The classifier used in the example can easily be swapped out for a regressor, and the class labels for numeric targets; this flexibility allows seamless adaptation to different tasks and datasets while preserving API consistency. A sketch of this swap follows the classification example.
import numpy as np
from aeon.classification.distance_based import KNeighborsTimeSeriesClassifier
X = [[[1, 2, 3, 4, 5, 5]],  # 3D array example (univariate)
     [[1, 2, 3, 4, 4, 2]],  # three samples, one channel, series length of six
     [[8, 7, 6, 5, 4, 4]]]
y = ['low', 'low', 'high'] # class labels for each sample
X = np.array(X)
y = np.array(y)
clf = KNeighborsTimeSeriesClassifier(distance="dtw")
clf.fit(X, y) # fit the classifier on train data
>>> KNeighborsTimeSeriesClassifier()
X_test = np.array(
    [[[2, 2, 2, 2, 2, 2]], [[5, 5, 5, 5, 5, 5]], [[6, 6, 6, 6, 6, 6]]]
)
y_pred = clf.predict(X_test) # make class predictions on new data
>>> ['low' 'high' 'high']
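
As mentioned above, the same workflow applies to regression. A minimal sketch, assuming the KNeighborsTimeSeriesRegressor in aeon.regression.distance_based and the same toy series as above, with illustrative numeric targets in place of class labels:

import numpy as np
from aeon.regression.distance_based import KNeighborsTimeSeriesRegressor

X = np.array([[[1, 2, 3, 4, 5, 5]],  # same 3D collection: three samples, one channel
              [[1, 2, 3, 4, 4, 2]],
              [[8, 7, 6, 5, 4, 4]]])
y = np.array([1.5, 1.2, 4.8])  # numeric targets instead of class labels (illustrative values)

reg = KNeighborsTimeSeriesRegressor(distance="dtw")
reg.fit(X, y)  # fit the regressor on train data
reg.predict(np.array([[[2, 2, 2, 2, 2, 2]]]))  # predict a numeric value for new data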
import numpy as np
from aeon.clustering import TimeSeriesKMeans
X = np.array([[[1, 2, 3, 4, 5, 5]],  # 3D array example (univariate)
              [[1, 2, 3, 4, 4, 2]],  # three samples, one channel, series length of six
              [[8, 7, 6, 5, 4, 4]]])
clu = TimeSeriesKMeans(distance="dtw", n_clusters=2)
clu.fit(X) # fit the clusterer on train data
>>> TimeSeriesKMeans(distance='dtw', n_clusters=2)
clu.labels_ # get training cluster labels
>>> array([0, 0, 1])
X_test = np.array(
    [[[2, 2, 2, 2, 2, 2]], [[5, 5, 5, 5, 5, 5]], [[6, 6, 6, 6, 6, 6]]]
)
clu.predict(X_test) # Assign clusters to new data
>>> array([1, 0, 0])
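
Because aeon estimators are compatible with scikit-learn, they can be passed directly to standard scikit-learn utilities such as cross_val_score. A minimal sketch, assuming a slightly larger illustrative dataset so that each stratified fold contains both classes:

import numpy as np
from sklearn.model_selection import cross_val_score
from aeon.classification.distance_based import KNeighborsTimeSeriesClassifier

# six univariate series of length six with two balanced classes (illustrative data)
X = np.array([[[1, 2, 3, 4, 5, 5]], [[1, 2, 3, 4, 4, 2]], [[2, 3, 4, 5, 6, 6]],
              [[8, 7, 6, 5, 4, 4]], [[9, 8, 7, 6, 5, 5]], [[8, 8, 7, 6, 5, 3]]])
y = np.array(["low", "low", "low", "high", "high", "high"])

clf = KNeighborsTimeSeriesClassifier(distance="dtw")
scores = cross_val_score(clf, X, y, cv=3)  # 3-fold cross-validation via scikit-learn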
Type | Platforms |
---|---|
🐛 Bug Reports | GitHub Issue Tracker |
✨ Feature Requests & Ideas | GitHub Issue Tracker & Slack |
💻 Usage Questions | GitHub Discussions & Slack |
💬 General Discussion | GitHub Discussions & Slack |
🏭 Contribution & Development | Slack |