Adding TDD for false positives, with a test case to validate that the model can learn from false positives #124
Conversation
Build failed.
The test case should be red (failing) unless we are able to pass the false positives into the SOM, and it is true that the SOM can learn from false-positive data.
Build failed.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Build failed.
recheck
Build failed.
recheck
@goern says "seems to be a problem between Zuul and OpenShift"
Build failed.
recheck
Build failed.
recheck
@zmhassan do you know how much memory your pytest runs require?
Build failed.
recheck
Build failed.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Build failed.
Related ticket: https://url.corp.redhat.com/b7d1e80
The "module failure" seems to be related to a timeout between the Zuul executor and the pod running pytest. We can reproduce the behaviour via PR thoth-station/srcops-testing#109 and via an Ansible playbook. The root cause might be the version of Ansible used by Zuul. To be continued...
Build failed.
Build succeeded.
… learn from false positives
Build succeeded.
@durandom So I removed the WIP and rebased. Waiting on the merge?
Thanks @zmhassan, looks good
This PR is work in progress and shall be used to start a discussion on how we can implement this test case to validate our assumptions.
Description
This PR contains a unit test to validate whether the model can learn from false positives. We are simulating the scenario of a human in the loop providing feedback that a log which is not an anomaly has been flagged incorrectly. Our model should learn from this and not report the same false prediction again. We could filter these out programmatically, but that is not machine learning. The model should learn from the data over time and get smarter.
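The feedback loop described above could be expressed as a red TDD test roughly like the following. This is a minimal sketch under assumptions, not the PR's actual code: `ToyDetector`, `learn_false_positive`, and the numeric "log features" are hypothetical stand-ins for the real SOM model, and the toy's "learning" is simply re-training on the corrected sample.

```python
class ToyDetector:
    """Hypothetical stand-in for the SOM-based anomaly detector.

    A sample is flagged as anomalous when its distance to the
    nearest known-normal sample exceeds a threshold.
    """

    def __init__(self, samples, threshold):
        self.samples = list(samples)   # features of known-normal logs
        self.threshold = threshold

    def _distance(self, x):
        # distance to the closest known-normal sample
        return min(abs(x - s) for s in self.samples)

    def is_anomaly(self, x):
        return self._distance(x) > self.threshold

    def learn_false_positive(self, x):
        # Human-in-the-loop feedback: the sample was flagged
        # incorrectly, so re-train on it as a normal sample.
        self.samples.append(x)


def test_model_learns_from_false_positive():
    model = ToyDetector(samples=[1.0, 1.5, 2.0], threshold=0.5)
    benign = 5.0

    # The model initially reports a false positive...
    assert model.is_anomaly(benign)

    # ...a human marks it as such...
    model.learn_false_positive(benign)

    # ...and the model must not repeat the same false prediction.
    assert not model.is_anomaly(benign)
```

Until the real SOM model exposes a feedback mechanism, the second assertion is the one that stays red, which matches the TDD intent of this PR.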
What is included in this PR
Unit test to validate this.
What is missing is a function for passing feedback into the self-organizing map (SOM) model.
@MichaelClifford What do you recommend is the best approach in passing false positives into SOM?
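One possible approach (an assumption on my part, not a confirmed answer): treat each confirmed false positive as extra training data and run a few additional SOM update steps on it, so its best-matching unit moves toward the sample and the quantization error drops below the anomaly threshold. A minimal sketch on a tiny 1-D SOM in pure Python; `TinySom` and `feed_false_positive` are illustrative names, not this project's API:

```python
class TinySom:
    """Toy 1-D SOM: each codebook entry is a single float."""

    def __init__(self, weights, threshold, lr=0.5):
        self.weights = list(weights)   # 1-D codebook vectors
        self.threshold = threshold     # quantization-error cutoff
        self.lr = lr                   # learning rate

    def _bmu(self, x):
        # index of the best-matching unit (closest codebook entry)
        return min(range(len(self.weights)),
                   key=lambda i: abs(self.weights[i] - x))

    def quantization_error(self, x):
        return abs(self.weights[self._bmu(x)] - x)

    def is_anomaly(self, x):
        return self.quantization_error(x) > self.threshold

    def feed_false_positive(self, x, steps=10):
        # Standard SOM weight update applied only to the feedback
        # sample: repeatedly pull the BMU toward x, so the region
        # around x is treated as normal afterwards.
        for _ in range(steps):
            i = self._bmu(x)
            self.weights[i] += self.lr * (x - self.weights[i])
```

With `weights=[0.0, 10.0]` and `threshold=1.0`, a sample at `5.0` starts out anomalous; after `feed_false_positive(5.0)` the nearest unit has converged close to `5.0` and the sample is no longer flagged. A neighborhood function and decaying learning rate (as in a full SOM) are omitted here for brevity.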