Misleading & clumsy ModSecurity doc #8388

Open
damienleger opened this issue Mar 24, 2022 · 14 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@damienleger

hello,

I'm on ingress-nginx 1.1.1 and I've been struggling with the ModSecurity documentation. I want to explain why, since the docs are usually clean, but for ModSecurity they really aren't.

ATM on the website there are the following pages:

[1] is kinda OK but misses a critical point IMHO: it never says that both implementations are possible, per-Ingress or controller-wide, something you have to conclude yourself by checking [2] and [3] (see the sketch after this list)

[2] is full of clumsy phrasing and mistakes IMHO; I will elaborate below

[3] has no examples of the ModSecurity ConfigMap parameters; those must be derived from their Ingress annotation counterparts in [2], which is really weird. It seems like this feature is beta or something 🤨
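
To illustrate what I mean by the two implementations (a minimal sketch on my side; the annotation is the documented per-Ingress counterpart of the enable-modsecurity ConfigMap key used later in this issue):

# controller-wide: key in the controller ConfigMap
enable-modsecurity: "true"

# per Ingress: annotation on an individual Ingress object
nginx.ingress.kubernetes.io/enable-modsecurity: "true"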

In particular, the ConfigMap equivalent of

nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"

is a bit tricky and would be worth detailing (here is what I eventually found works):

http-snippet: |- 
  modsecurity_transaction_id "$request_id";

About [2],

[screenshot of the documentation]

So, rephrasing: enabling ModSecurity in the ConfigMap enables it for every Ingress by default, unless you specifically disable it. Yet the example configuration shows how to enable ModSecurity on an Ingress instead of how to disable it 🤨
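
What I would expect the doc to show instead is the opt-out side, something like this (my assumption; I have not verified that the "false" value actually wins over the controller-wide setting):

# controller ConfigMap: ModSecurity on for everything
enable-modsecurity: "true"

# annotation on the one Ingress that should opt out
nginx.ingress.kubernetes.io/enable-modsecurity: "false"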

[screenshot of the documentation]

This one is true, yet if you do that at the controller level you get a ton of log spam from 127.0.0.1. This tweak should be either documented or applied by default IMHO.
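
The tweak I mean is the same allowlist rule I use in the tests below, e.g.:

modsecurity-snippet: |-
  SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"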

[screenshot of the documentation]

This one I mentioned above

[screenshot of the documentation]

So I ran a lot of tests to understand this one:

default=y, crs=n, snippet=n

# configmap
enable-modsecurity: "true"

# /etc/nginx/nginx.conf
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;

OK

default=y, crs=y, snippet=n

# configmap
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"

# /etc/nginx/nginx.conf
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;

OK

default=y, crs=y, snippet=y

# configmap
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"
modsecurity-snippet: SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"

# /etc/nginx/nginx.conf
modsecurity on;
modsecurity_rules '
        SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
        ';
modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;

KO! Both the snippet and the CRS are here, while the doc says only the snippet takes effect in that case.
That is clearly not the case, because when trying this:

# configmap
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"
modsecurity-snippet: |-
  SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

the pod doesn't even start and fails with the error [emerg] 931#931: "modsecurity_rules_file" directive Rule id: 901001 is duplicated, which clearly means both the snippet and modsecurity_rules_file take effect here
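
If the controller generates nginx.conf the same way as in the previous case, the result presumably looks like this (my assumption, not captured from the pod), i.e. the CRS ends up loaded twice, which would explain the duplicated rule id 901001:

# /etc/nginx/nginx.conf (presumed)
modsecurity on;
modsecurity_rules '
        SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
        Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
        ';
modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;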

[screenshot of the documentation]

And this last part I don't really get: the 0.24.1 instructions seem wrong, since the CRS doc specifically says that the default conf must be loaded BEFORE the CRS, but here it's done after 🤨

And for 0.25.0 and above, if you do this and keep enable-owasp-modsecurity-crs: "true", your pod doesn't even start.

What eventually worked for me is this, btw:

enable-modsecurity: "true"
http-snippet: |- 
  modsecurity_transaction_id "$request_id";
modsecurity-snippet: |-
  SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
  Include /etc/nginx/modsecurity/modsecurity.conf
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

(btw I don't really get #8021, because I'm supposed to have this "bug" in 1.1.1, meaning the config above should load /etc/nginx/modsecurity/modsecurity.conf twice, but it doesn't, since the pods do start 🤨)

What do you want to happen?

Better documentation for the community, so that people don't struggle with it as I did.

Is there currently another issue associated with this?

I feel like this problem of snippet/default/CRS interaction has been partially addressed in #8021, but not entirely.

Does it require a particular kubernetes version?

I don't think so. I'm on EKS 1.21 (the latest ATM).

/kind documentation
/remove-kind feature

@damienleger damienleger added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 24, 2022
@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. and removed kind/feature Categorizes issue or PR as related to a new feature. labels Mar 24, 2022
@k8s-ci-robot
Contributor

@damienleger: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Mar 24, 2022
@xianzheTM

xianzheTM commented Mar 31, 2022

The documentation really confuses me.
[screenshot of the documentation]
I want to block requests with 'class' in the URL, but this configuration didn't take effect, so I began to wonder if I had to configure every Ingress.
----update----
I set it on each Ingress and it worked. I don't understand why it can't be set in the ConfigMap so that it works for every Ingress.
----update----
I saw #8021; maybe that's why.
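
For the record, the per-Ingress setup I mean is roughly like this (sketch only; the rule id, status and match are illustrative, not my exact rule):

# annotations on the Ingress
nginx.ingress.kubernetes.io/enable-modsecurity: "true"
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  SecRuleEngine On
  SecRule REQUEST_URI "@contains class" "id:10001,phase:1,deny,status:403,log"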

@arthurk

arthurk commented Apr 13, 2022

Thanks @damienleger, your comment helped me out a lot. I updated to the latest ingress-nginx Helm chart version (from 4.0.9 to 4.0.19) and ModSecurity stopped working; using your solution made it work again.

In 4.0.9 I was using this, which worked fine:

      enable-modsecurity: true
      enable-owasp-modsecurity-crs: true
      modsecurity-snippet: |
        SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"

It stopped working with 4.0.19. Now I have to use:

      enable-modsecurity: true
      modsecurity-snippet: |
        SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
        Include /etc/nginx/modsecurity/modsecurity.conf
        Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

@mac-chaffee
Contributor

mac-chaffee commented Apr 13, 2022

Apparently there's also inconsistency in which modsecurity settings take precedence when there are conflicts (owasp-modsecurity/ModSecurity-nginx#280), which undermines a lot of the assumptions of #8021. In fact, that PR broke my modsecurity settings.

To anyone else finding this thread, my advice is to just not use any of the built-in modsecurity settings. It's pretty easy to configure 100% manually:

main-snippet: "load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;"
http-snippet: |
  modsecurity on;
  modsecurity_rules '
    # Yes, this does somehow take precedence over the modsecurity_rules_file below
    SecRuleEngine On
    # Put custom rules here...
  ';
  modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
  modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;

@vijay-veeranki

Hi @damienleger @arthurk

We upgraded the chart from 4.0.19 to 4.1.1, and this configuration stopped working.

      enable-modsecurity: true
      modsecurity-snippet: |
        Include /etc/nginx/modsecurity/modsecurity.conf
        Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

Did you try upgrading to the latest version of the chart, and did you see an issue similar to the one I am facing now?

@damienleger
Author

Hi @vijay-veeranki. I'm afraid I cannot help you or anybody else; I decided not to use ModSecurity soon after writing this GitHub issue. I was only running a PoC at the time, actually. Good luck with the issue you are facing.

@macmiranda

I've tried gazillions of combinations of ConfigMap settings and Ingress annotations. Nothing in that documentation makes sense to me.
I get that the snippet takes precedence over the OWASP setting, but how do you override the settings in /etc/nginx/modsecurity/modsecurity.conf?

  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "false"
  modsecurity-snippet: |-
    SecRuleEngine On
    SecAuditEngine On
    SecAuditLogParts ABIJDEFHZ
    SecAuditLogFormat JSON
    SecAuditLogType Serial
    SecAuditLog /dev/stdout

According to the documentation I should get audit logs in JSON format sent to stdout, but I keep getting the same old /var/log/modsec_audit.log, and it is not JSON-formatted.
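
One thing that might work (untested on my side; it assumes that a directive placed after the Include overrides the value from the included file, and that a ConfigMap-level snippet replaces the controller's own loading of modsecurity.conf, as described earlier in this thread):

  enable-modsecurity: "true"
  modsecurity-snippet: |-
    Include /etc/nginx/modsecurity/modsecurity.conf
    SecRuleEngine On
    SecAuditEngine On
    SecAuditLogParts ABIJDEFHZ
    SecAuditLogFormat JSON
    SecAuditLogType Serial
    SecAuditLog /dev/stdout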

@mac-chaffee
Contributor

One possibility for the confusion is that updates to the modsecurity settings roll out slowly as the nginx workers restart. If there are long-lived connections keeping old nginx workers from restarting, then it can seem like changes to the modsecurity settings aren't doing anything.

But also based on this thread it sounds like there is a complex/buggy interaction between inline modsecurity settings (modsecurity-snippet) and the files (/etc/nginx/modsecurity/modsecurity.conf). I think someone has to just go look at the modsecurity-nginx source code and find out for sure what's happening.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 31, 2022
@vijay-veeranki

Please keep this issue open

@arthurk

arthurk commented Sep 10, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 10, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2022
@macmiranda

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2022
@tbondarchuk

The following seems to work for me on chart 4.5.2, controller 1.6.4:

    enable-modsecurity: true
    enable-owasp-modsecurity-crs: true
    modsecurity-snippet: |
      SecRuleEngine On
      SecStatusEngine Off
      SecAuditLog /dev/stdout
      SecAuditLogFormat JSON
      SecAuditEngine RelevantOnly
      SecRule REMOTE_ADDR "@ipMatch 127.0.0.1" "id:87,phase:1,pass,nolog,ctl:ruleEngine=Off"
      SecAction "id:900200,phase:1,nolog,pass,t:none,setvar:tx.allowed_methods=GET HEAD POST OPTIONS PUT PATCH DELETE"
