
Analytics docker image unhealthy when self-hosting due to migration issue (sources not found) #28348

Open
raman04-byte opened this issue Aug 2, 2024 · 36 comments
Labels
awaiting-details (For issues needing detail from the opener), bug (Something isn't working), external-issue, needs-analysis (Issue status is unknown and/or not possible to triage with the current info), self-hosted (Issues related to self hosting)

Comments

@raman04-byte

Bug report

  • I confirm this is a bug with Supabase, not with my own application.
  • I confirm I have searched the Docs, GitHub Discussions, and Discord.

Describe the bug

I am trying to deploy self-hosted Supabase to a Red Hat Linux server, but I am not able to start the containers; it shows that analytics can't start.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

  1. Clone the repo
  2. Run export DOCKER_DEFAULT_PLATFORM=linux/amd64
  3. Build the studio image
  4. You will get the error (see the command sketch below)
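
A minimal sketch of the commands behind these steps, assuming the repository is cloned as supabase and the studio Dockerfile lives at apps/studio/Dockerfile (the same paths used in the script posted later in this thread):

git clone https://github.com/supabase/supabase.git
cd supabase
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build . -f apps/studio/Dockerfile --target production -t studio:latest
cd docker
docker compose up -d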

Expected behavior

It should start normally

Screenshots

[two screenshots attached]

System information

  • OS: macOS
  • Browser (if applies) [e.g. chrome, safari]
  • Version of supabase-js: [e.g. 6.0.2]
  • Version of Node.js: [e.g. 10.10.0]

Additional context

I am on macOS and I am building the image for a Red Hat Linux server, and it is failing.

@raman04-byte raman04-byte added the bug Something isn't working label Aug 2, 2024
@raman04-byte raman04-byte changed the title from "The supabase localhost is not working in Linux" to "The supabase localhost is not working in Red Hat Linux Server" Aug 2, 2024
@raman04-byte
Author

@Akash187 Can you please help with this issue? We have been stuck on this problem for a week now. The analytics image is just unable to start and we are getting errors while starting the images.

@raman04-byte
Copy link
Author

@encima Any updates, please?

@Akash187
Contributor

Akash187 commented Aug 5, 2024

@raman04-byte for me it is working fine on my MacBook with CLI v1.187.10.

@raman04-byte
Author

raman04-byte commented Aug 5, 2024

@raman04-byte for me it is working fine on my MacBook with CLI v1.187.10.

I am not using the CLI.

Create the image for linux/amd64 and run it. You can use this:
DOCKER_DEFAULT_PLATFORM=linux/amd64
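
To confirm the built image really targets amd64 (worth checking when building on an Apple Silicon Mac), something like the following should print linux/amd64; the image name studio:latest matches the build command posted later in this thread:

docker image inspect studio:latest --format '{{.Os}}/{{.Architecture}}'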

@Akash187
Contributor

Akash187 commented Aug 5, 2024

Are you using the self-hosted version?

@encima
Member

encima commented Aug 5, 2024

@raman04-byte no update in the few hours since you opened the issue :)
You mention that your OS is macOS in the issue template; please correct that to Red Hat.

Then please ensure (example commands are sketched below):

  • you are on the latest commit for master
  • you have cleared your cache and pulled the latest images
  • you post the image versions here
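
A sketch of commands that would cover these checks, assuming the repository is cloned locally and the compose project lives in the docker/ directory:

git pull origin master                  # latest commit on master
cd docker
docker compose pull                     # refresh the pinned images
docker compose down --remove-orphans    # clear any stale containers
docker builder prune -f                 # drop the local build cache
docker compose images                   # image names and tags to post back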

@raman04-byte
Author

raman04-byte commented Aug 6, 2024

@raman04-byte no update in the few hours since you opened the issue :) You mention that your OS is macOS in the issue template; please correct that to Red Hat.

Then please ensure

  • you are on the latest commit for master
  • you have cleared your cache and pulled the latest images
  • you post the image versions here

I am on the latest commit of master and have cleared all the cache (even reinstalled Docker), and all the images are the latest ones.

Images with tags:

[screenshot attached]

And yes, I am using the latest version of Supabase.

[screenshot attached]

@raman04-byte
Author

Are you using the self-hosted version?

I am using self-hosting with Docker, by cloning the repo and then running compose up.
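
For reference, the standard self-hosting flow from the docs looks roughly like this (the .env values must be filled in before starting):

git clone --depth 1 https://github.com/supabase/supabase
cd supabase/docker
cp .env.example .env
docker compose pull
docker compose up -d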

@raman04-byte
Author

raman04-byte commented Aug 6, 2024

These are all the steps we are doing.
@encima @Akash187

I did not want to pull the studio image from Docker Hub, so we made one modification in the docker-compose.yml file and ran it using a script.

[screenshot attached]

docker-compose.yml:

# Usage
#   Start:          docker compose up
#   With helpers:   docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml up
#   Stop:           docker compose down
#   Destroy:        docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans

name: supabase
version: "3.8"

services:
  studio:
    container_name: supabase-studio
    image: studio:latest
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "node",
          "-e",
          "require('http').get('http://localhost:3000/api/profile', (r) => {if (r.statusCode !== 200) throw new Error(r.statusCode)})"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

      DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
      DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}

      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      AUTH_JWT_SECRET: ${JWT_SECRET}

      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_URL: http://analytics:4000
      NEXT_PUBLIC_ENABLE_LOGS: true
      # Comment to use Big Query backend for analytics
      NEXT_ANALYTICS_BACKEND_PROVIDER: postgres
      # Uncomment to use Big Query backend for analytics
      # NEXT_ANALYTICS_BACKEND_PROVIDER: bigquery

  kong:
    container_name: supabase-kong
    image: kong:2.8.1
    restart: unless-stopped
    # https://unix.stackexchange.com/a/294837
    entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
    ports:
      - ${KONG_HTTP_PORT}:8000/tcp
      - ${KONG_HTTPS_PORT}:8443/tcp
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
      # https://github.com/supabase/cli/issues/14
      KONG_DNS_ORDER: LAST,A,CNAME
      KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
      DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
    volumes:
      # https://github.com/supabase/supabase/issues/12661
      - ./volumes/api/kong.yml:/home/kong/temp.yml:ro

  auth:
    container_name: supabase-auth
    image: supabase/gotrue:v2.151.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:9999/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: 9999
      API_EXTERNAL_URL: ${API_EXTERNAL_URL}

      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}

      GOTRUE_SITE_URL: ${SITE_URL}
      GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
      GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}

      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_EXP: ${JWT_EXPIRY}
      GOTRUE_JWT_SECRET: ${JWT_SECRET}

      GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
      GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
      GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
      # GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
      # GOTRUE_SMTP_MAX_FREQUENCY: 1s
      GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
      GOTRUE_SMTP_HOST: ${SMTP_HOST}
      GOTRUE_SMTP_PORT: ${SMTP_PORT}
      GOTRUE_SMTP_USER: ${SMTP_USER}
      GOTRUE_SMTP_PASS: ${SMTP_PASS}
      GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
      GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
      GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}

      GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
      GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
      # Uncomment to enable custom access token hook. You'll need to create a public.custom_access_token_hook function and grant necessary permissions.
      # See: https://supabase.com/docs/guides/auth/auth-hooks#hook-custom-access-token for details
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED="true"
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_URI="pg-functions://postgres/public/custom_access_token_hook"

      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_ENABLED="true"
      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/mfa_verification_attempt"

      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_ENABLED="true"
      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/password_verification_attempt"




  rest:
    container_name: supabase-rest
    image: postgrest/postgrest:v12.2.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
      PGRST_DB_USE_LEGACY_GUCS: "false"
      PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
      PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
    command: "postgrest"

  realtime:
    # This container name looks inconsistent but is correct because realtime constructs tenant id by parsing the subdomain
    container_name: realtime-dev.supabase-realtime
    image: supabase/realtime:v2.29.15
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-sSfL",
          "--head",
          "-o",
          "/dev/null",
          "-H",
          "Authorization: Bearer ${ANON_KEY}",
          "http://localhost:4000/api/tenants/realtime-dev/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      PORT: 4000
      DB_HOST: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_USER: supabase_admin
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
      DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
      DB_ENC_KEY: supabaserealtime
      API_JWT_SECRET: ${JWT_SECRET}
      SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      ERL_AFLAGS: -proto_dist inet_tcp
      DNS_NODES: "''"
      RLIMIT_NOFILE: "10000"
      APP_NAME: realtime
      SEED_SELF_HOST: true

  # To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up
  storage:
    container_name: supabase-storage
    image: supabase/storage-api:v1.0.6
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      rest:
        condition: service_started
      imgproxy:
        condition: service_started
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:5000/status"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      ANON_KEY: ${ANON_KEY}
      SERVICE_KEY: ${SERVICE_ROLE_KEY}
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: ${JWT_SECRET}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      FILE_SIZE_LIMIT: 52428800
      STORAGE_BACKEND: file
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
      TENANT_ID: stub
      # TODO: https://github.com/supabase/storage-api/issues/55
      REGION: stub
      GLOBAL_S3_BUCKET: stub
      ENABLE_IMAGE_TRANSFORMATION: "true"
      IMGPROXY_URL: http://imgproxy:5001
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  imgproxy:
    container_name: supabase-imgproxy
    image: darthsim/imgproxy:v3.8.0
    healthcheck:
      test: [ "CMD", "imgproxy", "health" ]
      timeout: 5s
      interval: 5s
      retries: 3
    environment:
      IMGPROXY_BIND: ":5001"
      IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
      IMGPROXY_USE_ETAG: "true"
      IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  meta:
    container_name: supabase-meta
    image: supabase/postgres-meta:v0.83.2
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PG_META_PORT: 8080
      PG_META_DB_HOST: ${POSTGRES_HOST}
      PG_META_DB_PORT: ${POSTGRES_PORT}
      PG_META_DB_NAME: ${POSTGRES_DB}
      PG_META_DB_USER: supabase_admin
      PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}

  functions:
    container_name: supabase-edge-functions
    image: supabase/edge-runtime:v1.55.0
    restart: unless-stopped
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      JWT_SECRET: ${JWT_SECRET}
      SUPABASE_URL: http://kong:8000
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
      SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      # TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
      VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
    volumes:
      - ./volumes/functions:/home/deno/functions:Z
    command:
      - start
      - --main-service
      - /home/deno/functions/main

  analytics:
    container_name: supabase-analytics
    image: supabase/logflare:1.4.0
    healthcheck:
      test: [ "CMD", "curl", "http://localhost:4000/health" ]
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
    # Uncomment to use Big Query backend for analytics
    # volumes:
    #   - type: bind
    #     source: ${PWD}/gcloud.json
    #     target: /opt/app/rel/logflare/bin/gcloud.json
    #     read_only: true
    environment:
      LOGFLARE_NODE_HOST: 127.0.0.1
      DB_USERNAME: supabase_admin
      DB_DATABASE: ${POSTGRES_DB}
      DB_HOSTNAME: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_SCHEMA: _analytics
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_SINGLE_TENANT: true
      LOGFLARE_SUPABASE_MODE: true
      LOGFLARE_MIN_CLUSTER_SIZE: 1

      # Comment variables to use Big Query backend for analytics
      POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      POSTGRES_BACKEND_SCHEMA: _analytics
      LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
      # Uncomment to use Big Query backend for analytics
      # GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
      # GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
    ports:
      - 4000:4000

  # Comment out everything below this point if you are using an external Postgres database
  db:
    container_name: supabase-db
    image: supabase/postgres:15.1.1.78
    healthcheck:
      test: pg_isready -U postgres -h localhost
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      vector:
        condition: service_healthy
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/postgresql.conf
      - -c
      - log_min_messages=fatal # prevents Realtime polling queries from appearing in logs
    restart: unless-stopped
    ports:
      # Pass down internal port because it's set dynamically by other services
      - ${POSTGRES_PORT}:${POSTGRES_PORT}
    environment:
      POSTGRES_HOST: /var/run/postgresql
      PGPORT: ${POSTGRES_PORT}
      POSTGRES_PORT: ${POSTGRES_PORT}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
      POSTGRES_DB: ${POSTGRES_DB}
      JWT_SECRET: ${JWT_SECRET}
      JWT_EXP: ${JWT_EXPIRY}
    volumes:
      - ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
      # Must be superuser to create event trigger
      - ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
      # Must be superuser to alter reserved role
      - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
      # Initialize the database settings with JWT_SECRET and JWT_EXP
      - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:Z
      # PGDATA directory is persisted between restarts
      - ./volumes/db/data:/var/lib/postgresql/data:Z
      # Changes required for Analytics support
      - ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
      # Use named volume to persist pgsodium decryption key between restarts
      - db-config:/etc/postgresql-custom

  vector:
    container_name: supabase-vector
    image: timberio/vector:0.28.1-alpine
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://vector:9001/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    volumes:
      - ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
      - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
    environment:
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
    command: [ "--config", "etc/vector/vector.yml" ]

volumes:
  db-config:

docker-compose2.yml:

# Usage
#   Start:          docker compose up
#   With helpers:   docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml up
#   Stop:           docker compose down
#   Destroy:        docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans

name: supabase
version: "3.8"

services:
  # studio:
  #   container_name: supabase-studio
  #   image: studio:latest
  #   restart: unless-stopped
  #   healthcheck:
  #     test:
  #       [
  #         "CMD",
  #         "node",
  #         "-e",
  #         "require('http').get('http://localhost:3000/api/profile', (r) => {if (r.statusCode !== 200) throw new Error(r.statusCode)})"
  #       ]
  #     timeout: 5s
  #     interval: 5s
  #     retries: 3
  #   depends_on:
  #     analytics:
  #       condition: service_healthy
  #   environment:
  #     STUDIO_PG_META_URL: http://meta:8080
  #     POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  #     DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
  #     DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}

  #     SUPABASE_URL: http://kong:8000
  #     SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
  #     SUPABASE_ANON_KEY: ${ANON_KEY}
  #     SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
  #     AUTH_JWT_SECRET: ${JWT_SECRET}

  #     LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
  #     LOGFLARE_URL: http://analytics:4000
  #     NEXT_PUBLIC_ENABLE_LOGS: true
  #     # Comment to use Big Query backend for analytics
  #     NEXT_ANALYTICS_BACKEND_PROVIDER: postgres
  #    # Uncomment to use Big Query backend for analytics
  #    # NEXT_ANALYTICS_BACKEND_PROVIDER: bigquery

  kong:
    container_name: supabase-kong
    image: kong:2.8.1
    restart: unless-stopped
    # https://unix.stackexchange.com/a/294837
    entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
    ports:
      - ${KONG_HTTP_PORT}:8000/tcp
      - ${KONG_HTTPS_PORT}:8443/tcp
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
      # https://github.com/supabase/cli/issues/14
      KONG_DNS_ORDER: LAST,A,CNAME
      KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
      DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
    volumes:
      # https://github.com/supabase/supabase/issues/12661
      - ./volumes/api/kong.yml:/home/kong/temp.yml:ro

  auth:
    container_name: supabase-auth
    image: supabase/gotrue:v2.151.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:9999/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: 9999
      API_EXTERNAL_URL: ${API_EXTERNAL_URL}

      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}

      GOTRUE_SITE_URL: ${SITE_URL}
      GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
      GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}

      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_EXP: ${JWT_EXPIRY}
      GOTRUE_JWT_SECRET: ${JWT_SECRET}

      GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
      GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
      GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
      # GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
      # GOTRUE_SMTP_MAX_FREQUENCY: 1s
      GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
      GOTRUE_SMTP_HOST: ${SMTP_HOST}
      GOTRUE_SMTP_PORT: ${SMTP_PORT}
      GOTRUE_SMTP_USER: ${SMTP_USER}
      GOTRUE_SMTP_PASS: ${SMTP_PASS}
      GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
      GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
      GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}

      GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
      GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
      # Uncomment to enable custom access token hook. You'll need to create a public.custom_access_token_hook function and grant necessary permissions.
      # See: https://supabase.com/docs/guides/auth/auth-hooks#hook-custom-access-token for details
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED="true"
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_URI="pg-functions://postgres/public/custom_access_token_hook"

      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_ENABLED="true"
      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/mfa_verification_attempt"

      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_ENABLED="true"
      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_URI="pg-functions://postgres/public/password_verification_attempt"




  rest:
    container_name: supabase-rest
    image: postgrest/postgrest:v12.2.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
      PGRST_DB_USE_LEGACY_GUCS: "false"
      PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
      PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
    command: "postgrest"

  realtime:
    # This container name looks inconsistent but is correct because realtime constructs tenant id by parsing the subdomain
    container_name: realtime-dev.supabase-realtime
    image: supabase/realtime:v2.29.15
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-sSfL",
          "--head",
          "-o",
          "/dev/null",
          "-H",
          "Authorization: Bearer ${ANON_KEY}",
          "http://localhost:4000/api/tenants/realtime-dev/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      PORT: 4000
      DB_HOST: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_USER: supabase_admin
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
      DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
      DB_ENC_KEY: supabaserealtime
      API_JWT_SECRET: ${JWT_SECRET}
      SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      ERL_AFLAGS: -proto_dist inet_tcp
      DNS_NODES: "''"
      RLIMIT_NOFILE: "10000"
      APP_NAME: realtime
      SEED_SELF_HOST: true

  # To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up
  storage:
    container_name: supabase-storage
    image: supabase/storage-api:v1.0.6
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      rest:
        condition: service_started
      imgproxy:
        condition: service_started
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:5000/status"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      ANON_KEY: ${ANON_KEY}
      SERVICE_KEY: ${SERVICE_ROLE_KEY}
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: ${JWT_SECRET}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      FILE_SIZE_LIMIT: 52428800
      STORAGE_BACKEND: file
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
      TENANT_ID: stub
      # TODO: https://github.com/supabase/storage-api/issues/55
      REGION: stub
      GLOBAL_S3_BUCKET: stub
      ENABLE_IMAGE_TRANSFORMATION: "true"
      IMGPROXY_URL: http://imgproxy:5001
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  imgproxy:
    container_name: supabase-imgproxy
    image: darthsim/imgproxy:v3.8.0
    healthcheck:
      test: [ "CMD", "imgproxy", "health" ]
      timeout: 5s
      interval: 5s
      retries: 3
    environment:
      IMGPROXY_BIND: ":5001"
      IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
      IMGPROXY_USE_ETAG: "true"
      IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  meta:
    container_name: supabase-meta
    image: supabase/postgres-meta:v0.83.2
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PG_META_PORT: 8080
      PG_META_DB_HOST: ${POSTGRES_HOST}
      PG_META_DB_PORT: ${POSTGRES_PORT}
      PG_META_DB_NAME: ${POSTGRES_DB}
      PG_META_DB_USER: supabase_admin
      PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}

  functions:
    container_name: supabase-edge-functions
    image: supabase/edge-runtime:v1.55.0
    restart: unless-stopped
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      JWT_SECRET: ${JWT_SECRET}
      SUPABASE_URL: http://kong:8000
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
      SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      # TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
      VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
    volumes:
      - ./volumes/functions:/home/deno/functions:Z
    command:
      - start
      - --main-service
      - /home/deno/functions/main

  analytics:
    container_name: supabase-analytics
    image: supabase/logflare:1.4.0
    healthcheck:
      test: [ "CMD", "curl", "http://localhost:4000/health" ]
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
    # Uncomment to use Big Query backend for analytics
    # volumes:
    #   - type: bind
    #     source: ${PWD}/gcloud.json
    #     target: /opt/app/rel/logflare/bin/gcloud.json
    #     read_only: true
    environment:
      LOGFLARE_NODE_HOST: 127.0.0.1
      DB_USERNAME: supabase_admin
      DB_DATABASE: ${POSTGRES_DB}
      DB_HOSTNAME: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_SCHEMA: _analytics
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_SINGLE_TENANT: true
      LOGFLARE_SUPABASE_MODE: true
      LOGFLARE_MIN_CLUSTER_SIZE: 1

      # Comment variables to use Big Query backend for analytics
      POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      POSTGRES_BACKEND_SCHEMA: _analytics
      LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
      # Uncomment to use Big Query backend for analytics
      # GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
      # GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
    ports:
      - 4000:4000

  # Comment out everything below this point if you are using an external Postgres database
  db:
    container_name: supabase-db
    image: supabase/postgres:15.1.1.78
    healthcheck:
      test: pg_isready -U postgres -h localhost
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      vector:
        condition: service_healthy
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/postgresql.conf
      - -c
      - log_min_messages=fatal # prevents Realtime polling queries from appearing in logs
    restart: unless-stopped
    ports:
      # Pass down internal port because it's set dynamically by other services
      - ${POSTGRES_PORT}:${POSTGRES_PORT}
    environment:
      POSTGRES_HOST: /var/run/postgresql
      PGPORT: ${POSTGRES_PORT}
      POSTGRES_PORT: ${POSTGRES_PORT}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
      POSTGRES_DB: ${POSTGRES_DB}
      JWT_SECRET: ${JWT_SECRET}
      JWT_EXP: ${JWT_EXPIRY}
    volumes:
      - ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
      # Must be superuser to create event trigger
      - ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
      # Must be superuser to alter reserved role
      - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
      # Initialize the database settings with JWT_SECRET and JWT_EXP
      - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:Z
      # PGDATA directory is persisted between restarts
      - ./volumes/db/data:/var/lib/postgresql/data:Z
      # Changes required for Analytics support
      - ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
      # Use named volume to persist pgsodium decryption key between restarts
      - db-config:/etc/postgresql-custom

  vector:
    container_name: supabase-vector
    image: timberio/vector:0.28.1-alpine
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://vector:9001/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    volumes:
      - ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
      - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
    environment:
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
    command: [ "--config", "etc/vector/vector.yml" ]

volumes:
  db-config:

and the script we are using is this:

# build the studio image locally for linux/amd64
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build . -f apps/studio/Dockerfile --target production -t studio:latest --platform linux/amd64
cd docker
# pull every image except the locally built studio (docker-compose2.yml has studio commented out)
docker compose -f docker-compose2.yml pull
# start the full stack from docker-compose.yml, which references studio:latest
docker compose up -d
cd ..

and after all that we are facing this issue:

[screenshot attached]

@encima
Member

encima commented Aug 6, 2024

  1. Building your own studio image is not supported, so I cannot really help with that (though the issue does not seem related). Please pull the studio image directly for the purposes of debugging.
  2. Please post the output of docker ps so we can see/confirm image versions
  3. Please post the output of docker logs for the supabase-analytics container (example commands are sketched below)
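
For example (the container name supabase-analytics comes from the compose file above):

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
docker logs --tail 200 supabase-analytics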

@encima encima added self-hosted Issues related to self hosting needs-analysis Issue status is unknown and/or not possible to triage with the current info awaiting-details For issues needing detail from the opener. and removed to-triage labels Aug 6, 2024
@raman04-byte
Author

The issue is not with the studio image (which we are not pulling); it is with the supabase-analytics container, which is not able to run properly.

Here are the docker ps output and the docker logs of the supabase-analytics container.

docker ps :

CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS                            PORTS                    NAMES
c2f66bb19fbe   supabase/logflare:1.4.0         "sh run.sh"              39 seconds ago   Up 6 seconds (health: starting)   0.0.0.0:4000->4000/tcp   supabase-analytics
a038b3ec7785   supabase/postgres:15.1.1.78     "docker-entrypoint.s…"   39 seconds ago   Up 32 seconds (healthy)           0.0.0.0:5432->5432/tcp   supabase-db
b428a36e59cd   timberio/vector:0.28.1-alpine   "/usr/local/bin/vect…"   39 seconds ago   Up 38 seconds (healthy)                                    supabase-vector
b93616bcc257   darthsim/imgproxy:v3.8.0        "imgproxy"               39 seconds ago   Up 38 seconds (healthy)           8080/tcp                 supabase-imgproxy

docker logs of supabase-analytics :

LOGFLARE_NODE_HOST is: 127.0.0.1

10:29:17.754 [info] Starting migration
** (ArgumentError) could not call Module.put_attribute/3 because the module Logflare.Repo.Migrations.AddUsersTable is already compiled
    (elixir 1.14.4) lib/module.ex:2504: Module.assert_not_readonly!/2
    (elixir 1.14.4) lib/module.ex:2201: Module.__put_attribute__/5
    /opt/app/rel/logflare/lib/logflare-1.4.0/priv/repo/migrations/20181212220417_create_sources.exs:2: (module)
    nofile:1: (file)

10:29:20.859 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}

10:29:20.949 [info] Elixir.Logflare.SigtermHandler is being initialized...

10:29:21.106 [info] Table counters started!

10:29:21.106 [info] Rate counter table started!

10:29:21.138 [info] Running LogflareWeb.Endpoint with cowboy 2.10.0 at 0.0.0.0:4000 (http)

10:29:21.169 [info] Access LogflareWeb.Endpoint at http://localhost:4000

10:29:21.172 [info] Running LogflareGrpc.Endpoint with Cowboy using http://0.0.0.0:50051

10:29:21.176 [info] Going Down - {%Postgrex.Error{message: nil, postgres: %{code: :undefined_table, file: "parse_relation.c", line: "1392", message: "relation \"sources\" does not exist", pg_code: "42P01", position: "480", routine: "parserOpenTable", severity: "ERROR", unknown: "ERROR"}, connection_id: 57, query: "SELECT s0.\"id\", s0.\"name\", s0.\"token\", s0.\"public_token\", s0.\"favorite\", s0.\"bigquery_table_ttl\", s0.\"api_quota\", s0.\"webhook_notification_url\", s0.\"slack_hook_url\", s0.\"bq_table_partition_type\", s0.\"custom_event_message_keys\", s0.\"log_events_updated_at\", s0.\"notifications_every\", s0.\"lock_schema\", s0.\"validate_schema\", s0.\"drop_lql_filters\", s0.\"drop_lql_string\", s0.\"v2_pipeline\", s0.\"suggested_keys\", s0.\"user_id\", s0.\"notifications\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"sources\" AS s0 WHERE (s0.\"log_events_updated_at\" > $1) ORDER BY s0.\"log_events_updated_at\""}, [{Ecto.Adapters.SQL, :raise_sql_call_error, 1, [file: 'lib/ecto/adapters/sql.ex', line: 913, error_info: %{module: Exception}]}, {Ecto.Adapters.SQL, :execute, 6, [file: 'lib/ecto/adapters/sql.ex', line: 828]}, {Ecto.Repo.Queryable, :execute, 4, [file: 'lib/ecto/repo/queryable.ex', line: 229]}, {Ecto.Repo.Queryable, :all, 3, [file: 'lib/ecto/repo/queryable.ex', line: 19]}, {Logflare.Source.Supervisor, :handle_continue, 2, [file: 'lib/logflare/source/supervisor.ex', line: 52]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 1123]}, {:gen_server, :loop, 7, [file: 'gen_server.erl', line: 865]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 240]}]} - Elixir.Logflare.Source.Supervisor

10:29:21.177 [error] GenServer Logflare.Source.Supervisor terminating
** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "sources" does not exist

    query: SELECT s0."id", s0."name", s0."token", s0."public_token", s0."favorite", s0."bigquery_table_ttl", s0."api_quota", s0."webhook_notification_url", s0."slack_hook_url", s0."bq_table_partition_type", s0."custom_event_message_keys", s0."log_events_updated_at", s0."notifications_every", s0."lock_schema", s0."validate_schema", s0."drop_lql_filters", s0."drop_lql_string", s0."v2_pipeline", s0."suggested_keys", s0."user_id", s0."notifications", s0."inserted_at", s0."updated_at" FROM "sources" AS s0 WHERE (s0."log_events_updated_at" > $1) ORDER BY s0."log_events_updated_at"
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
    (logflare 1.4.0) lib/logflare/source/supervisor.ex:52: Logflare.Source.Supervisor.handle_continue/2
    (stdlib 4.3.1) gen_server.erl:1123: :gen_server.try_dispatch/4
    (stdlib 4.3.1) gen_server.erl:865: :gen_server.loop/7
    (stdlib 4.3.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: {:continue, :boot}

10:29:21.197 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist

    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6

10:29:21.252 [notice]     :alarm_handler: {:clear, :system_memory_high_watermark}
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 58,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 58,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Qu

Crash dump is being written to: erl_crash.dump...done
LOGFLARE_NODE_HOST is: 127.0.0.1

10:29:23.942 [info] Starting migration
** (ArgumentError) could not call Module.put_attribute/3 because the module Logflare.Repo.Migrations.AddUsersTable is already compiled
    (elixir 1.14.4) lib/module.ex:2504: Module.assert_not_readonly!/2
    (elixir 1.14.4) lib/module.ex:2201: Module.__put_attribute__/5
    /opt/app/rel/logflare/lib/logflare-1.4.0/priv/repo/migrations/20181212220417_create_sources.exs:2: (module)
    nofile:1: (file)

10:29:27.310 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}

10:29:27.386 [info] Elixir.Logflare.SigtermHandler is being initialized...

10:29:27.509 [info] Table counters started!

10:29:27.509 [info] Rate counter table started!

10:29:27.530 [info] Running LogflareWeb.Endpoint with cowboy 2.10.0 at 0.0.0.0:4000 (http)

10:29:27.533 [info] Access LogflareWeb.Endpoint at http://localhost:4000

10:29:27.535 [info] Running LogflareGrpc.Endpoint with Cowboy using http://0.0.0.0:50051

10:29:27.579 [info] Going Down - {%Postgrex.Error{message: nil, postgres: %{code: :undefined_table, file: "parse_relation.c", line: "1392", message: "relation \"sources\" does not exist", pg_code: "42P01", position: "480", routine: "parserOpenTable", severity: "ERROR", unknown: "ERROR"}, connection_id: 84, query: "SELECT s0.\"id\", s0.\"name\", s0.\"token\", s0.\"public_token\", s0.\"favorite\", s0.\"bigquery_table_ttl\", s0.\"api_quota\", s0.\"webhook_notification_url\", s0.\"slack_hook_url\", s0.\"bq_table_partition_type\", s0.\"custom_event_message_keys\", s0.\"log_events_updated_at\", s0.\"notifications_every\", s0.\"lock_schema\", s0.\"validate_schema\", s0.\"drop_lql_filters\", s0.\"drop_lql_string\", s0.\"v2_pipeline\", s0.\"suggested_keys\", s0.\"user_id\", s0.\"notifications\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"sources\" AS s0 WHERE (s0.\"log_events_updated_at\" > $1) ORDER BY s0.\"log_events_updated_at\""}, [{Ecto.Adapters.SQL, :raise_sql_call_error, 1, [file: 'lib/ecto/adapters/sql.ex', line: 913, error_info: %{module: Exception}]}, {Ecto.Adapters.SQL, :execute, 6, [file: 'lib/ecto/adapters/sql.ex', line: 828]}, {Ecto.Repo.Queryable, :execute, 4, [file: 'lib/ecto/repo/queryable.ex', line: 229]}, {Ecto.Repo.Queryable, :all, 3, [file: 'lib/ecto/repo/queryable.ex', line: 19]}, {Logflare.Source.Supervisor, :handle_continue, 2, [file: 'lib/logflare/source/supervisor.ex', line: 52]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 1123]}, {:gen_server, :loop, 7, [file: 'gen_server.erl', line: 865]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 240]}]} - Elixir.Logflare.Source.Supervisor

10:29:27.580 [error] GenServer Logflare.Source.Supervisor terminating
** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "sources" does not exist

    query: SELECT s0."id", s0."name", s0."token", s0."public_token", s0."favorite", s0."bigquery_table_ttl", s0."api_quota", s0."webhook_notification_url", s0."slack_hook_url", s0."bq_table_partition_type", s0."custom_event_message_keys", s0."log_events_updated_at", s0."notifications_every", s0."lock_schema", s0."validate_schema", s0."drop_lql_filters", s0."drop_lql_string", s0."v2_pipeline", s0."suggested_keys", s0."user_id", s0."notifications", s0."inserted_at", s0."updated_at" FROM "sources" AS s0 WHERE (s0."log_events_updated_at" > $1) ORDER BY s0."log_events_updated_at"
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
    (logflare 1.4.0) lib/logflare/source/supervisor.ex:52: Logflare.Source.Supervisor.handle_continue/2
    (stdlib 4.3.1) gen_server.erl:1123: :gen_server.try_dispatch/4
    (stdlib 4.3.1) gen_server.erl:865: :gen_server.loop/7
    (stdlib 4.3.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: {:continue, :boot}

10:29:27.590 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist

    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6

10:29:27.625 [notice]     :alarm_handler: {:clear, :system_memory_high_watermark}
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 90,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 90,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Qu

Crash dump is being written to: erl_crash.dump...done
LOGFLARE_NODE_HOST is: 127.0.0.1

10:29:30.568 [info] Starting migration
** (ArgumentError) could not call Module.put_attribute/3 because the module Logflare.Repo.Migrations.AddUsersTable is already compiled
    (elixir 1.14.4) lib/module.ex:2504: Module.assert_not_readonly!/2
    (elixir 1.14.4) lib/module.ex:2201: Module.__put_attribute__/5
    /opt/app/rel/logflare/lib/logflare-1.4.0/priv/repo/migrations/20181212220417_create_sources.exs:2: (module)
    nofile:1: (file)

10:29:34.734 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}

10:29:34.816 [info] Elixir.Logflare.SigtermHandler is being initialized...

10:29:34.957 [info] Table counters started!

10:29:34.957 [info] Rate counter table started!

10:29:34.993 [info] Running LogflareWeb.Endpoint with cowboy 2.10.0 at 0.0.0.0:4000 (http)

10:29:35.047 [info] Access LogflareWeb.Endpoint at http://localhost:4000

10:29:35.051 [info] Going Down - {%Postgrex.Error{message: nil, postgres: %{code: :undefined_table, file: "parse_relation.c", line: "1392", message: "relation \"sources\" does not exist", pg_code: "42P01", position: "480", routine: "parserOpenTable", severity: "ERROR", unknown: "ERROR"}, connection_id: 110, query: "SELECT s0.\"id\", s0.\"name\", s0.\"token\", s0.\"public_token\", s0.\"favorite\", s0.\"bigquery_table_ttl\", s0.\"api_quota\", s0.\"webhook_notification_url\", s0.\"slack_hook_url\", s0.\"bq_table_partition_type\", s0.\"custom_event_message_keys\", s0.\"log_events_updated_at\", s0.\"notifications_every\", s0.\"lock_schema\", s0.\"validate_schema\", s0.\"drop_lql_filters\", s0.\"drop_lql_string\", s0.\"v2_pipeline\", s0.\"suggested_keys\", s0.\"user_id\", s0.\"notifications\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"sources\" AS s0 WHERE (s0.\"log_events_updated_at\" > $1) ORDER BY s0.\"log_events_updated_at\""}, [{Ecto.Adapters.SQL, :raise_sql_call_error, 1, [file: 'lib/ecto/adapters/sql.ex', line: 913, error_info: %{module: Exception}]}, {Ecto.Adapters.SQL, :execute, 6, [file: 'lib/ecto/adapters/sql.ex', line: 828]}, {Ecto.Repo.Queryable, :execute, 4, [file: 'lib/ecto/repo/queryable.ex', line: 229]}, {Ecto.Repo.Queryable, :all, 3, [file: 'lib/ecto/repo/queryable.ex', line: 19]}, {Logflare.Source.Supervisor, :handle_continue, 2, [file: 'lib/logflare/source/supervisor.ex', line: 52]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 1123]}, {:gen_server, :loop, 7, [file: 'gen_server.erl', line: 865]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 240]}]} - Elixir.Logflare.Source.Supervisor

10:29:35.051 [info] Running LogflareGrpc.Endpoint with Cowboy using http://0.0.0.0:50051

10:29:35.052 [error] GenServer Logflare.Source.Supervisor terminating
** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "sources" does not exist

    query: SELECT s0."id", s0."name", s0."token", s0."public_token", s0."favorite", s0."bigquery_table_ttl", s0."api_quota", s0."webhook_notification_url", s0."slack_hook_url", s0."bq_table_partition_type", s0."custom_event_message_keys", s0."log_events_updated_at", s0."notifications_every", s0."lock_schema", s0."validate_schema", s0."drop_lql_filters", s0."drop_lql_string", s0."v2_pipeline", s0."suggested_keys", s0."user_id", s0."notifications", s0."inserted_at", s0."updated_at" FROM "sources" AS s0 WHERE (s0."log_events_updated_at" > $1) ORDER BY s0."log_events_updated_at"
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
    (logflare 1.4.0) lib/logflare/source/supervisor.ex:52: Logflare.Source.Supervisor.handle_continue/2
    (stdlib 4.3.1) gen_server.erl:1123: :gen_server.try_dispatch/4
    (stdlib 4.3.1) gen_server.erl:865: :gen_server.loop/7
    (stdlib 4.3.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: {:continue, :boot}

10:29:35.076 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist

    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6

10:29:35.095 [notice]     :alarm_handler: {:clear, :system_memory_high_watermark}
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 106,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 106,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Q

Crash dump is being written to: erl_crash.dump...done
LOGFLARE_NODE_HOST is: 127.0.0.1

10:29:38.423 [info] Starting migration
** (ArgumentError) could not call Module.put_attribute/3 because the module Logflare.Repo.Migrations.AddUsersTable is already compiled
    (elixir 1.14.4) lib/module.ex:2504: Module.assert_not_readonly!/2
    (elixir 1.14.4) lib/module.ex:2201: Module.__put_attribute__/5
    /opt/app/rel/logflare/lib/logflare-1.4.0/priv/repo/migrations/20181212220417_create_sources.exs:2: (module)
    nofile:1: (file)

10:29:41.810 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}

10:29:41.901 [info] Elixir.Logflare.SigtermHandler is being initialized...

10:29:42.051 [info] Table counters started!

10:29:42.051 [info] Rate counter table started!

10:29:42.070 [info] Running LogflareWeb.Endpoint with cowboy 2.10.0 at 0.0.0.0:4000 (http)

10:29:42.074 [info] Access LogflareWeb.Endpoint at http://localhost:4000

10:29:42.087 [info] Running LogflareGrpc.Endpoint with Cowboy using http://0.0.0.0:50051

10:29:42.114 [info] Going Down - {%Postgrex.Error{message: nil, postgres: %{code: :undefined_table, file: "parse_relation.c", line: "1392", message: "relation \"sources\" does not exist", pg_code: "42P01", position: "480", routine: "parserOpenTable", severity: "ERROR", unknown: "ERROR"}, connection_id: 132, query: "SELECT s0.\"id\", s0.\"name\", s0.\"token\", s0.\"public_token\", s0.\"favorite\", s0.\"bigquery_table_ttl\", s0.\"api_quota\", s0.\"webhook_notification_url\", s0.\"slack_hook_url\", s0.\"bq_table_partition_type\", s0.\"custom_event_message_keys\", s0.\"log_events_updated_at\", s0.\"notifications_every\", s0.\"lock_schema\", s0.\"validate_schema\", s0.\"drop_lql_filters\", s0.\"drop_lql_string\", s0.\"v2_pipeline\", s0.\"suggested_keys\", s0.\"user_id\", s0.\"notifications\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"sources\" AS s0 WHERE (s0.\"log_events_updated_at\" > $1) ORDER BY s0.\"log_events_updated_at\""}, [{Ecto.Adapters.SQL, :raise_sql_call_error, 1, [file: 'lib/ecto/adapters/sql.ex', line: 913, error_info: %{module: Exception}]}, {Ecto.Adapters.SQL, :execute, 6, [file: 'lib/ecto/adapters/sql.ex', line: 828]}, {Ecto.Repo.Queryable, :execute, 4, [file: 'lib/ecto/repo/queryable.ex', line: 229]}, {Ecto.Repo.Queryable, :all, 3, [file: 'lib/ecto/repo/queryable.ex', line: 19]}, {Logflare.Source.Supervisor, :handle_continue, 2, [file: 'lib/logflare/source/supervisor.ex', line: 52]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 1123]}, {:gen_server, :loop, 7, [file: 'gen_server.erl', line: 865]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 240]}]} - Elixir.Logflare.Source.Supervisor

10:29:42.115 [error] GenServer Logflare.Source.Supervisor terminating
** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "sources" does not exist

    query: SELECT s0."id", s0."name", s0."token", s0."public_token", s0."favorite", s0."bigquery_table_ttl", s0."api_quota", s0."webhook_notification_url", s0."slack_hook_url", s0."bq_table_partition_type", s0."custom_event_message_keys", s0."log_events_updated_at", s0."notifications_every", s0."lock_schema", s0."validate_schema", s0."drop_lql_filters", s0."drop_lql_string", s0."v2_pipeline", s0."suggested_keys", s0."user_id", s0."notifications", s0."inserted_at", s0."updated_at" FROM "sources" AS s0 WHERE (s0."log_events_updated_at" > $1) ORDER BY s0."log_events_updated_at"
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
    (logflare 1.4.0) lib/logflare/source/supervisor.ex:52: Logflare.Source.Supervisor.handle_continue/2
    (stdlib 4.3.1) gen_server.erl:1123: :gen_server.try_dispatch/4
    (stdlib 4.3.1) gen_server.erl:865: :gen_server.loop/7
    (stdlib 4.3.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: {:continue, :boot}

10:29:42.123 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist

    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6

10:29:42.148 [notice]     :alarm_handler: {:clear, :system_memory_high_watermark}
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 129,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 129,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Q

Crash dump is being written to: erl_crash.dump...done
[... the same cycle — restart, "Starting migration", the Module.put_attribute ArgumentError in 20181212220417_create_sources.exs, then the crash on the missing "sources" and "system_metrics" tables — repeats on every subsequent restart of the analytics container ...]

@encima changed the title from "The supabase localhost is not working in Red Hat Linux Server" to "Analytics docker image unhealthy when self-hosting due to migration issue (sources not found)" on Aug 6, 2024
@encima (Member) commented Aug 6, 2024

Thanks for this; it looks like an issue with migrating the analytics schema for Logflare.

To check:

  1. Are the credentials for the Postgres database correct? Check that you can connect with psql as the supabase_admin user (a sketch of the commands follows this comment).
  2. Does the password contain special characters? Has the password been reset or changed?

Also, it looks like you are missing some containers (pg-meta, kong, gotrue, etc.), but they may just not be shown in your docker ps output.
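
For reference, a minimal sketch of that first check, assuming the repository was cloned as supabase/, the database container is named supabase-db, and the credentials live in the .env file next to docker-compose.yml (adjust names and paths to your deployment):

    # Load the Postgres credentials from the self-hosted .env file
    cd supabase/docker
    source .env

    # Connect inside the database container as supabase_admin
    docker exec -it -e PGPASSWORD="$POSTGRES_PASSWORD" supabase-db \
      psql -U supabase_admin -d "$POSTGRES_DB"

    # Or connect from the host with the same credentials
    psql "postgresql://supabase_admin:${POSTGRES_PASSWORD}@localhost:${POSTGRES_PORT}/${POSTGRES_DB}"

    # Once connected, \dn lists schemas; the analytics migrations are expected to
    # create an _analytics schema, so its absence matches the failed migration above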

@raman04-byte (Author) commented Aug 6, 2024

How do I check the credentials for the Postgres db, and how do I check the psql connection as well?

Are there any docs so that I can get those credentials and share them with you?

@encima (Member) commented Aug 6, 2024

The Postgres credentials are in your .env file.
The psql documentation and installation instructions are available in the PostgreSQL docs.

Please do not share those credentials here (or anywhere else). This is only to confirm that the connection information is correct.

@raman04-byte (Author) commented Aug 6, 2024

I can share the .env file because I just cloned the repo and started pulling the images, so there is no sensitive information in it.

@encima (Member) commented Aug 6, 2024

Use the credentials in that file to confirm that you can connect to your Postgres container.

If you cannot, delete the Postgres container and its volumes and run it again (note: you will lose data); the reset commands are sketched below.
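
For completeness, a sketch of that reset, assuming the default supabase/docker layout where Postgres data is bind-mounted under ./volumes/db/data (check your docker-compose.yml before deleting anything):

    cd supabase/docker
    # Stop the stack and remove named volumes (destroys data)
    docker compose down -v
    # The database also persists data in a bind mount; remove it as well (destroys data)
    rm -rf ./volumes/db/data
    # Pull fresh images and bring the stack back up
    docker compose pull
    docker compose up -d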

@raman04-byte (Author) commented Aug 6, 2024

I think it's running on port 5432.

How do I use the credentials to confirm whether it's running or not?

image

@encima (Member) commented Aug 6, 2024

Postgres does not speak HTTP, so you cannot use a browser to connect. Try a SQL GUI (e.g. pgAdmin) or psql (see the documentation mentioned in the comment above).

If you still have issues, you might be better off asking in the Discord channel.

@raman04-byte (Author)

Let me test with pgAdmin and I will get back to you.

@raman04-byte (Author) commented Aug 6, 2024

This is what I am getting after configuring it:
image

image

@encima (Member) commented Aug 6, 2024

OK, so you have connected; that is good. Are you connected as the user supabase_admin or as postgres?

@raman04-byte (Author) commented Aug 6, 2024

postgres

You can see it in the Properties panel:

image

@encima (Member) commented Aug 6, 2024

OK, Logflare is using the supabase_admin user (per your Docker configuration), so you will need to try connecting with that user.

@raman04-byte (Author)

The logflare container is unstable: it is mostly in a restarting state, and when it does start it immediately stops and restarts again.

image

@encima (Member) commented Aug 6, 2024

Correct. It is doing that because it cannot connect to the database (see the logs you posted).

  1. Try connecting to the database as supabase_admin and confirm it works.
  2. Confirm your password has no special characters.
  3. Remove all containers and volumes and start again (you will lose data).

@raman04-byte (Author) commented Aug 6, 2024

None of my passwords have special characters.

And how do I connect to the database as supabase_admin?

I do not have to worry about the data (it's empty); it's a fresh new container, and since the container is not able to spin up I can't even access port 8000.

Are there any docs available for doing so?

image

@encima (Member) commented Aug 6, 2024

Use the credentials from your .env file. Check the docs at https://supabase.com/docs/guides/self-hosting for more info.
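
One way to confirm the supabase_admin role actually exists before trying to log in with it, using the postgres connection that already works in pgAdmin (container name and .env variables assumed from the default compose setup):

    # List the role; an empty result means supabase_admin was never created
    docker exec -it -e PGPASSWORD="$POSTGRES_PASSWORD" supabase-db \
      psql -U postgres -d postgres \
      -c "SELECT rolname, rolsuper FROM pg_roles WHERE rolname = 'supabase_admin';"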

@raman04-byte (Author)

I am only using the credentials from the .env file.

@raman04-byte (Author)

Also, if I do not use linux/amd64 to build it then it works fine, and there is no issue with any of the images so far.

So there seems to be some issue with the analytics linux/amd64 image.

@encima (Member) commented Aug 6, 2024

You will need the credentials of the supabase_admin user (or switch Analytics to connect as postgres in your config, or create the supabase_admin user).
If you do not have an issue with a different architecture then, for now, I would certainly recommend building for the one that works (I assume that is linux/aarch64).
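
If the role really is missing, a minimal stop-gap sketch is to recreate it by hand. This is an assumption-laden workaround: the official supabase/postgres image normally creates supabase_admin itself with additional grants, and the cleaner alternative is the one mentioned above (pointing Analytics at the postgres user in your compose config).

    # Stop-gap only: create a superuser role named supabase_admin, reusing the
    # stack password (assumes the password contains no single quotes)
    docker exec -it -e PGPASSWORD="$POSTGRES_PASSWORD" supabase-db \
      psql -U postgres -d postgres \
      -c "CREATE ROLE supabase_admin LOGIN SUPERUSER PASSWORD '${POSTGRES_PASSWORD}';"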

@raman04-byte (Author)

Will linux/aarch64 be supported on the Red Hat Linux server?

@encima (Member) commented Aug 6, 2024

Docker (and other runtimes) supports emulation, yes, using the --platform flag.
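
A short sketch of what that looks like; the alpine image here is just a stand-in to prove emulation works, and the QEMU/binfmt handlers must be installed on the host first:

    # Install QEMU binfmt handlers so the host can emulate other architectures
    docker run --privileged --rm tonistiigi/binfmt --install all

    # Force a platform for a single container; under emulation this prints aarch64
    docker run --rm --platform linux/arm64 alpine uname -m

    # In docker-compose.yml the same thing can be pinned per service:
    #   analytics:
    #     platform: linux/arm64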

@raman04-byte (Author)

Let me give it a try, then I will get back to you.

@raman04-byte (Author)

I cannot use anything other than linux/amd64, as the server architecture is x86_64.

@raman04-byte (Author)

@encima any other solution?

@encima (Member) commented Aug 7, 2024

Afraid not. We can test this with a Linux build, but the current workaround would be to run it under a different architecture.
