Chapter 6. Configuring action log storage for Elasticsearch and Splunk


By default, usage logs are stored in the Red Hat Quay database and exposed through the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can store the usage logs in Elasticsearch and Splunk instead of the Red Hat Quay database backend.

6.1. Configuring action log storage for Elasticsearch

Note

To configure action log storage for Elasticsearch, you must provide your own Elasticsearch stack, as it is not included with Red Hat Quay as a customizable component.

Enabling Elasticsearch logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. The resulting configuration is stored in the config.yaml file. When configured, usage log access continues to be provided through the web UI for repositories and organizations.

Use the following procedure to configure action log storage for Elasticsearch:

Procedure

  1. Obtain an Elasticsearch account.
  2. Open the Red Hat Quay Config Tool (either during or after Red Hat Quay deployment).
  3. Scroll to the Action Log Storage Configuration setting and select Elasticsearch. The following figure shows the Elasticsearch settings that appear:

    (Figure: Elasticsearch settings in the Config Tool)

  4. Fill in the following information for your Elasticsearch instance:

    • Elasticsearch hostname: The hostname or IP address of the system providing the Elasticsearch service.
    • Elasticsearch port: The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Red Hat Quay registry. The default is TCP port 9200.
    • Elasticsearch access key: The access key needed to gain access to the Elasticsearch service, if required.
    • Elasticsearch secret key: The secret key needed to gain access to the Elasticsearch service, if required.
    • AWS region: If you are running on AWS, set the AWS region (otherwise, leave it blank).
    • Index prefix: Choose a prefix to attach to log entries.
    • Logs Producer: Choose either Elasticsearch (default) or Kinesis to direct logs to an intermediate Kinesis stream on AWS. You need to set up your own pipeline to send logs from Kinesis to Elasticsearch (for example, Logstash). The following figure shows additional fields you would need to fill in for Kinesis:

      (Figure: Additional Kinesis fields in the Config Tool)

  5. If you chose Elasticsearch as the Logs Producer, no further configuration is needed. If you chose Kinesis, fill in the following:

    • Stream name: The name of the Kinesis stream.
    • AWS access key: The AWS access key needed to gain access to the Kinesis stream, if required.
    • AWS secret key: The AWS secret key needed to gain access to the Kinesis stream, if required.
    • AWS region: The AWS region.
  6. When you are done, save the configuration. The configuration tool checks your settings. If there is a problem connecting to the Elasticsearch or Kinesis services, you will see an error and have the opportunity to continue editing. Otherwise, logging is directed to your Elasticsearch instance after the cluster restarts with the new configuration.
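For reference, the Config Tool writes these settings to the config.yaml file. The following is a minimal sketch of what the resulting Elasticsearch entries might look like; the key names (for example, elasticsearch_config, access_key, index_prefix) and placeholder values are assumptions drawn from the field labels above, so verify them against the configuration reference for your version of Red Hat Quay:

    # Sketch only: key names below are assumed from the Config Tool labels and
    # should be checked against the Red Hat Quay configuration reference.
    LOGS_MODEL: elasticsearch
    LOGS_MODEL_CONFIG:
        producer: elasticsearch
        elasticsearch_config:
            host: <elasticsearch_hostname>
            port: 9200
            access_key: <access_key>
            secret_key: <secret_key>
            index_prefix: <index_prefix>
            aws_region: <aws_region>

If you select Kinesis as the Logs Producer, the producer value changes and a section describing the Kinesis stream (stream name, AWS access key, AWS secret key, and AWS region) is written instead; the exact field names for that section are also documented in the configuration reference.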

6.2. Configuring action log storage for Splunk

Splunk is an alternative to Elasticsearch that can provide log analysis for your Red Hat Quay data.

Enabling Splunk logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. The resulting configuration is stored in the config.yaml file. When configured, usage log access continues to be provided through the Splunk web UI for repositories and organizations.

Use the following procedures to enable Splunk for your Red Hat Quay deployment.

6.2.1. Installing and creating a username for Splunk

Use the following procedure to install and create Splunk credentials.

Procedure

  1. Create a Splunk account by navigating to the Splunk website and entering the required credentials.
  2. Navigate to the Splunk Enterprise Free Trial page, select your platform and installation package, and then click Download Now.
  3. Install the Splunk software on your machine. When prompted, create a username (for example, splunk_admin) and a password.
  4. After you create a username and password, a localhost URL is provided for your Splunk deployment, for example, http://<sample_url>.remote.csb:8000/. Open the URL in your preferred browser.
  5. Log in with the username and password you created during installation. You are directed to the Splunk UI.

6.2.2. Generating a Splunk token

Use one of the following procedures to create a bearer token for Splunk.

6.2.2.1. Generating a Splunk token using the Splunk UI

Use the following procedure to create a bearer token for Splunk using the Splunk UI.

Prerequisites

  • You have installed Splunk and created a username.

Procedure

  1. On the Splunk UI, navigate to Settings → Tokens.
  2. Click Enable Token Authentication.
  3. Ensure that Token Authentication is enabled by clicking Token Settings and selecting Token Authentication if necessary.
  4. Optional: Set the expiration time for your token. The default is 30 days.
  5. Click Save.
  6. Click New Token.
  7. Enter information for User and Audience.
  8. Optional: Set the Expiration and Not Before information.
  9. Click Create. Your token appears in the Token box. Copy the token immediately.

    Important

    If you close out of the box before copying the token, you must create a new token. The token in its entirety is not available after closing the New Token window.

6.2.2.2. Generating a Splunk token using the CLI

Use the following procedure to create a bearer token for Splunk using the CLI.

Prerequisites

  • You have installed Splunk and created a username.

Procedure

  1. In your CLI, enter the following curl command to enable token authentication, passing in your Splunk username and password:

    $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false
  2. Create a token by entering the following curl command, passing in your Splunk username and password:

    $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d
  3. Save the generated bearer token.
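
Whichever method you used to generate the token, you can optionally confirm that Splunk accepts it by calling an authenticated endpoint of the Splunk REST API, for example the current-context endpoint, as in the following sketch:

    $ curl -k -H "Authorization: Bearer <bearer_token>" \
        "<scheme>://<host>:<port>/services/authentication/current-context?output_mode=json"

A successful response includes the user associated with the token; an authentication error usually means that token authentication is not enabled or that the token was not copied in full.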

6.2.3. Configuring Red Hat Quay to use Splunk

Use the following procedure to configure Red Hat Quay to use Splunk.

Prerequisites

  • You have installed Splunk and created a username.
  • You have generated a Splunk bearer token.

Procedure

  1. Open your Red Hat Quay config.yaml file and add the following configuration fields:

    ---
    LOGS_MODEL: splunk
    LOGS_MODEL_CONFIG:
        producer: splunk
        splunk_config:
            host: http://<user_name>.remote.csb 1
            port: 8089 2
            bearer_token: <bearer_token> 3
            url_scheme: <http/https> 4
            verify_ssl: False 5
            index_prefix: <splunk_log_index_name> 6
            ssl_ca_path: <location_to_ssl-ca-cert.pem> 7
    ---
    1. String. The Splunk cluster endpoint.
    2. Integer. The Splunk management cluster endpoint port. This differs from the port that hosts the Splunk GUI and can be found in the Splunk UI under Settings → Server Settings → General Settings.
    3. String. The generated bearer token for Splunk.
    4. String. The URL scheme for accessing the Splunk service. If Splunk is configured to use TLS/SSL, this must be https.
    5. Boolean. Whether to enable TLS/SSL. Defaults to true.
    6. String. The Splunk index prefix. Can be a new or existing index. Can be created from the Splunk UI.
    7. String. The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL validation.

  2. If you are configuring ssl_ca_path, you must configure the SSL/TLS certificate so that Red Hat Quay will trust it.

    1. If you are using a standalone deployment of Red Hat Quay, provide the SSL/TLS certificate by placing the certificate file inside the extra_ca_certs directory, or inside the relative container path specified by ssl_ca_path.
    2. If you are using the Red Hat Quay Operator, create a config bundle secret, including the certificate authority (CA) of the Splunk server. For example:

      $ oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret

      Specify the conf/stack/extra_ca_certs/splunkserver.crt file in your config.yaml. For example:

      LOGS_MODEL: splunk
      LOGS_MODEL_CONFIG:
          producer: splunk
          splunk_config:
              host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com
              port: 8089
              bearer_token: eyJra
              url_scheme: https
              verify_ssl: true
              index_prefix: quay123456
              ssl_ca_path: conf/stack/splunkserver.crt
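
Optionally, before restarting Red Hat Quay with the new configuration, you can verify that the Splunk management endpoint is reachable with the values from your config.yaml. The following curl sketch uses the standard Splunk REST API server information endpoint; substitute the host, port, bearer token, and CA certificate path that you configured:

    $ curl --cacert <location_to_ssl-ca-cert.pem> \
        -H "Authorization: Bearer <bearer_token>" \
        "https://<splunk_host>:8089/services/server/info?output_mode=json"

A JSON response confirms that the management port, token, and certificate chain are all usable from that machine. If you set verify_ssl to False, you can replace the --cacert option with -k.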

6.2.4. Creating an action log

Use the following procedure to create a user account that can forward action logs to Splunk.

Important

You must use the Splunk UI to view Red Hat Quay action logs. At this time, viewing Splunk action logs on the Red Hat Quay Usage Logs page is unsupported, and returns the following message: Method not implemented. Splunk does not support log lookups.

Prerequisites

  • You have installed Splunk and created a username.
  • You have generated a Splunk bearer token.
  • You have configured your Red Hat Quay config.yaml file to enable Splunk.

Procedure

  1. Log in to your Red Hat Quay deployment.
  2. Click on the name of the organization that you will use to create an action log for Splunk.
  3. In the navigation pane, click Robot Accounts → Create Robot Account.
  4. When prompted, enter a name for the robot account, for example, splunkrobotaccount, then click Create robot account.
  5. On your browser, open the Splunk UI.
  6. Click Search and Reporting.
  7. In the search bar, enter the name of your index, for example, <splunk_log_index_name> and press Enter.

    The search results populate on the Splunk UI. Logs are forwarded in JSON format. A response might look similar to the following:

    { "log_data": { "kind": "authentication", "account": "quayuser123", "performer": "John Doe", "repository": "projectQuay", "ip": "192.168.1.100", "metadata_json": {...}, "datetime": "2024-02-06T12:30:45Z" }}
