Configuring Generic S3 Connectors

This connector allows Stellar Cyber to pull data sent to S3 and add the records to the data lake.

The Generic S3 connector uses Amazon Simple Queue Service (SQS) queues and S3 buckets to pull data. When a new object is uploaded to the bucket, a notification is sent to the SQS queue URL. Stellar Cyber reads from the queue to learn the S3 bucket and key for each newly added file, then retrieves that file.
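
The following Python (boto3) sketch is for illustration only; it is not Stellar Cyber's implementation. It shows the general flow the connector relies on: poll the SQS queue for S3 event notifications, read the bucket and key from each notification, and fetch the newly added object. The region, queue URL, and account ID are placeholder values.

import json

import boto3

REGION = "us-east-1"  # placeholder region
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

sqs = boto3.client("sqs", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# Poll the queue for notifications about newly created S3 objects.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    body = json.loads(message["Body"])
    for record in body.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Fetch the newly uploaded object and read its contents.
        obj = s3.get_object(Bucket=bucket, Key=key)
        data = obj["Body"].read()
        print(f"Pulled {len(data)} bytes from s3://{bucket}/{key}")
    # Remove the notification once the object has been processed.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])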

This is a generic S3 connector that can support a wide range of data sources. Sample sources are included with this connector, but new sources can also be added. Each new source from which you want to pull data requires some additional information to be configured and uploaded. Contact Stellar Cyber Customer Support for assistance with this.

Stellar Cyber connectors with the Collect function (collectors) may skip collecting some data when the ingestion volume is large, which can potentially lead to data loss. This happens when the processing capacity of the collector is exceeded.

Connector Overview: Generic S3

Capabilities

  • Collect: Yes

  • Respond: No

  • Native Alerts Mapped: No

  • Runs on: DP

  • Interval: 5 minutes when there is no data; when data is available, reads notifications from the SQS queue as they arrive

Collected Data

Content Type

  • N/A (collects all data in the provided SQS Queue URL)

Index

  • Syslog (for CrowdStrike (JSON), Generic (JSON), Proofpoint (JSON), and Teleport (JSON))

  • AWS Events (for AWS CloudTrail (JSON))

Locating Records

  • msg_class:

    • generic_s3_cloudtrail (for AWS CloudTrail (JSON))

    • generic_s3_crowdstrike_log (for CrowdStrike (JSON))

    • generic_s3_log (for Generic (JSON))

    • generic_s3_proofpoint_log (for Proofpoint (JSON))

    • generic_s3_teleport_log (for Teleport (JSON))

  • msg_origin.source:

    • s3 (for AWS CloudTrail (JSON), CrowdStrike (JSON), Generic (JSON), Proofpoint (JSON), and Teleport (JSON))

  • msg_origin.vendor:

    • aws (for AWS CloudTrail (JSON) and Generic (JSON))

    • crowdstrike (for CrowdStrike (JSON))

    • proofpoint (for Proofpoint (JSON))

    • teleport (for Teleport (JSON))

  • msg_origin.category:

    • paas (for AWS CloudTrail (JSON), CrowdStrike (JSON), Generic (JSON), Proofpoint (JSON), and Teleport (JSON))

Domain

  • <SQS Queue URL>, where <SQS Queue URL> is a variable from the configuration of this connector

Response Actions

N/A

Third Party Native Alert Integration Details

N/A

Required Credentials

  • Access Key ID, Secret Access Key, SQS Queue URL

Adding a Generic S3 Connector

To add a Generic S3 connector:

  1. Obtain credentials
  2. Add the connector in Stellar Cyber
  3. Test the connector
  4. Verify ingestion

Obtaining Credentials

To obtain the required credentials, refer to the following sections:

Permissions

You need the following permissions to create the SQS queue:

  • s3:GetBucketNotification

  • s3:PutBucketNotification

  • sqs:CreateQueue

  • sqs:DeleteQueue

  • sqs:GetQueueAttributes

  • sqs:GetQueueUrl

  • sqs:SetQueueAttributes

You also need the following permissions to perform actions:

  • s3:GetObject

  • s3:ListBucket

  • iam:CreateRole

  • iam:DeleteRolePolicy

  • iam:GetRole

  • iam:PutRolePolicy
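
For illustration only, the permissions listed above can be combined into a single IAM policy document for the administrator who performs the setup. The following Python (boto3) sketch assembles such a policy; the policy name stellar-generic-s3-setup is hypothetical, and the resource ARNs are placeholders that you must replace with your own bucket, queue, and account values.

import json

import boto3

# Setup permissions from the lists above, expressed as one policy document.
setup_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketNotification",
                "s3:PutBucketNotification",
                "s3:GetObject",
                "s3:ListBucket",
            ],
            "Resource": [
                "arn:aws:s3:::<your-S3-bucket-name>",
                "arn:aws:s3:::<your-S3-bucket-name>/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:CreateQueue",
                "sqs:DeleteQueue",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
                "sqs:SetQueueAttributes",
            ],
            "Resource": "arn:aws:sqs:<aws-region>:<account-id>:<queue-name>",
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:DeleteRolePolicy",
                "iam:GetRole",
                "iam:PutRolePolicy",
            ],
            "Resource": "arn:aws:iam::<account-id>:role/*",
        },
    ],
}

# Create the policy so it can be attached to the administrator performing the setup.
iam = boto3.client("iam")
iam.create_policy(
    PolicyName="stellar-generic-s3-setup",  # hypothetical name
    PolicyDocument=json.dumps(setup_policy),
)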

Prerequisites

This procedure assumes that you are already sending data to an S3 bucket.

The S3 bucket must contain data from only one log source.

Configuring the Bucket for Notifications (SQS Queue)

The procedures in this document are for an S3 bucket and SQS queue that are created in the same AWS account.

Have the following information handy:

  • AWS account ID

  • S3 bucket name

In this procedure you create an SQS queue, attach an access policy to the queue, and add a notification configuration to the bucket.

Configure only one SQS queue URL per S3 bucket.

  1. Log in to the Amazon SQS console at https://console.aws.amazon.com/sqs/.

  2. In the navigation pane, choose Queues.

  3. Click Create queue.

    1. For Type, the Standard queue type is set by default.

    2. Enter a Name for your queue.

    3. For Server-side encryption, Enabled is set by default.

    4. For Encryption key type, Amazon SQS key (SSE-SQS) is set by default.

    5. Click Create queue.

    6. Note the URL. This is the SQS Queue URL, which you will need when configuring the connector in Stellar Cyber.

    7. Note the ARN. This is the SQS-queue-ARN, which you will need when configuring the access policy in the following step.

  4. Replace the access policy attached to the queue:

    1. In the Amazon SQS console, in the Queues list, select the queue name.

    2. Click Edit.

    3. Replace the access policy attached to the queue with the following JSON, providing:

      • the SQS queue ARN you noted above for <aws-region>:<account-id>:<queue-name> in the Resource value

      • your AWS account ID for <bucket-owner-account-id>

      • your source S3 bucket name for <your-S3-bucket-name>

      {
        "Version": "2012-10-17",
        "Id": "example-ID",
        "Statement": [
          {
            "Sid": "example-statement-ID",
            "Effect": "Allow",
            "Principal": {
              "Service": "s3.amazonaws.com"
            },
            "Action": "SQS:SendMessage",
            "Resource": "arn:aws:sqs:<aws-region:account-id:queue-name>",
            "Condition": {
              "StringEquals": {
                "aws:SourceAccount": "<bucket-owner-account-id>"
              },
              "ArnLike": {
                "aws:SourceArn": "arn:aws:s3:::<your-S3-bucket-name>"
              }
            }
          }
        ]
      }
    4. Click Save.

  5. Add a notification configuration to your bucket.

    1. In the Amazon S3 Buckets console, locate your S3 bucket.

    2. Double-click your bucket name.

    3. Click the Properties tab.

    4. Scroll down to Event notifications and click Create event notification.

    5. Enter an Event name.

    6. For Event types, select All object create events.

    7. Scroll down to Destination:

      • For Destination, select SQS queue.

      • For Specify SQS queue, select Enter SQS queue ARN.

      • For SQS queue, enter the ARN.

    8. Click Save changes.
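
If you prefer to script this setup, the following Python (boto3) sketch performs the equivalent of steps 3 through 5 above for a bucket and queue in the same AWS account. It is a guideline only; the region, account ID, bucket name, and queue name (stellar-generic-s3-queue) are placeholders.

import json

import boto3

REGION = "us-east-1"                      # placeholder region
ACCOUNT_ID = "123456789012"               # placeholder AWS account ID
BUCKET_NAME = "<your-S3-bucket-name>"     # placeholder bucket name
QUEUE_NAME = "stellar-generic-s3-queue"   # hypothetical queue name

sqs = boto3.client("sqs", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# Step 3: create a Standard queue and note its URL and ARN.
queue_url = sqs.create_queue(QueueName=QUEUE_NAME)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Step 4: allow the S3 service to send messages to the queue for this bucket.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SQS:SendMessage",
            "Resource": queue_arn,
            "Condition": {
                "StringEquals": {"aws:SourceAccount": ACCOUNT_ID},
                "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{BUCKET_NAME}"},
            },
        }
    ],
}
sqs.set_queue_attributes(
    QueueUrl=queue_url, Attributes={"Policy": json.dumps(access_policy)}
)

# Step 5: notify the queue whenever a new object is created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET_NAME,
    NotificationConfiguration={
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

print("SQS Queue URL for the connector:", queue_url)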

Adding an AWS User for Stellar Cyber

Create an IAM user in the same AWS account as the SQS queue. You will need the SQS-queue-ARN you noted above for the policy.

Use our example as a guideline, as you might be using a different software version.

To add a user with the appropriate permissions:

  1. Log in to your AWS Management Console at https://aws.amazon.com/console. View the services in the Console Home or choose View all services.

  2. Choose IAM. The IAM Dashboard appears.

  3. Choose Policies and then choose Create Policy.

  4. In the Create policy pane, choose the JSON tab.

  5. Using the example below as a guide, edit the JSON policy document.

    The following is just an EXAMPLE. You must modify this JSON to match the resources in your own environment.

    Generic S3 policy
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "sqs:DeleteMessage",
                    "sqs:ReceiveMessage"
                ],
                "Resource": [
                    "arn:aws:sqs:<aws-region:account-id:queue-name>"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::<your-S3-bucket-name>",
                    "arn:aws:s3:::<your-S3-bucket-name>/*"
                ]
            }
        ]
    }
  6. Choose Next.

  7. Give your policy a name to associate it with Stellar Cyber, then choose Create policy.

    The policy can now be attached to a user.

  8. From the IAM navigation pane, choose Users and then choose Create user.

  9. On the Specify user details page for User details, enter a User name for the new user.

  10. (Optional) Choose Provide user access to the AWS Management Console to produce login credentials, such as a password, for the new user.

  11. Choose how you want to create the Console password and then choose Next.

  12. On the Set permissions page, choose Attach policies directly. Then search for the policy you created above.

  13. Select the checkbox to the left of the policy name, then choose Next.

  14. On the Review and create page, verify the information and then choose Create user.
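
As a scripted alternative to the console steps above, the following Python (boto3) sketch creates the policy, creates the user, and attaches the policy to the user. The policy and user names (stellar-generic-s3-connector and stellar-generic-s3) are hypothetical; replace the queue and bucket ARNs with your own values.

import json

import boto3

iam = boto3.client("iam")

# Same permissions as the example policy above; replace the placeholder ARNs.
connector_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:DeleteMessage", "sqs:ReceiveMessage"],
            "Resource": ["arn:aws:sqs:<aws-region>:<account-id>:<queue-name>"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::<your-S3-bucket-name>",
                "arn:aws:s3:::<your-S3-bucket-name>/*",
            ],
        },
    ],
}

# Create the policy, create the user, and attach the policy to the user.
policy_arn = iam.create_policy(
    PolicyName="stellar-generic-s3-connector",  # hypothetical name
    PolicyDocument=json.dumps(connector_policy),
)["Policy"]["Arn"]

iam.create_user(UserName="stellar-generic-s3")  # hypothetical name
iam.attach_user_policy(UserName="stellar-generic-s3", PolicyArn=policy_arn)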

Obtaining the Access Key ID and Secret Access Key

To obtain the access key ID and secret access key:

  1. Log in to the IAM Console with your AWS account ID, your IAM user name, and password. You must have the permissions required to create access keys for a user.

  2. Choose Users, then click the user name, and click Security credentials.

  3. In the Access keys section, choose Create access key. Access keys have two parts: an access key ID and a secret access key.

  4. On the Access key best practices & alternatives page, choose Other and then choose Next.

  5. (Optional) On the Set description tag page, enter a description.

  6. Click Create access key.

  7. On the Retrieve access keys page, choose Show to reveal the value of the user's secret access key. Save the access key ID and secret access key in a secure location. You will need them when configuring the connector in Stellar Cyber.

  8. Click Done.
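
If you prefer to generate the key pair programmatically, the following Python (boto3) sketch creates an access key for the IAM user; the user name stellar-generic-s3 is the hypothetical one used in the earlier sketch.

import boto3

iam = boto3.client("iam")

# Create the access key pair for the connector user.
key = iam.create_access_key(UserName="stellar-generic-s3")["AccessKey"]

# Save both values securely; the secret access key cannot be retrieved again later.
print("Access Key ID:", key["AccessKeyId"])
print("Secret Access Key:", key["SecretAccessKey"])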

Adding the Connector in Stellar Cyber

With the access information handy, you can add a Generic S3 connector in Stellar Cyber:

  1. Log in to Stellar Cyber.

  2. Click System | Connectors (under Integrations). The Connector Overview appears.

  3. Click Create. The General tab of the Add Connector screen appears. The information on this tab cannot be changed after you add the connector.

    The asterisk (*) indicates a required field.

  4. Choose PaaS from the Category drop-down.

  5. Choose Generic S3 from the Type drop-down.

  6. For this connector, the supported Function is Collect, which is enabled already.

  7. Enter a Name.

    Notes:
    • This field does not accept multibyte characters.
    • It is recommended that you follow a naming convention such as tenantname-connectortype.
  8. Choose a Tenant Name. This identifies which tenant is allowed to use the connector.

  9. Choose the device on which to run the connector.

    • Certain connectors can be run on either a Sensor or a Data Processor. The available devices are displayed in the Run On menu. If you want to associate your collector with a sensor, you must have configured that sensor prior to configuring the connector or you will not be able to select it during initial configuration. If you select Data Processor, you will need to associate the connector with a Data Analyzer profile as a separate step. That step is not required for a sensor, which is configured with only one possible profile.

    • If the device you're connecting to is on premises, we recommend you run on the local sensor. If you're connecting to a cloud service, we recommend you run on the DP.

  10. (Optional) When the Function is Collect, you can apply Log Filters. For information, see Managing Log Filters.

  11. Click Next. The Configuration tab appears.

    The asterisk (*) indicates a required field.

  12. Enter the Access Key ID you noted above in Obtaining the Access Key ID and Secret Access Key.

  13. Enter the Secret Access Key you noted above in Obtaining the Access Key ID and Secret Access Key.

  14. Choose the Region from the available AWS regions in the drop-down.

  15. Enter the SQS Queue URL you noted above. It has the following format: 

    https://sqs.<aws-region>.amazonaws.com/<account-id>/<queue-name>

  16. Choose the File Type. This refers to the specific data within the S3 object. The supported file types are gzip and text. The only supported format is JSON. For example, if the File Type is text, the file must contain a properly formatted JSON object.

  17. (Optional) If each file contains multiple JSON objects, select the Multiple Logs per File checkbox. Select this checkbox when the file holds one JSON object per line (newline-delimited JSON), for example {"key": "value1"} on the first line and {"key": "value2"} on the second line.

    Do not select this checkbox if the entire file is a single JSON object. (For an illustration of the two layouts, see the sketch after this procedure.)

  18. Choose the Log Source. The supported log sources are AWS CloudTrail (JSON), CrowdStrike (JSON), Generic (JSON), Proofpoint (JSON), and Teleport (JSON). Only one log source can be configured per connector.

    In addition, there may be other Log Sources listed in this drop-down menu. Contact the Stellar Cyber Customer Support team if you have a log source you want added.

  19. Click Next. The final confirmation tab appears.

  20. Click Submit.

    To pull data, a connector must be added to a Data Analyzer profile if it is running on the Data Processor.

  21. If you are adding rather than editing a connector with the Collect function enabled and you specified for it to run on a Data Processor, a dialog box now prompts you to add the connector to the default Data Analyzer profile. Click Cancel to leave it out of the default profile or click OK to add it to the default profile.

    • This prompt only occurs during the initial create connector process when Collect is enabled.

    • Certain connectors can be run on either a Sensor or a Data Processor, and some are best run on one versus the other. In any case, when the connector is run on a Data Processor, that connector must be included in a Data Analyzer profile. If you leave it out of the default profile, you must add it to another profile. You need the Administrator Root scope to add the connector to the Data Analyzer profile. If you do not have privileges to configure Data Analyzer profiles, a dialog displays recommending you ask your administrator to add it for you.

    • The first time you add a Collect connector to a profile, it pulls data immediately and then not again until the scheduled interval has elapsed. If the connector configuration dialog did not offer an option to set a specific interval, it is run every five minutes. Exceptions to this default interval are the Proofpoint on Demand (pulls data every 1 hour) and Azure Event Hub (continuously pulls data) connectors. The intervals for each connector are listed in the Connector Types & Functions topic.

    The Connector Overview appears.

The new connector is immediately active.
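
For reference, the following Python sketch illustrates how the File Type and Multiple Logs per File settings from steps 16 and 17 describe the layout of a file. It is a conceptual illustration only, not Stellar Cyber's parser, and the read_records helper is hypothetical.

import gzip
import json

def read_records(path, file_type="text", multiple_logs_per_file=False):
    """Return the JSON records contained in one file downloaded from the S3 bucket."""
    opener = gzip.open if file_type == "gzip" else open
    with opener(path, "rt") as handle:
        if multiple_logs_per_file:
            # Multiple Logs per File: one JSON object per line (newline-delimited JSON).
            return [json.loads(line) for line in handle if line.strip()]
        # Otherwise the whole file is a single JSON object.
        return [json.load(handle)]

# Example: a gzip-compressed file with one JSON object per line.
records = read_records("events.json.gz", file_type="gzip", multiple_logs_per_file=True)
print(len(records), "records")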

Testing the Connector

When you add (or edit) a connector, we recommend that you run a test to validate the connectivity parameters you entered. The test validates authentication and connectivity.

  1. Click System | Connectors (under Integrations). The Connector Overview appears.

  2. Locate the connector that you added or modified, or that you want to test.

  3. Click Test at the right side of that row. The test runs immediately.

    Note that you may run only one test at a time.

Stellar Cyber conducts a basic connectivity test for the connector and reports a success or failure result. A successful test indicates that you entered all of the connector information correctly.

To aid in troubleshooting your connector, the dialog remains open until you explicitly close it by using the X button. If the test fails, you can edit the connector from the same row to review and correct issues.

The connector status is updated every five (5) minutes. A successful test clears the connector status, but if issues persist, the status reverts to failed after a minute.

Repeat the test as needed.

If the test fails, the common HTTP status error codes are as follows:

  • 400 Bad Request: This error occurs when there is an error in the connector configuration.

    Recommendation: Did you configure the connector correctly?

  • 401 Unauthorized: This error occurs when an authentication credential is invalid or when a user does not have sufficient privileges to access a specific API.

    Recommendations: Did you enter your credentials correctly? Are your credentials expired? Are your credentials entitled or licensed for that specific resource?

  • 403 Forbidden: This error occurs when the permission or scope is not correct in a valid credential.

    Recommendations: Did you enter your credentials correctly? Do you have the required role or permissions for that credential?

  • 404 Not Found: This error occurs when a URL path does not resolve to an entity.

    Recommendation: Did you enter your API URL correctly?

  • 429 Too Many Requests: This error occurs when the API server receives too much traffic or if a user's license or entitlement quota is exceeded.

    Recommendation: The server or user license/quota will eventually recover, and the connector will periodically retry the query. If this occurs unexpectedly or too often, work with your API provider to investigate the server limits, user licensing, or quotas.

For a full list of codes, refer to HTTP response status codes.

Verifying Ingestion

To verify ingestion:

  1. Click Investigate | Threat Hunting. The Interflow Search tab appears.

  2. Change the Indices to match the log source:

    • For CrowdStrike (JSON), Generic (JSON), Proofpoint (JSON), and Teleport (JSON), change the Indices to Syslog.

    • For AWS CloudTrail (JSON), change the Indices to AWS Events.

    The table immediately updates to show ingested Interflow records.