
googleLoggingProjectSink

Manages a project-level logging sink. For more information, see the official Google Cloud Logging documentation on exporting logs.

\~> You can specify exclusions for log sinks created by Terraform by using the exclusions field of googleLoggingProjectSink.

\~> Note: You must have granted the "Logs Configuration Writer" IAM role (roles/logging.configWriter) to the credentials used with Terraform.

\~> Note: You must enable the Cloud Resource Manager API.

Example Usage

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google.
For a more precise conversion please use the --provider flag in convert.*/
new google.loggingProjectSink.LoggingProjectSink(this, "my-sink", {
  destination:
    "pubsub.googleapis.com/projects/my-project/topics/instance-activity",
  filter: "resource.type = gce_instance AND severity >= WARNING",
  name: "my-pubsub-instance-sink",
  uniqueWriterIdentity: true,
});

A more complete example follows: it creates a compute instance as well as a log sink that writes all of the instance's activity to a Cloud Storage bucket. Because uniqueWriterIdentity is set, the sink's writer identity must be granted access to the bucket.

Note that this grant requires the "Project IAM Admin" IAM role (roles/resourcemanager.projectIamAdmin) granted to the credentials used with Terraform.

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google.
For a more precise conversion please use the --provider flag in convert.*/
const googleComputeInstanceMyLoggedInstance =
  new google.computeInstance.ComputeInstance(this, "my-logged-instance", {
    bootDisk: {
      initializeParams: {
        image: "debian-cloud/debian-11",
      },
    },
    machineType: "e2-medium",
    name: "my-instance",
    networkInterface: [
      {
        accessConfig: [{}],
        network: "default",
      },
    ],
    zone: "us-central1-a",
  });
const googleStorageBucketLogBucket = new google.storageBucket.StorageBucket(
  this,
  "log-bucket",
  {
    location: "US",
    name: "my-unique-logging-bucket",
  }
);
const googleLoggingProjectSinkInstanceSink =
  new google.loggingProjectSink.LoggingProjectSink(this, "instance-sink", {
    description: "some explanation on what this is",
    destination: `storage.googleapis.com/${googleStorageBucketLogBucket.name}`,
    filter: `resource.type = gce_instance AND resource.labels.instance_id = "${googleComputeInstanceMyLoggedInstance.instanceId}"`,
    name: "my-instance-sink",
    uniqueWriterIdentity: true,
  });
new google.projectIamBinding.ProjectIamBinding(this, "log-writer", {
  members: [googleLoggingProjectSinkInstanceSink.writerIdentity],
  project: "your-project-id",
  role: "roles/storage.objectCreator",
});

The following example uses exclusions to filter out logs that should not be exported. Here, logs are exported to a Cloud Logging log bucket and two exclusions are configured.

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google.
For a more precise conversion please use the --provider flag in convert.*/
new google.loggingProjectSink.LoggingProjectSink(this, "log-bucket", {
  destination:
    "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default",
  exclusions: [
    {
      description: "Exclude logs from namespace-1 in k8s",
      filter:
        'resource.type = k8s_container resource.labels.namespace_name="namespace-1"',
      name: "nsexclusion1",
    },
    {
      description: "Exclude logs from namespace-2 in k8s",
      filter:
        'resource.type = k8s_container resource.labels.namespace_name="namespace-2"',
      name: "nsexclusion2",
    },
  ],
  name: "my-logging-sink",
  uniqueWriterIdentity: true,
});

Argument Reference

The following arguments are supported:

  • name - (Required) The name of the logging sink.

  • destination - (Required) The destination of the sink (or, in other words, where logs are written to). Can be a Cloud Storage bucket, a Pub/Sub topic, a BigQuery dataset, or a Cloud Logging log bucket. Examples:

    • storage.googleapis.com/[GCS_BUCKET]
    • bigquery.googleapis.com/projects/[PROJECT_ID]/datasets/[DATASET]
    • pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID]
    • logging.googleapis.com/projects/[PROJECT_ID]/locations/global/buckets/[BUCKET_ID]

    The writer associated with the sink must have access to write to the above resource.

  • filter - (Optional) The filter to apply when exporting logs. Only log entries that match the filter are exported. See Advanced Log Filters for information on how to write a filter.

  • description - (Optional) A description of this sink. The maximum length of the description is 8000 characters.

  • disabled - (Optional) If set to true, this sink is disabled and does not export any log entries.

  • project - (Optional) The ID of the project to create the sink in. If omitted, the project associated with the provider is used.

  • uniqueWriterIdentity - (Optional) Whether or not to create a unique identity associated with this sink. If false (the default), then the writerIdentity used is serviceAccount:cloud-logs@system.gserviceaccount.com. If true, then a unique service account is created and used for this sink. If you wish to publish logs across projects or utilize bigqueryOptions, you must set uniqueWriterIdentity to true.

  • bigqueryOptions - (Optional) Options that affect sinks exporting data to BigQuery. Structure documented below.

  • exclusions - (Optional) Log entries that match any of the exclusion filters are not exported. If a log entry matches both filter and one of the exclusions' filter expressions, it is not exported. Can be repeated multiple times for multiple exclusions. Structure is documented below.

The bigqueryOptions block supports:

  • usePartitionedTables - (Required) Whether to use BigQuery's partitioned tables. By default, Logging creates dated tables based on the log entries' timestamps, e.g. syslog_20170523. With partitioned tables, the date suffix is no longer present and special query syntax must be used instead. In both cases, tables are sharded based on the UTC timezone.
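As a sketch of how these options fit together, the following hypothetical sink exports to a BigQuery dataset with partitioned tables. The project and dataset names are placeholders, and the dataset is assumed to already exist; note that a BigQuery destination requires uniqueWriterIdentity to be true.

```typescript
// Hypothetical example: export compute instance logs to a BigQuery
// dataset using partitioned tables. The sink's writerIdentity must be
// granted the BigQuery Data Editor role on the dataset separately.
new google.loggingProjectSink.LoggingProjectSink(this, "bigquery-sink", {
  bigqueryOptions: {
    usePartitionedTables: true,
  },
  destination:
    "bigquery.googleapis.com/projects/my-project/datasets/instance_logs",
  filter: "resource.type = gce_instance",
  name: "my-bigquery-sink",
  // Required when exporting to BigQuery with bigqueryOptions.
  uniqueWriterIdentity: true,
});
```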

The exclusions block supports:

  • name - (Required) A client-assigned identifier, such as load-balancer-exclusion. Identifiers are limited to 100 characters and can include only letters, digits, underscores, hyphens, and periods. The first character must be alphanumeric.
  • description - (Optional) A description of this exclusion.
  • filter - (Required) An advanced logs filter that matches the log entries to be excluded. By using the sample function, you can exclude less than 100% of the matching log entries. See Advanced Log Filters for information on how to write a filter.
  • disabled - (Optional) If set to true, this exclusion is disabled and does not exclude any log entries.

Attributes Reference

In addition to the arguments listed above, the following computed attributes are exported:

  • id - An identifier for the resource, with the format projects/{{project}}/sinks/{{name}}.

  • writerIdentity - The identity associated with this sink. This identity must be granted write access to the configured destination.

Import

Project-level logging sinks can be imported using their URI, e.g.

$ terraform import google_logging_project_sink.my_sink projects/my-project/sinks/my-sink