
googleDatastreamStream

A resource representing streaming data from a source to a destination.

To get more information about Stream, see the official Datastream documentation and the Datastream REST API reference.

Example Usage - Datastream Stream Full

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
import * as random from "./.gen/providers/random";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google, random.
For a more precise conversion please use the --provider flag in convert.*/
const googleSqlDatabaseInstanceInstance =
  new google.sqlDatabaseInstance.SqlDatabaseInstance(this, "instance", {
    database_version: "MYSQL_8_0",
    deletion_protection: true,
    name: "my-instance",
    region: "us-central1",
    settings: [
      {
        backup_configuration: [
          {
            binary_log_enabled: true,
            enabled: true,
          },
        ],
        ip_configuration: [
          {
            authorized_networks: [
              {
                value: "34.71.242.81",
              },
              {
                value: "34.72.28.29",
              },
              {
                value: "34.67.6.157",
              },
              {
                value: "34.67.234.134",
              },
              {
                value: "34.72.239.218",
              },
            ],
          },
        ],
        tier: "db-f1-micro",
      },
    ],
  });
const googleStorageBucketBucket = new google.storageBucket.StorageBucket(
  this,
  "bucket",
  {
    location: "US",
    name: "my-bucket",
    uniform_bucket_level_access: true,
  }
);
const randomPasswordPwd = new random.password.Password(this, "pwd", {
  length: 16,
  special: false,
});
const dataGoogleProjectProject = new google.dataGoogleProject.DataGoogleProject(
  this,
  "project",
  {}
);
const googleDatastreamConnectionProfileDestinationConnectionProfile =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "destination_connection_profile",
    {
      connection_profile_id: "destination-profile",
      display_name: "Connection profile",
      gcs_profile: [
        {
          bucket: googleStorageBucketBucket.name,
          root_path: "/path",
        },
      ],
      location: "us-central1",
    }
  );
const googleKmsCryptoKeyIamMemberKeyUser =
  new google.kmsCryptoKeyIamMember.KmsCryptoKeyIamMember(this, "key_user", {
    crypto_key_id: "kms-name",
    member: `serviceAccount:service-\${${dataGoogleProjectProject.number}}@gcp-sa-datastream.iam.gserviceaccount.com`,
    role: "roles/cloudkms.cryptoKeyEncrypterDecrypter",
  });
new google.sqlDatabase.SqlDatabase(this, "db", {
  instance: googleSqlDatabaseInstanceInstance.name,
  name: "db",
});
const googleSqlUserUser = new google.sqlUser.SqlUser(this, "user", {
  host: "%",
  instance: googleSqlDatabaseInstanceInstance.name,
  name: "user",
  password: randomPasswordPwd.result,
});
new google.storageBucketIamMember.StorageBucketIamMember(this, "creator", {
  bucket: googleStorageBucketBucket.name,
  member: `serviceAccount:service-\${${dataGoogleProjectProject.number}}@gcp-sa-datastream.iam.gserviceaccount.com`,
  role: "roles/storage.objectCreator",
});
new google.storageBucketIamMember.StorageBucketIamMember(this, "reader", {
  bucket: googleStorageBucketBucket.name,
  member: `serviceAccount:service-\${${dataGoogleProjectProject.number}}@gcp-sa-datastream.iam.gserviceaccount.com`,
  role: "roles/storage.legacyBucketReader",
});
new google.storageBucketIamMember.StorageBucketIamMember(this, "viewer", {
  bucket: googleStorageBucketBucket.name,
  member: `serviceAccount:service-\${${dataGoogleProjectProject.number}}@gcp-sa-datastream.iam.gserviceaccount.com`,
  role: "roles/storage.objectViewer",
});
const googleDatastreamConnectionProfileSourceConnectionProfile =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "source_connection_profile",
    {
      connection_profile_id: "source-profile",
      display_name: "Source connection profile",
      location: "us-central1",
      mysql_profile: [
        {
          hostname: googleSqlDatabaseInstanceInstance.publicIpAddress,
          password: googleSqlUserUser.password,
          username: googleSqlUserUser.name,
        },
      ],
    }
  );
new google.datastreamStream.DatastreamStream(this, "default", {
  backfill_all: [
    {
      mysql_excluded_objects: [
        {
          mysql_databases: [
            {
              database: "my-database",
              mysql_tables: [
                {
                  mysql_columns: [
                    {
                      collation: "utf8mb4",
                      column: "excludedColumn",
                      data_type: "VARCHAR",
                      nullable: false,
                      ordinal_position: 0,
                      primary_key: false,
                    },
                  ],
                  table: "excludedTable",
                },
              ],
            },
          ],
        },
      ],
    },
  ],
  customer_managed_encryption_key: "kms-name",
  depends_on: [`\${${googleKmsCryptoKeyIamMemberKeyUser.fqn}}`],
  desired_state: "NOT_STARTED",
  destination_config: [
    {
      destination_connection_profile:
        googleDatastreamConnectionProfileDestinationConnectionProfile.id,
      gcs_destination_config: [
        {
          file_rotation_interval: "60s",
          file_rotation_mb: 200,
          json_file_format: [
            {
              compression: "GZIP",
              schema_file_format: "NO_SCHEMA_FILE",
            },
          ],
          path: "mydata",
        },
      ],
    },
  ],
  display_name: "my stream",
  labels: {
    key: "value",
  },
  location: "us-central1",
  source_config: [
    {
      mysql_source_config: [
        {
          exclude_objects: [
            {
              mysql_databases: [
                {
                  database: "my-database",
                  mysql_tables: [
                    {
                      mysql_columns: [
                        {
                          collation: "utf8mb4",
                          column: "excludedColumn",
                          data_type: "VARCHAR",
                          nullable: false,
                          ordinal_position: 0,
                          primary_key: false,
                        },
                      ],
                      table: "excludedTable",
                    },
                  ],
                },
              ],
            },
          ],
          include_objects: [
            {
              mysql_databases: [
                {
                  database: "my-database",
                  mysql_tables: [
                    {
                      mysql_columns: [
                        {
                          collation: "utf8mb4",
                          column: "includedColumn",
                          data_type: "VARCHAR",
                          nullable: false,
                          ordinal_position: 0,
                          primary_key: false,
                        },
                      ],
                      table: "includedTable",
                    },
                  ],
                },
              ],
            },
          ],
          max_concurrent_cdc_tasks: 5,
        },
      ],
      source_connection_profile:
        googleDatastreamConnectionProfileSourceConnectionProfile.id,
    },
  ],
  stream_id: "my-stream",
});

Example Usage - Datastream Stream Postgresql

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google.
For a more precise conversion please use the --provider flag in convert.*/
const googleDatastreamConnectionProfileDestination =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "destination",
    {
      bigquery_profile: [{}],
      connection_profile_id: "destination-profile",
      display_name: "BigQuery Destination",
      location: "us-central1",
    }
  );
const googleDatastreamConnectionProfileSource =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "source",
    {
      connection_profile_id: "source-profile",
      display_name: "Postgresql Source",
      location: "us-central1",
      postgresql_profile: [
        {
          database: "postgres",
          hostname: "hostname",
          password: "pass",
          port: 3306,
          username: "user",
        },
      ],
    }
  );
new google.datastreamStream.DatastreamStream(this, "default", {
  backfill_all: [
    {
      postgresql_excluded_objects: [
        {
          postgresql_schemas: [
            {
              postgresql_tables: [
                {
                  postgresql_columns: [
                    {
                      column: "column",
                    },
                  ],
                  table: "table",
                },
              ],
              schema: "schema",
            },
          ],
        },
      ],
    },
  ],
  desired_state: "RUNNING",
  destination_config: [
    {
      bigquery_destination_config: [
        {
          data_freshness: "900s",
          source_hierarchy_datasets: [
            {
              dataset_template: [
                {
                  location: "us-central1",
                },
              ],
            },
          ],
        },
      ],
      destination_connection_profile:
        googleDatastreamConnectionProfileDestination.id,
    },
  ],
  display_name: "Postgres to BigQuery",
  location: "us-central1",
  source_config: [
    {
      postgresql_source_config: [
        {
          exclude_objects: [
            {
              postgresql_schemas: [
                {
                  postgresql_tables: [
                    {
                      postgresql_columns: [
                        {
                          column: "column",
                        },
                      ],
                      table: "table",
                    },
                  ],
                  schema: "schema",
                },
              ],
            },
          ],
          include_objects: [
            {
              postgresql_schemas: [
                {
                  postgresql_tables: [
                    {
                      postgresql_columns: [
                        {
                          column: "column",
                        },
                      ],
                      table: "table",
                    },
                  ],
                  schema: "schema",
                },
              ],
            },
          ],
          max_concurrent_backfill_tasks: 12,
          publication: "publication",
          replication_slot: "replication_slot",
        },
      ],
      source_connection_profile: googleDatastreamConnectionProfileSource.id,
    },
  ],
  stream_id: "my-stream",
});

Example Usage - Datastream Stream Oracle

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google.
For a more precise conversion please use the --provider flag in convert.*/
const googleDatastreamConnectionProfileDestination =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "destination",
    {
      bigquery_profile: [{}],
      connection_profile_id: "destination-profile",
      display_name: "BigQuery Destination",
      location: "us-central1",
    }
  );
const googleDatastreamConnectionProfileSource =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "source",
    {
      connection_profile_id: "source-profile",
      display_name: "Oracle Source",
      location: "us-central1",
      oracle_profile: [
        {
          database_service: "ORCL",
          hostname: "hostname",
          password: "pass",
          port: 1521,
          username: "user",
        },
      ],
    }
  );
new google.datastreamStream.DatastreamStream(this, "stream5", {
  backfill_all: [
    {
      oracle_excluded_objects: [
        {
          oracle_schemas: [
            {
              oracle_tables: [
                {
                  oracle_columns: [
                    {
                      column: "column",
                    },
                  ],
                  table: "table",
                },
              ],
              schema: "schema",
            },
          ],
        },
      ],
    },
  ],
  desired_state: "RUNNING",
  destination_config: [
    {
      bigquery_destination_config: [
        {
          data_freshness: "900s",
          source_hierarchy_datasets: [
            {
              dataset_template: [
                {
                  location: "us-central1",
                },
              ],
            },
          ],
        },
      ],
      destination_connection_profile:
        googleDatastreamConnectionProfileDestination.id,
    },
  ],
  display_name: "Oracle to BigQuery",
  location: "us-central1",
  source_config: [
    {
      oracle_source_config: [
        {
          drop_large_objects: [{}],
          exclude_objects: [
            {
              oracle_schemas: [
                {
                  oracle_tables: [
                    {
                      oracle_columns: [
                        {
                          column: "column",
                        },
                      ],
                      table: "table",
                    },
                  ],
                  schema: "schema",
                },
              ],
            },
          ],
          include_objects: [
            {
              oracle_schemas: [
                {
                  oracle_tables: [
                    {
                      oracle_columns: [
                        {
                          column: "column",
                        },
                      ],
                      table: "table",
                    },
                  ],
                  schema: "schema",
                },
              ],
            },
          ],
          max_concurrent_backfill_tasks: 12,
          max_concurrent_cdc_tasks: 8,
        },
      ],
      source_connection_profile: googleDatastreamConnectionProfileSource.id,
    },
  ],
  stream_id: "my-stream",
});

Example Usage - Datastream Stream Postgresql Bigquery Dataset

resource "google_bigquery_dataset" "postgres" {
  dataset_id    = "postgres%{random_suffix}"
  friendly_name = "postgres"
  description   = "Database of postgres"
  location      = "us-central1"
}

resource "google_datastream_stream" "default" {
  display_name  = "postgres to bigQuery"
  location      = "us-central1"
  stream_id     = "postgres-to-big-query%{random_suffix}"

   source_config {
    source_connection_profile = google_datastream_connection_profile.source_connection_profile.id
    mysql_source_config {}
  }

  destination_config {
    destination_connection_profile = google_datastream_connection_profile.destination_connection_profile2.id
    bigquery_destination_config {
      data_freshness = "900s"
      single_target_dataset {
        dataset_id = google_bigquery_dataset.postgres.id
      }
    }
  }

  backfill_all {
  }

}

resource "google_datastream_connection_profile" "destination_connection_profile2" {
    display_name          = "Connection profile"
    location              = "us-central1"
    connection_profile_id = "tf-test-destination-profile%{random_suffix}"
    bigquery_profile {}
}

resource "google_sql_database_instance" "instance" {
    name             = "tf-test-my-instance%{random_suffix}"
    database_version = "MYSQL_8_0"
    region           = "us-central1"
    settings {
        tier = "db-f1-micro"
        backup_configuration {
            enabled            = true
            binary_log_enabled = true
        }

        ip_configuration {
            // Datastream IPs will vary by region.
            authorized_networks {
                value = "34.71.242.81"
            }

            authorized_networks {
                value = "34.72.28.29"
            }

            authorized_networks {
                value = "34.67.6.157"
            }

            authorized_networks {
                value = "34.67.234.134"
            }

            authorized_networks {
                value = "34.72.239.218"
            }
        }
    }

    deletion_protection  = false
}

resource "google_sql_database" "db" {
    instance = google_sql_database_instance.instance.name
    name     = "db"
}

resource "random_password" "pwd" {
    length = 16
    special = false
}

resource "google_sql_user" "user" {
    name     = "user%{random_suffix}"
    instance = google_sql_database_instance.instance.name
    host     = "%"
    password = random_password.pwd.result
}

resource "google_datastream_connection_profile" "source_connection_profile" {
    display_name          = "Source connection profile"
    location              = "us-central1"
    connection_profile_id = "tf-test-source-profile%{random_suffix}"

    mysql_profile {
        hostname = google_sql_database_instance.instance.public_ip_address
        username = google_sql_user.user.name
        password = google_sql_user.user.password
    }
}

Example Usage - Datastream Stream Bigquery

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as google from "./.gen/providers/google";
import * as random from "./.gen/providers/random";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: google, random.
For a more precise conversion please use the --provider flag in convert.*/
const googleDatastreamConnectionProfileDestinationConnectionProfile =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "destination_connection_profile",
    {
      bigquery_profile: [{}],
      connection_profile_id: "destination-profile",
      display_name: "Connection profile",
      location: "us-central1",
    }
  );
const googleSqlDatabaseInstanceInstance =
  new google.sqlDatabaseInstance.SqlDatabaseInstance(this, "instance", {
    database_version: "MYSQL_8_0",
    deletion_protection: true,
    name: "my-instance",
    region: "us-central1",
    settings: [
      {
        backup_configuration: [
          {
            binary_log_enabled: true,
            enabled: true,
          },
        ],
        ip_configuration: [
          {
            authorized_networks: [
              {
                value: "34.71.242.81",
              },
              {
                value: "34.72.28.29",
              },
              {
                value: "34.67.6.157",
              },
              {
                value: "34.67.234.134",
              },
              {
                value: "34.72.239.218",
              },
            ],
          },
        ],
        tier: "db-f1-micro",
      },
    ],
  });
const randomPasswordPwd = new random.password.Password(this, "pwd", {
  length: 16,
  special: false,
});
const dataGoogleBigqueryDefaultServiceAccountBqSa =
  new google.dataGoogleBigqueryDefaultServiceAccount.DataGoogleBigqueryDefaultServiceAccount(
    this,
    "bq_sa",
    {}
  );
new google.dataGoogleProject.DataGoogleProject(this, "project", {});
const googleKmsCryptoKeyIamMemberBigqueryKeyUser =
  new google.kmsCryptoKeyIamMember.KmsCryptoKeyIamMember(
    this,
    "bigquery_key_user",
    {
      crypto_key_id: "bigquery-kms-name",
      member: `serviceAccount:\${${dataGoogleBigqueryDefaultServiceAccountBqSa.email}}`,
      role: "roles/cloudkms.cryptoKeyEncrypterDecrypter",
    }
  );
new google.sqlDatabase.SqlDatabase(this, "db", {
  instance: googleSqlDatabaseInstanceInstance.name,
  name: "db",
});
const googleSqlUserUser = new google.sqlUser.SqlUser(this, "user", {
  host: "%",
  instance: googleSqlDatabaseInstanceInstance.name,
  name: "user",
  password: randomPasswordPwd.result,
});
const googleDatastreamConnectionProfileSourceConnectionProfile =
  new google.datastreamConnectionProfile.DatastreamConnectionProfile(
    this,
    "source_connection_profile",
    {
      connection_profile_id: "source-profile",
      display_name: "Source connection profile",
      location: "us-central1",
      mysql_profile: [
        {
          hostname: googleSqlDatabaseInstanceInstance.publicIpAddress,
          password: googleSqlUserUser.password,
          username: googleSqlUserUser.name,
        },
      ],
    }
  );
new google.datastreamStream.DatastreamStream(this, "default", {
  backfill_none: [{}],
  depends_on: [`\${${googleKmsCryptoKeyIamMemberBigqueryKeyUser.fqn}}`],
  destination_config: [
    {
      bigquery_destination_config: [
        {
          source_hierarchy_datasets: [
            {
              dataset_template: [
                {
                  kms_key_name: "bigquery-kms-name",
                  location: "us-central1",
                },
              ],
            },
          ],
        },
      ],
      destination_connection_profile:
        googleDatastreamConnectionProfileDestinationConnectionProfile.id,
    },
  ],
  display_name: "my stream",
  location: "us-central1",
  source_config: [
    {
      mysql_source_config: [{}],
      source_connection_profile:
        googleDatastreamConnectionProfileSourceConnectionProfile.id,
    },
  ],
  stream_id: "my-stream",
});

Argument Reference

The following arguments are supported:

  • displayName - (Required) Display name.

  • sourceConfig - (Required) Source connection profile configuration. Structure is documented below.

  • destinationConfig - (Required) Destination connection profile configuration. Structure is documented below.

  • streamId - (Required) The stream identifier.

  • location - (Required) The name of the location this stream is located in.

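Putting the required arguments together, a minimal stream declaration might look like the following HCL sketch; the connection profile references are placeholders, and (as the examples above show) each config block also needs a source-type block, a destination-type block, and a backfill block:

```hcl
resource "google_datastream_stream" "minimal" {
  display_name = "my stream"
  stream_id    = "my-stream"
  location     = "us-central1"

  source_config {
    # Placeholder reference to an existing connection profile.
    source_connection_profile = google_datastream_connection_profile.source.id
    mysql_source_config {}
  }

  destination_config {
    destination_connection_profile = google_datastream_connection_profile.destination.id
    gcs_destination_config {}
  }

  # Backfill strategy, as in the examples above.
  backfill_all {}
}
```
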
The sourceConfig block supports:

  • sourceConnectionProfile - (Required) Source connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}

  • mysqlSourceConfig - (Optional) MySQL data source configuration. Structure is documented below.

  • oracleSourceConfig - (Optional) Oracle data source configuration. Structure is documented below.

  • postgresqlSourceConfig - (Optional) PostgreSQL data source configuration. Structure is documented below.

The mysqlSourceConfig block supports:

  • includeObjects - (Optional) MySQL objects to retrieve from the source. Structure is documented below.

  • excludeObjects - (Optional) MySQL objects to exclude from the stream. Structure is documented below.

  • maxConcurrentCdcTasks - (Optional) Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.

The includeObjects block supports:

  • mysqlDatabases - (Required) MySQL databases on the server. Structure is documented below.

The mysqlDatabases block supports:

  • database - (Required) Database name.

  • mysqlTables - (Optional) Tables in the database. Structure is documented below.

The mysqlTables block supports:

  • table - (Required) Table name.

  • mysqlColumns - (Optional) MySQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The mysqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The MySQL data type. Full data types list can be found here: https://dev.mysql.com/doc/refman/8.0/en/data-types.html

  • length - (Output) Column length.

  • collation - (Optional) Column collation.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

The excludeObjects block supports:

  • mysqlDatabases - (Required) MySQL databases on the server. Structure is documented below.

The mysqlDatabases block supports:

  • database - (Required) Database name.

  • mysqlTables - (Optional) Tables in the database. Structure is documented below.

The mysqlTables block supports:

  • table - (Required) Table name.

  • mysqlColumns - (Optional) MySQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The mysqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The MySQL data type. Full data types list can be found here: https://dev.mysql.com/doc/refman/8.0/en/data-types.html

  • length - (Output) Column length.

  • collation - (Optional) Column collation.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

The oracleSourceConfig block supports:

  • includeObjects - (Optional) Oracle objects to retrieve from the source. Structure is documented below.

  • excludeObjects - (Optional) Oracle objects to exclude from the stream. Structure is documented below.

  • maxConcurrentCdcTasks - (Optional) Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.

  • maxConcurrentBackfillTasks - (Optional) Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.

  • dropLargeObjects - (Optional) Configuration to drop large object values.

  • streamLargeObjects - (Optional) Configuration to stream large object values.

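Neither large-object option appears in the examples above. As a hedged sketch, enabling stream_large_objects inside oracle_source_config is just an empty block (the connection profile reference is a placeholder):

```hcl
source_config {
  source_connection_profile = google_datastream_connection_profile.source.id
  oracle_source_config {
    # Stream large object (LOB) values rather than dropping them;
    # use drop_large_objects {} instead to discard them.
    stream_large_objects {}
  }
}
```
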
The includeObjects block supports:

  • oracleSchemas - (Required) Oracle schemas/databases in the database server. Structure is documented below.

The oracleSchemas block supports:

  • schema - (Required) Schema name.

  • oracleTables - (Optional) Tables in the database. Structure is documented below.

The oracleTables block supports:

  • table - (Required) Table name.

  • oracleColumns - (Optional) Oracle columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The oracleColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The Oracle data type. Full data types list can be found here: https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/Data-Types.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • encoding - (Output) Column encoding.

  • primaryKey - (Output) Whether or not the column represents a primary key.

  • nullable - (Output) Whether or not the column can accept a null value.

  • ordinalPosition - (Output) The ordinal position of the column in the table.

The excludeObjects block supports:

  • oracleSchemas - (Required) Oracle schemas/databases in the database server. Structure is documented below.

The oracleSchemas block supports:

  • schema - (Required) Schema name.

  • oracleTables - (Optional) Tables in the database. Structure is documented below.

The oracleTables block supports:

  • table - (Required) Table name.

  • oracleColumns - (Optional) Oracle columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The oracleColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The Oracle data type. Full data types list can be found here: https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/Data-Types.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • encoding - (Output) Column encoding.

  • primaryKey - (Output) Whether or not the column represents a primary key.

  • nullable - (Output) Whether or not the column can accept a null value.

  • ordinalPosition - (Output) The ordinal position of the column in the table.

The postgresqlSourceConfig block supports:

  • includeObjects - (Optional) PostgreSQL objects to retrieve from the source. Structure is documented below.

  • excludeObjects - (Optional) PostgreSQL objects to exclude from the stream. Structure is documented below.

  • replicationSlot - (Required) The name of the logical replication slot that's configured with the pgoutput plugin.

  • publication - (Required) The name of the publication that includes the set of all tables that are defined in the stream's include_objects.

  • maxConcurrentBackfillTasks - (Optional) Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.

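Note that Datastream does not create the replication slot or the publication; they must already exist on the source database. A sketch of the wiring, with placeholder names and the PostgreSQL-side setup shown in comments:

```hcl
postgresql_source_config {
  # These objects must be created on the source database beforehand, e.g.:
  #   CREATE PUBLICATION my_publication FOR ALL TABLES;
  #   SELECT pg_create_logical_replication_slot('my_slot', 'pgoutput');
  publication      = "my_publication"
  replication_slot = "my_slot"
}
```
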
The includeObjects block supports:

  • postgresqlSchemas - (Required) PostgreSQL schemas on the server. Structure is documented below.

The postgresqlSchemas block supports:

  • schema - (Required) Schema name.

  • postgresqlTables - (Optional) Tables in the schema. Structure is documented below.

The postgresqlTables block supports:

  • table - (Required) Table name.

  • postgresqlColumns - (Optional) PostgreSQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The postgresqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The PostgreSQL data type. Full data types list can be found here: https://www.postgresql.org/docs/current/datatype.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

The excludeObjects block supports:

  • postgresqlSchemas - (Required) PostgreSQL schemas on the server. Structure is documented below.

The postgresqlSchemas block supports:

  • schema - (Required) Schema name.

  • postgresqlTables - (Optional) Tables in the schema. Structure is documented below.

The postgresqlTables block supports:

  • table - (Required) Table name.

  • postgresqlColumns - (Optional) PostgreSQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The postgresqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The PostgreSQL data type. Full data types list can be found here: https://www.postgresql.org/docs/current/datatype.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

The destinationConfig block supports:

  • destinationConnectionProfile - (Required) Destination connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}

  • gcsDestinationConfig - (Optional) A configuration for how data should be loaded to Cloud Storage. Structure is documented below.

  • bigqueryDestinationConfig - (Optional) A configuration for how data should be loaded to BigQuery. Structure is documented below.

The gcsDestinationConfig block supports:

  • path - (Optional) Path inside the Cloud Storage bucket to write data to.

  • fileRotationMb - (Optional) The maximum file size to be saved in the bucket.

  • fileRotationInterval - (Optional) The maximum duration for which new events are added before a file is closed and a new file is created. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". Defaults to 900s.

  • avroFileFormat - (Optional) AVRO file format configuration.

  • jsonFileFormat - (Optional) JSON file format configuration. Structure is documented below.

The jsonFileFormat block supports:

  • schemaFileFormat - (Optional) The schema file format to write alongside the JSON data files. Possible values are noSchemaFile and avroSchemaFile.

  • compression - (Optional) Compression of the loaded JSON file. Possible values are noCompression and gzip.

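Combining the rotation and file-format options above, a gcsDestinationConfig might be sketched as follows. The path and rotation values are illustrative, and the upper-case enum spellings used here assume the underlying API's canonical form.

```typescript
// Hypothetical gcs_destination_config fragment: rotate output files every
// 100 MB or 60 seconds, writing gzip-compressed JSON without schema files.
const gcsDestinationConfig = {
  path: "/events",
  file_rotation_mb: 100,
  file_rotation_interval: "60s",
  json_file_format: [
    {
      schema_file_format: "NO_SCHEMA_FILE", // or "AVRO_SCHEMA_FILE"
      compression: "GZIP", // or "NO_COMPRESSION"
    },
  ],
};
```
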
The bigqueryDestinationConfig block supports:

  • dataFreshness - (Optional) The guaranteed data freshness (in seconds) when querying tables created by the stream. Editing this field only affects tables created in the future; existing tables are not impacted. Lower values mean queries return fresher data, but may result in higher cost. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". Defaults to 900s.

  • singleTargetDataset - (Optional) A single target dataset to which all data will be streamed. Structure is documented below.

  • sourceHierarchyDatasets - (Optional) Destination datasets are created so that hierarchy of the destination data objects matches the source hierarchy. Structure is documented below.

The singleTargetDataset block supports:

  • datasetId - (Required) Dataset ID in the format projects/{project}/datasets/{dataset_id} or {project}:{dataset_id}

The sourceHierarchyDatasets block supports:

  • datasetTemplate - (Required) Dataset template used for dynamic dataset creation. Structure is documented below.

The datasetTemplate block supports:

  • location - (Required) The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations.

  • datasetIdPrefix - (Optional) If supplied, every created dataset will have its name prefixed by the provided value. The prefix and the dataset name are separated by an underscore, i.e. {prefix}_{datasetName}.

  • kmsKeyName - (Optional) Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key. i.e. projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}. See https://cloud.google.com/bigquery/docs/customer-managed-encryption for more information.

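Putting the BigQuery options together, a sourceHierarchyDatasets configuration could be sketched as a plain object like this. The location, prefix, and freshness values are illustrative.

```typescript
// Hypothetical bigquery_destination_config fragment: mirror the source
// hierarchy into datasets prefixed "stream", with 15-minute data freshness.
const bigqueryDestinationConfig = {
  data_freshness: "900s",
  source_hierarchy_datasets: [
    {
      dataset_template: [
        {
          location: "us-central1",
          dataset_id_prefix: "stream",
        },
      ],
    },
  ],
};
```
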

In addition to the nested blocks above, the following top-level arguments are supported:

  • labels - (Optional) Labels.

  • backfillAll - (Optional) Backfill strategy to automatically backfill the Stream's objects. Specific objects can be excluded. Structure is documented below.

  • backfillNone - (Optional) Backfill strategy to disable automatic backfill for the Stream's objects.

  • customerManagedEncryptionKey - (Optional) A reference to a KMS encryption key. If provided, it will be used to encrypt the data. If left blank, data will be encrypted using an internal Stream-specific encryption key provisioned through KMS.

  • project - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

  • desiredState - (Optional) Desired state of the Stream. Set this field to running to start the stream, and paused to pause the stream.

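A sketch of how these top-level arguments might be combined, as a plain snake_case object: the label values are hypothetical, backfillNone is chosen here purely for illustration, and the upper-case "RUNNING" assumes the API's canonical spelling of the desired state.

```typescript
// Hypothetical top-level stream arguments: label the stream, start it
// immediately, and disable automatic backfill.
const streamArgs = {
  labels: { team: "data-eng" },
  desired_state: "RUNNING",
  backfill_none: [{}], // choose either backfill_none or backfill_all, not both
};
```
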
The backfillAll block supports:

  • mysqlExcludedObjects - (Optional) MySQL data source objects to avoid backfilling. Structure is documented below.

  • postgresqlExcludedObjects - (Optional) PostgreSQL data source objects to avoid backfilling. Structure is documented below.

  • oracleExcludedObjects - (Optional) Oracle data source objects to avoid backfilling. Structure is documented below.

The mysqlExcludedObjects block supports:

  • mysqlDatabases - (Required) MySQL databases on the server. Structure is documented below.

The mysqlDatabases block supports:

  • database - (Required) Database name.

  • mysqlTables - (Optional) Tables in the database. Structure is documented below.

The mysqlTables block supports:

  • table - (Required) Table name.

  • mysqlColumns - (Optional) MySQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The mysqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The MySQL data type. Full data types list can be found here: https://dev.mysql.com/doc/refman/8.0/en/data-types.html

  • length - (Output) Column length.

  • collation - (Optional) Column collation.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

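The mysqlExcludedObjects hierarchy above can be sketched end-to-end as a plain object; the database and table names are hypothetical, and in a real stack this fragment would sit inside the backfillAll block.

```typescript
// Hypothetical backfill_all fragment: backfill every object except the
// "sessions" table in the "app" MySQL database.
const backfillAll = {
  mysql_excluded_objects: [
    {
      mysql_databases: [
        {
          database: "app",
          mysql_tables: [{ table: "sessions" }],
        },
      ],
    },
  ],
};
```
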
The postgresqlExcludedObjects block supports:

  • postgresqlSchemas - (Required) PostgreSQL schemas on the server. Structure is documented below.

The postgresqlSchemas block supports:

  • schema - (Required) Schema name.

  • postgresqlTables - (Optional) Tables in the schema. Structure is documented below.

The postgresqlTables block supports:

  • table - (Required) Table name.

  • postgresqlColumns - (Optional) PostgreSQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The postgresqlColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The PostgreSQL data type. Full data types list can be found here: https://www.postgresql.org/docs/current/datatype.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • primaryKey - (Optional) Whether or not the column represents a primary key.

  • nullable - (Optional) Whether or not the column can accept a null value.

  • ordinalPosition - (Optional) The ordinal position of the column in the table.

The oracleExcludedObjects block supports:

  • oracleSchemas - (Required) Oracle schemas/databases in the database server. Structure is documented below.

The oracleSchemas block supports:

  • schema - (Required) Schema name.

  • oracleTables - (Optional) Tables in the database. Structure is documented below.

The oracleTables block supports:

  • table - (Required) Table name.

  • oracleColumns - (Optional) Oracle columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. Structure is documented below.

The oracleColumns block supports:

  • column - (Optional) Column name.

  • dataType - (Optional) The Oracle data type. Full data types list can be found here: https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/Data-Types.html

  • length - (Output) Column length.

  • precision - (Output) Column precision.

  • scale - (Output) Column scale.

  • encoding - (Output) Column encoding.

  • primaryKey - (Output) Whether or not the column represents a primary key.

  • nullable - (Output) Whether or not the column can accept a null value.

  • ordinalPosition - (Output) The ordinal position of the column in the table.

Attributes Reference

In addition to the arguments listed above, the following computed attributes are exported:

  • id - An identifier for the resource, with format projects/{{project}}/locations/{{location}}/streams/{{streamId}}

  • name - The stream's name.

  • state - The state of the stream.

Timeouts

This resource provides the following Timeouts configuration options:

  • create - Default is 20 minutes.
  • update - Default is 20 minutes.
  • delete - Default is 20 minutes.

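These defaults can be overridden per resource; a minimal sketch of a timeouts override as a plain object, with an illustrative longer create window:

```typescript
// Hypothetical timeouts override: allow up to 45 minutes for create while
// keeping the 20-minute defaults for update and delete.
const timeouts = {
  create: "45m",
  update: "20m",
  delete: "20m",
};
```
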
Import

Stream can be imported using any of these accepted formats:

$ terraform import google_datastream_stream.default projects/{{project}}/locations/{{location}}/streams/{{stream_id}}
$ terraform import google_datastream_stream.default {{project}}/{{location}}/{{stream_id}}
$ terraform import google_datastream_stream.default {{location}}/{{stream_id}}

User Project Overrides

This resource supports User Project Overrides.
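
The override is enabled on the provider rather than on this resource. A sketch of the relevant provider arguments as a plain object, assuming the google provider's user_project_override and billing_project settings; the project IDs are hypothetical.

```typescript
// Hypothetical provider arguments: bill API calls to a quota project
// instead of the resource's own project.
const providerArgs = {
  project: "my-project",
  user_project_override: true,
  billing_project: "my-quota-project",
};
```
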