azurermSynapseSparkPool

Manages a Synapse Spark Pool.

Example Usage

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as azurerm from "./.gen/providers/azurerm";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: azurerm.
For a more precise conversion please use the --provider flag in convert.*/
const azurermResourceGroupExample = new azurerm.resourceGroup.ResourceGroup(
  this,
  "example",
  {
    location: "West Europe",
    name: "example-resources",
  }
);
const azurermStorageAccountExample = new azurerm.storageAccount.StorageAccount(
  this,
  "example_1",
  {
    account_kind: "StorageV2",
    account_replication_type: "LRS",
    account_tier: "Standard",
    is_hns_enabled: true,
    location: azurermResourceGroupExample.location,
    name: "examplestorageacc",
    resource_group_name: azurermResourceGroupExample.name,
  }
);
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermStorageAccountExample.overrideLogicalId("example");
const azurermStorageDataLakeGen2FilesystemExample =
  new azurerm.storageDataLakeGen2Filesystem.StorageDataLakeGen2Filesystem(
    this,
    "example_2",
    {
      name: "example",
      storage_account_id: azurermStorageAccountExample.id,
    }
  );
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermStorageDataLakeGen2FilesystemExample.overrideLogicalId("example");
const azurermSynapseWorkspaceExample =
  new azurerm.synapseWorkspace.SynapseWorkspace(this, "example_3", {
    identity: [
      {
        type: "SystemAssigned",
      },
    ],
    location: azurermResourceGroupExample.location,
    name: "example",
    resource_group_name: azurermResourceGroupExample.name,
    sql_administrator_login: "sqladminuser",
    sql_administrator_login_password: "H@Sh1CoR3!",
    storage_data_lake_gen2_filesystem_id:
      azurermStorageDataLakeGen2FilesystemExample.id,
  });
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermSynapseWorkspaceExample.overrideLogicalId("example");
const azurermSynapseSparkPoolExample =
  new azurerm.synapseSparkPool.SynapseSparkPool(this, "example_4", {
    auto_pause: [
      {
        delay_in_minutes: 15,
      },
    ],
    auto_scale: [
      {
        max_node_count: 50,
        min_node_count: 3,
      },
    ],
    cache_size: 100,
    library_requirement: [
      {
        content: "appnope==0.1.0\nbeautifulsoup4==4.6.3\n",
        filename: "requirements.txt",
      },
    ],
    name: "example",
    node_size: "Small",
    node_size_family: "MemoryOptimized",
    spark_config: [
      {
        content: "spark.shuffle.spill                true\n",
        filename: "config.txt",
      },
    ],
    synapse_workspace_id: azurermSynapseWorkspaceExample.id,
    tags: {
      ENV: "Production",
    },
  });
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermSynapseSparkPoolExample.overrideLogicalId("example");

Arguments Reference

The following arguments are supported:

  • name - (Required) The name which should be used for this Synapse Spark Pool. Changing this forces a new Synapse Spark Pool to be created.

  • synapseWorkspaceId - (Required) The ID of the Synapse Workspace where the Synapse Spark Pool should exist. Changing this forces a new Synapse Spark Pool to be created.

  • nodeSizeFamily - (Required) The kind of nodes that the Spark Pool provides. Possible values are MemoryOptimized and None.

  • nodeSize - (Required) The level of node in the Spark Pool. Possible values are Small, Medium, Large, None, XLarge, XXLarge and XXXLarge.

  • nodeCount - (Optional) The number of nodes in the Spark Pool. Exactly one of nodeCount or autoScale must be specified.

  • autoScale - (Optional) An autoScale block as defined below. Exactly one of nodeCount or autoScale must be specified.

  • autoPause - (Optional) An autoPause block as defined below.

  • cacheSize - (Optional) The cache size in the Spark Pool.

  • computeIsolationEnabled - (Optional) Indicates whether compute isolation is enabled or not. Defaults to false.

~> NOTE: The computeIsolationEnabled is only available with the XXXLarge (80 vCPU / 504 GB) node size and only available in the following regions: East US, West US 2, South Central US, US Gov Arizona, US Gov Virginia. See Isolated Compute for more information.

  • dynamicExecutorAllocationEnabled - (Optional) Indicates whether Dynamic Executor Allocation is enabled or not. Defaults to false.

  • minExecutors - (Optional) The minimum number of executors allocated. Only used when dynamicExecutorAllocationEnabled is set to true.

  • maxExecutors - (Optional) The maximum number of executors allocated. Only used when dynamicExecutorAllocationEnabled is set to true.

  • libraryRequirement - (Optional) A libraryRequirement block as defined below.

  • sessionLevelPackagesEnabled - (Optional) Indicates whether session level packages are enabled or not. Defaults to false.

  • sparkConfig - (Optional) A sparkConfig block as defined below.

  • sparkLogFolder - (Optional) The default folder where Spark logs will be written. Defaults to /logs.

  • sparkEventsFolder - (Optional) The Spark events folder. Defaults to /events.

  • sparkVersion - (Optional) The Apache Spark version. Possible values are 2.4, 3.1, 3.2 and 3.3. Defaults to 2.4.

  • tags - (Optional) A mapping of tags which should be assigned to the Synapse Spark Pool.
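
Since exactly one of nodeCount or autoScale must be set, a fixed-size pool simply replaces the autoScale block with nodeCount. The sketch below follows the same converted, snake_case style as the example above; the resource name and values are illustrative only:

```typescript
// Hypothetical fixed-size pool: node_count is set, so auto_scale must be omitted.
new azurerm.synapseSparkPool.SynapseSparkPool(this, "example_fixed", {
  name: "examplefixed",
  synapse_workspace_id: azurermSynapseWorkspaceExample.id,
  node_size_family: "MemoryOptimized",
  node_size: "Small",
  node_count: 5, // fixed number of nodes; mutually exclusive with auto_scale
  dynamic_executor_allocation_enabled: true,
  min_executors: 1, // only used while dynamic executor allocation is enabled
  max_executors: 4,
});
```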


An autoPause block supports the following:

  • delayInMinutes - (Required) Number of minutes of idle time before the Spark Pool is automatically paused. Must be between 5 and 10080.

An autoScale block supports the following:

  • maxNodeCount - (Required) The maximum number of nodes the Spark Pool can support. Must be between 3 and 200.

  • minNodeCount - (Required) The minimum number of nodes the Spark Pool can support. Must be between 3 and 200.
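
The documented ranges above can be expressed as small client-side checks. These helpers are a sketch only, not part of the provider, which performs its own validation server-side:

```typescript
// Mirrors the documented autoPause constraint: delayInMinutes must be
// between 5 and 10080 (7 days), inclusive.
function isValidAutoPauseDelay(delayInMinutes: number): boolean {
  return (
    Number.isInteger(delayInMinutes) &&
    delayInMinutes >= 5 &&
    delayInMinutes <= 10080
  );
}

// Mirrors the documented autoScale constraints: both node counts must be
// between 3 and 200, and the minimum cannot exceed the maximum.
function isValidAutoScale(minNodeCount: number, maxNodeCount: number): boolean {
  const inRange = (n: number) => Number.isInteger(n) && n >= 3 && n <= 200;
  return (
    inRange(minNodeCount) &&
    inRange(maxNodeCount) &&
    minNodeCount <= maxNodeCount
  );
}
```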


A libraryRequirement block supports the following:

  • content - (Required) The content of library requirements.

  • filename - (Required) The name of the library requirements file.


A sparkConfig block supports the following:

  • content - (Required) The contents of a spark configuration.

  • filename - (Required) The name of the file where the spark configuration content will be stored.
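
Rather than inlining the content strings as in the example above, both libraryRequirement and sparkConfig content can be read from files at synth time with Node's fs module. This is a sketch that assumes requirements.txt and config.txt exist next to the stack source; the resource name is illustrative:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical: load library requirements and Spark configuration from files
// checked in alongside the stack, instead of embedding the strings inline.
const requirements = fs.readFileSync(
  path.join(__dirname, "requirements.txt"),
  "utf8"
);
const sparkDefaults = fs.readFileSync(
  path.join(__dirname, "config.txt"),
  "utf8"
);

new azurerm.synapseSparkPool.SynapseSparkPool(this, "example_from_files", {
  name: "examplefiles",
  synapse_workspace_id: azurermSynapseWorkspaceExample.id,
  node_size_family: "MemoryOptimized",
  node_size: "Small",
  node_count: 3,
  library_requirement: [
    { content: requirements, filename: "requirements.txt" },
  ],
  spark_config: [{ content: sparkDefaults, filename: "config.txt" }],
});
```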

Attributes Reference

In addition to the Arguments listed above - the following Attributes are exported:

  • id - The ID of the Synapse Spark Pool.

Timeouts

The timeouts block allows you to specify timeouts for certain actions:

  • create - (Defaults to 30 minutes) Used when creating the Synapse Spark Pool.
  • read - (Defaults to 5 minutes) Used when retrieving the Synapse Spark Pool.
  • update - (Defaults to 30 minutes) Used when updating the Synapse Spark Pool.
  • delete - (Defaults to 30 minutes) Used when deleting the Synapse Spark Pool.
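
In the converted style used above, these timeouts are set through the resource's timeouts property. A sketch with illustrative values, overriding only the defaults that need changing:

```typescript
// Hypothetical: allow extra time for creating and deleting a large pool.
// Omitted timeouts (read, update) keep their documented defaults.
new azurerm.synapseSparkPool.SynapseSparkPool(this, "example_timeouts", {
  name: "exampletimeouts",
  synapse_workspace_id: azurermSynapseWorkspaceExample.id,
  node_size_family: "MemoryOptimized",
  node_size: "Small",
  node_count: 3,
  timeouts: {
    create: "45m",
    delete: "45m",
  },
});
```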

Import

Synapse Spark Pool can be imported using the resource id, e.g.

terraform import azurerm_synapse_spark_pool.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Synapse/workspaces/workspace1/bigDataPools/sparkPool1