Resource: awsAutoscalingPolicy
Provides an AutoScaling Scaling Policy resource.
~> NOTE: You may want to omit the desiredCapacity
attribute from the attached awsAutoscalingGroup
when using autoscaling policies. It's good practice to pick either manual or dynamic (policy-based) scaling.
Hands-on: Try the Manage AWS Auto Scaling Groups tutorial on HashiCorp Learn.
Example Usage
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
const awsAutoscalingGroupBar = new aws.autoscalingGroup.AutoscalingGroup(
  this,
  "bar",
  {
    availabilityZones: ["us-east-1a"],
    forceDelete: true,
    healthCheckGracePeriod: 300,
    healthCheckType: "ELB",
    launchConfiguration: "${aws_launch_configuration.foo.name}",
    maxSize: 5,
    minSize: 2,
    name: "foobar3-terraform-test",
  }
);
new aws.autoscalingPolicy.AutoscalingPolicy(this, "bat", {
  adjustmentType: "ChangeInCapacity",
  autoscalingGroupName: awsAutoscalingGroupBar.name,
  cooldown: 300,
  name: "foobar3-terraform-test",
  scalingAdjustment: 4,
});
Create target tracking scaling policy using metric math
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "example", {
  autoscalingGroupName: "my-test-asg",
  name: "foo",
  policyType: "TargetTrackingScaling",
  targetTrackingConfiguration: {
    customizedMetricSpecification: {
      metrics: [
        {
          id: "m1",
          label:
            "Get the queue size (the number of messages waiting to be processed)",
          metricStat: {
            metric: {
              dimensions: [
                {
                  name: "QueueName",
                  value: "my-queue",
                },
              ],
              metricName: "ApproximateNumberOfMessagesVisible",
              namespace: "AWS/SQS",
            },
            stat: "Sum",
          },
          returnData: false,
        },
        {
          id: "m2",
          label: "Get the group size (the number of InService instances)",
          metricStat: {
            metric: {
              dimensions: [
                {
                  name: "AutoScalingGroupName",
                  value: "my-asg",
                },
              ],
              metricName: "GroupInServiceInstances",
              namespace: "AWS/AutoScaling",
            },
            stat: "Average",
          },
          returnData: false,
        },
        {
          expression: "m1 / m2",
          id: "e1",
          label: "Calculate the backlog per instance",
          returnData: true,
        },
      ],
    },
    targetValue: 100,
  },
});
Create predictive scaling policy using customized metrics
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "example", {
  autoscalingGroupName: "my-test-asg",
  name: "foo",
  policyType: "PredictiveScaling",
  predictiveScalingConfiguration: {
    metricSpecification: {
      customizedCapacityMetricSpecification: {
        metricDataQueries: [
          {
            expression:
              "SUM(SEARCH('{AWS/AutoScaling,AutoScalingGroupName} MetricName=\"GroupInServiceInstances\" my-test-asg', 'Average', 300))",
            id: "capacity_sum",
          },
        ],
      },
      customizedLoadMetricSpecification: {
        metricDataQueries: [
          {
            expression:
              "SUM(SEARCH('{AWS/EC2,AutoScalingGroupName} MetricName=\"CPUUtilization\" my-test-asg', 'Sum', 3600))",
            id: "load_sum",
          },
        ],
      },
      customizedScalingMetricSpecification: {
        metricDataQueries: [
          {
            expression:
              "SUM(SEARCH('{AWS/AutoScaling,AutoScalingGroupName} MetricName=\"GroupInServiceInstances\" my-test-asg', 'Average', 300))",
            id: "capacity_sum",
            returnData: false,
          },
          {
            expression:
              "SUM(SEARCH('{AWS/EC2,AutoScalingGroupName} MetricName=\"CPUUtilization\" my-test-asg', 'Sum', 300))",
            id: "load_sum",
            returnData: false,
          },
          {
            expression: "load_sum / (capacity_sum * PERIOD(capacity_sum) / 60)",
            id: "weighted_average",
          },
        ],
      },
      targetValue: 10,
    },
  },
});
Create predictive scaling policy using customized scaling and predefined load metric
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "example", {
  autoscalingGroupName: "my-test-asg",
  name: "foo",
  policyType: "PredictiveScaling",
  predictiveScalingConfiguration: {
    metricSpecification: {
      customizedScalingMetricSpecification: {
        metricDataQueries: [
          {
            id: "scaling",
            metricStat: {
              metric: {
                dimensions: [
                  {
                    name: "AutoScalingGroupName",
                    value: "my-test-asg",
                  },
                ],
                metricName: "CPUUtilization",
                namespace: "AWS/EC2",
              },
              stat: "Average",
            },
          },
        ],
      },
      predefinedLoadMetricSpecification: {
        predefinedMetricType: "ASGTotalCPUUtilization",
        resourceLabel: "testLabel",
      },
      targetValue: 10,
    },
  },
});
Argument Reference
name - (Required) Name of the policy.
autoscalingGroupName - (Required) Name of the autoscaling group.
adjustmentType - (Optional) Whether the adjustment is an absolute number or a percentage of the current capacity. Valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.
policyType - (Optional) Policy type, either "SimpleScaling", "StepScaling", "TargetTrackingScaling", or "PredictiveScaling". If this value isn't provided, AWS will default to "SimpleScaling."
predictiveScalingConfiguration - (Optional) Predictive scaling policy configuration to use with Amazon EC2 Auto Scaling.
estimatedInstanceWarmup - (Optional) Estimated time, in seconds, until a newly launched instance will contribute CloudWatch metrics. Without a value, AWS will default to the group's specified cooldown period (see the sketch below).
enabled - (Optional) Whether the scaling policy is enabled or disabled. Default: true.
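The optional arguments above can be combined on a single policy. The following is a minimal sketch in the same style as the examples above, assuming the existing group my-test-asg; the warmup of 120 seconds, the target value, and the disabled state are illustrative choices, not defaults.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative policy that is created but left disabled until needed.
new aws.autoscalingPolicy.AutoscalingPolicy(this, "warmup-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-cpu-tracking",
  policyType: "TargetTrackingScaling",
  enabled: false, // create the policy without letting it act yet
  estimatedInstanceWarmup: 120, // seconds before new instances count toward metrics
  targetTrackingConfiguration: {
    predefinedMetricSpecification: {
      predefinedMetricType: "ASGAverageCPUUtilization",
    },
    targetValue: 50,
  },
});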
The following argument is only available to "SimpleScaling" and "StepScaling" type policies:
minAdjustmentMagnitude - (Optional) Minimum value to scale by when adjustmentType is set to PercentChangeInCapacity, as shown in the sketch below.
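As an illustration of how these two arguments interact, here is a minimal sketch assuming the my-test-asg group from the examples above; the percentage and magnitude are illustrative values.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative values: scale by 30% of current capacity, but by at least 2 instances.
new aws.autoscalingPolicy.AutoscalingPolicy(this, "percent-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-percent-change",
  policyType: "SimpleScaling",
  adjustmentType: "PercentChangeInCapacity",
  scalingAdjustment: 30, // interpreted as +30% because of the adjustment type
  minAdjustmentMagnitude: 2, // never add fewer than 2 instances per scaling activity
  cooldown: 300,
});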
The following arguments are only available to "SimpleScaling" type policies:
cooldown - (Optional) Amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start.
scalingAdjustment - (Optional) Number of instances by which to scale. adjustmentType determines the interpretation of this number (e.g., as an absolute number or as a percentage of the existing Auto Scaling group size). A positive increment adds to the current capacity and a negative value removes from the current capacity.
The following arguments are only available to "StepScaling" type policies:
metricAggregationType - (Optional) Aggregation type for the policy's metrics. Valid values are "Minimum", "Maximum", and "Average". Without a value, AWS will treat the aggregation type as "Average".
stepAdjustment - (Optional) Set of adjustments that manage group scaling. These have the following structure:
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "example", {
  stepAdjustment: [
    {
      metricIntervalLowerBound: 1,
      metricIntervalUpperBound: 2,
      scalingAdjustment: -1,
    },
    {
      metricIntervalLowerBound: 2,
      metricIntervalUpperBound: 3,
      scalingAdjustment: 1,
    },
  ],
});
The following fields are available in step adjustments:
scalingAdjustment - (Required) Number of members by which to scale, when the adjustment bounds are breached. A positive value scales up. A negative value scales down.
metricIntervalLowerBound - (Optional) Lower bound for the difference between the alarm threshold and the CloudWatch metric. Without a value, AWS will treat this bound as negative infinity.
metricIntervalUpperBound - (Optional) Upper bound for the difference between the alarm threshold and the CloudWatch metric. Without a value, AWS will treat this bound as positive infinity. The upper bound must be greater than the lower bound.
Notice the bounds are relative to the alarm threshold, meaning that the starting point is not 0%, but the alarm threshold. Check the official docs for a detailed example; a sketch is also shown below.
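To make the bounds concrete, here is a hedged sketch that assumes a CloudWatch alarm with a threshold of, say, 70% CPU (the alarm resource itself is not shown): the first step applies while the metric sits between the threshold and threshold + 10, the second once it exceeds threshold + 10. The group name and all numbers are illustrative.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative StepScaling policy; intervals are offsets from the triggering alarm's threshold.
new aws.autoscalingPolicy.AutoscalingPolicy(this, "step-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-step-scaling",
  policyType: "StepScaling",
  metricAggregationType: "Average",
  estimatedInstanceWarmup: 120,
  stepAdjustment: [
    {
      metricIntervalLowerBound: 0, // threshold <= metric < threshold + 10
      metricIntervalUpperBound: 10,
      scalingAdjustment: 1, // add one instance
    },
    {
      metricIntervalLowerBound: 10, // metric >= threshold + 10, no upper bound
      scalingAdjustment: 3, // add three instances
    },
  ],
});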
The following arguments are only available to "TargetTrackingScaling" type policies:
targetTrackingConfiguration - (Optional) Target tracking policy. These have the following structure:
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "example", {
  targetTrackingConfiguration: {
    predefinedMetricSpecification: {
      predefinedMetricType: "ASGAverageCPUUtilization",
    },
    targetValue: 40,
  },
});
The following fields are available in target tracking configuration:
predefinedMetricSpecification - (Optional) Predefined metric. Conflicts with customizedMetricSpecification.
customizedMetricSpecification - (Optional) Customized metric. Conflicts with predefinedMetricSpecification.
targetValue - (Required) Target value for the metric.
disableScaleIn - (Optional, Default: false) Whether scale in by the target tracking policy is disabled.
predefinedMetricSpecification
The following arguments are supported:
predefinedMetricType - (Required) Metric type.
resourceLabel - (Optional) Identifies the resource associated with the metric type.
customizedMetricSpecification
The following arguments are supported:
metricDimension - (Optional) Dimensions of the metric.
metricName - (Optional) Name of the metric.
namespace - (Optional) Namespace of the metric.
statistic - (Optional) Statistic of the metric.
unit - (Optional) Unit of the metric.
metrics - (Optional) Metrics to include, as a metric data query. A sketch of the simpler single-metric form follows this list.
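For comparison with the metric math example earlier, here is a minimal sketch of the single-metric form. It assumes an application publishes a custom ActiveConnections metric under a MyApplication namespace; the namespace, metric name, dimension, and target value are all assumptions made for illustration.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative custom metric; the namespace, metric, and dimension values are assumptions.
new aws.autoscalingPolicy.AutoscalingPolicy(this, "custom-metric-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-custom-metric",
  policyType: "TargetTrackingScaling",
  targetTrackingConfiguration: {
    customizedMetricSpecification: {
      metricDimension: [
        {
          name: "AutoScalingGroupName",
          value: "my-test-asg",
        },
      ],
      metricName: "ActiveConnections",
      namespace: "MyApplication",
      statistic: "Average",
    },
    targetValue: 1000,
    disableScaleIn: true, // only scale out on this policy; scale in is handled elsewhere
  },
});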
metricDimension
The following arguments are supported:
name - (Required) Name of the dimension.
value - (Required) Value of the dimension.
metrics
The following arguments are supported:
expression - (Optional) Math expression used on the returned metric. You must specify either expression or metricStat, but not both.
id - (Required) Short name for the metric used in the target tracking scaling policy.
label - (Optional) Human-readable label for this metric or expression.
metricStat - (Optional) Structure that defines the CloudWatch metric to be used in the target tracking scaling policy. You must specify either expression or metricStat, but not both.
returnData - (Optional) Boolean that indicates whether to return the timestamps and raw data values of this metric. Defaults to true.
metricStat
The following arguments are supported:
metric - (Required) Structure that defines the CloudWatch metric to return, including the metric name, namespace, and dimensions.
stat - (Required) Statistic of the metrics to return.
unit - (Optional) Unit of the metrics to return.
metric
The following arguments are supported:
dimensions - (Optional) Dimensions of the metric.
metricName - (Required) Name of the metric.
namespace - (Required) Namespace of the metric.
dimensions
The following arguments are supported:
name - (Required) Name of the dimension.
value - (Required) Value of the dimension.
predictiveScalingConfiguration
The following arguments are supported:
maxCapacityBreachBehavior - (Optional) Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group. Valid values are HonorMaxCapacity or IncreaseMaxCapacity. Default is HonorMaxCapacity.
maxCapacityBuffer - (Optional) Size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. Valid range is 0 to 100. If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity.
metricSpecification - (Required) This structure includes the metrics and target utilization to use for predictive scaling.
mode - (Optional) Predictive scaling mode. Valid values are ForecastAndScale and ForecastOnly. Default is ForecastOnly (see the sketch after this list).
schedulingBufferTime - (Optional) Amount of time, in seconds, by which the instance launch time can be advanced. Minimum is 0.
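The sketch below is a hedged illustration that combines these arguments with a predefinedMetricPairSpecification (described further below); the metric type, target value, and breach behavior are illustrative choices.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative predictive scaling policy that starts in forecast-only mode.
new aws.autoscalingPolicy.AutoscalingPolicy(this, "predictive-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-predictive",
  policyType: "PredictiveScaling",
  predictiveScalingConfiguration: {
    metricSpecification: {
      predefinedMetricPairSpecification: {
        predefinedMetricType: "ASGCPUUtilization", // pair: total CPU as load, average CPU as scaling metric
      },
      targetValue: 40,
    },
    mode: "ForecastOnly", // generate forecasts without acting on them yet
    maxCapacityBreachBehavior: "HonorMaxCapacity", // never scale beyond the group's max size
  },
});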
metricSpecification
The following arguments are supported:
customizedCapacityMetricSpecification - (Optional) Customized capacity metric specification. The field is only valid when you use customizedLoadMetricSpecification.
customizedLoadMetricSpecification - (Optional) Customized load metric specification.
customizedScalingMetricSpecification - (Optional) Customized scaling metric specification.
predefinedLoadMetricSpecification - (Optional) Predefined load metric specification.
predefinedMetricPairSpecification - (Optional) Metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
predefinedScalingMetricSpecification - (Optional) Predefined scaling metric specification.
predefinedLoadMetricSpecification
The following arguments are supported:
predefinedMetricType - (Required) Metric type. Valid values are ASGTotalCPUUtilization, ASGTotalNetworkIn, ASGTotalNetworkOut, or ALBTargetGroupRequestCount.
resourceLabel - (Required) Label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group.
predefinedMetricPairSpecification
The following arguments are supported:
predefinedMetricType - (Required) Which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group's total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric. Valid values are ASGCPUUtilization, ASGNetworkIn, ASGNetworkOut, or ALBRequestCount.
resourceLabel - (Required) Label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group.
predefinedScalingMetricSpecification
The following arguments are supported:
predefinedMetricType - (Required) Describes a scaling metric for a predictive scaling policy. Valid values are ASGAverageCPUUtilization, ASGAverageNetworkIn, ASGAverageNetworkOut, or ALBRequestCountPerTarget.
resourceLabel - (Required) Label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group (see the sketch below for the label format).
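As a hedged sketch, the example below pairs a predefined load metric with a predefined scaling metric for a group behind an Application Load Balancer. The resourceLabel value illustrates the app/<load-balancer-name>/<id>/targetgroup/<target-group-name>/<id> shape, but the load balancer and target group names and IDs here are made up.
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
// Illustrative ALB-based predictive scaling; the resource label below is fabricated.
const albLabel =
  "app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff";
new aws.autoscalingPolicy.AutoscalingPolicy(this, "alb-predictive-example", {
  autoscalingGroupName: "my-test-asg",
  name: "example-alb-predictive",
  policyType: "PredictiveScaling",
  predictiveScalingConfiguration: {
    metricSpecification: {
      predefinedLoadMetricSpecification: {
        predefinedMetricType: "ALBTargetGroupRequestCount",
        resourceLabel: albLabel,
      },
      predefinedScalingMetricSpecification: {
        predefinedMetricType: "ALBRequestCountPerTarget",
        resourceLabel: albLabel,
      },
      targetValue: 1000, // illustrative target of requests per target
    },
  },
});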
customizedScalingMetricSpecification
The following arguments are supported:
metricDataQueries - (Required) List of up to 10 structures that define the customized scaling metric in a predictive scaling policy.
customizedLoadMetricSpecification
The following arguments are supported:
metricDataQueries - (Required) List of up to 10 structures that define the customized load metric in a predictive scaling policy.
customizedCapacityMetricSpecification
The following arguments are supported:
metricDataQueries - (Required) List of up to 10 structures that define the customized capacity metric in a predictive scaling policy.
metricDataQueries
The following arguments are supported:
expression - (Optional) Math expression used on the returned metric. You must specify either expression or metricStat, but not both.
id - (Required) Short name for the metric used in the predictive scaling policy.
label - (Optional) Human-readable label for this metric or expression.
metricStat - (Optional) Structure that defines the CloudWatch metric to be used in the predictive scaling policy. You must specify either expression or metricStat, but not both.
returnData - (Optional) Boolean that indicates whether to return the timestamps and raw data values of this metric. Defaults to true.
metricStat
The following arguments are supported:
metric - (Required) Structure that defines the CloudWatch metric to return, including the metric name, namespace, and dimensions.
stat - (Required) Statistic of the metrics to return.
unit - (Optional) Unit of the metrics to return.
metric
The following arguments are supported:
dimensions - (Optional) Dimensions of the metric.
metricName - (Required) Name of the metric.
namespace - (Required) Namespace of the metric.
dimensions
The following arguments are supported:
name - (Required) Name of the dimension.
value - (Required) Value of the dimension.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - ARN assigned by AWS to the scaling policy.
name - Scaling policy's name.
autoscalingGroupName - The scaling policy's assigned autoscaling group.
adjustmentType - Scaling policy's adjustment type.
policyType - Scaling policy's type.
Import
AutoScaling scaling policies can be imported using the autoscalingGroupName and name separated by /, e.g., my-test-asg/foo.