Resource: awsLambdaEventSourceMapping

Provides a Lambda event source mapping. This allows Lambda functions to get events from Kinesis, DynamoDB, SQS, Amazon MQ and Managed Streaming for Apache Kafka (MSK).

For information about Lambda and how to use it, see What is AWS Lambda?. For information about event source mappings, see CreateEventSourceMapping in the API docs.

Example Usage

DynamoDB

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_dynamodb_table.example.stream_arn}",
  functionName: "${aws_lambda_function.example.arn}",
  startingPosition: "LATEST",
});

Kinesis

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_kinesis_stream.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  startingPosition: "LATEST",
});

Managed Streaming for Apache Kafka (MSK)

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_msk_cluster.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  startingPosition: "TRIM_HORIZON",
  topics: ["Example"],
});

Self Managed Apache Kafka

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  functionName: "${aws_lambda_function.example.arn}",
  selfManagedEventSource: {
    endpoints: {
      kafkaBootstrapServers: "kafka1.example.com:9092,kafka2.example.com:9092",
    },
  },
  sourceAccessConfiguration: [
    {
      type: "VPC_SUBNET",
      uri: "subnet:subnet-example1",
    },
    {
      type: "VPC_SUBNET",
      uri: "subnet:subnet-example2",
    },
    {
      type: "VPC_SECURITY_GROUP",
      uri: "security_group:sg-example",
    },
  ],
  startingPosition: "TRIM_HORIZON",
  topics: ["Example"],
});

SQS

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_sqs_queue.sqs_queue_test.arn}",
  functionName: "${aws_lambda_function.example.arn}",
});

SQS with event filter

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_sqs_queue.sqs_queue_test.arn}",
  filterCriteria: {
    filter: [
      {
        pattern:
          '${jsonencode({\n        body = {\n          Temperature : [{ numeric : [">", 0, "<=", 100] }]\n          Location : ["New York"]\n        }\n      })}',
      },
    ],
  },
  functionName: "${aws_lambda_function.example.arn}",
});

Amazon MQ (ActiveMQ)

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  batchSize: 10,
  enabled: true,
  eventSourceArn: "${aws_mq_broker.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  queues: ["example"],
  sourceAccessConfiguration: [
    {
      type: "BASIC_AUTH",
      uri: "${aws_secretsmanager_secret_version.example.arn}",
    },
  ],
});

Amazon MQ (RabbitMQ)

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  batchSize: 1,
  enabled: true,
  eventSourceArn: "${aws_mq_broker.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  queues: ["example"],
  sourceAccessConfiguration: [
    {
      type: "VIRTUAL_HOST",
      uri: "/example",
    },
    {
      type: "BASIC_AUTH",
      uri: "${aws_secretsmanager_secret_version.example.arn}",
    },
  ],
});

Argument Reference

  • amazonManagedKafkaEventSourceConfig - (Optional) Additional configuration block for Amazon Managed Kafka sources. Incompatible with selfManagedEventSource and selfManagedKafkaEventSourceConfig. Detailed below.
  • batchSize - (Optional) The largest number of records that Lambda will retrieve from your event source at the time of invocation. Defaults to 100 for DynamoDB, Kinesis, MQ and MSK, 10 for SQS.
  • bisectBatchOnFunctionError - (Optional) If the function returns an error, split the batch in two and retry. Only available for stream sources (DynamoDB and Kinesis). Defaults to false.
  • destinationConfig - (Optional) An Amazon SQS queue or Amazon SNS topic destination for failed records. Only available for stream sources (DynamoDB and Kinesis). Detailed below; a combined sketch using the stream error-handling arguments appears after this list.
  • enabled - (Optional) Determines if the mapping will be enabled on creation. Defaults to true.
  • eventSourceArn - (Optional) The event source ARN - this is required for Kinesis stream, DynamoDB stream, SQS queue, MQ broker or MSK cluster. It is incompatible with a Self Managed Kafka source.
  • filterCriteria - (Optional) The criteria to use for event filtering on Kinesis stream, DynamoDB stream, and SQS queue event sources. Detailed below.
  • functionName - (Required) The name or the ARN of the Lambda function that will be subscribing to events.
  • functionResponseTypes - (Optional) A list of current response type enums applied to the event source mapping for AWS Lambda checkpointing. Only available for SQS and stream sources (DynamoDB and Kinesis). Valid values: ReportBatchItemFailures.
  • maximumBatchingWindowInSeconds - (Optional) The maximum amount of time to gather records before invoking the function, in seconds (between 0 and 300). Records will continue to buffer (or accumulate in the case of an SQS queue event source) until either maximumBatchingWindowInSeconds expires or batchSize has been met. For streaming event sources, defaults to as soon as records are available in the stream. If the batch it reads from the stream/queue only has one record in it, Lambda only sends one record to the function. Only available for stream sources (DynamoDB and Kinesis) and SQS standard queues.
  • maximumRecordAgeInSeconds - (Optional) The maximum age of a record that Lambda sends to a function for processing. Only available for stream sources (DynamoDB and Kinesis). Must be either -1 (forever, and the default value) or between 60 and 604800 (inclusive).
  • maximumRetryAttempts - (Optional) The maximum number of times to retry when the function returns an error. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of -1 (forever), maximum of 10000.
  • parallelizationFactor - (Optional) The number of batches to process from each shard concurrently. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of 1, maximum of 10.
  • queues - (Optional) The name of the Amazon MQ broker destination queue to consume. Only available for MQ sources. A single queue name must be specified.
  • scalingConfig - (Optional) Scaling configuration of the event source. Only available for SQS queues. Detailed below.
  • selfManagedEventSource - (Optional) For Self Managed Kafka sources, the location of the self managed cluster. If set, configuration must also include sourceAccessConfiguration. Detailed below.
  • selfManagedKafkaEventSourceConfig - (Optional) Additional configuration block for Self Managed Kafka sources. Incompatible with eventSourceArn and amazonManagedKafkaEventSourceConfig. Detailed below.
  • sourceAccessConfiguration - (Optional) For Self Managed Kafka sources, the access configuration for the source. If set, configuration must also include selfManagedEventSource. Detailed below.
  • startingPosition - (Optional) The position in the stream where AWS Lambda should start reading. Must be one of AT_TIMESTAMP (Kinesis only), LATEST or TRIM_HORIZON if getting events from Kinesis, DynamoDB, MSK or Self Managed Apache Kafka. Must not be provided if getting events from SQS. More information about these positions can be found in the AWS DynamoDB Streams API Reference and AWS Kinesis API Reference.
  • startingPositionTimestamp - (Optional) A timestamp in RFC3339 format of the data record at which to start reading, used when startingPosition is set to AT_TIMESTAMP. If a record with this exact timestamp does not exist, the next later record is chosen. If the timestamp is older than the current trim horizon, the oldest available record is chosen.
  • topics - (Optional) The name of the Kafka topic to consume. Only available for MSK sources. A single topic name must be specified.
  • tumblingWindowInSeconds - (Optional) The duration in seconds of a processing window for AWS Lambda streaming analytics. The range is between 1 second up to 900 seconds. Only available for stream sources (DynamoDB and Kinesis).
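
The stream error-handling arguments above can be combined on a single mapping. The following is a minimal sketch, reusing the Kinesis stream and Lambda function resources from the earlier examples; the aws_sqs_queue.failure_queue resource used as the on-failure destination is hypothetical.

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_kinesis_stream.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  startingPosition: "LATEST",
  // Wait up to 10 seconds, or until batchSize records have accumulated,
  // before invoking the function.
  batchSize: 100,
  maximumBatchingWindowInSeconds: 10,
  // On error, split the batch in two and retry up to 5 times,
  // discarding records older than one hour.
  bisectBatchOnFunctionError: true,
  maximumRetryAttempts: 5,
  maximumRecordAgeInSeconds: 3600,
  // Process up to 4 batches from each shard concurrently.
  parallelizationFactor: 4,
  // Send details of discarded batches to a hypothetical SQS queue.
  destinationConfig: {
    onFailure: {
      destinationArn: "${aws_sqs_queue.failure_queue.arn}",
    },
  },
});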

amazon_managed_kafka_event_source_config Configuration Block

  • consumerGroupId - (Optional) A Kafka consumer group ID between 1 and 200 characters for use when creating this event source mapping. If one is not specified, this value will be automatically generated. See AmazonManagedKafkaEventSourceConfig Syntax.
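
A minimal sketch extending the MSK example above with an explicit consumer group ID (the group ID value is hypothetical):

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_msk_cluster.example.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  startingPosition: "TRIM_HORIZON",
  topics: ["Example"],
  // Hypothetical consumer group ID; Lambda generates one if omitted.
  amazonManagedKafkaEventSourceConfig: {
    consumerGroupId: "example-group",
  },
});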

destination_config Configuration Block

  • onFailure - (Optional) The destination configuration for failed invocations. Detailed below.

destination_config on_failure Configuration Block

  • destinationArn - (Required) The Amazon Resource Name (ARN) of the destination resource.

filter_criteria Configuration Block

  • filter - (Optional) A set of up to 5 filter blocks. If an event satisfies at least one of them, Lambda sends the event to the function or adds it to the next batch. Detailed below.

filter_criteria filter Configuration Block

  • pattern - (Optional) A filter pattern; events that match the pattern are sent to the function (see the SQS with event filter example above).

scaling_config Configuration Block

  • maximumConcurrency - (Optional) Limits the number of concurrent instances that the Amazon SQS event source can invoke. Must be at least 2. Only available for SQS queue event sources. See the sketch below.
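
A minimal sketch extending the SQS example above with a concurrency cap (the value 10 is illustrative):

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  eventSourceArn: "${aws_sqs_queue.sqs_queue_test.arn}",
  functionName: "${aws_lambda_function.example.arn}",
  // Cap the number of concurrent function instances this queue can invoke.
  scalingConfig: {
    maximumConcurrency: 10,
  },
});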

self_managed_event_source Configuration Block

  • endpoints - (Required) A map of endpoints for the self managed source. For Kafka self-managed sources, the key should be KAFKA_BOOTSTRAP_SERVERS and the value should be a string with a comma separated list of broker endpoints.

self_managed_kafka_event_source_config Configuration Block

  • consumerGroupId - (Optional) A Kafka consumer group ID between 1 and 200 characters for use when creating this event source mapping. If one is not specified, this value will be automatically generated. See SelfManagedKafkaEventSourceConfig Syntax.
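
A minimal sketch of pinning the consumer group ID for a self managed cluster, reusing the Self Managed Apache Kafka example above (the consumer group ID value is hypothetical, and only one subnet is shown for brevity):

/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as aws from "./.gen/providers/aws";
new aws.lambdaEventSourceMapping.LambdaEventSourceMapping(this, "example", {
  functionName: "${aws_lambda_function.example.arn}",
  selfManagedEventSource: {
    endpoints: {
      kafkaBootstrapServers: "kafka1.example.com:9092,kafka2.example.com:9092",
    },
  },
  // Hypothetical consumer group ID; Lambda generates one if omitted.
  selfManagedKafkaEventSourceConfig: {
    consumerGroupId: "example-group",
  },
  sourceAccessConfiguration: [
    {
      type: "VPC_SUBNET",
      uri: "subnet:subnet-example1",
    },
    {
      type: "VPC_SECURITY_GROUP",
      uri: "security_group:sg-example",
    },
  ],
  startingPosition: "TRIM_HORIZON",
  topics: ["Example"],
});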

source_access_configuration Configuration Block

  • type - (Required) The type of this configuration. For Self Managed Kafka you will need to supply blocks for type VPC_SUBNET and VPC_SECURITY_GROUP.
  • uri - (Required) The URI for this configuration. For type VPC_SUBNET the value should be subnet:subnet-id, where subnet-id is the value you would find in an aws_subnet resource's id attribute. For type VPC_SECURITY_GROUP the value should be security_group:security-group-id, where security-group-id is the value you would find in an aws_security_group resource's id attribute.

Attributes Reference

In addition to all arguments above, the following attributes are exported:

  • functionArn - The ARN of the Lambda function the event source mapping is sending events to. (Note: this is a computed value that differs from functionName above.)
  • lastModified - The date this resource was last modified.
  • lastProcessingResult - The result of the last AWS Lambda invocation of your Lambda function.
  • state - The state of the event source mapping.
  • stateTransitionReason - The reason the event source mapping is in its current state.
  • uuid - The UUID of the created event source mapping.

Import

Lambda event source mappings can be imported using the uuid (event source mapping identifier), e.g.,

$ terraform import aws_lambda_event_source_mapping.event_source_mapping 12345kxodurf3443