# azurermMediaTransform

Manages a Transform.

## Example Usage
```typescript
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as azurerm from "./.gen/providers/azurerm";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: azurerm.
For a more precise conversion please use the --provider flag in convert.*/
const azurermResourceGroupExample = new azurerm.resourceGroup.ResourceGroup(
  this,
  "example",
  {
    location: "West Europe",
    name: "media-resources",
  }
);
const azurermStorageAccountExample = new azurerm.storageAccount.StorageAccount(
  this,
  "example_1",
  {
    account_replication_type: "GRS",
    account_tier: "Standard",
    location: azurermResourceGroupExample.location,
    name: "examplestoracc",
    resource_group_name: azurermResourceGroupExample.name,
  }
);
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermStorageAccountExample.overrideLogicalId("example");
const azurermMediaServicesAccountExample =
  new azurerm.mediaServicesAccount.MediaServicesAccount(this, "example_2", {
    location: azurermResourceGroupExample.location,
    name: "examplemediaacc",
    resource_group_name: azurermResourceGroupExample.name,
    storage_account: [
      {
        id: azurermStorageAccountExample.id,
        is_primary: true,
      },
    ],
  });
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermMediaServicesAccountExample.overrideLogicalId("example");
const azurermMediaTransformExample = new azurerm.mediaTransform.MediaTransform(
  this,
  "example_3",
  {
    description: "My transform description",
    media_services_account_name: azurermMediaServicesAccountExample.name,
    name: "transform1",
    output: [
      {
        builtin_preset: [
          {
            preset_name: "AACGoodQualityAudio",
          },
        ],
        on_error_action: "ContinueJob",
        relative_priority: "Normal",
      },
    ],
    resource_group_name: azurermResourceGroupExample.name,
  }
);
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermMediaTransformExample.overrideLogicalId("example");
```
## Example Usage with Multiple Outputs
```typescript
/*Provider bindings are generated by running cdktf get.
See https://cdk.tf/provider-generation for more details.*/
import * as azurerm from "./.gen/providers/azurerm";
/*The following providers are missing schema information and might need manual adjustments to synthesize correctly: azurerm.
For a more precise conversion please use the --provider flag in convert.*/
const azurermResourceGroupExample = new azurerm.resourceGroup.ResourceGroup(
  this,
  "example",
  {
    location: "West Europe",
    name: "media-resources",
  }
);
const azurermStorageAccountExample = new azurerm.storageAccount.StorageAccount(
  this,
  "example_1",
  {
    account_replication_type: "GRS",
    account_tier: "Standard",
    location: azurermResourceGroupExample.location,
    name: "examplestoracc",
    resource_group_name: azurermResourceGroupExample.name,
  }
);
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermStorageAccountExample.overrideLogicalId("example");
const azurermMediaServicesAccountExample =
  new azurerm.mediaServicesAccount.MediaServicesAccount(this, "example_2", {
    location: azurermResourceGroupExample.location,
    name: "examplemediaacc",
    resource_group_name: azurermResourceGroupExample.name,
    storage_account: [
      {
        id: azurermStorageAccountExample.id,
        is_primary: true,
      },
    ],
  });
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermMediaServicesAccountExample.overrideLogicalId("example");
const azurermMediaTransformExample = new azurerm.mediaTransform.MediaTransform(
  this,
  "example_3",
  {
    description: "My transform description",
    media_services_account_name: azurermMediaServicesAccountExample.name,
    name: "transform1",
    output: [
      {
        builtin_preset: [
          {
            preset_name: "AACGoodQualityAudio",
          },
        ],
        on_error_action: "ContinueJob",
        relative_priority: "Normal",
      },
      {
        audio_analyzer_preset: [
          {
            audio_analysis_mode: "Basic",
            audio_language: "en-US",
          },
        ],
        on_error_action: "ContinueJob",
        relative_priority: "Low",
      },
      {
        face_detector_preset: [
          {
            analysis_resolution: "StandardDefinition",
          },
        ],
        on_error_action: "StopProcessingJob",
        relative_priority: "Low",
      },
    ],
    resource_group_name: azurermResourceGroupExample.name,
  }
);
/*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
azurermMediaTransformExample.overrideLogicalId("example");
```
## Arguments Reference

The following arguments are supported:

* `mediaServicesAccountName` - (Required) The Media Services account name. Changing this forces a new Transform to be created.

* `name` - (Required) The name which should be used for this Transform. Changing this forces a new Transform to be created.

* `resourceGroupName` - (Required) The name of the Resource Group where the Transform should exist. Changing this forces a new Transform to be created.

* `description` - (Optional) An optional verbose description of the Transform.

* `output` - (Optional) One or more `output` blocks as defined below. At least one `output` must be defined.
An `output` block supports the following:

* `audioAnalyzerPreset` - (Optional) An `audioAnalyzerPreset` block as defined below.

* `builtinPreset` - (Optional) A `builtinPreset` block as defined below.

* `faceDetectorPreset` - (Optional) A `faceDetectorPreset` block as defined below.

* `onErrorAction` - (Optional) A Transform can define more than one output. This property defines what the service should do when one output fails: either continue to produce other outputs, or stop the other outputs. The overall Job state will not reflect failures of outputs that are specified with `ContinueJob`. Possible values are `StopProcessingJob` or `ContinueJob`.

* `relativePriority` - (Optional) Sets the relative priority of the Transform Outputs within a Transform. This sets the priority that the service uses for processing Transform Outputs. Possible values are `High`, `Normal` or `Low`.

* `videoAnalyzerPreset` - (Optional) A `videoAnalyzerPreset` block as defined below.

-> NOTE: Each output can only have one type of preset: `builtinPreset`, `audioAnalyzerPreset`, `faceDetectorPreset` or `videoAnalyzerPreset`. If you need to apply different presets, you must create one output for each one.
A `builtinPreset` block supports the following:

* `presetName` - (Required) The built-in preset to be used for encoding videos. Possible values are `AACGoodQualityAudio`, `AdaptiveStreaming`, `ContentAwareEncoding`, `ContentAwareEncodingExperimental`, `CopyAllBitrateNonInterleaved`, `H265AdaptiveStreaming`, `H265ContentAwareEncoding`, `H265SingleBitrate4K`, `H265SingleBitrate1080p`, `H265SingleBitrate720p`, `H264MultipleBitrate1080p`, `H264MultipleBitrateSD`, `H264MultipleBitrate720p`, `H264SingleBitrate1080p`, `H264SingleBitrateSD` and `H264SingleBitrate720p`.
An `audioAnalyzerPreset` block supports the following:

* `audioLanguage` - (Optional) The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for `AudioAnalysisMode: Basic`, since automatic language detection is not included in basic mode. If the language isn't specified, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463. Possible values are `ar-EG`, `ar-SY`, `de-DE`, `en-AU`, `en-GB`, `en-US`, `es-ES`, `es-MX`, `fr-FR`, `hi-IN`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `ru-RU` and `zh-CN`.

* `audioAnalysisMode` - (Optional) Determines the set of audio analysis operations to be performed. Possible values are `Basic` or `Standard`.
A `videoAnalyzerPreset` block supports the following:

* `audioLanguage` - (Optional) The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for `AudioAnalysisMode: Basic`, since automatic language detection is not included in basic mode. If the language isn't specified, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463. Possible values are `ar-EG`, `ar-SY`, `de-DE`, `en-AU`, `en-GB`, `en-US`, `es-ES`, `es-MX`, `fr-FR`, `hi-IN`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `ru-RU` and `zh-CN`.

* `audioAnalysisMode` - (Optional) Determines the set of audio analysis operations to be performed. Possible values are `Basic` or `Standard`.

* `insightsType` - (Optional) Defines the type of insights that you want the service to generate. The allowed values are `AudioInsightsOnly`, `VideoInsightsOnly`, and `AllInsights`. If you set this to `AllInsights` and the input is audio only, then only audio insights are generated. Similarly, if the input is video only, then only video insights are generated. It is recommended that you not use `AudioInsightsOnly` if you expect some of your inputs to be video only, or `VideoInsightsOnly` if you expect some of your inputs to be audio only; your Jobs in such conditions would error out.
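None of the examples above exercises a `videoAnalyzerPreset`. As an illustration only (the object name is hypothetical, and the keys follow the snake_case shape of the schema-less converted examples above), an `output` entry using it might be sketched as:

```typescript
// Hypothetical output entry using video_analyzer_preset. Keys mirror the
// snake_case shape of the converted examples; enum values are PascalCase
// strings, as in the other examples in this document.
const videoAnalyzerOutput = {
  video_analyzer_preset: [
    {
      audio_language: "en-US",        // BCP-47 tag
      audio_analysis_mode: "Standard",
      insights_type: "AllInsights",   // audio + video insights
    },
  ],
  on_error_action: "ContinueJob",
  relative_priority: "Normal",
};
```

Such an object would be appended to the `output` array of the `MediaTransform` configuration, alongside (or instead of) the preset entries shown earlier.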
A `faceDetectorPreset` block supports the following:

* `analysisResolution` - (Optional) Specifies the maximum resolution at which your video is analyzed. Possible values are `SourceResolution` or `StandardDefinition`. The default behavior is `SourceResolution`, which keeps the input video at its original resolution when analyzed. Using `StandardDefinition` resizes input videos to standard definition while preserving the appropriate aspect ratio; it only resizes if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to `StandardDefinition` reduces the time it takes to process high resolution video and may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
## Attributes Reference

In addition to the Arguments listed above, the following Attributes are exported:

* `id` - The ID of the Transform.
## Timeouts

The `timeouts` block allows you to specify timeouts for certain actions:

* `create` - (Defaults to 30 minutes) Used when creating the Transform.
* `read` - (Defaults to 5 minutes) Used when retrieving the Transform.
* `update` - (Defaults to 30 minutes) Used when updating the Transform.
* `delete` - (Defaults to 30 minutes) Used when deleting the Transform.
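As a sketch only (the exact property shape for a schema-less converted provider may need manual adjustment, as the generated comments above warn), custom timeouts use Terraform's duration-string format, e.g.:

```typescript
// Hypothetical custom timeouts for the Transform, expressed as Terraform
// duration strings ("45m" = 45 minutes, "5m" = 5 minutes). This object
// would be passed alongside the other arguments of the MediaTransform
// resource; the exact nesting is an assumption, not taken from this doc.
const transformTimeouts = {
  create: "45m",
  read: "5m",
  update: "45m",
  delete: "45m",
};
```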
## Import

Transforms can be imported using the `resource id`, e.g.
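A sketch of the import invocation, following the standard `terraform import` syntax for azurerm resources (the subscription, resource group, and account segments below are placeholder values, not taken from this document):

```shell
# Placeholder resource ID; substitute your own subscription ID, resource
# group, Media Services account name, and Transform name.
terraform import azurerm_media_transform.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Media/mediaservices/media1/transforms/transform1
```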