@Stability(value=Stable)
| Interface | Description |
|---|---|
| AlgorithmSpecification | (experimental) Specify the training algorithm and algorithm-specific metadata. |
| AthenaGetQueryExecutionProps | (experimental) Properties for getting a Query Execution. |
| AthenaGetQueryResultsProps | (experimental) Properties for getting Query Results. |
| AthenaStartQueryExecutionProps | (experimental) Properties for starting a Query Execution. |
| AthenaStopQueryExecutionProps | (experimental) Properties for stopping a Query Execution. |
| BatchContainerOverrides | The overrides that should be sent to a container. |
| BatchJobDependency | An object representing an AWS Batch job dependency. |
| BatchSubmitJobProps | Properties for RunBatchJob. |
| Channel | (experimental) Describes the training, validation, or test dataset and the Amazon S3 location where it is stored. |
| CodeBuildStartBuildProps | Properties for CodeBuildStartBuild. |
| CommonEcsRunTaskProps | Basic properties for ECS Tasks. |
| ContainerDefinitionConfig | Configuration options for the ContainerDefinition. |
| ContainerDefinitionOptions | (experimental) Properties to define a ContainerDefinition. |
| ContainerOverride | A list of container overrides that specify the name of a container and the overrides it should receive. |
| ContainerOverrides | The overrides that should be sent to a container. |
| DataSource | (experimental) Location of the channel data. |
| DockerImageConfig | (experimental) Configuration for using a Docker image. |
| DynamoDeleteItemProps | Properties for the DynamoDeleteItem Task. |
| DynamoGetItemProps | Properties for the DynamoGetItem Task. |
| DynamoPutItemProps | Properties for the DynamoPutItem Task. |
| DynamoUpdateItemProps | Properties for the DynamoUpdateItem Task. |
| EcsEc2LaunchTargetOptions | Options to run an ECS task on EC2 in Step Functions and ECS. |
| EcsFargateLaunchTargetOptions | Properties to define an ECS service. |
| EcsLaunchTargetConfig | Configuration options for the ECS launch type. |
| EcsRunTaskBaseProps | Construction properties for the BaseRunTaskProps. |
| EcsRunTaskProps | Properties for ECS Tasks. |
| EmrAddStepProps | (experimental) Properties for EmrAddStep. |
| EmrCancelStepProps | (experimental) Properties for EmrCancelStep. |
| EmrCreateCluster.ApplicationConfigProperty | (experimental) Properties for the EMR Cluster Applications. |
| EmrCreateCluster.AutoScalingPolicyProperty | (experimental) An automatic scaling policy for a core instance group or task instance group in an Amazon EMR cluster. |
| EmrCreateCluster.BootstrapActionConfigProperty | (experimental) Configuration of a bootstrap action. |
| EmrCreateCluster.CloudWatchAlarmDefinitionProperty | (experimental) The definition of a CloudWatch metric alarm, which determines when an automatic scaling activity is triggered. |
| EmrCreateCluster.ConfigurationProperty | (experimental) An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR. |
| EmrCreateCluster.EbsBlockDeviceConfigProperty | (experimental) Configuration of a requested EBS block device associated with the instance group, with a count of volumes that will be associated with every instance. |
| EmrCreateCluster.EbsConfigurationProperty | (experimental) The Amazon EBS configuration of a cluster instance. |
| EmrCreateCluster.InstanceFleetConfigProperty | (experimental) The configuration that defines an instance fleet. |
| EmrCreateCluster.InstanceFleetProvisioningSpecificationsProperty | (experimental) The launch specification for Spot instances in the fleet, which determines the defined duration and provisioning timeout behavior. |
| EmrCreateCluster.InstanceGroupConfigProperty | (experimental) Configuration defining a new instance group. |
| EmrCreateCluster.InstancesConfigProperty | (experimental) A specification of the number and type of Amazon EC2 instances. |
| EmrCreateCluster.InstanceTypeConfigProperty | (experimental) An instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities. |
| EmrCreateCluster.KerberosAttributesProperty | (experimental) Attributes for Kerberos configuration when Kerberos authentication is enabled using a security configuration. |
| EmrCreateCluster.MetricDimensionProperty | (experimental) A CloudWatch dimension, which is specified using a Key (known as a Name in CloudWatch) and Value pair. |
| EmrCreateCluster.PlacementTypeProperty | (experimental) The Amazon EC2 Availability Zone configuration of the cluster (job flow). |
| EmrCreateCluster.ScalingActionProperty | (experimental) The type of adjustment the automatic scaling activity makes when triggered, and the periodicity of the adjustment. |
| EmrCreateCluster.ScalingConstraintsProperty | (experimental) The upper and lower EC2 instance limits for an automatic scaling policy. |
| EmrCreateCluster.ScalingRuleProperty | (experimental) A scale-in or scale-out rule that defines scaling activity, including the CloudWatch metric alarm that triggers activity, how EC2 instances are added or removed, and the periodicity of adjustments. |
| EmrCreateCluster.ScalingTriggerProperty | (experimental) The conditions that trigger an automatic scaling activity and the definition of a CloudWatch metric alarm. |
| EmrCreateCluster.ScriptBootstrapActionConfigProperty | (experimental) Configuration of the script to run during a bootstrap action. |
| EmrCreateCluster.SimpleScalingPolicyConfigurationProperty | (experimental) An automatic scaling configuration, which describes how the policy adds or removes instances, the cooldown period, and the number of EC2 instances that will be added each time the CloudWatch metric alarm condition is satisfied. |
| EmrCreateCluster.SpotProvisioningSpecificationProperty | (experimental) The launch specification for Spot instances in the instance fleet, which determines the defined duration and provisioning timeout behavior. |
| EmrCreateCluster.VolumeSpecificationProperty | (experimental) EBS volume specifications such as volume type, IOPS, and size (GiB) that will be requested for the EBS volume attached to an EC2 instance in the cluster. |
| EmrCreateClusterProps | (experimental) Properties for EmrCreateCluster. |
| EmrModifyInstanceFleetByNameProps | (experimental) Properties for EmrModifyInstanceFleetByName. |
| EmrModifyInstanceGroupByName.InstanceGroupModifyConfigProperty | (experimental) Modify the size or configurations of an instance group. |
| EmrModifyInstanceGroupByName.InstanceResizePolicyProperty | (experimental) Custom policy for requesting termination protection or termination of specific instances when shrinking an instance group. |
| EmrModifyInstanceGroupByName.ShrinkPolicyProperty | (experimental) Policy for customizing shrink operations. |
| EmrModifyInstanceGroupByNameProps | (experimental) Properties for EmrModifyInstanceGroupByName. |
| EmrSetClusterTerminationProtectionProps | (experimental) Properties for EmrSetClusterTerminationProtection. |
| EmrTerminateClusterProps | (experimental) Properties for EmrTerminateCluster. |
| EncryptionConfiguration | (experimental) Encryption Configuration of the S3 bucket. |
| EvaluateExpressionProps | (experimental) Properties for EvaluateExpression. |
| GlueStartJobRunProps | Properties for starting an AWS Glue job as a task. |
| IContainerDefinition | (experimental) Configuration of the container used to host the model. |
| IContainerDefinition.Jsii$Default | Internal default implementation for IContainerDefinition. |
| IEcsLaunchTarget | An Amazon ECS launch type determines the type of infrastructure on which your tasks and services are hosted. |
| IEcsLaunchTarget.Jsii$Default | Internal default implementation for IEcsLaunchTarget. |
| InvokeActivityProps | Properties for FunctionTask. |
| InvokeFunctionProps | Deprecated: use `LambdaInvoke` instead. |
| ISageMakerTask | (experimental) Task to train a machine learning model using Amazon SageMaker. |
| ISageMakerTask.Jsii$Default | Internal default implementation for ISageMakerTask. |
| JobDependency | An object representing an AWS Batch job dependency. |
| LambdaInvokeProps | Properties for invoking a Lambda function with LambdaInvoke. |
| LaunchTargetBindOptions | Options for binding a launch target to an ECS run job task. |
| MetricDefinition | (experimental) Specifies the metric name and regular expressions used to parse algorithm logs. |
| ModelClientOptions | (experimental) Configures the timeout and maximum number of retries for processing a transform job invocation. |
| OutputDataConfig | (experimental) Configures the S3 bucket where SageMaker will save the result of model training. |
| ProductionVariant | (experimental) Identifies a model that you want to host and the resources to deploy for hosting it. |
| PublishToTopicProps | Deprecated: use `SnsPublish` instead. |
| QueryExecutionContext | (experimental) Database and data catalog context in which the query execution occurs. |
| ResourceConfig | (experimental) Specifies the resources, ML compute instances, and ML storage volumes to deploy for model training. |
| ResultConfiguration | (experimental) Location of the query result along with S3 bucket configuration. |
| RunBatchJobProps | Deprecated: use `BatchSubmitJob` instead. |
| RunEcsEc2TaskProps | Properties to run an ECS task on EC2 in Step Functions and ECS. |
| RunEcsFargateTaskProps | Properties to define an ECS service. |
| RunGlueJobTaskProps | Deprecated: use `GlueStartJobRun` instead. |
| RunLambdaTaskProps | Deprecated: use `LambdaInvoke` instead. |
| S3DataSource | (experimental) S3 location of the channel data. |
| S3LocationBindOptions | (experimental) Options for binding an S3 Location. |
| S3LocationConfig | (experimental) Stores information about the location of an object in Amazon S3. |
| SageMakerCreateEndpointConfigProps | (experimental) Properties for creating an Amazon SageMaker endpoint configuration. |
| SageMakerCreateEndpointProps | (experimental) Properties for creating an Amazon SageMaker endpoint. |
| SageMakerCreateModelProps | (experimental) Properties for creating an Amazon SageMaker model. |
| SageMakerCreateTrainingJobProps | (experimental) Properties for creating an Amazon SageMaker training job. |
| SageMakerCreateTransformJobProps | (experimental) Properties for creating an Amazon SageMaker transform job task. |
| SageMakerUpdateEndpointProps | (experimental) Properties for updating an Amazon SageMaker endpoint. |
| SendToQueueProps | Deprecated: use `SqsSendMessage` instead. |
| ShuffleConfig | (experimental) Configuration for a shuffle option for input data in a channel. |
| SnsPublishProps | Properties for publishing a message to an SNS topic. |
| SqsSendMessageProps | Properties for sending a message to an SQS queue. |
| StartExecutionProps | Deprecated: use `StepFunctionsStartExecution` instead. |
| StepFunctionsInvokeActivityProps | Properties for invoking an Activity worker. |
| StepFunctionsStartExecutionProps | Properties for StartExecution. |
| StoppingCondition | (experimental) Specifies a limit to how long a model training job can run. |
| TaskEnvironmentVariable | An environment variable to be set in the container run as a task. |
| TransformDataSource | (experimental) S3 location of the input data that the model can consume. |
| TransformInput | (experimental) Dataset to be transformed and the Amazon S3 location where it is stored. |
| TransformOutput | (experimental) S3 location where you want Amazon SageMaker to save the results from the transform job. |
| TransformResources | (experimental) ML compute instances for the transform job. |
| TransformS3DataSource | (experimental) Location of the channel data. |
| VpcConfig | (experimental) Specifies the VPC that you want your Amazon SageMaker training job to connect to. |
| Enum | Description |
|---|---|
| ActionOnFailure | (experimental) The action to take when the cluster step fails. |
| AssembleWith | (experimental) How to assemble the results of the transform job as a single S3 object. |
| BatchStrategy | (experimental) Specifies the number of records to include in a mini-batch for an HTTP inference request. |
| CompressionType | (experimental) Compression type of the data. |
| DynamoConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned. |
| DynamoItemCollectionMetrics | Determines whether item collection metrics are returned. |
| DynamoReturnValues | Use ReturnValues if you want to get the item attributes as they appear before or after they are changed. |
| EmrCreateCluster.CloudWatchAlarmComparisonOperator | (experimental) CloudWatch Alarm Comparison Operators. |
| EmrCreateCluster.CloudWatchAlarmStatistic | (experimental) CloudWatch Alarm Statistics. |
| EmrCreateCluster.CloudWatchAlarmUnit | (experimental) CloudWatch Alarm Units. |
| EmrCreateCluster.EbsBlockDeviceVolumeType | (experimental) EBS Volume Types. |
| EmrCreateCluster.EmrClusterScaleDownBehavior | (experimental) Valid values for the cluster ScaleDownBehavior. |
| EmrCreateCluster.InstanceMarket | (experimental) EC2 Instance Market. |
| EmrCreateCluster.InstanceRoleType | (experimental) Instance Role Types. |
| EmrCreateCluster.ScalingAdjustmentType | (experimental) AutoScaling Adjustment Type. |
| EmrCreateCluster.SpotTimeoutAction | (experimental) Spot Timeout Actions. |
| EncryptionOption | (experimental) Encryption Options of the S3 bucket. |
| InputMode | (experimental) Input mode that the algorithm supports. |
| InvocationType | Invocation type of a Lambda. |
| LambdaInvocationType | Invocation type of a Lambda. |
| Mode | (experimental) Specifies how many models the container hosts. |
| RecordWrapperType | (experimental) Define the format of the input data. |
| S3DataDistributionType | (experimental) S3 Data Distribution Type. |
| S3DataType | (experimental) S3 Data Type. |
| SplitType | (experimental) Method to use to split the transform job's data files into smaller batches. |
---
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
A Task state represents a single unit of work performed by a state machine. All work in your state machine is performed by tasks.
This module is part of the AWS Cloud Development Kit project.
A Task state represents a single unit of work performed by a state machine. In the
CDK, the exact work to be done is determined by a class that implements IStepFunctionsTask.
AWS Step Functions integrates with some AWS services so that you can call API actions, and coordinate executions directly from the Amazon States Language in Step Functions. You can directly call and pass parameters to the APIs of those services.
In the Amazon States Language, a path is a string beginning with $ that you
can use to identify components within JSON text.
Learn more about input and output processing in the AWS Step Functions Developer Guide.
Both InputPath and Parameters fields provide a way to manipulate JSON as it
moves through your workflow. AWS Step Functions applies the InputPath field first,
and then the Parameters field. You can first filter your raw input to a selection
you want using InputPath, and then apply Parameters to manipulate that input
further, or add new values. If you don't specify an InputPath, a default value
of $ will be used.
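The InputPath-then-Parameters ordering can be sketched in plain Java. This is a simplified illustration, not part of the CDK: it models only a single top-level field and omits the Parameters step, which would run on the returned selection.

```java
import java.util.Map;

// Plain-Java sketch (assumption: not CDK code) of how Step Functions
// processes state input: InputPath is applied first to narrow the raw
// input, and Parameters would then be applied to that selection.
public class InputProcessing {
    // Mimics InputPath: "$" keeps the whole input, "$.<field>" selects
    // a single top-level field of the input.
    static Object applyInputPath(Map<String, Object> input, String inputPath) {
        if ("$".equals(inputPath)) {
            return input; // default InputPath: pass the input through unchanged
        }
        return input.get(inputPath.substring(2)); // strip the "$." prefix
    }

    public static void main(String[] args) {
        Map<String, Object> raw = Map.of(
                "input", Map.of("comment", "hello"),
                "metadata", Map.of("requestId", "abc-123"));

        // InputPath "$.input" narrows the raw state input before Parameters
        // (not modeled here) shapes the final task payload.
        Object selected = applyInputPath(raw, "$.input");
        System.out.println(selected); // only the "input" sub-tree remains
    }
}
```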
The following example provides the field named input as the input to the Task
state that runs a Lambda function.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke submitJob = LambdaInvoke.Builder.create(stack, "Invoke Handler")
.lambdaFunction(submitJobLambda)
.inputPath("$.input")
.build();
Tasks also allow you to select a portion of the state output to pass to the next
state. This enables you to filter out unwanted information, and pass only the
portion of the JSON that you care about. If you don't specify an OutputPath,
a default value of $ will be used. This passes the entire JSON node to the next
state.
The response from a Lambda function includes the response from the function as well as other metadata.
The following example assigns the output from the Task to a field named result
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke submitJob = LambdaInvoke.Builder.create(stack, "Invoke Handler")
.lambdaFunction(submitJobLambda)
.outputPath("$.Payload.result")
.build();
The output of a state can be a copy of its input, the result it produces (for
example, output from a Task state’s Lambda function), or a combination of its
input and result. Use ResultPath to control which combination of these is
passed to the state output. If you don't specify a ResultPath, a default
value of $ will be used.
The following example adds the item from calling DynamoDB's getItem API to the state
input and passes it to the next state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoGetItem.Builder.create(this, "GetItem")
.key(Map.of("MessageId", tasks.DynamoAttributeValue.fromString("12345")))
.table(table)
.resultPath("$.Item")
.build();
⚠️ The OutputPath is computed after applying ResultPath. All service integrations
return metadata as part of their response. When using ResultPath, it's not possible to
merge a subset of the task output to the input.
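The ResultPath behavior described above can be sketched in plain Java. This is a simplified illustration, not part of the CDK: it models only top-level paths.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch (assumption: not CDK code) of how ResultPath combines
// a task's result with the state input. With the default ResultPath of "$"
// the result replaces the input; with "$.<field>" the input is preserved
// and the result is attached under that field.
public class ResultPathMerge {
    static Map<String, Object> applyResultPath(
            Map<String, Object> input,
            Map<String, Object> result,
            String resultPath) {
        if ("$".equals(resultPath)) {
            return result; // default: the result replaces the state input
        }
        Map<String, Object> merged = new HashMap<>(input);
        merged.put(resultPath.substring(2), result); // e.g. "$.Item" -> "Item"
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> input = Map.of("MessageId", "12345");
        Map<String, Object> taskResult = Map.of("Text", "hello");

        // ResultPath "$.Item": the original input is kept and the task
        // result is merged in under the "Item" key.
        Map<String, Object> output = applyResultPath(input, taskResult, "$.Item");
        System.out.println(output); // contains both MessageId and Item
    }
}
```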
Most tasks take parameters. Parameter values can either be static, supplied directly
in the workflow definition (by specifying their values), or a value available at runtime
in the state machine's execution (either as its input or an output of a prior state).
Parameter values available at runtime can be specified via the JsonPath class,
using methods such as JsonPath.stringAt().
The following example provides the field named input as the input to the Lambda function
and invokes it asynchronously.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke submitJob = LambdaInvoke.Builder.create(stack, "Invoke Handler")
.lambdaFunction(submitJobLambda)
.payload(sfn.TaskInput.fromDataAt("$.input"))
.invocationType(tasks.LambdaInvocationType.EVENT)
.build();
Each service integration has its own set of parameters that can be supplied.
Use the EvaluateExpression task to perform simple operations referencing state paths. The
expression referenced in the task will be evaluated in a Lambda function
(eval()). This saves you from writing Lambda code for simple operations.
Example: convert a wait time from milliseconds to seconds, concatenate it into a message, and wait:
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EvaluateExpression convertToSeconds = EvaluateExpression.Builder.create(this, "Convert to seconds")
.expression("$.waitMilliseconds / 1000")
.resultPath("$.waitSeconds")
.build();
EvaluateExpression createMessage = EvaluateExpression.Builder.create(this, "Create message")
// Note: this is a string inside a string.
.expression("`Now waiting ${$.waitSeconds} seconds...`")
.runtime(lambda.Runtime.NODEJS_10_X)
.resultPath("$.message")
.build();
SnsPublish publishMessage = SnsPublish.Builder.create(this, "Publish message")
.topic(topic)
.message(sfn.TaskInput.fromDataAt("$.message"))
.resultPath("$.sns")
.build();
Wait wait = Wait.Builder.create(this, "Wait")
.time(sfn.WaitTime.secondsPath("$.waitSeconds"))
.build();
StateMachine.Builder.create(this, "StateMachine")
.definition(convertToSeconds
.next(createMessage)
.next(publishMessage).next(wait))
.build();
The EvaluateExpression task supports a runtime prop to specify the Lambda
runtime used to evaluate the expression. Currently, the only supported
runtime is lambda.Runtime.NODEJS_10_X.
Step Functions supports Athena through the service integration pattern.
The StartQueryExecution API runs the SQL query statement.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
AthenaStartQueryExecution startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(stack, "Start Athena Query")
.queryString(sfn.JsonPath.stringAt("$.queryString"))
.queryExecutionContext(Map.of(
"database", "mydatabase"))
.resultConfiguration(Map.of(
"encryptionConfiguration", Map.of(
"encryptionOption", tasks.EncryptionOption.S3_MANAGED),
"outputLocation", sfn.JsonPath.stringAt("$.outputLocation")))
.build();
The GetQueryExecution API gets information about a single execution of a query.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
AthenaGetQueryExecution getQueryExecutionJob = AthenaGetQueryExecution.Builder.create(stack, "Get Query Execution")
.queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
.build();
The GetQueryResults API streams the results of a single query execution specified by QueryExecutionId from S3.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
AthenaGetQueryResults getQueryResultsJob = AthenaGetQueryResults.Builder.create(stack, "Get Query Results")
.queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
.build();
The StopQueryExecution API stops a query execution.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
AthenaStopQueryExecution stopQueryExecutionJob = AthenaStopQueryExecution.Builder.create(stack, "Stop Query Execution")
.queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
.build();
Step Functions supports Batch through the service integration pattern.
The SubmitJob API submits an AWS Batch job from a job definition.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.batch.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
JobQueue batchQueue = new JobQueue(this, "JobQueue", new JobQueueProps()
.computeEnvironments(asList(new JobQueueComputeEnvironment()
.order(1)
.computeEnvironment(new ComputeEnvironment(this, "ComputeEnv", new ComputeEnvironmentProps()
.computeResources(new ComputeResources().vpc(vpc)))))));
JobDefinition batchJobDefinition = new JobDefinition(this, "JobDefinition", new JobDefinitionProps()
.container(new JobDefinitionContainer()
.image(ecs.ContainerImage.fromAsset("batchjob-image"))));
BatchSubmitJob task = new BatchSubmitJob(this, "Submit Job", new BatchSubmitJobProps()
.jobDefinition(batchJobDefinition)
.jobName("MyJob")
.jobQueue(batchQueue));
Step Functions supports CodeBuild through the service integration pattern.
StartBuild starts a CodeBuild Project by Project Name.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.codebuild.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
import software.amazon.awscdk.services.stepfunctions.*;
Project codebuildProject = new Project(stack, "Project", new ProjectProps()
.projectName("MyTestProject")
.buildSpec(codebuild.BuildSpec.fromObject(Map.of(
"version", "0.2",
"phases", Map.of(
"build", Map.of(
"commands", asList("echo \"Hello, CodeBuild!\"")))))));
CodeBuildStartBuild task = new CodeBuildStartBuild(stack, "Task", new CodeBuildStartBuildProps()
.project(codebuildProject)
.integrationPattern(sfn.IntegrationPattern.RUN_JOB)
.environmentVariablesOverride(Map.of(
"ZONE", new BuildEnvironmentVariable()
.type(codebuild.BuildEnvironmentVariableType.PLAINTEXT)
.value(sfn.JsonPath.stringAt("$.envVariables.zone")))));
You can call DynamoDB APIs from a Task state.
Read more about calling DynamoDB APIs here
The GetItem operation returns a set of attributes for the item with the given primary key.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoGetItem.Builder.create(this, "Get Item")
.key(Map.of("messageId", tasks.DynamoAttributeValue.fromString("message-007")))
.table(table)
.build();
The PutItem operation creates a new item, or replaces an old item with a new item.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoPutItem.Builder.create(this, "PutItem")
.item(Map.of(
"MessageId", tasks.DynamoAttributeValue.fromString("message-007"),
"Text", tasks.DynamoAttributeValue.fromString(sfn.JsonPath.stringAt("$.bar")),
"TotalCount", tasks.DynamoAttributeValue.fromNumber(10)))
.table(table)
.build();
The DeleteItem operation deletes a single item in a table by primary key.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
new DynamoDeleteItem(this, "DeleteItem", new DynamoDeleteItemProps()
.key(Map.of("MessageId", tasks.DynamoAttributeValue.fromString("message-007")))
.table(table)
.resultPath(sfn.JsonPath.DISCARD));
The UpdateItem operation edits an existing item's attributes, or adds a new item to the table if it does not already exist.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoUpdateItem.Builder.create(this, "UpdateItem")
.key(Map.of("MessageId", tasks.DynamoAttributeValue.fromString("message-007")))
.table(table)
.expressionAttributeValues(Map.of(
":val", tasks.DynamoAttributeValue.numberFromString(sfn.JsonPath.stringAt("$.Item.TotalCount.N")),
":rand", tasks.DynamoAttributeValue.fromNumber(20)))
.updateExpression("SET TotalCount = :val + :rand")
.build();
Step Functions supports ECS/Fargate through the service integration pattern.
RunTask starts a new task using the specified task definition.
The EC2 launch type allows you to run your containerized applications on a cluster of Amazon EC2 instances that you manage.
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can apply task placement strategies and constraints to customize how Amazon ECS places and terminates tasks. Learn more about task placement
The following example runs a job from a task definition on EC2
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.ecs.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
import software.amazon.awscdk.services.stepfunctions.*;
IVpc vpc = ec2.Vpc.fromLookup(stack, "Vpc", new VpcLookupOptions()
.isDefault(true));
Cluster cluster = new Cluster(stack, "Ec2Cluster", new ClusterProps().vpc(vpc));
cluster.addCapacity("DefaultAutoScalingGroup", new AddCapacityOptions()
.instanceType(new InstanceType("t2.micro"))
.vpcSubnets(new SubnetSelection().subnetType(ec2.SubnetType.PUBLIC)));
TaskDefinition taskDefinition = new TaskDefinition(stack, "TD", new TaskDefinitionProps()
.compatibility(ecs.Compatibility.EC2));
taskDefinition.addContainer("TheContainer", new ContainerDefinitionOptions()
.image(ecs.ContainerImage.fromRegistry("foo/bar"))
.memoryLimitMiB(256));
EcsRunTask runTask = new EcsRunTask(stack, "Run", new EcsRunTaskProps()
.integrationPattern(sfn.IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.launchTarget(new EcsEc2LaunchTarget(new EcsEc2LaunchTargetOptions()
.placementStrategies(asList(ecs.PlacementStrategy.spreadAcrossInstances(), ecs.PlacementStrategy.packedByCpu(), ecs.PlacementStrategy.randomly()))
.placementConstraints(asList(ecs.PlacementConstraint.memberOf("blieptuut"))))));
AWS Fargate is a serverless compute engine for containers that works with Amazon Elastic Container Service (ECS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Learn more about Fargate
The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you.
The following example runs a job from a task definition on Fargate
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.ecs.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
import software.amazon.awscdk.services.stepfunctions.*;
IVpc vpc = ec2.Vpc.fromLookup(stack, "Vpc", new VpcLookupOptions()
.isDefault(true));
Cluster cluster = new Cluster(stack, "FargateCluster", new ClusterProps().vpc(vpc));
TaskDefinition taskDefinition = new TaskDefinition(stack, "TD", new TaskDefinitionProps()
.memoryMiB("512")
.cpu("256")
.compatibility(ecs.Compatibility.FARGATE));
ContainerDefinition containerDefinition = taskDefinition.addContainer("TheContainer", new ContainerDefinitionOptions()
.image(ecs.ContainerImage.fromRegistry("foo/bar"))
.memoryLimitMiB(256));
EcsRunTask runTask = new EcsRunTask(stack, "RunFargate", new EcsRunTaskProps()
.integrationPattern(sfn.IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.assignPublicIp(true)
.containerOverrides(asList(new ContainerOverride()
.containerDefinition(containerDefinition)
.environment(asList(new TaskEnvironmentVariable().name("SOME_KEY").value(sfn.JsonPath.stringAt("$.SomeKey"))))))
.launchTarget(new EcsFargateLaunchTarget()));
Step Functions supports Amazon EMR through the service integration pattern. The service integration APIs correspond to Amazon EMR APIs but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Creates and starts running a cluster (job flow).
Corresponds to the runJobFlow API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Role clusterRole = Role.Builder.create(stack, "ClusterRole")
.assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
.build();
Role serviceRole = Role.Builder.create(stack, "ServiceRole")
.assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
.build();
Role autoScalingRole = Role.Builder.create(stack, "AutoScalingRole")
.assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
.build();
autoScalingRole.getAssumeRolePolicy().addStatements(
PolicyStatement.Builder.create()
.effect(iam.Effect.ALLOW)
.principals(asList(
new ServicePrincipal("application-autoscaling.amazonaws.com")))
.actions(asList("sts:AssumeRole"))
.build());
EmrCreateCluster.Builder.create(stack, "Create Cluster")
.instances(Map.of())
.clusterRole(clusterRole)
.name(sfn.TaskInput.fromDataAt("$.ClusterName").getValue())
.serviceRole(serviceRole)
.autoScalingRole(autoScalingRole)
.integrationPattern(sfn.ServiceIntegrationPattern.FIRE_AND_FORGET)
.build();
Locks a cluster (job flow) so the EC2 instances in the cluster cannot be terminated by user intervention, an API call, or a job-flow error.
Corresponds to the setTerminationProtection API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrSetClusterTerminationProtection.Builder.create(stack, "Task")
.clusterId("ClusterId")
.terminationProtected(false)
.build();
Shuts down a cluster (job flow).
Corresponds to the terminateJobFlows API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrTerminateCluster.Builder.create(stack, "Task")
.clusterId("ClusterId")
.build();
Adds a new step to a running cluster.
Corresponds to the addJobFlowSteps API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrAddStep.Builder.create(stack, "Task")
.clusterId("ClusterId")
.name("StepName")
.jar("Jar")
.actionOnFailure(tasks.ActionOnFailure.getCONTINUE())
.build();
Cancels a pending step in a running cluster.
Corresponds to the cancelSteps API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrCancelStep.Builder.create(stack, "Task")
.clusterId("ClusterId")
.stepId("StepId")
.build();
Modifies the target On-Demand and target Spot capacities for the instance fleet with the specified InstanceFleetName.
Corresponds to the modifyInstanceFleet API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrModifyInstanceFleetByName.Builder.create(stack, "Task")
.clusterId("ClusterId")
.instanceFleetName("InstanceFleetName")
.targetOnDemandCapacity(2)
.targetSpotCapacity(0)
.build();
Modifies the number of nodes and configuration settings of an instance group.
Corresponds to the modifyInstanceGroups API in EMR.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrModifyInstanceGroupByName.Builder.create(stack, "Task")
.clusterId("ClusterId")
.instanceGroupName(sfn.JsonPath.stringAt("$.InstanceGroupName"))
.instanceGroup(Map.of(
"instanceCount", 1))
.build();
Step Functions supports AWS Glue through the service integration pattern.
You can call the StartJobRun API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
GlueStartJobRun.Builder.create(stack, "Task")
.jobName("my-glue-job")
.arguments(Map.of(
"key", "value"))
.timeout(cdk.Duration.minutes(30))
.notifyDelayAfter(cdk.Duration.minutes(5))
.build();
Invoke a Lambda function.
You can specify the input to your Lambda function through the payload attribute.
By default, Step Functions invokes the Lambda function with the state input (JSON path '$')
as the input.
The following snippet invokes a Lambda Function with the state input as the payload
by referencing the $ path.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.lambda.*;
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
Function myLambda = new Function(this, "my sample lambda", new FunctionProps()
.code(Code.fromInline("exports.handler = async () => {\n return {\n statusCode: '200',\n body: 'hello, world!'\n };\n };"))
.runtime(Runtime.getNODEJS_12_X())
.handler("index.handler"));
new LambdaInvoke(this, "Invoke with state input", new LambdaInvokeProps()
.lambdaFunction(myLambda));
When a function is invoked, the Lambda service sends these response elements back.
⚠️ The response from the Lambda function is in an attribute called Payload.
The following snippet invokes a Lambda Function by referencing the $.Payload path
to reference the output of a Lambda executed before it.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke with empty object as payload")
.lambdaFunction(myLambda)
.payload(sfn.TaskInput.fromObject(Map.of()))
.build();
// use the output of myLambda as input
LambdaInvoke.Builder.create(this, "Invoke with payload field in the state input")
.lambdaFunction(myOtherLambda)
.payload(sfn.TaskInput.fromDataAt("$.Payload"))
.build();
The following snippet invokes a Lambda and sets the task output to only include the Lambda function response.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke and set function response as task output")
.lambdaFunction(myLambda)
.payload(sfn.TaskInput.fromDataAt("$"))
.outputPath("$.Payload")
.build();
If you want to combine the input and the Lambda function response, you can use
the payloadResponseOnly property and specify the resultPath. This puts the
Lambda function ARN directly in the "Resource" string, and is therefore incompatible with the
integrationPattern, invocationType, clientContext, and qualifier properties.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke and combine function response with task input")
.lambdaFunction(myLambda)
.payloadResponseOnly(true)
.resultPath("$.myLambda")
.build();
You can have Step Functions pause a task and wait for an external process to return a task token. Read more about the callback pattern in the Step Functions documentation.
To use the callback pattern, set the token property on the task. Call the Step
Functions SendTaskSuccess or SendTaskFailure APIs with the token to
indicate that the task has completed and the state machine should resume execution.
The following snippet invokes a Lambda with the task token as part of the input to the Lambda.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(stack, "Invoke with callback")
.lambdaFunction(myLambda)
.integrationPattern(sfn.IntegrationPattern.getWAIT_FOR_TASK_TOKEN())
.payload(sfn.TaskInput.fromObject(Map.of(
"token", sfn.JsonPath.getTaskToken(),
"input", sfn.JsonPath.stringAt("$.someField"))))
.build();
⚠️ The task will pause until it receives the task token back with a SendTaskSuccess or SendTaskFailure
call. Learn more about Callback with the Task Token.
AWS Lambda can occasionally experience transient service errors. In this case, invoking Lambda
results in a 500 error, such as ServiceException, AWSLambdaException, or SdkClientException.
As a best practice, the LambdaInvoke task will retry on those errors with an interval of 2 seconds,
a back-off rate of 2 and 6 maximum attempts. Set the retryOnServiceExceptions prop to false to
disable this behavior.
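The default retry behavior described above (2-second interval, back-off rate of 2, 6 maximum attempts) can be illustrated with a small stand-alone sketch that computes the wait before each retry. Note that `RetrySchedule` and `waits` are names invented for this illustration; they are not part of the CDK API.

```java
import java.util.ArrayList;
import java.util.List;

public class RetrySchedule {
    // Returns the wait, in seconds, before each retry. With maxAttempts total
    // attempts there are maxAttempts - 1 waits between them; each wait is the
    // previous one multiplied by backoffRate.
    public static List<Long> waits(long intervalSeconds, double backoffRate, int maxAttempts) {
        List<Long> result = new ArrayList<>();
        double wait = intervalSeconds;
        for (int i = 0; i < maxAttempts - 1; i++) {
            result.add((long) wait);
            wait *= backoffRate;
        }
        return result;
    }

    public static void main(String[] args) {
        // The defaults described above: 2s interval, back-off rate 2, 6 attempts.
        System.out.println(waits(2, 2.0, 6)); // [2, 4, 8, 16, 32]
    }
}
```

With the defaults, a transient Lambda error is retried after 2, 4, 8, 16 and 32 seconds before the task finally fails.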
Step Functions supports Amazon SageMaker through the service integration pattern.
You can call the CreateTrainingJob API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SagemakerTrainTask.Builder.create(this, "TrainSagemaker")
.trainingJobName(sfn.JsonPath.stringAt("$.JobName"))
.role(role)
.algorithmSpecification(Map.of(
"algorithmName", "BlazingText",
"trainingInputMode", tasks.InputMode.getFILE()))
.inputDataConfig(asList(Map.of(
"channelName", "train",
"dataSource", Map.of(
"s3DataSource", Map.of(
"s3DataType", tasks.S3DataType.getS3_PREFIX(),
"s3Location", tasks.S3Location.fromJsonExpression("$.S3Bucket"))))))
.outputDataConfig(Map.of(
"s3OutputLocation", tasks.S3Location.fromBucket(s3.Bucket.fromBucketName(stack, "Bucket", "mybucket"), "myoutputpath")))
.resourceConfig(Map.of(
"instanceCount", 1,
"instanceType", ec2.InstanceType.of(ec2.InstanceClass.getP3(), ec2.InstanceSize.getXLARGE2()),
"volumeSize", cdk.Size.gibibytes(50)))
.stoppingCondition(Map.of(
"maxRuntime", cdk.Duration.hours(1)))
.build();
You can call the CreateTransformJob API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SagemakerTransformTask.Builder.create(this, "Batch Inference")
.transformJobName("MyTransformJob")
.modelName("MyModelName")
.modelClientOptions(Map.of(
"invocationMaxRetries", 3, // default is 0
"invocationTimeout", cdk.Duration.minutes(5)))
.role(role)
.transformInput(Map.of(
"transformDataSource", Map.of(
"s3DataSource", Map.of(
"s3Uri", "s3://inputbucket/train",
"s3DataType", tasks.S3DataType.getS3_PREFIX()))))
.transformOutput(Map.of(
"s3OutputPath", "s3://outputbucket/TransformJobOutputPath"))
.transformResources(Map.of(
"instanceCount", 1,
"instanceType", ec2.InstanceType.of(ec2.InstanceClass.getM4(), ec2.InstanceSize.getXLARGE())))
.build();
You can call the CreateEndpoint API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateEndpoint.Builder.create(this, "SagemakerEndpoint")
.endpointName(sfn.JsonPath.stringAt("$.EndpointName"))
.endpointConfigName(sfn.JsonPath.stringAt("$.EndpointConfigName"))
.build();
You can call the CreateEndpointConfig API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateEndpointConfig.Builder.create(this, "SagemakerEndpointConfig")
.endpointConfigName("MyEndpointConfig")
.productionVariants(asList(Map.of(
"initialInstanceCount", 2,
"instanceType", ec2.InstanceType.of(ec2.InstanceClass.getM5(), ec2.InstanceSize.getXLARGE()),
"modelName", "MyModel",
"variantName", "awesome-variant")))
.build();
You can call the CreateModel API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateModel.Builder.create(this, "Sagemaker")
.modelName("MyModel")
.primaryContainer(ContainerDefinition.Builder.create()
.image(tasks.DockerImage.fromJsonExpression(sfn.JsonPath.stringAt("$.Model.imageName")))
.mode(tasks.Mode.getSINGLE_MODEL())
.modelS3Location(tasks.S3Location.fromJsonExpression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts"))
.build())
.build();
You can call the UpdateEndpoint API from a Task state.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerUpdateEndpoint.Builder.create(this, "SagemakerEndpoint")
.endpointName(sfn.JsonPath.stringAt("$.Endpoint.Name"))
.endpointConfigName(sfn.JsonPath.stringAt("$.Endpoint.EndpointConfig"))
.build();
Step Functions supports Amazon SNS through the service integration pattern.
You can call the Publish API from a Task state to publish to an SNS topic.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.sns.*;
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
// ...
Topic topic = new Topic(this, "Topic");
// Use a field from the execution data as message.
SnsPublish task1 = new SnsPublish(this, "Publish1", new SnsPublishProps()
.topic(topic)
.integrationPattern(sfn.IntegrationPattern.getREQUEST_RESPONSE())
.message(sfn.TaskInput.fromDataAt("$.state.message")));
// Combine a field from the execution data with
// a literal object.
SnsPublish task2 = new SnsPublish(this, "Publish2", new SnsPublishProps()
.topic(topic)
.message(sfn.TaskInput.fromObject(Map.of(
"field1", "somedata",
"field2", sfn.JsonPath.stringAt("$.field2")))));
You can manage AWS Step Functions executions.
AWS Step Functions supports its own StartExecution API as a service integration.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
// Define a state machine with one Pass state
StateMachine child = StateMachine.Builder.create(stack, "ChildStateMachine")
.definition(sfn.Chain.start(new Pass(stack, "PassState")))
.build();
// Include the state machine in a Task state with callback pattern
StepFunctionsStartExecution task = StepFunctionsStartExecution.Builder.create(stack, "ChildTask")
.stateMachine(child)
.integrationPattern(sfn.IntegrationPattern.getWAIT_FOR_TASK_TOKEN())
.input(sfn.TaskInput.fromObject(Map.of(
"token", sfn.JsonPath.getTaskToken(),
"foo", "bar")))
.name("MyExecutionName")
.build();
// Define a second state machine with the Task state above
StateMachine.Builder.create(stack, "ParentStateMachine")
.definition(task)
.build();
You can invoke a Step Functions Activity, which lets a task in your state machine be performed by a worker hosted on Amazon EC2, Amazon ECS, AWS Lambda, or anywhere else. Activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine.
When Step Functions reaches an activity task state, the workflow waits for an activity worker to poll for a task. An activity worker polls Step Functions by using GetActivityTask and sending the ARN for the related activity.
After the activity worker completes its work, it can provide a report of its
success or failure by using SendTaskSuccess or SendTaskFailure. These two
calls use the taskToken provided by GetActivityTask to associate the result
with that task.
The following example creates an activity and creates a task that invokes the activity.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Activity submitJobActivity = new Activity(this, "SubmitJob");
StepFunctionsInvokeActivity.Builder.create(this, "Submit Job")
.activity(submitJobActivity)
.build();
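The worker side of this protocol can be sketched in plain Java. Here `TaskClient` is a hypothetical stand-in for the Step Functions client (the real APIs are GetActivityTask, SendTaskSuccess, and SendTaskFailure); the names `ActivityTask` and `processOne` and the placeholder output are inventions for illustration only.

```java
import java.util.Optional;

public class ActivityWorker {
    // Hypothetical shape of a polled task: the real GetActivityTask response
    // carries a task token and the task's JSON input.
    public record ActivityTask(String taskToken, String input) {}

    // Hypothetical stand-in for the Step Functions client; poll() long-polls
    // until a task is available (or returns empty on timeout).
    public interface TaskClient {
        Optional<ActivityTask> poll();
        void sendTaskSuccess(String taskToken, String outputJson);
        void sendTaskFailure(String taskToken, String error, String cause);
    }

    // One iteration of the worker loop: poll for a task, do the work, then
    // report the result back with the task token so the state machine resumes.
    public static boolean processOne(TaskClient client) {
        Optional<ActivityTask> maybeTask = client.poll();
        if (maybeTask.isEmpty()) {
            return false; // poll timed out; nothing to do this round
        }
        ActivityTask task = maybeTask.get();
        try {
            String output = "{\"status\":\"done\"}"; // placeholder for real work
            client.sendTaskSuccess(task.taskToken(), output);
        } catch (RuntimeException e) {
            client.sendTaskFailure(task.taskToken(), "WorkerError", e.getMessage());
        }
        return true;
    }
}
```

A real worker would run this in a loop, using the activity ARN from the CDK construct above and the AWS SDK client in place of `TaskClient`.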
Step Functions supports Amazon SQS.
You can call the SendMessage API from a Task state
to send a message to an SQS queue.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;
import software.amazon.awscdk.services.sqs.*;
// ...
Queue queue = new Queue(this, "Queue");
// Use a field from the execution data as message.
SqsSendMessage task1 = new SqsSendMessage(this, "Send1", new SqsSendMessageProps()
.queue(queue)
.messageBody(sfn.TaskInput.fromDataAt("$.message")));
// Combine a field from the execution data with
// a literal object.
SqsSendMessage task2 = new SqsSendMessage(this, "Send2", new SqsSendMessageProps()
.queue(queue)
.messageBody(sfn.TaskInput.fromObject(Map.of(
"field1", "somedata",
"field2", sfn.JsonPath.stringAt("$.field2")))));
Copyright © 2020. All rights reserved.