How to Integrate Amazon S3 with Boomi

In this blog, we will see how to integrate Amazon S3 with Boomi by creating a bucket in AWS S3 and storing a file that comes from a Boomi process.

What is AWS S3?

Amazon S3 (Simple Storage Service) is an offering from AWS (Amazon Web Services) that allows us to store and retrieve data in the cloud through a web service interface. It uses the concepts of buckets and objects: a bucket is like a container, and an object is similar to a file stored inside it. All data in S3 is stored in buckets.
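For readers who prefer code, here is a minimal sketch of the same bucket/object model using boto3, the AWS SDK for Python. The bucket name is a placeholder, and AWS credentials are assumed to be already configured.

import boto3

# S3 client; credentials are assumed configured (env vars, ~/.aws, etc.).
s3 = boto3.client("s3")

# Buckets are the containers...
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# ...and objects are the files stored inside them.
s3.put_object(Bucket="some-existing-bucket",  # placeholder bucket name
              Key="hello.txt", Body=b"Hello, S3!")
print(s3.get_object(Bucket="some-existing-bucket",
                    Key="hello.txt")["Body"].read())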

Let us see how to create a bucket in AWS S3.

Step 1: Open the AWS Free Tier page (https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc). We will see options to create a free AWS account or sign in to the console.

  • If you are a new user, click on Create a free account and proceed with the required details. AWS asks for credit or debit card details for verification purposes; then select the basic free plan. If you are an existing user, click Sign in to the console directly.

Step 2: Once we click Sign in to the console, we have to provide the root user email address given while creating the account. Click Next and enter the password.

Step 3: Once we have authenticated with a username and password, we will see the AWS Management Console.

Step 4: In Find Services, type S3 and press Enter. We see an option to create a bucket in which we can store files and documents.

Step 5: Let us now create a bucket. Click on Create bucket. We have to name the bucket following certain rules, i.e., the bucket name must be lowercase, use only certain characters, and so on. We can click See rules for bucket naming and name the bucket accordingly, as per the screenshot. In this use case, the bucket name will be my-demo1, and the same bucket name should be given while configuring Amazon S3 in Boomi.

  • We create buckets in regions, and the data is stored in that specific region. We can choose a region from the drop-down. Here, it will be ap-south-1, and the same region should be selected when we store data using the Amazon S3 connector in Boomi.

  • We will leave all other fields at their default settings and click Create bucket.

  • We see that the bucket has been created, and we can now store any type of data and upload files or documents to it.

  • We have the option of creating folders inside the bucket, and if we click on Actions, we can perform different actions such as download, move, rename, and so on, as shown in the screenshot.
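Outside the console, the same bucket can be created programmatically. A minimal boto3 sketch, following the naming rules and ap-south-1 region described above:

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Bucket names must be globally unique, lowercase, and limited to
# letters, numbers, hyphens, and dots.
s3.create_bucket(
    Bucket="my-demo1",  # the same name used later in the Boomi connector
    # Outside us-east-1, the region must be stated explicitly.
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)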

Now, let us see how to get the access key ID and secret key, which are required while configuring the Amazon S3 REST connector in Boomi.

  • Click on services and choose IAM from Security, Identity & Compliance as shown in the screenshot.

  • In Quick Links, open My Access Key as shown in the screenshot.

  • We will see the Security credentials page and then choose Create New access key.

  • We see that a new access key ID and secret key have been generated.
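These two values are what any S3 client authenticates with. As a rough equivalent of Boomi's connection test, a boto3 sketch (the key values are placeholders; in practice, keep credentials out of source code):

import boto3

# The access key ID and secret access key generated in IAM identify the
# caller, like the AWS Access Key / AWS Secret Key fields in Boomi.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",      # placeholder
    aws_secret_access_key="...",      # placeholder
    region_name="ap-south-1",
)

# A cheap "test connection": list the buckets the credentials can see.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])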

Now that we have created the bucket and obtained the access key and secret key, we will configure the Amazon S3 REST connector in Boomi and store data in the bucket that we created.

Step 1: Log on to the Boomi platform (https://platform.boomi.com/) with the required credentials i.e., Email Address and Password.

Step 2: Once logged into the Boomi platform, we will be able to view the Home page.

Step 3: Now, click on Services followed by Integration. We will see the Build page. Click on New.

Step 4: Once we click New, we will see three fields, i.e., Type, Component Name, and Folder.

  • Select Type as Process, as we are building a process. Component Name and Folder can be given based on your choice (i.e., what to name the process and where to create it). Click Create.

Step 5: We see that the process gets created with a start shape which is configured with AS2 Shared Server by default.

Step 6: Select the start shape and choose No Data. Click OK.

Step 7: Drag and drop the Amazon S3 REST connector onto the process canvas and configure it.

  • We have to configure three fields in the connector, i.e., Action, Connection, and Operation.
  • In Action, “Delete” is selected by default. We have to choose the action as per our requirement:

GET — If we want to retrieve the object.

DELETE — If we want to delete the object.

QUERY — If we want to look up objects matching specific search criteria.

SELECT — If we want to filter the data.

UPSERT — Create or upload a new object.

Here, we are storing or inserting data into the bucket, so the action will be UPSERT.
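As a rough guide to what these actions do against S3, here is a hedged mapping to boto3 calls (the exact requests the Boomi connector issues may differ):

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# UPSERT: a PUT that creates the object or overwrites an existing one.
s3.put_object(Bucket="my-demo1", Key="emp.xml", Body=b"<employees/>")

# GET: retrieve the object by its key.
data = s3.get_object(Bucket="my-demo1", Key="emp.xml")["Body"].read()

# QUERY: list objects matching a search criterion (here, a key prefix).
listing = s3.list_objects_v2(Bucket="my-demo1", Prefix="emp")

# DELETE: remove the object.
s3.delete_object(Bucket="my-demo1", Key="emp.xml")

# SELECT corresponds to S3 Select (select_object_content), which filters
# an object's contents with SQL; it is omitted here for brevity.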

  • Click + on Connection and create a new connection. Name the connection and provide the AWS Access Key and AWS Secret Key that were generated after the creation of the bucket, as shown in the steps above.

  • Now set the respective keys and click on test connection.

  • Choose the atom from the dropdown and click next.

  • We see test connection is successful. Click Finish, save, and close.

  • Now, we will configure the operation. Click + on operation and name it.

  • Connector Action: This refers to the action we configured on the connector. It will be UPSERT, as we set the action to UPSERT.
  • Object Tracking Direction: Select the document tracking direction for the operation, either Input Documents or Output Documents. Leave it at the default, which is Input Documents.
  • Error Behaviour: Controls whether an application error prevents an operation from completing. For now, we don’t check this box.
  • Options: Used to categorize the storage. We will leave it at the default.
  • Part Size: While uploading a large object in parts, specify the size in megabytes of each part. We will leave it at the default, which is 5 (see the sketch after the AWS Region option below).
  • Encryption: Select the server-side encryption type used to protect, encrypt, and decrypt the data. Leave it at the default, which is NONE.
  • Encryption Key: Used to protect and encrypt the object. We don’t provide a key in this use case.

AWS Region: Select the name of the AWS Region in which your bucket resides.

  • Automatically Detect — The connector requests information about the bucket from the service to automatically retrieve the region. We will leave it at the default, which is Automatically Detect. There is a chance this errors out; in that case, we can choose the same region where we created the bucket.
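To make these options concrete, a sketch of their rough SDK equivalents, assuming a local file named big-file.bin exists: Part Size maps to the multipart chunk size, Encryption to a server-side encryption setting, and Automatically Detect to asking S3 for the bucket's location.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Part Size = 5: uploads above the threshold are split into 5 MB parts.
cfg = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,
    multipart_chunksize=5 * 1024 * 1024,
)

# Encryption: request server-side encryption for the stored object.
s3.upload_file(
    "big-file.bin", "my-demo1", "big-file.bin",  # local file is an assumption
    Config=cfg,
    ExtraArgs={"ServerSideEncryption": "AES256"},
)

# Automatically Detect: ask S3 which region the bucket lives in.
print(s3.get_bucket_location(Bucket="my-demo1")["LocationConstraint"])
# e.g. "ap-south-1" (us-east-1 is reported as None)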

  • Once we click Import, it asks us to select the atom and choose the connection. Click Next.

  • Here, we have to select the object type. We can see all the available buckets in the drop-down, irrespective of region. We can either select the bucket in which we want to store the data or choose Dynamic Bucket, which allows us to provide the bucket name dynamically as a document property. This lets us change the operation configuration at runtime and insert the bucket name.

  • Here, we will choose Dynamic Bucket and configure it in a dynamic document property, as sketched below. Click Next.
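In SDK terms, Dynamic Bucket simply means the bucket name is a runtime parameter rather than a fixed setting. A minimal sketch:

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

def upsert(bucket: str, key: str, data: bytes) -> dict:
    """Bucket and key arrive at runtime, mirroring the Dynamic Bucket /
    document-property approach in Boomi."""
    return s3.put_object(Bucket=bucket, Key=key, Body=data)

# The caller decides the bucket name at execution time.
upsert("my-demo1", "emp.xml", b"<employees/>")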

  • We see that the profile has been created. Click the edit button on the profile.

  • The response profile includes Bucket, File Key, and Version ID, which are returned as the response (sketched below).

  • Click Finish, save, and close.
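The response profile roughly mirrors what an S3 PUT returns. A hedged sketch of the SDK equivalent:

import boto3

s3 = boto3.client("s3")

response = s3.put_object(Bucket="my-demo1", Key="emp.xml",
                         Body=b"<employees/>")
print(response["ETag"])           # identifier of the stored object's contents
print(response.get("VersionId"))  # populated only when bucket versioning is on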

Step 8: Drag and drop the disk connector shape onto the process canvas to read a file from a specific directory.

  • We have to configure three fields in the connector, i.e., Action, Connection, and Operation.

  • We see two actions, i.e., Get and Send, in Actions.

Get – To get data from the disk location.

Send – To send data to the disk location.

  • Here, we will choose the action GET, as we are reading a file.

  • Click + on connection to create a new one.

  • Name the connection and give the directory from which we want to read the file. Here, we are reading an XML file from the D drive, Boomi Examples folder, as shown in the screenshot.

  • Click Save and Close.
  • Now, we will configure the operation. Click + on operation to create a new one.

  • Name the operation and configure the following.

  • File Filter: Read only files with a file name that matches the file filter. Here, it will be emp.xml.

File Matching Type:

  • Wildcards use simple file filters like * and ?. * represents multiple characters and ? represents a single character.
  • Regular Expressions can include complex regular expressions.
  • Exact Match matches the literal file name we are reading.

  • Here, the file matching type will be Exact Match, as we are giving the exact file name (see the sketch after this list).

  • Maximum files to read: Sets the maximum number of files to read at one time. Leave it at the default, i.e., 0.
  • Delete files after reading: If we want the file to be deleted after it is read, we can check this option. Here, we leave it at the default. Click Save and Close.
  • The complete disk operation looks like this:
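The three matching types behave much like Python's own file-name matching tools. A sketch of the distinction, using a hypothetical directory listing:

import fnmatch
import re

files = ["emp.xml", "emp.json", "dept.xml"]  # hypothetical directory listing

# Wildcards: * matches any run of characters, ? matches a single character.
print(fnmatch.filter(files, "*.xml"))     # ['emp.xml', 'dept.xml']
print(fnmatch.filter(files, "emp.???"))   # ['emp.xml']

# Regular Expressions: arbitrary patterns.
print([f for f in files if re.fullmatch(r"emp\.(xml|json)", f)])

# Exact Match: a literal comparison against the configured name.
print([f for f in files if f == "emp.xml"])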

Step 9: Drag and drop the Set Properties shape onto the process canvas to set the Dynamic Process Property, File Name, and Bucket name.

  • Select the shape and click + on the left side to set the properties.

  • We will set a Dynamic Process Property for the file name. Select Dynamic Process Property from the drop-down as shown in the screenshot.

  • Give the property name as filename and click OK.

  • We will set the parameter to the file name that comes from the Disk connector. To set the parameter for the dynamic process property, select Dynamic Process Property and click + in the parameters section.

  • Select the type as Document Property, as shown in the screenshot.

  • Choose Disk connector from Connectors and select File Name, which assigns the file name read by the Disk connector. Click OK.

  • After setting the property and parameter for the Dynamic Process Property, it looks like this:

  • Now, select the Amazon S3 REST connector on the left side and click +. We see options to set Bucket, Folder, File Name, File Key, Content-Type, Version ID, and Bucket Region in the document property.

  • We will choose the Bucket and File Name properties and set their values in the Set Properties shape. Select Bucket and click OK.

  • Follow the same steps: choose File Name and click OK.


  • We have configured three properties (i.e., Dynamic Process Property, Bucket, and File Name) in the Set Properties shape.


  • Now, let us set values for the properties in the parameters section. Select Bucket on the left side and click + in the parameters section.

  • We will set Type as Static and, in Static Value, give the name of the bucket we created to store the data. Here, it will be my-demo1. Click OK.

  • Next, we will set the value for File Name. Select File Name and click + in the parameters section on the right-hand side.

  • Choose Type as Dynamic Process Property and assign the dynamic process property name, i.e., filename, which we configured earlier in this use case. Click OK.
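Taken together, the Set Properties shape does what the following sketch does in code: the file read from disk keeps its own name as the S3 key, while the bucket is a static value (the Windows path is the one from the disk connection above):

import boto3
from pathlib import Path

s3 = boto3.client("s3", region_name="ap-south-1")

# The file name read from disk becomes the File Name (key) property;
# the Bucket property is a static value.
path = Path(r"D:\Boomi Examples\emp.xml")
s3.put_object(Bucket="my-demo1", Key=path.name, Body=path.read_bytes())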

Step 10: Now, arrange all the shapes in order and run the test.

Step 11: Test the process by clicking the Test button on the right-hand side and configuring the atom, as shown in the screenshots below.

Step 12: We see that the process has executed and output has been generated. Click on the Stop shape.

  • Click on the document in View Source to see the response, which contains the bucket name and file key.

Step 13: Go to the AWS console and select the my-demo1 bucket; we will see that the file received from the Disk connector has been pushed into the my-demo1 bucket.

  • To view the file, click on its name.

  • Select Open, and the file will be downloaded.

  • Once we open the downloaded file, this is how it looks:
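The same verification can be done without the console. A sketch that lists the bucket's contents and downloads the file back:

import boto3

s3 = boto3.client("s3")

# Confirm the file landed in the bucket...
for obj in s3.list_objects_v2(Bucket="my-demo1").get("Contents", []):
    print(obj["Key"], obj["Size"])

# ...then pull it back down, like Open/Download in the console.
s3.download_file("my-demo1", "emp.xml", "emp-downloaded.xml")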

Author

TGH Software Solutions Pvt. Ltd.
