Host your application on the Internet

Statement : The sole purpose of this post is to learn how to host your application on the Internet so that anyone across the world can access it.

Solution :

  • Sign up for a Heroku account.
  • Download the Heroku CLI so that you can host your application from your local terminal.
  • Log in to your account with your id and password through the terminal, using the below command –

heroku login

  • Create a new repo on your GitHub account.
  • Now clone your repo on your local machine using the below command –

git clone https://github.com/guptakumartanuj/Cryptocurrency-Concierge.git

  • It’s time to develop your application. Once it is done, push your whole code to your GitHub repo using the below commands –

tangupta-mbp:Cryptocurrency-Concierge tangupta$ git add .
tangupta-mbp:Cryptocurrency-Concierge tangupta$ git commit -m "First commit of Cryptocurrency Concierge"
tangupta-mbp:Cryptocurrency-Concierge tangupta$ git push
  • Now you are ready to create a Heroku app. Use the below command for the same –
$ cd ~/workingDir
$ heroku create
Creating app... done, ⬢ any-random-name
https://any-random-name.herokuapp.com/ | https://git.heroku.com/any-random-name.git
  • Now deploy your application to Heroku using the below command –

tangupta-mbp:Cryptocurrency-Concierge tangupta$ git push heroku master

  • It’s time to access your hosted application using the URL printed by heroku create (https://any-random-name.herokuapp.com/ above). But most probably you won’t be able to access it yet; make sure one instance of your hosted application is running. Use the below command to do the same –

heroku ps:scale web=1

  • In case you get the below error while running the above command, you need to create a file named Procfile (with no extension) in the repo root, add it to the git repo, and push the repo to Heroku again.

Scaling dynos... !

    Couldn't find that process type.

  • In my case, to run my Spring Boot application, I added the following command in the Procfile –

          web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/*.war
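  • As a side note, if your build needs a specific Java version, Heroku also reads a system.properties file at the repo root; a minimal example (the version shown is just an illustration, not something this deployment requires) –

java.runtime.version=1.8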

  • Finally, your application should be up and running. In case you face any issues while pushing or running your application, you can check the Heroku logs, which will help you troubleshoot, using the below command –

heroku logs --tail

Enjoy coding and Happy Learning 🙂 

 


Pagination Support in Document DB using Java SDK API

Statement : The sole purpose of this post is to implement pagination in your application using the Document DB Java SDK API.

Solution :

  1. First you need to create a FeedOptions object, setting the page size (in my case, 10), using the below snippet –

final FeedOptions feedOptions = new FeedOptions();
feedOptions.setPageSize(10);

  2. Now you need to get the data using the queryDocuments() API of the Java SDK, passing the above feedOptions in the API’s arguments –

FeedResponse<Document> feedResults = documentClient.queryDocuments(collectionLink, queryString, feedOptions);

  3. This is the main step, which lets you get responses sized to the page limit you set. The continuation token makes this possible. Use the below snippet to get the continuation token, which can be used for pagination in future calls –

String continuationToken = feedResults.getResponseContinuation();

  4. Now read the current block (page) of results and check whether a next page exists –

List<String> finalRes = new ArrayList<String>();
boolean nextFlag = false;

if (feedResults != null) {
    // fetchNextBlock() returns at most one page of documents (10 here),
    // and may throw DocumentClientException.
    List<Document> docs = feedResults.getQueryIterable().fetchNextBlock();
    if (docs != null) {
        for (Document doc : docs) {
            finalRes.add(doc.toJson());
        }
        // If the iterator still has elements, at least one more page exists.
        if (feedResults.getQueryIterable().iterator().hasNext()) {
            nextFlag = true;
        }
    }
}
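To actually serve the next page in a later call, pass the saved continuation token back through FeedOptions. A minimal sketch, assuming the same documentClient, collectionLink, and queryString as above –

FeedOptions nextPageOptions = new FeedOptions();
nextPageOptions.setPageSize(10);
// Resume the query exactly where the previous page ended.
nextPageOptions.setRequestContinuation(continuationToken);

FeedResponse<Document> nextPage =
        documentClient.queryDocuments(collectionLink, queryString, nextPageOptions);
List<Document> nextDocs = nextPage.getQueryIterable().fetchNextBlock();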

I hope this helps you add pagination support to your application. Keep in mind that this only works on a single partition. For a cross-partition query, the continuation token does not work: it has a range component, and the query will return you a null token. Enjoy coding 🙂

 

Implement secure HTTP server using Grizzly and Jersey

Statement : The purpose of this post is to implement a secure HTTP server using Grizzly and Jersey.

Solution :

  • First you need to create the keystore and truststore files using the below commands; keytool will ask you certain details about your organization –

keytool -genkey -keyalg RSA -keystore ./keystore_client -alias clientKey
keytool -export -alias clientKey -rfc -keystore ./keystore_client > ./client.cert
keytool -import -alias clientCert -file ./client.cert -keystore ./truststore_server

keytool -genkey -keyalg RSA -keystore ./keystore_server -alias serverKey
keytool -export -alias serverKey -rfc -keystore ./keystore_server > ./server.cert
keytool -import -alias serverCert -file ./server.cert -keystore ./truststore_client

  • Pass an SSLEngineConfigurator built from an SSLContextConfigurator object (containing the details of the keystore and truststore files) to GrizzlyHttpServerFactory.createHttpServer(), as per the given below code –

    private static final String KEYSTORE_LOC = "keystore_server";
    private static final String KEYSTORE_PASS = "123456";
    private static final String TRUSTSTORE_LOC = "truststore_server";
    private static final String TRUSTSTORE_PASS = "123456";

    SSLContextConfigurator sslCon = new SSLContextConfigurator();
    sslCon.setKeyStoreFile(KEYSTORE_LOC);
    sslCon.setKeyStorePass(KEYSTORE_PASS);
    sslCon.setTrustStoreFile(TRUSTSTORE_LOC);
    sslCon.setTrustStorePass(TRUSTSTORE_PASS);

    URI BASE_URI = URI.create("http://0.0.0.0:" + config.getPort());
    String resources = "com.secure.server.main";

    // Swagger configuration for the API listing
    BeanConfig beanConfig = new BeanConfig();
    beanConfig.setVersion("1.0.1");
    beanConfig.setSchemes(new String[] { "https" });
    beanConfig.setBasePath("");
    beanConfig.setResourcePackage(resources);
    beanConfig.setScan(true);

    final ResourceConfig rc = new ResourceConfig();
    rc.packages(resources);
    rc.register(io.swagger.jaxrs.listing.ApiListingResource.class);
    rc.register(io.swagger.jaxrs.listing.SwaggerSerializers.class);
    rc.register(JacksonFeature.class);
    rc.register(JacksonJsonProvider.class);
    rc.register(new CrossDomainFilter());

    return GrizzlyHttpServerFactory.createHttpServer(BASE_URI, rc, true,
            new SSLEngineConfigurator(sslCon).setClientMode(false).setNeedClientAuth(false));
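For a quick sanity check of the server, a Jersey client wired with the client-side stores generated above might look like the following sketch (the https://localhost:8080 target and the store passwords are assumptions; with self-signed certificates you may also need a custom HostnameVerifier) –

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.SslConfigurator;

public class SecureClientTest {
  public static void main(String[] args) {
    // Build an SSLContext from the client keystore/truststore created via keytool.
    SslConfigurator sslConfig = SslConfigurator.newInstance()
        .keyStoreFile("keystore_client").keyStorePassword("123456")
        .trustStoreFile("truststore_client").trustStorePassword("123456");

    Client client = ClientBuilder.newBuilder()
        .sslContext(sslConfig.createSSLContext()).build();

    // Hypothetical endpoint; replace with a path your server actually exposes.
    String response = client.target("https://localhost:8080/").request().get(String.class);
    System.out.println(response);
  }
}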

  • The job is done. Now you just need to integrate all the code together. You can refer to my GitHub repo to get the full code of the implementation. Happy coding 🙂

How to avoid getting extra double quotes and slashes in your API JSON response

Statement – While working with REST APIs, most of the time we expect a clean JSON response from the API. But sometimes we get an unstructured JSON response, like the children array below –

{
  "_page": {
    "start": 0,
    "next": 12,
    "count": 2
  },
  "children": [
    "{\"_attachments\":\"attachments/\",\"_rid\":\"wEIcANyrmAULAAAAAAAAAA==\",\"id\":\"11\",\"_self\":\"dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAULAAAAAAAAAA==/\",\"value\":\"Vanessa\",\"key\":\"38\",\"_etag\":\"\\\"0c0054c7-0000-0000-0000-59ca35ab0000\\\"\",\"_ts\":1506424239}",
    "{\"_attachments\":\"attachments/\",\"_rid\":\"wEIcANyrmAUMAAAAAAAAAA==\",\"id\":\"12\",\"_self\":\"dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUMAAAAAAAAAA==/\",\"value\":\"Neal\",\"key\":\"13\",\"_etag\":\"\\\"0c0056c7-0000-0000-0000-59ca35ab0000\\\"\",\"_ts\":1506424239}"
  ]
}

So to avoid this, we just need to take care of a few things in our implementation –

  1. In the root response class, use a JSONArray object instead of a List of String or Object, like below –

public class PageResponse {

    @JsonProperty("_page")
    private Pagination page;

    @JsonProperty("records")
    private JSONArray records = null;

    // getters and setters omitted
}

  2. Wherever you are setting this JSONArray, set it with the following piece of code –

import org.json.simple.JSONArray;
import org.json.simple.parser.JSONParser;

// finalRes holds the raw JSON strings to be returned.
List<Object> finalRes = new ArrayList<Object>();

JSONParser parser = new JSONParser();
// parse() throws ParseException, which should be handled by the caller.
JSONArray finalJsonArray = (JSONArray) parser.parse(finalRes.toString());

PageResponse pageResponse = new PageResponse();
pageResponse.setRecords(finalJsonArray);
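For instance, if these records came from the Document DB pagination example earlier, finalRes would be filled with the doc.toJson() strings before the parse — a sketch reusing the names from that post –

List<Object> finalRes = new ArrayList<Object>();
for (Document doc : feedResults.getQueryIterable().fetchNextBlock()) {
    finalRes.add(doc.toJson());   // each entry is a raw JSON string
}
// finalRes.toString() now looks like [{...}, {...}], which parses into a
// JSONArray of real objects instead of quoted, backslash-escaped strings.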

In this way, you get the clean JSON response from your REST API, as given below –

{
  "_page": {
    "start": 0,
    "next": 13,
    "count": 3
  },
  "records": [
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAULAAAAAAAAAA==",
      "id": "11",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAULAAAAAAAAAA==/",
      "value": "Vanessa",
      "key": "38",
      "_etag": "\"0c0054c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUMAAAAAAAAAA==",
      "id": "12",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUMAAAAAAAAAA==/",
      "value": "Neal",
      "key": "13",
      "_etag": "\"0c0056c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUNAAAAAAAAAA==",
      "id": "13",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUNAAAAAAAAAA==/",
      "value": "Marguerite",
      "key": "13",
      "_etag": "\"0c0058c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    }
  ]
}

Hope it helps. 🙂

Approaches to achieve Multi-Tenancy in your application through Document DB

Introduction :

We can achieve multi-tenancy by partitioning either the database or the collection as per the tenants’ data size. As per the Microsoft Azure recommendation, if tenant data is smaller and the number of tenants is higher, we should store data for multiple tenants in the same collection to reduce the overall resources necessary for the application. We can identify a tenant by a property in the document and issue filter queries to retrieve tenant-specific data. We can also use Document DB’s users and permissions to isolate tenant data and restrict access at the resource level via authorization keys.

In cases where tenants are larger, need dedicated resources, or require further isolation, we can allocate dedicated collections or databases for them. In either case, Document DB will handle a significant portion of the operational burden for us as we scale out our application’s data store.

Approaches to achieve Multi-Tenancy :

The below approaches can be used to achieve multi-tenancy in your application –

  1. By using a single database having one collection : In this approach, we have one database for our application and create one collection under it, which acts as the container for all the JSON documents. Due to the storage and throughput limitations of a single collection, we need to enforce partitioning within this collection, which we can achieve using a Data-Set Id as the partition key; this way we achieve multi-tenancy as well. Security can be enforced at the Document DB level too, by creating a user for each tenant, assigning permissions to the tenant’s data-set, and querying the tenant’s data-set via the user’s authorization key (a query sketch for this approach is given after this list).
Multi-Tenancy First Approach
  • Pros : Major benefits of storing tenant data within a single collection include reduced complexity, transactional support across application data, and minimal storage cost.
  • Cons : One collection cannot store a higher amount of data using this model, and in turn requests will be throttled after reaching the storage and throughput limit of Document DB for one collection.
  2. By using a single database having multiple collections : In this approach, we have one database for our application and create multiple collections under it based on the tenant id. We can then partition data across these collections based on the data-set id. Fortunately, there is no need to set a limit on the size of the database or the number of collections; Document DB allows us to dynamically add collections and capacity as the application grows.
Multi-Tenancy Second Approach
  • Pros : Major benefits of storing tenant data within multiple collections include increased resource capacity and higher throughput. Moreover, we can place a tenant who needs dedicated throughput on their own collection, based on the permission given to the user for the resource (Collection, Document, Attachment, Stored Procedure, Trigger, UDF).
  • Cons : High cost, as the pricing of Document DB increases as soon as a new collection is created, based on the required throughput in a region.
  3. By using multiple databases having multiple collections : In this approach, we have multiple databases based on the tenant id, and under each database we create multiple collections based on the tenant’s data-sets. For the most part, placing tenants across databases works fairly similarly to placing tenants across collections. The major distinction between Document DB collections and databases is that users and permissions are scoped at the database level; each database has its own set of users and permissions, which you can use to isolate specific collections and documents.
Multi-Tenancy Third Approach
  • Pros : Major benefits of storing tenant data within multiple databases include increased resource capacity and higher throughput. Here we can place a tenant in their own database, with user permissions at the DB level.
  • Cons : Again, high cost, as the pricing of Document DB increases as soon as a new collection is created in the respective database, based on the required throughput in a region.

  4. By using multiple Azure accounts having their respective databases/collections : In this approach, we have multiple Azure accounts based on the tenant id, and we create an individual database for each tenant. Collections are then created based on the tenant’s data-sets under the respective database. Here the data of each tenant is completely separated from the others.

Multi-Tenancy Fourth Approach

  • Pros : The major benefit of storing tenant data within multiple accounts is security enforced at the account level. It also brings increased resource capacity and higher throughput.
  • Cons : Again, very high cost, due to the subscription of a new account for every tenant.
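To make the first approach concrete, below is a minimal sketch of a tenant-scoped filter query against a shared collection (the account URL, key, database/collection names, and the tenantId property are all assumptions) –

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.FeedResponse;

public class TenantQuery {
  public static void main(String[] args) {
    DocumentClient client = new DocumentClient("https://<account>.documents.azure.com:443/",
        "<master-key>", ConnectionPolicy.GetDefault(), ConsistencyLevel.Session);

    // Each document carries a tenantId property; filter on it per request.
    String query = "SELECT * FROM c WHERE c.tenantId = 'tenant-42'";
    FeedResponse<Document> response = client.queryDocuments(
        "dbs/appDb/colls/sharedCollection", query, new FeedOptions());

    for (Document doc : response.getQueryIterable()) {
      System.out.println(doc.toJson());
    }
  }
}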

Comparison of the first three approaches based on a few parameters :

Storage of tenant’s data-set
  1st : A single collection is partitioned based on the partition key (Tenant Id + Data-Set Id).
  2nd : A single database is split into collections based on the Tenant Id, and each collection is further partitioned based on the Data-Set Id of the tenant.
  3rd : Multiple databases are created based on the Tenant Id, and each database is split into collections based on the Data-Set Id of the tenant.

Handling a very large data-set
  1st : Not easily feasible, because of the limitation of storage and throughput at the collection level.
  2nd : Easily feasible by using multiple partitions of a collection, but we can’t enforce throughput at the partition level.
  3rd : Very easily feasible by creating a separate collection for each data-set, but we can’t enforce throughput at the database level.

Handling of hot spots (generic)
  Document DB provides a way to scale the throughput up/down as required, at different pricing schemes. Same for all three approaches.

Cost as per storage/throughput (generic)
  For a single-partition collection, the collection is limited to 250, 1000, or 2500 RU/s of throughput and 10 GB of storage on an S1 (₹2.58/hr), S2 (₹5.07/hr), or S3 (₹10.14/hr) collection respectively. As per the new Standard pricing model, we are charged based on the throughput level we choose, between 400 and 10,000 RU/s; and yes, we can switch back to S1, S2, or S3 collections if we change our mind.

Throttling of requests (generic)
  Our application will be behind Adobe IO, which will take care of throttling the requests. Apart from Adobe IO, Document DB gives us a provision to throttle requests at the collection level, but we can’t apply throttling at the DB level.

Primary key
  By default, Document DB generates a unique id (string) for each document, but we can make it a combination of tenant id, data-set id, and time-stamp (discussion in progress). Same for all three approaches.

 

Hope this works for you, and that you can now implement multi-tenant scenarios in your application easily. Enjoy 🙂

Approaches for Bulk/Batch Insert and Read through Document DB Java

Bulk/Batch Insert :

As per the Azure documentation, there are mainly three ways to bulk-insert documents into Document DB –

  1. Using the data migration tool, as described in Database migration tool for Azure Cosmos DB.
  2. Using single inserts, i.e. the createDocument() API, for multiple documents one at a time.
  3. Stored procedures, as described in Server-side JavaScript programming for Azure Cosmos DB (invocation sketched below).
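For the stored-procedure route, the Java SDK can invoke a bulk-insert sproc that is already registered on the collection. A minimal sketch — the sproc name bulkImport, the link names, and the docs batch are assumptions –

import com.microsoft.azure.documentdb.StoredProcedureResponse;

// Assumes an existing DocumentClient (documentClient) and a sproc named
// "bulkImport" registered on the collection; docs is one batch of documents.
String sprocLink = "dbs/appDb/colls/myCollection/sprocs/bulkImport";
StoredProcedureResponse response =
        documentClient.executeStoredProcedure(sprocLink, new Object[] { docs });
System.out.println("Sproc response: " + response.getResponseAsString());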

    Statistics of bulk insert using a stored procedure, based on different parameters –

    Total Records    Batch Size    Throughput (RU/s)    Execution Time (secs)
    2000             50            400                  23
    2000             200           400                  16
    2000             400           10000                10
    2000             2000          10000                6

*Sample record –
{
  "id": "integer",
  "key": "integer",
  "value": "string"
}
Statistics of bulk insert using the single createDocument() API, based on different parameters –

Total Records    Batch Size    Throughput (RU/s)    Execution Time (secs)
2000             1             400                  566
2000             1             10000                556
  • Apart from this, Document DB supports two modes in the DocumentClient() API to insert documents –
    • ConnectionMode.DirectHttps (all the API calls communicate directly with the server)
    • ConnectionMode.Gateway (all the API calls first go through the gateway and then to the server)

I tried both modes while implementing the bulk insert API, but there was no noticeable difference in performance.
Bulk/Batch Read :

For bulk read, Document DB supports setting the page size as per your convenience.

FeedOptions feedOptions = new FeedOptions();
feedOptions.setPageSize(5);

Now we can pass this feedOptions object in the arguments of queryDocuments(), which returns a FeedResponse, and using this response we can get the results in blocks of the page size. However, there is no built-in support for moving forward and backward to an arbitrary point; we need to write our own logic to jump to the appropriate page.
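As an illustration, a sketch of that jump-ahead logic, assuming the documentClient, collectionLink, and queryString names used earlier in this blog –

FeedOptions pagedOptions = new FeedOptions();
pagedOptions.setPageSize(5);
FeedResponse<Document> feed =
        documentClient.queryDocuments(collectionLink, queryString, pagedOptions);

int targetPage = 3;              // hypothetical page we want to land on
List<Document> page = null;
for (int i = 0; i < targetPage; i++) {
    // Each fetchNextBlock() call returns the next page of (at most) 5 documents.
    page = feed.getQueryIterable().fetchNextBlock();
    if (page == null) break;     // fewer pages than requested
}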

Note* – If we want the results in sorted order, we can write the query using the ORDER BY keyword for the desired attribute ordering, like below –

SELECT * FROM c ORDER BY c.surname

Hope it helps 🙂

Dependency Injection using Grizzly and Jersey

Statement: Implementation of Dependency Injection using Grizzly and Jersey

Please follow the below steps to do the same –

  • Create a class called Hk2Feature which implements Feature.

package com.sample.di;

import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

@Provider
public class Hk2Feature implements Feature {

  public boolean configure(FeatureContext context) {
    context.register(new MyAppBinder());
    return true;
  }
}

  • Create a class called MyAppBinder, which extends AbstractBinder, and register all your services there, like below –

package com.sample.di;

import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class MyAppBinder extends AbstractBinder {

  @Override
  protected void configure() {
    bind(MainService.class).to(MainService.class);
  }
}

  • Now it’s time to write your own services and inject the required services into the appropriate controllers, as in the below code –

package com.sample.di;

public class MainService {

  public String testService(String name) {
    return "Hi " + name + "..Testing Dependency Injection using Grizzly Jersey";
  }
}

package com.sample.di;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/main")
public class MainController {

  @Inject
  public MainService mainService;

  @GET
  public String get(@QueryParam("name") String name) {
    return mainService.testService(name);
  }

  @GET
  @Path("/test")
  @Produces(MediaType.APPLICATION_JSON)
  public String ping() {
    return "OK";
  }
}
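To tie these pieces together, a minimal bootstrap might look like the sketch below (the base URI is an assumption; @Provider classes such as Hk2Feature are picked up by the package scan) –

package com.sample.di;

import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class Main {
  public static void main(String[] args) {
    // Scanning the package registers MainController and the @Provider Hk2Feature.
    ResourceConfig rc = new ResourceConfig().packages("com.sample.di");
    HttpServer server = GrizzlyHttpServerFactory
        .createHttpServer(URI.create("http://localhost:8080/"), rc);
    System.out.println("Server running at http://localhost:8080/main?name=Tanuj");
  }
}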

Now hit the URL http://localhost:8080/main?name=Tanuj and you will get your result. This is how you can achieve dependency injection in a Grizzly Jersey application. Find the detailed implementation of the above skeleton in my repo. Happy Coding 🙂