Implement secure HTTP server using Grizzly and Jersey

Statement : The purpose of this post is to implement a secure HTTP (HTTPS) server using Grizzly and Jersey.

Solution :

  • First, create the keystore and truststore files using the commands below. Each command will prompt you for certain details about your organization –

keytool -genkey -keyalg RSA -keystore ./keystore_client -alias clientKey
keytool -export -alias clientKey -rfc -keystore ./keystore_client > ./client.cert
keytool -import -alias clientCert -file ./client.cert -keystore ./truststore_server

keytool -genkey -keyalg RSA -keystore ./keystore_server -alias serverKey
keytool -export -alias serverKey -rfc -keystore ./keystore_server > ./server.cert
keytool -import -alias serverCert -file ./server.cert -keystore ./truststore_client

  • Add an SSLContextConfigurator object (containing the details of the keystore and truststore files) to the GrizzlyHttpServerFactory.createHttpServer() call, wrapped in an SSLEngineConfigurator, as per the code given below –

    private static final String KEYSTORE_LOC = "keystore_server";

    private static final String KEYSTORE_PASS = "123456";

    private static final String TRUSTSTORE_LOC = "truststore_server";

    private static final String TRUSTSTORE_PASS = "123456";

    SSLContextConfigurator sslCon = new SSLContextConfigurator();
    sslCon.setKeyStoreFile(KEYSTORE_LOC);
    sslCon.setKeyStorePass(KEYSTORE_PASS);
    sslCon.setTrustStoreFile(TRUSTSTORE_LOC);
    sslCon.setTrustStorePass(TRUSTSTORE_PASS);

    // Host name below is an assumption; adjust to your setup
    URI BASE_URI = URI.create("https://localhost:" + config.getPort());

        String resources = "";

        BeanConfig beanConfig = new BeanConfig();
        beanConfig.setResourcePackage(resources);
        beanConfig.setSchemes(new String[] { "https" });
        beanConfig.setScan(true);

        final ResourceConfig rc = new ResourceConfig();
        rc.packages(resources);
        rc.register(new CrossDomainFilter());

    return GrizzlyHttpServerFactory.createHttpServer(BASE_URI, rc, true,
        new SSLEngineConfigurator(sslCon).setClientMode(false).setNeedClientAuth(false));
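To verify the secured server, the client side can build an SSLContext from the keystore_client/truststore_client files created with keytool earlier. A minimal sketch using Jersey's SslConfigurator; the class name, target URL, and port are assumptions:

```java
import javax.net.ssl.SSLContext;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.glassfish.jersey.SslConfigurator;

public class SecureClient {
    public static void main(String[] args) {
        // Build an SSLContext from the client keystore/truststore created earlier
        SSLContext sslContext = SslConfigurator.newInstance()
                .trustStoreFile("truststore_client")
                .trustStorePassword("123456")
                .keyStoreFile("keystore_client")
                .keyPassword("123456")
                .createSSLContext();

        // Jersey client that trusts the server certificate imported above
        Client client = ClientBuilder.newBuilder().sslContext(sslContext).build();
        String response = client.target("https://localhost:8080/").request().get(String.class);
        System.out.println(response);
    }
}
```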

  • Job done. Now you just need to integrate all the code together. You can refer to my GitHub link to get the full code of the implementation. Happy coding 🙂

How to avoid getting extra double quotes and slashes in your API JSON response

Statement – While working with REST APIs, most of the time we expect a clean JSON response from the API. But sometimes we get an unstructured JSON response, like the one below in the case of the children array –

"_page": {
  "start": 0,
  "next": 12,
  "count": 2
},
"children": [

To avoid this, we just need to take care of a few things in our implementation –

  1. In the root class object, use a JSONArray object instead of a List of String or Object, like below –

public class PageResponse {

    private Pagination page;

    private JSONArray records = null;

    // getters and setters omitted
}


  2. Wherever you set this JSONArray, set it with the following piece of code –

List<Object> finalRes = new ArrayList<Object>();
// ... populate finalRes with your response objects ...

JSONParser parser = new JSONParser();
JSONArray finalJsonArray = (JSONArray) parser.parse(finalRes.toString());

PageResponse pageResponse = new PageResponse();
pageResponse.setRecords(finalJsonArray); // setter name assumed from the class above


In this way, you will get a clean JSON response from your REST API, as given below –

{
  "_page": {
    "start": 0,
    "next": 13,
    "count": 3
  },
  "records": [
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAULAAAAAAAAAA==",
      "id": "11",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAULAAAAAAAAAA==/",
      "value": "Vanessa",
      "key": "38",
      "_etag": "\"0c0054c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUMAAAAAAAAAA==",
      "id": "12",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUMAAAAAAAAAA==/",
      "value": "Neal",
      "key": "13",
      "_etag": "\"0c0056c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUNAAAAAAAAAA==",
      "id": "13",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUNAAAAAAAAAA==/",
      "value": "Marguerite",
      "key": "13",
      "_etag": "\"0c0058c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    }
  ]
}

Hope it helps. 🙂

Approaches to achieve Multi-Tenancy in your application through Document DB

Introduction :

We can achieve the multi-tenancy by either partitioning the database or collection as per the tenant’s data size. As per the Microsoft Azure recommendation, if tenant data is smaller and tenancy numbers are higher then we should store data for multiple tenants in the same collection to reduce the overall resources necessary for the application. We can identify our tenant by a property in the document and issue filter queries to retrieve tenant-specific data. We can also use Document DB’s users and permissions to isolate tenant data and restrict access at a resource-level via authorization keys.

In cases where tenants are larger, need dedicated resources, or require further isolation, we can allocate dedicated collections or databases for them. In either case, Document DB will handle a significant portion of the operational burden for us as we scale out our application's data store.

Approaches to achieve Multi-Tenancy :

There are several approaches, listed below, to achieve multi-tenancy in your application –

  1. By using a single database having one collection: In this approach, we will have one database for our application, with one collection (a container to store all the JSON documents) under this database. Due to the storage and throughput limitations of a single collection, we need to enforce partitioning in this collection; the data-set id can act as the partition key, and this way we achieve multi-tenancy as well. Security can be enforced at the Document DB level too, by creating a user for each tenant, assigning permissions to the tenant's data-set, and querying the tenant data-set via the user's authorization key.
Multi-Tenancy First Approach
  •        Pros : Major benefits of storing tenant data within a single collection include reducing complexity, ensuring transactional support across application data, and minimizing financial cost of storage.
  •        Cons : One collection can't store a large amount of data using this model, and in turn requests will be throttled after reaching the storage and throughput limit of Document DB for one collection.
  2. By using a single database having multiple collections : In this approach, we will have one database for our application, and we will create multiple collections under it based on the tenant id. We can then partition data across these collections based on the data-set id. Fortunately, there is no limit on the size of your database or the number of collections; Document DB allows us to dynamically add collections and capacity as the application grows.
Multi-Tenancy Second Approach
  •       Pros : Major benefits of storing tenant data within multiple collections include increased resource capacity and higher throughput. Moreover, we can place a tenant who needs dedicated throughput onto their own collection, based on the permissions given to the user for the resource (Collection, Document, Attachment, Stored Procedure, Trigger, UDF).
  •       Cons : Higher cost, as the pricing of Document DB increases as soon as a new collection is created, based on the required throughput in a region.
  3. By using multiple databases having multiple collections : In this approach, we will have multiple databases based on the tenant id, and we will create multiple collections based on the tenant data-set under the respective database. For the most part, placing tenants across databases works fairly similarly to placing tenants across collections. The major distinction between Document DB collections and databases is that users and permissions are scoped at the database level: each database has its own set of users and permissions, which you can use to isolate specific collections and documents.
Multi-Tenancy Third Approach
  •       Pros : Major benefits of storing tenant data within multiple databases include increased resource capacity and higher throughput. Here we can place each tenant in their own database, with user permissions at the database level.
  •       Cons : Again, high cost, as the pricing of Document DB increases as soon as a new collection is created in the respective database, based on the required throughput in a region.

4. By using multiple Azure accounts having their respective databases/collections : In this approach, we will have multiple Azure accounts based on the tenant id, and we will create an individual database for each tenant. Collections will then be created based on the tenant data-set under the respective database. Here, each tenant's data is completely separated from the others'.

Multi-Tenancy Forth Approach


  •       Pros : The major benefit of storing tenant data within multiple accounts is that security is enforced at the account level. It also brings increased resource capacity and higher throughput.
  •       Cons : Again, very high cost, due to the subscription of a new account for every tenant.
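The tenant-filtering idea from the first approach can be sketched with the Document DB Java SDK. The endpoint, key, collection link, and property name below are all assumptions; only the query-by-tenant-property pattern is the point:

```java
import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.FeedResponse;
import com.microsoft.azure.documentdb.SqlParameter;
import com.microsoft.azure.documentdb.SqlParameterCollection;
import com.microsoft.azure.documentdb.SqlQuerySpec;

public class TenantQuery {
    public static void main(String[] args) {
        DocumentClient client = new DocumentClient(
                "https://<account>.documents.azure.com:443/", // assumed endpoint
                "<master-or-tenant-authorization-key>",       // per-tenant key isolates access
                ConnectionPolicy.GetDefault(),
                ConsistencyLevel.Session);

        // Filter documents by a tenant-identifying property (approach 1)
        SqlQuerySpec query = new SqlQuerySpec(
                "SELECT * FROM c WHERE c.tenantId = @tenantId",
                new SqlParameterCollection(new SqlParameter("@tenantId", "tenant-42")));

        FeedResponse<Document> response = client.queryDocuments(
                "dbs/appdb/colls/appcoll", query, new FeedOptions()); // assumed link

        for (Document doc : response.getQueryIterable()) {
            System.out.println(doc.toJson());
        }
    }
}
```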

Comparison of the first three approaches, based on a few parameters :

  • Storage of tenant's data-set
    – 1st : The single collection is partitioned based on the partition key (Tenant Id + Data-Set Id).
    – 2nd : The single database is partitioned into collections based on the Tenant Id, and each collection is further partitioned based on the Data-Set Id of the tenant.
    – 3rd : Multiple databases are created based on the Tenant Id, and each database is partitioned into collections based on the Data-Set Id of the tenant.

  • Handling a very large data-set
    – 1st : Not easily feasible, because of the storage and throughput limitations at the collection level.
    – 2nd : Easily feasible by using multiple partitions of a collection, but we can't enforce throughput at the partition level.
    – 3rd : Very easily feasible by creating a separate collection for each data-set, but we can't enforce throughput at the database level.

  • Handling of hot spots (generic) : Document DB provides a way to scale throughput up/down as required, at different pricing tiers. Same for all three approaches.

  • Cost as per storage/throughput (generic) : For a single-partition collection, a collection is limited to 250, 1000, or 2500 RU/s of throughput and 10 GB of storage on an S1 (₹2.58/hr), S2 (₹5.07/hr), or S3 (₹10.14/hr) collection respectively. Under the new Standard pricing model, we are charged based on the throughput level we choose, between 400 and 10,000 RU/s. And yes, we can switch back to S1, S2, or S3 collections if we change our mind.

  • Throttling of requests (generic) : Our application will be behind Adobe IO, which will take care of request throttling. Apart from Adobe IO, Document DB gives us a provision to throttle requests at the collection level, but we can't apply throttling at the database level.

  • Primary key : By default, Document DB generates a unique id (string) for each document, but we can make it a combination of tenant id, data-set id, and time-stamp (discussion in progress). Same for all three approaches.


Hope it works for you, and now you will be able to implement multi-tenant scenarios in your application easily. Enjoy 🙂

Approaches for Bulk/Batch Insert and Read through Document DB Java

Bulk/Batch Insert :

As per the Azure documentation, there are mainly three ways to bulk insert documents into Document DB –

  1. Using the data migration tool, as described in Database migration tool for Azure Cosmos DB.
  2. Using the single insert i.e. createDocument() API for multiple documents but one at a time.
  3. Stored procedures, as described in Server-side JavaScript programming for Azure Cosmos DB.

    Statistics of bulk insert using a stored procedure, based on different parameters –

    Total Records | Batch Size | Throughput (RU/s) | Execution Time (secs)
    2000          | 200        | 400               | 16
    2000          | 400        | 10000             | 10
    2000          | 2000       | 10000             | 6

*Sample record :
{
  "id": "integer",
  "key": "integer",
  "value": "string"
}

Statistics of bulk insert using the single createDocument() API, based on different parameters –

Total Records | Batch Size | Throughput (RU/s) | Execution Time (secs)
2000          | 1          | 10000             | 556
  • Apart from this, Document DB supports two modes for the DocumentClient() API to insert documents –
    • ConnectionMode.DirectHttps (all the API calls communicate directly with the server)
    • ConnectionMode.Gateway (all the API calls first go to the gateway and then to the server)

          I tried both modes while implementing the bulk insert API, but there was no real difference in performance.
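The stored-procedure route from the statistics above can be sketched with the Java SDK as follows. The endpoint, key, stored-procedure link, and batch contents are assumptions, and the 'bulkImport' sproc itself must already be deployed to the collection:

```java
import java.util.ArrayList;
import java.util.List;

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentClientException;

public class BulkInsert {
    public static void main(String[] args) throws DocumentClientException {
        DocumentClient client = new DocumentClient(
                "https://<account>.documents.azure.com:443/", // assumed endpoint
                "<authorization-key>",
                ConnectionPolicy.GetDefault(),
                ConsistencyLevel.Session);

        // Build one batch of documents; batch size drives the numbers in the table above
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            batch.add("{\"id\":\"" + i + "\",\"key\":\"" + i + "\",\"value\":\"v" + i + "\"}");
        }

        // One round-trip executes the server-side sproc for the whole batch
        String sprocLink = "dbs/appdb/colls/appcoll/sprocs/bulkImport"; // assumed link
        client.executeStoredProcedure(sprocLink, new Object[] { batch });
    }
}
```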
Bulk/Batch Read :

For bulk read, Document DB supports setting the page size as per your convenience.

FeedOptions feedOptions = new FeedOptions();
feedOptions.setPageSize(100); // page size of your choice

Now we can pass this feedOptions object to queryDocuments(), which returns a FeedResponse; using this response we can get the results in blocks of the page size. However, there is no built-in support for jumping forward or backward to an arbitrary page at any point in time, so we need to write our own logic to jump to the appropriate page.

Note* – If we want the results in sorted order, we can write the query using the ORDER BY keyword on the desired attribute, like below –

SELECT * FROM c ORDER BY c.surname
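Putting the pieces together, a hedged sketch of a paged read; the collection link and attribute name are assumptions, and 'client' is an initialized DocumentClient as in the earlier snippets:

```java
import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.FeedResponse;

public class PagedRead {
    // Reads documents page by page; 'client' is an initialized DocumentClient
    static void readPages(DocumentClient client) {
        FeedOptions feedOptions = new FeedOptions();
        feedOptions.setPageSize(100); // block size of each fetch

        FeedResponse<Document> response = client.queryDocuments(
                "dbs/appdb/colls/appcoll",            // assumed collection link
                "SELECT * FROM c ORDER BY c.surname", // sorted as per the note above
                feedOptions);

        // The iterable fetches further pages lazily, pageSize documents at a time
        for (Document doc : response.getQueryIterable()) {
            System.out.println(doc.getString("surname"));
        }
    }
}
```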

Hope it helps. Rocks 🙂

Dependency Injection using Grizzly and Jersey

Statement: Implementation of Dependency Injection using Grizzly and Jersey

Please follow the below steps to do the same –

  • Create a class called Hk2Feature which implements Feature.

package com.sample.di;

import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;

public class Hk2Feature implements Feature {

  public boolean configure(FeatureContext context) {
    context.register(new MyAppBinder());
    return true;
  }
}

  • Create a class called MyAppBinder which extends AbstractBinder; you need to register all your services here, like below –

package com.sample.di;

import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class MyAppBinder extends AbstractBinder {

  @Override
  protected void configure() {
    // Bind each service to its contract; MainService is the example used below
    bind(MainService.class).to(MainService.class);
  }
}

  • Now it's time to write your own services and inject the required services into the appropriate controllers, as in the code below –

package com.sample.di;

public class MainService {

  public String testService(String name) {
    return "Hi " + name + "..Testing Dependency Injection using Grizzly Jersey";
  }
}

package com.sample.di;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("/main")
public class MainController {

  @Inject
  public MainService mainService;

  @GET
  public String get(@QueryParam("name") String name) {
    return mainService.testService(name);
  }

  @GET
  @Path("/ping")
  public String ping() {
    return "OK";
  }
}
Now hit the url http://localhost:8080/main?name=Tanuj and you will get your result. This is how you can achieve dependency injection in Grizzly Jersey application. Find the detailed implementation of the above skeleton in my repo. Happy Coding 🙂

Tips to solve DS and Algorithm related problems.

While facing an interview, don't jump into the solution of the problem immediately. Instead, think of a data structure that can be applied to solve the problem. Given below is a list of data structures and algorithms/approaches you can think of; it is recommended to have all of these at your fingertips as well –

  • Brute Force (For/while Loops) => Complexity (exponential, n^k, etc)
  • Sorting and Searching => Complexity ( n^k, n, logn etc)
  • HashMap, HashTable, HashSet => Complexity ( n, 1 etc)
  • Stack(LIFO), Queue(FIFO) => Complexity ( n etc)
  • Heap, PriorityQueue => Complexity ( nlogn, n, logn, etc)
  • Recursion and DP => Complexity ( exponential, n^k, etc)
  • Tree, BST, Tree traversal, Tries =>Complexity ( n, logn, k etc)
  • Graphs (DFS {Stack}, BFS {Queue}, Topological) => Complexity ( E + V log V, V + E, etc)
  • LinkedList, DoublyLinkedList=> Complexity (n etc)
  • Bitwise Operation  => Complexity (n, k, 1 etc)
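As an illustration of the HashMap entry above: the classic two-sum interview problem (find the indices of two numbers summing to a target) drops from O(n^2) brute force to O(n) with a single pass and a HashMap. The class and method names below are my own:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSum {
    // Returns indices of the two numbers that add up to target, or null if none exist
    static int[] indices(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            Integer j = seen.get(target - nums[i]);   // complement seen earlier?
            if (j != null) {
                return new int[] { j, i };
            }
            seen.put(nums[i], i);
        }
        return null;
    }

    public static void main(String[] args) {
        int[] result = indices(new int[] { 2, 7, 11, 15 }, 9);
        System.out.println(result[0] + "," + result[1]); // prints "0,1"
    }
}
```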

Note* : In an online coding test, you may come across situations where you have to write a lot of code. Such lengthy questions are also asked to test your judgment: whether you leave them for later or waste time on them while easier problems sit unattended. It's not that you shouldn't solve the lengthy problems, but save them for the end, once all the easier problems are done. Moreover, in a face-to-face interview you will usually be asked basic questions needing 15 to 20 lines of code at most, since such interviews run about 45 minutes to an hour. Happy coding and learning 🙂

Generate Swagger UI through Grizzly and Jersey Application.

Statement : Generate Swagger UI for the listing of all the REST APIs through Grizzly and Jersey Application.

  • Add the following dependency in pom.xml (the usual Swagger/Jersey 2 integration artifact; the version is an assumption, adjust as needed) –

<dependency>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-jersey2-jaxrs</artifactId>
    <version>1.5.16</version>
</dependency>
  • Bundle the Swagger UI and docs folder through your main application class using the code below –

package com.main;

import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.CLStaticHttpHandler;

import org.glassfish.grizzly.http.server.HttpServer;

import org.glassfish.grizzly.http.server.ServerConfiguration;

import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;

import org.glassfish.jersey.jackson.JacksonFeature;

import org.glassfish.jersey.server.ResourceConfig;

import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

import io.swagger.jaxrs.config.BeanConfig;

public class MainApp {

// Base URI the Grizzly HTTP server will listen on
public static final URI BASE_URI = URI.create("http://localhost:8080/");

public static HttpServer getLookupServer() {

    String resources = "com.main";

    // Typical Swagger wiring; adjust host/basePath to your setup
    BeanConfig beanConfig = new BeanConfig();
    beanConfig.setSchemes(new String[] { "http" });
    beanConfig.setHost("localhost:8080");
    beanConfig.setBasePath("/");
    beanConfig.setResourcePackage(resources);
    beanConfig.setScan(true);

    final ResourceConfig resourceConfig = new ResourceConfig();
    resourceConfig.packages(resources);
    resourceConfig.register(JacksonFeature.class);
    resourceConfig.register(JacksonJsonProvider.class);
    resourceConfig.register(io.swagger.jaxrs.listing.ApiListingResource.class);
    resourceConfig.register(io.swagger.jaxrs.listing.SwaggerSerializers.class);

    return GrizzlyHttpServerFactory.createHttpServer(BASE_URI, resourceConfig);
}


public static void main(String[] args) throws IOException {

    final HttpServer server = getLookupServer();

    // Serve the bundled swagger-ui resources at /docs/
    ClassLoader loader = MainApp.class.getClassLoader();
    CLStaticHttpHandler docsHandler = new CLStaticHttpHandler(loader, "swagger-ui/");

    ServerConfiguration cfg = server.getServerConfiguration();
    cfg.addHttpHandler(docsHandler, "/docs/");
  }
}

  • Take the latest code of swagger-ui. Copy all the contents of its dist folder into a new folder named swagger-ui inside src/main/resources. Then change the url in the index.html inside the copied folder to point at your API's swagger listing, e.g. http://localhost:8080/swagger.json.
  • Lastly, annotate your controller with @Api and @ApiOperation.
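A minimal sketch of such an annotated controller; the class name, path, and descriptions are assumptions:

```java
package com.main;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;

// @Api makes the resource visible to Swagger's scanner;
// @ApiOperation describes each endpoint in the generated listing
@Api(value = "status")
@Path("/status")
public class StatusController {

    @GET
    @ApiOperation(value = "Returns the service status")
    @Produces(MediaType.TEXT_PLAIN)
    public String status() {
        return "OK";
    }
}
```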

Hope it works. Now run your Grizzly Jersey application, go to your browser, and open localhost:8080/docs/. You will see the Swagger UI listing all the details of your REST APIs.

Happy coding and sharing as well. 🙂 You can find the git repo for the above implementation.