General · Grizzly · Java · Jersey · Swagger

How to avoid getting extra double quotes and slashes in your API JSON response

Statement – While working with REST APIs, we usually expect a clean JSON response. But sometimes we get a malformed response like the one below, where the children array holds JSON as escaped strings rather than objects –

{
  "_page": {
    "start": 0,
    "next": 12,
    "count": 2
  },
  "children": [
    "{\"_attachments\":\"attachments/\",\"_rid\":\"wEIcANyrmAULAAAAAAAAAA==\",\"id\":\"11\",\"_self\":\"dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAULAAAAAAAAAA==/\",\"value\":\"Vanessa\",\"key\":\"38\",\"_etag\":\"\\\"0c0054c7-0000-0000-0000-59ca35ab0000\\\"\",\"_ts\":1506424239}",
    "{\"_attachments\":\"attachments/\",\"_rid\":\"wEIcANyrmAUMAAAAAAAAAA==\",\"id\":\"12\",\"_self\":\"dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUMAAAAAAAAAA==/\",\"value\":\"Neal\",\"key\":\"13\",\"_etag\":\"\\\"0c0056c7-0000-0000-0000-59ca35ab0000\\\"\",\"_ts\":1506424239}"
  ]
}

To avoid this, we just need to take care of a few things in our implementation –

  1. In the root response class, use a JSONArray field (from the json-simple library) instead of a List of String or Object, like below –

import org.json.simple.JSONArray;

import com.fasterxml.jackson.annotation.JsonProperty;

public class PageResponse {

  @JsonProperty("_page")
  private Pagination page;

  @JsonProperty("records")
  private JSONArray records = null;

  public void setRecords(JSONArray records) {
    this.records = records;
  }
}

  2. Wherever you set this JSONArray, build it by parsing the serialized list, as in the following piece of code –

import java.util.ArrayList;
import java.util.List;

import org.json.simple.JSONArray;
import org.json.simple.parser.JSONParser;

List<Object> finalRes = new ArrayList<Object>();
// ... populate finalRes with your result objects ...

// parse() throws ParseException, so declare or handle it
JSONParser parser = new JSONParser();
JSONArray finalJsonArray = (JSONArray) parser.parse(finalRes.toString());

PageResponse pageResponse = new PageResponse();
pageResponse.setRecords(finalJsonArray);
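
For reference, the JSONParser and JSONArray used above come from the json-simple library; if it is not already on your classpath, a dependency along these lines should pull it in (the version shown is an assumption) –

<dependency>
  <groupId>com.googlecode.json-simple</groupId>
  <artifactId>json-simple</artifactId>
  <version>1.1.1</version>
</dependency>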

This way, you get the clean JSON response from your REST API, as shown below –

{
  "_page": {
    "start": 0,
    "next": 13,
    "count": 3
  },
  "records": [
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAULAAAAAAAAAA==",
      "id": "11",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAULAAAAAAAAAA==/",
      "value": "Vanessa",
      "key": "38",
      "_etag": "\"0c0054c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUMAAAAAAAAAA==",
      "id": "12",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUMAAAAAAAAAA==/",
      "value": "Neal",
      "key": "13",
      "_etag": "\"0c0056c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    },
    {
      "_attachments": "attachments/",
      "_rid": "wEIcANyrmAUNAAAAAAAAAA==",
      "id": "13",
      "_self": "dbs/wEIcAA==/colls/wEIcANyrmAU=/docs/wEIcANyrmAUNAAAAAAAAAA==/",
      "value": "Marguerite",
      "key": "13",
      "_etag": "\"0c0058c7-0000-0000-0000-59ca35ab0000\"",
      "_ts": 1506424239
    }
  ]
}

Hope it helps. 🙂

General · Java · NoSql · Uncategorized

Approaches to achieve Multi-Tenancy in your application through Document DB

Introduction :

We can achieve multi-tenancy by partitioning either the database or the collection according to each tenant's data size. As per the Microsoft Azure recommendation, if tenant data is small and the number of tenants is high, we should store data for multiple tenants in the same collection to reduce the overall resources necessary for the application. We can identify a tenant by a property in the document and issue filter queries to retrieve tenant-specific data. We can also use Document DB's users and permissions to isolate tenant data and restrict access at the resource level via authorization keys.

In cases where tenants are larger, need dedicated resources, or require further isolation, we can allocate dedicated collections or databases for them. In either case, Document DB handles a significant portion of the operational burden as we scale out our application's data store.

Approaches to achieve Multi-Tenancy :

The approaches mentioned below can be used to achieve multi-tenancy in your application –

  1. By using a single database having one collection : In this approach, we have one database for our application and create one collection under it, which acts as a container to store all the JSON documents. Due to the storage and throughput limitations of a single collection, we need to enforce partitioning within it; we can achieve this by using the data-set id as the partition key, which gives us multi-tenancy as well. Security can also be enforced at the Document DB level by creating a user for each tenant, assigning permissions to the tenant's data-set, and querying the tenant's data-set via the user's authorization key (see the query sketch after this list of approaches).
Multi-Tenancy First Approach
  • Pros : The major benefits of storing tenant data within a single collection are reduced complexity, transactional support across application data, and minimized storage cost.
  • Cons : One collection cannot store a large amount of data under this model, so requests will be throttled once the storage and throughput limits of a single Document DB collection are reached.
  2. By using a single database having multiple collections : In this approach, we have one database for our application and create multiple collections under it based on the tenant id. We can then partition data across those collections based on the data-set id. Conveniently, there is no fixed limit on the size of the database or the number of collections; Document DB allows us to add collections and capacity dynamically as the application grows.
Multi-Tenancy Second Approach
  • Pros : The major benefits of storing tenant data within multiple collections are increased resource capacity and higher throughput. Moreover, we can place a tenant who needs dedicated throughput on their own collection, based on the permissions given to the user for the resource (collection, document, attachment, stored procedure, trigger, UDF).
  • Cons : Higher cost, as Document DB pricing increases as soon as a new collection is created with the required throughput in a region.
  3. By using multiple databases having multiple collections : In this approach, we have multiple databases based on the tenant id and create multiple collections based on the tenant's data-sets under the respective database. For the most part, placing tenants across databases works similarly to placing tenants across collections. The major distinction between Document DB collections and databases is that users and permissions are scoped at the database level, meaning each database has its own set of users and permissions, which we can use to isolate specific collections and documents.
Multi-Tenancy Third Approach
  • Pros : The major benefits of storing tenant data within multiple databases are increased resource capacity and higher throughput. Here we can place each tenant in their own database, with user permissions at the database level.
  • Cons : Again, higher cost, as Document DB pricing increases as soon as a new collection is created in the respective database with the required throughput in a region.

4. By using multiple Azure accounts having their respective databases/collections : In this approach, we have multiple Azure accounts based on the tenant id and create an individual database for each tenant, with collections created under the respective database based on the tenant's data-sets. Here, each tenant's data is fully separated from the others'.

Multi-Tenancy Forth Approach

 

  • Pros : The major benefit of storing tenant data within multiple accounts is that security is enforced at the account level. It also brings increased resource capacity and higher throughput.
  • Cons : Again, very high cost, due to a new account subscription for every tenant.
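
To make the first approach concrete, below is a minimal sketch (the account endpoint, key, database/collection names, and the tenantId property are all assumptions for illustration) of issuing a tenant-scoped filter query with the Document DB Java SDK –

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.SqlParameter;
import com.microsoft.azure.documentdb.SqlParameterCollection;
import com.microsoft.azure.documentdb.SqlQuerySpec;

public class TenantQuery {

  public static void main(String[] args) {
    DocumentClient client = new DocumentClient(
        "https://your-account.documents.azure.com:443/", // hypothetical endpoint
        "<master-key>",                                  // hypothetical key
        ConnectionPolicy.GetDefault(),
        ConsistencyLevel.Session);

    // Parameterized filter query: only this tenant's documents come back.
    SqlQuerySpec query = new SqlQuerySpec(
        "SELECT * FROM c WHERE c.tenantId = @tenantId",
        new SqlParameterCollection(new SqlParameter("@tenantId", "tenant-42")));

    for (Document doc : client
        .queryDocuments("dbs/myDb/colls/myColl", query, new FeedOptions())
        .getQueryIterable()) {
      System.out.println(doc.toJson());
    }
  }
}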

Comparison of the first three approaches based on a few parameters :

Storage of the tenant's data-set
  • 1st : A single collection is partitioned based on the partition key (tenant id + data-set id).
  • 2nd : A single database is divided into collections based on the tenant id, and each collection is further partitioned based on the tenant's data-set id.
  • 3rd : Multiple databases are created based on the tenant id, and each database is divided into collections based on the tenant's data-set id.

Handling a very large data-set
  • 1st : Not easily feasible, because of the storage and throughput limits at the collection level.
  • 2nd : Easily feasible by using multiple partitions of a collection, though throughput cannot be enforced at the partition level.
  • 3rd : Very easily feasible by creating a separate collection for each data-set, though throughput cannot be enforced at the database level.

Handling of hot spots (generic)
  • All three : Document DB provides a way to scale throughput up and down on demand, at different pricing tiers.

Cost as per storage/throughput (generic)
  • All three : A single-partition collection is limited to 250, 1000, or 2500 RU/s of throughput and 10 GB of storage on an S1 (₹2.58/hr), S2 (₹5.07/hr), or S3 (₹10.14/hr) collection respectively. Under the newer Standard pricing model, we are charged based on the throughput level we choose, between 400 and 10,000 RU/s, and we can switch back to S1, S2, or S3 collections if we change our mind.

Throttling of requests (generic)
  • All three : Our application sits behind Platform IO, which takes care of request throttling. Apart from Platform IO, Document DB also lets us throttle requests at the collection level, but not at the database level.

Primary key
  • All three : By default, Document DB generates a unique id (string) for each document, but we can compose it from the tenant id, data-set id, and a timestamp (discussion in progress).

Hope this works for you and that you can now implement multi-tenant scenarios in your application easily. Enjoy 🙂

General · Java · NoSql · Uncategorized

Approaches for Bulk/Batch Insert and Read through Document DB Java

Bulk/Batch Insert :

As per the Azure documentation, there are mainly the following ways to bulk-insert documents into Document DB –

  1. Using the data migration tool, as described in Database migration tool for Azure Cosmos DB.
  2. Using the single-insert createDocument() API for multiple documents, one at a time (a sketch follows this list).
  3. Stored procedures, as described in Server-side JavaScript programming for Azure Cosmos DB.
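
For option 2, a minimal sketch (assuming an initialized DocumentClient named client and a hypothetical collection link) of inserting one document at a time –

import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.RequestOptions;
import com.microsoft.azure.documentdb.ResourceResponse;

// Assumes "client" is an initialized DocumentClient; createDocument()
// throws DocumentClientException, so declare or handle it.
Document doc = new Document("{\"id\":\"1\",\"key\":\"38\",\"value\":\"Vanessa\"}");
ResourceResponse<Document> created = client.createDocument(
    "dbs/myDb/colls/myColl", // hypothetical collection link
    doc,
    new RequestOptions(),
    false); // keep automatic id generation enabled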

Statistics of bulk insert using a stored procedure, based on different parameters –

    Total Records    Batch Size    Throughput (RU/s)    Execution Time (secs)
    2000             50            400                  23
    2000             200           400                  16
    2000             400           10000                10
    2000             2000          10000                6

*Sample record –

{
  "id": "integer",
  "key": "integer",
  "value": "string"
}
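
As a reference for the stored-procedure numbers above, here is a hedged sketch of invoking a bulk-insert stored procedure with one batch via the Java SDK; the endpoint, key, links, and the sproc name "bulkImport" are assumptions, and the stored procedure itself must already be deployed on the collection –

import java.util.ArrayList;
import java.util.List;

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentClientException;
import com.microsoft.azure.documentdb.StoredProcedureResponse;

public class BulkInsertClient {

  public static void main(String[] args) throws DocumentClientException {
    DocumentClient client = new DocumentClient(
        "https://your-account.documents.azure.com:443/", // hypothetical endpoint
        "<master-key>",                                  // hypothetical key
        ConnectionPolicy.GetDefault(),
        ConsistencyLevel.Session);

    // One batch of records following the sample schema above.
    List<String> batch = new ArrayList<String>();
    for (int i = 0; i < 50; i++) {
      batch.add("{\"id\":\"" + i + "\",\"key\":\"" + i + "\",\"value\":\"name-" + i + "\"}");
    }

    // Assumes a stored procedure named "bulkImport" already exists on the collection.
    StoredProcedureResponse response = client.executeStoredProcedure(
        "dbs/myDb/colls/myColl/sprocs/bulkImport",
        new Object[] { batch });
    System.out.println("Sproc response: " + response.getResponseAsString());
  }
}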
Statistics of bulk insert using the single createDocument() API, based on different parameters –

    Total Records    Batch Size    Throughput (RU/s)    Execution Time (secs)
    2000             1             400                  566
    2000             1             10000                556
  • Apart from this, Document DB supports two connection modes for the DocumentClient() API when inserting documents –
    • ConnectionMode.DirectHttps (all the API calls communicate directly with the server)
    • ConnectionMode.Gateway (all the API calls go through the gateway first and then on to the server)

I tried both modes while implementing the bulk-insert API, but there was no noticeable difference in performance.
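
For reference, a minimal sketch (assuming the Document DB Java SDK, with a placeholder endpoint and key) of selecting the connection mode through ConnectionPolicy –

import com.microsoft.azure.documentdb.ConnectionMode;
import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;

public class ClientFactory {

  public static DocumentClient directClient() {
    // Direct mode talks to the server directly; use ConnectionMode.Gateway to compare.
    ConnectionPolicy policy = new ConnectionPolicy();
    policy.setConnectionMode(ConnectionMode.DirectHttps);
    return new DocumentClient(
        "https://your-account.documents.azure.com:443/", // hypothetical endpoint
        "<master-key>",                                  // hypothetical key
        policy,
        ConsistencyLevel.Session);
  }
}
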
Bulk/Batch Read :

For bulk reads, Document DB supports setting the page size as per our convenience –

FeedOptions feedOptions = new FeedOptions();
feedOptions.setPageSize(5);

Now we can pass this feedOptions object to queryDocuments(), which returns a FeedResponse, and using this response we can fetch the results in blocks of the page size. However, there is no built-in support for jumping forward and backward to an arbitrary position, so we need to write our own logic to land on the appropriate page.
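
Putting it together, a hedged sketch (assuming an initialized DocumentClient named client and a hypothetical collection link) of reading documents block by block with the page size set above –

import java.util.List;

import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.QueryIterable;

// Assumes "client" is an initialized DocumentClient; fetchNextBlock()
// throws DocumentClientException, so declare or handle it.
FeedOptions feedOptions = new FeedOptions();
feedOptions.setPageSize(5);

QueryIterable<Document> iterable = client
    .queryDocuments("dbs/myDb/colls/myColl", "SELECT * FROM c", feedOptions)
    .getQueryIterable();

List<Document> page = iterable.fetchNextBlock();
while (page != null && !page.isEmpty()) {
  for (Document doc : page) {
    System.out.println(doc.toJson()); // process one page of up to 5 documents
  }
  page = iterable.fetchNextBlock();
}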

Note* – If we want the results in sorted order, we can write the query using the ORDER BY keyword for the desired attribute ordering, like below –

SELECT * FROM c ORDER BY c.surname

Hope it helps. 🙂

Grizzly · Java · Jersey · Uncategorized

Dependency Injection using Grizzly and Jersey

Statement: Implementation of Dependency Injection using Grizzly and Jersey

Please follow the steps below –

  • Create a class called Hk2Feature which implements Feature.

package com.sample.di;

import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

@Provider
public class Hk2Feature implements Feature {

  public boolean configure(FeatureContext context) {
    context.register(new MyAppBinder());
    return true;
  }
}

  • Create a class called MyAppBinder which extends AbstractBinder, and register all your services here, like below –

package com.sample.di;

import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class MyAppBinder extends AbstractBinder {

  @Override
  protected void configure() {
    bind(MainService.class).to(MainService.class);
  }
}

  • Now it's time to write your own services and inject the required services into the appropriate controllers, as in the code below –

package com.sample.di;

public class MainService {

  public String testService(String name) {
    return "Hi " + name + "..Testing Dependency Injection using Grizzly Jersey";
  }
}

package com.sample.di;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/main")
public class MainController {

  @Inject
  public MainService mainService;

  @GET
  public String get(@QueryParam("name") String name) {
    return mainService.testService(name);
  }

  @GET
  @Path("/test")
  @Produces(MediaType.APPLICATION_JSON)
  public String ping() {
    return "OK";
  }
}
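
To wire all of this up, a minimal server bootstrap might look like the sketch below (the class name and port are assumptions; package scanning picks up the @Provider annotated Hk2Feature automatically) –

package com.sample.di;

import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class Main {

  public static void main(String[] args) {
    // Scanning the package registers MainController and Hk2Feature,
    // which in turn registers MyAppBinder for injection.
    ResourceConfig config = new ResourceConfig().packages("com.sample.di");
    HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
        URI.create("http://localhost:8080/"), config);
    System.out.println("Server started at http://localhost:8080/main");
  }
}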

Now hit the URL http://localhost:8080/main?name=Tanuj and you will get your result. This is how you can achieve dependency injection in a Grizzly Jersey application. Find the detailed implementation of the above skeleton in my repo. Happy Coding 🙂

DS and Algo · Java

Tips to solve DS and Algorithm related problems.

While facing an interview, don't jump into the solution of the problem immediately. Instead, think of a data structure that can be applied to solve it. Below is a list of data structures and algorithms/approaches you can think of; it is recommended to have all of these at your fingertips as well –

  • Brute force (for/while loops) => complexity: exponential, n^k, etc.
  • Sorting and searching => complexity: n^k, n, log n, etc.
  • HashMap, Hashtable, HashSet => complexity: n, 1, etc.
  • Stack (LIFO), Queue (FIFO) => complexity: n, etc.
  • Heap, PriorityQueue => complexity: n log n, n, log n, etc.
  • Recursion and DP => complexity: exponential, n^k, etc.
  • Tree, BST, tree traversals, tries => complexity: n, log n, k, etc.
  • Graphs (DFS {stack}, BFS {queue}, topological sort) => complexity: E + V log V, V + E, etc.
  • LinkedList, DoublyLinkedList => complexity: n, etc.
  • Bitwise operations => complexity: n, k, 1, etc.

Note* : In an online coding test, you may come across questions that require you to write many lines of code. These lengthy questions are often asked to test your judgment: whether you skip them, or sink time into them while easier problems go unattended. It's not that you shouldn't solve these lengthy problems, but save them for the end, once all the easier problems are done. Moreover, in a face-to-face interview you will usually be asked basic questions needing 15 to 20 lines of code at most, since such interviews run about 45 minutes to an hour. Happy coding and learning 🙂

GIT · Grizzly · Java · Jersey · MAC · Swagger

Generate Swagger UI through Grizzly and Jersey Application.

Statement : Generate Swagger UI for the listing of all the REST APIs through Grizzly and Jersey Application.

  • Add the following dependency in pom.xml –

<dependency>
  <groupId>io.swagger</groupId>
  <artifactId>swagger-jersey2-jaxrs</artifactId>
  <version>1.5.9</version>
</dependency>

  • Bundle the Swagger UI and docs folder through your main application class using the code below –

package com.main;

import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.CLStaticHttpHandler;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.ServerConfiguration;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.jackson.JacksonFeature;
import org.glassfish.jersey.server.ResourceConfig;

import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

import io.swagger.jaxrs.config.BeanConfig;

public class MainApp {

  // Base URI the Grizzly HTTP server will listen on
  public static final URI BASE_URI = URI.create("http://0.0.0.0:8080");

  public static HttpServer getLookupServer() {
    String resources = "com.main";

    BeanConfig beanConfig = new BeanConfig();
    beanConfig.setVersion("1.0.1");
    beanConfig.setSchemes(new String[] { "http" });
    beanConfig.setBasePath("");
    beanConfig.setResourcePackage(resources);
    beanConfig.setScan(true);

    final ResourceConfig resourceConfig = new ResourceConfig();
    resourceConfig.packages(resources);
    resourceConfig.register(io.swagger.jaxrs.listing.ApiListingResource.class);
    resourceConfig.register(io.swagger.jaxrs.listing.SwaggerSerializers.class);
    resourceConfig.register(JacksonFeature.class);
    resourceConfig.register(JacksonJsonProvider.class);
    return GrizzlyHttpServerFactory.createHttpServer(BASE_URI, resourceConfig);
  }

  public static void main(String[] args) throws IOException {
    final HttpServer server = getLookupServer();
    server.start();

    // Serve the bundled Swagger UI from the classpath under /docs/
    ClassLoader loader = MainApp.class.getClassLoader();
    CLStaticHttpHandler docsHandler = new CLStaticHttpHandler(loader, "swagger-ui/");
    docsHandler.setFileCacheEnabled(false);
    ServerConfiguration cfg = server.getServerConfiguration();
    cfg.addHttpHandler(docsHandler, "/docs/");
  }
}

  • Take the latest code of swagger-ui. Copy all the contents of the dist folder, create a folder named swagger-ui inside src/main/resources, and paste the copied contents there. Then change the url in the index.html file inside the copied folder, like below –

    url: http://0.0.0.0:8080/swagger.json

  • Lastly, annotate your controllers with @Api and @ApiOperation, for example –
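
A hedged sketch of an annotated resource (class and path names are illustrative) that Swagger will pick up –

package com.main;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;

@Path("/status")
@Api(value = "status")
public class StatusController {

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  @ApiOperation(value = "Returns the service status")
  public String status() {
    return "{\"status\":\"OK\"}";
  }
}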

Hope it works. Now run your Grizzly Jersey application, go to the browser, and type localhost:8080/docs/. You will see the Swagger UI with all the details of your REST APIs.

Happy coding and sharing as well. 🙂 You can find the git repo for the above implementation.

Java · MAC · Spring · Swagger

Generate Swagger UI through Spring Boot Application.

Statement : Generate Swagger UI for the listing of all the REST APIs through Spring Boot Application.

Follow the steps below to generate the Swagger UI through a Spring Boot application –

  1. Add the following dependencies in pom.xml –

<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger2</artifactId>
  <version>2.6.1</version>
</dependency>

<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger-ui</artifactId>
  <version>2.6.1</version>
</dependency>

  2. Add the following piece of code to your main application class, annotated with @EnableSwagger2 –

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import io.swagger.annotations.Api;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@EnableSwagger2
@SpringBootApplication
public class MyApp {

  public static void main(String[] args) {
    SpringApplication.run(MyApp.class, args);
  }

  @Bean
  public Docket api() {
    return new Docket(DocumentationType.SWAGGER_2).select()
        .apis(RequestHandlerSelectors.withClassAnnotation(Api.class)).paths(PathSelectors.any()).build()
        .pathMapping("/").apiInfo(apiInfo()).useDefaultResponseMessages(false);
  }

  @Bean
  public ApiInfo apiInfo() {
    final ApiInfoBuilder builder = new ApiInfoBuilder();
    builder.title("My Application API through Swagger UI").version("1.0").license("(C) Copyright Test")
        .description("List of all the APIs of My Application App through Swagger UI");
    return builder.build();
  }
}

  3. Add the RootController class below to redirect to the Swagger UI page. This way, you don't need to put the dist folder of Swagger UI in your resources directory –

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@RequestMapping("/")
public class RootController {

  @RequestMapping(method = RequestMethod.GET)
  public String swaggerUi() {
    return "redirect:/swagger-ui.html";
  }
}

  4. As a final step, add the @Api and @ApiOperation annotations in all your REST controllers, like below –

import static org.springframework.web.bind.annotation.RequestMethod.GET;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;

@RestController
@RequestMapping("/hello")
@Api(value = "hello", description = "Sample hello world application")
public class TestController {

  @ApiOperation(value = "Just to test the sample test api of My App Service")
  @RequestMapping(method = RequestMethod.GET, value = "/test")
  public String test() {
    return "Hello to check Swagger UI";
  }

  @ResponseStatus(HttpStatus.OK)
  @RequestMapping(value = "/test1", method = GET)
  @ApiOperation(value = "My App Service get test1 API", position = 1)
  public String test1() {
    System.out.println("Testing");
    if (true) {
      return "Tanuj";
    }
    return "Gupta";
  }
}

Now you are done. To run your Spring Boot application, go to the browser and type localhost:8080. You will see the Swagger UI with all the details of your REST APIs.

Happy Coding 🙂 Enjoy the source code of the above implementation in my git repo.


Docker · Java · Uncategorized

Dockerize Java RESTful Application

Please follow the steps below to dockerize your Java RESTful application –

Prerequisites : Please ensure that Docker, Java, and Maven are installed on your machine.

  1. Create the Dockerfile : Go to the root of the application, where pom.xml resides. Below is the content of my Dockerfile –

cd yourProjectFolder -> vi Dockerfile

# Fetch the base Java 8 image
FROM java:8
# Expose the local application port
EXPOSE 8088
# Place the jar file at the docker location
ADD /target/yourService-1.0-SNAPSHOT.jar yourService-1.0-SNAPSHOT.jar
# Place the config file as a part of the application
ADD /src/main/java/com/test/config/config.properties config.properties
# Execute the application
ENTRYPOINT ["java","-jar","yourService-1.0-SNAPSHOT.jar"]
  2. Build the Docker image :
docker build -f Dockerfile -t yourservice .
  3. Run the Docker image :
docker run -p 8088:8088 -t yourservice

The -p option publishes, or maps, host system port 8088 to container port 8088.

Note* – Please don't forget to change your base URI from localhost:8088 to 0.0.0.0:8088, so that the network interface created inside the container listens on all available addresses.

A few important Docker commands to know :

  1. To find the Id of your running container –
docker ps

CONTAINER ID    IMAGE         COMMAND                CREATED          STATUS          PORTS                    NAMES
77558fabec6c    yourservice   "java -jar lookupS…"   38 minutes ago   Up 38 minutes   0.0.0.0:8088->8088/tcp   affectionate_brahmagupta

  2. To kill the process running inside the docker container –
docker container kill 77558fabec6c (container id)

Now you can test your REST call on 0.0.0.0:8088 using any REST client. Hope it helps 🙂