Wednesday, August 17, 2016

DSL example with Scala

As you know, I started learning Scala a few months ago, and every time I develop in Scala I discover more of its powerful features.

 Today I want to implement a simple DSL (Domain Specific Language).

Imagine we want to design a language that lets our software communicate with an external piece of hardware, in this case a robot. We are going to be able to talk to our robot like this:

val robot = newRobot(0, 0)
robot at 30.km_hour towards (20, 15) go
robot at 60.km_hour towards (60, 15) go
robot at 2.meters_second towards (22, 17) go

There are some key concepts applied in the code shown above:

  • Syntactic sugar. In Scala there are several situations where the '.' and '()' are optional, for example when a method takes a single parameter (infix notation).
  • Builder pattern. To offer a fluent API you can use the Builder pattern, which consists in returning 'this' from each method so that the calls can be chained on a single line.
  • Implicit conversions. Implicit conversions are methods that Scala tries to apply when it encounters an object of the wrong type being used. To use implicit conversions, you need to import the object that contains the implicit conversion methods (see the sketch right after this list).
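
Here is roughly how the compiler reads the first DSL line above once the sugar is removed (a sketch based on the Robot and Speed definitions shown later in this post):

// infix notation removed and the implicit conversion applied explicitly
robot.at(Speed.Int2Speed(30).km_hour).towards((20, 15)).go()

Each call returns the robot itself (the Builder pattern at work), and Int2Speed is the implicit conversion that turns the literal 30 into a Speed.
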
The main application.
import Robot._
import Speed._

object RobotApp {

  def main(args: Array[String]): Unit = {
    // scala style
    val robot = newRobot(0, 0)
    robot at 30.km_hour towards (20, 15) go
    robot at 60.km_hour towards (60, 15) go
    robot at 2.meters_second towards (22, 17) go

    // java style
    /*
    Robot robot = Robot.newRobot(0, 0);
    robot.at(new Speed(30).km_hour()).towards(20, 20).go();
    */
  }
}

The Robot class.
import com.hdbandit.dsl.Speed._

object Robot {
  def newRobot(initialXposition: Int = 0, initialYposition: Int = 0): Robot = new Robot(initialXposition, initialYposition)
}

class Robot(initialXposition: Int, initialYposition: Int) {

  private var previousX = -1
  private var previousY = -1
  private var xPosition = initialXposition
  private var yPosition = initialYposition
  private var speed = 0.km_hour

  def at(s: Speed): Robot = {
    speed = Speed(s.value)
    this
  }

  def towards(coordinate: (Int, Int)): Robot = {
    previousX = xPosition
    previousY = yPosition
    xPosition = coordinate._1
    yPosition = coordinate._2
    println(s"Moving the robot from (x=$previousX, y=$previousY) to (x=$xPosition, y=$yPosition) with speed: $speed")
    this
  }

  def go(): Unit = {
    println(s"Moving the robot from (x=$previousX, y=$previousY) to (x=$xPosition, y=$yPosition) with speed: $speed")
  }
}

Finally, the Speed class, which enriches the Int type.
object Speed {
  implicit def Int2Speed(value: Int): Speed = new Speed(value)
}

case class Speed(value: Int) {
  def km_hour: Speed = this
  def milles_hour: Speed = this
  def meters_second: Speed = this
}
You can find all the code in my GitHub account here.

Sunday, July 31, 2016

Fast Scala: Constructing a Rational

Today I am going to summarize chapter 6 of Programming in Scala, by Martin Odersky. This chapter guides you through several technical aspects of Scala by implementing a real example: the class Rational.

 A specification for class Rational

A rational number is a number that can be expressed as a ratio n/d, where n and d are integers, except that d cannot be zero.

Constructing a Rational

class Rational(n: Int, d: Int) {
  require(d != 0, "Denominator cannot be 0")
  println("My rational is created")
  override def toString = s"$n/$d"
}
Note that the Scala compiler will compile any code you place in the class body, which isn't part of a field or a method definition, into the primary constructor. In our case, every time you construct a Rational, the message "My rational is created" will be printed.

Another interesting thing is the 'require' method. It takes a boolean parameter (and, here, a message as well). If the passed value is true, require returns normally; otherwise it prevents the object from being constructed by throwing an IllegalArgumentException. The require method belongs to the Predef object, which is imported automatically by the Scala compiler.

Finally, we added an implementation of the toString method. By default, toString returns the class name followed by the object's hash code, which is not very informative, so we override it to print the rational in n/d form.
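
A quick sketch of how this class behaves at construction time, given the definition above:

val half = new Rational(1, 2)   // prints "My rational is created"
println(half)                   // 1/2, thanks to the overridden toString
val boom = new Rational(1, 0)   // throws IllegalArgumentException: requirement failed: Denominator cannot be 0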

Adding Rationals

For example, to add 1/2 + 2/3, you multiply both parts of the left operand by 3 and both parts of the right operand by 2, which gives you 3/6 + 4/6. Adding the two numerators yields the result 7/6.


class Rational(n: Int, d: Int) {
  require(d != 0, "Denominator cannot be 0")
  println("My rational is created")

  val num = n
  val denom = d

  override def toString = s"$n/$d"

  def +(that: Rational): Rational = {
    new Rational(
      n * that.denom + that.num * d,
      d * that.denom
    )
  }
}
Note that we added two fields, num and denom, initialized with the values of the class parameters n and d. We need them because you cannot write that.n or that.d: although n and d are in scope inside the + method, they can only be accessed on the object on which + was invoked, not on another Rational instance.

With the + method defined, you can write code like x + y, where x and y are Rational objects, which is equivalent to writing x.+(y). Of course, x + y reads much more naturally than something like x.add(y).

Another thing to note is that it would be great to write things like x + x * y, where x and y are Rationals. Of course, (x + x) * y is not the same as x + (x * y). Scala defines operator precedence rules based on the first character of the method name, so expressions involving +, * and / on Rationals behave as you would expect: x + x * y is parsed as x + (x * y).

Auxiliary constructors

Sometimes you need multiple constructors in a class. In Scala, constructors other than the primary constructor are called auxiliary constructors. For example, a rational number with a denominator of 1 can be written more succinctly as simply the numerator: instead of 5/1, you can just write 5. Therefore, instead of writing new Rational(5, 1), you can simply write new Rational(5). But this requires an auxiliary constructor. Auxiliary constructors in Scala start with def this(...).

class Rational(n: Int, d: Int) {
  require(d != 0, "Denominator cannot be 0")
  println("My rational is created")

  val num = n
  val denom = d

  def this(n: Int) = this(n, 1)

  override def toString = s"$n/$d"

  def +(that: Rational): Rational = {
    new Rational(
      n * that.denom + that.num * d,
      d * that.denom
    )
  }
}

Method overloading

Now you can add Rationals, but you cannot yet add an Int and a Rational. For that, we have to overload the + method. Then you can write code like x + 4, where x is a Rational.

class Rational(n: Int, d: Int) {
  require(d != 0, "Denominator cannot be 0")
  println("My rational is created")

  val num = n
  val denom = d

  def this(n: Int) = this(n, 1)

  override def toString = s"$n/$d"

  def +(that: Rational): Rational = {
    new Rational(
      n * that.denom + that.num * d,
      d * that.denom
    )
  }

  def +(that: Int): Rational = {
    new Rational(
      n + that * d,
      d
    )
  }
}
Implicit conversions

Now that you can write r + 2, you might also want to swap the operands, as in 2 + r. Unfortunately, this does not work yet, because the Int class doesn't have any method to add a Rational.

However, there is a way to solve this problem in Scala: you can create an implicit conversion that automatically converts integers to rational numbers when needed. A natural place for it is the companion object; just remember that it has to be in scope (for example via import Rational._) wherever you use it. A short usage example follows the listing below.

object Rational {
  implicit def intToRational(x: Int) = new Rational(x)
}

class Rational(n: Int, d: Int) {
  require(d != 0, "Denominator cannot be 0")
  println("My rational is created")

  val num = n
  val denom = d

  def this(n: Int) = this(n, 1)

  override def toString = s"$n/$d"

  def +(that: Rational): Rational = {
    new Rational(
      n * that.denom + that.num * d,
      d * that.denom
    )
  }

  def +(that: Int): Rational = {
    new Rational(
      n + that * d,
      d
    )
  }
}
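
With the conversion in scope, both directions compile. A minimal usage sketch, assuming the class above and an import of its companion object:

import Rational._

val r = new Rational(2, 3)
println(r + 2)   // 8/3, via the overloaded +(Int)
println(2 + r)   // 8/3, the Int is first converted to a Rational by intToRational
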
I hope you found it interesting. I will try to summarize more interesting chapters of Programming in Scala in the next posts. We keep Scaling!

Sunday, July 24, 2016

5 minutes of Scala: Prefix unary operators

Well, I am reading the book Programming in Scala, and I keep discovering new capabilities of this language. Today I want to talk about unary operators (also called prefix operators).

The only identifiers that can be used as prefix operators are +, -, ! and ~.
You define your method as unary_<operator>, and then you can invoke it using prefix notation: <operator><your variable>.

I know all of this may sound a bit confusing, but it becomes much clearer with some code.

Imagine we want to implement a Color class. This class defines a color using three components (red, green, blue). We also want to operate on our Color class, and one required operation is the ability to invert the color. As a Java programmer, I would define a method called inverse and call it like this: myColor.inverse(). That is a perfectly acceptable approach, but here is where the magic of unary operators comes in: in Scala you can do the same with !myColor.

object Color {
  def create(r: Int, g: Int, b: Int): Color = {
    new Color(r, g, b)
  }
}

class Color(r: Int, g: Int, b: Int) {

  def unary_! = {
    new Color(255 - r, 255 - g, 255 - b)
  }

  def getR(): Int = r
  def getG(): Int = g
  def getB(): Int = b

  override def toString(): String = s"r: $r, g: $g, b: $b"
}

Note how the method is defined using unary_!. Also remember that only four prefix operators are supported: +, -, ! and ~.


object MyCoolApp {
  def main(args: Array[String]): Unit = {
    val myColor = Color.create(23, 14, 9)
    println("Initial color: " + myColor)

    val inverseColor1 = !myColor
    println("Inverse color: " + inverseColor1)
    println("Inverse color again: " + !inverseColor1)
  }
}

Sunday, July 17, 2016

Sending messages to your Akka Actors from JMX

Lately I have been a bit busy learning new technologies like Scala and Akka, and changing my mindset to start thinking in a functional way. As I promised, my next posts will be about Scala and the technologies related to this language.

 Today I want to show you how to combine an old friend like JMX with an application built with Akka. Scala runs on the JVM, so you have all the tools of the Java platform available. To follow this tutorial, you need to have an idea of what the Actor Model is.

You also need to have Scala installed. Please follow these instructions if you don't have an installation yet. I recommend IntelliJ as a development environment; it provides good support for working with Scala.

Cool! We are ready to start. Our application is very simple: we are going to create an actor called Pong, and this actor will wait for messages from a JMX client. If the message sent is "end", Pong will be terminated; otherwise the message will be stored in memory. From our JMX client, we can also ask for the last message received by the actor. Let's see some code.

First of all, our base trait for actors with JMX support. By extending this trait, the actor is automatically registered in the MBeanServer using the preStart and postStop lifecycle methods. Note that getMXTypeName is an abstract method and is implemented by the concrete actor.

import java.lang.management.ManagementFactory
import javax.management.ObjectName

import akka.actor.Actor

trait ActorWithJMXSupport extends Actor {

  val mBeanServer = ManagementFactory.getPlatformMBeanServer()

  val objName = new ObjectName("hdbandit", {
    import scala.collection.JavaConverters._
    new java.util.Hashtable(
      Map(
        "name" -> self.path.toStringWithoutAddress,
        "type" -> getMXTypeName
      ).asJava
    )
  })

  def getMXTypeName: String

  override def preStart() = mBeanServer.registerMBean(this, objName)

  override def postStop() = mBeanServer.unregisterMBean(objName)
}
It is time to define our MBean. By convention, the MBean interface is named with the MBean suffix. Here is the trait that exposes the operations available on our MBean.


trait PongMBean {
  def lastReceivedMessage(): String
  def sendMessage(p1: String): Unit
}
Finally, the actor Pong implements the trait shown above and extends our ActorWithJMXSupport.

import akka.actor.{ActorLogging, PoisonPill, Props}

class Pong extends ActorWithJMXSupport with ActorLogging with PongMBean {
  import Pong._

  var lastReceivedMessage = "UNKNOWN"

  def receive = {
    case PongMessage(text) =>
      log.info("In PongActor - received message: {}", text)
      lastReceivedMessage = text
  }

  override def getMXTypeName: String = "InstrumentedPongActor"

  override def sendMessage(p1: String): Unit = {
    if (p1.equals("end")) {
      self ! PoisonPill
    } else {
      self ! PongMessage(p1)
    }
  }
}

object Pong {
  val props = Props[Pong]
  case class PongMessage(text: String)
}
Then we need a main application to start up the actor Pong.

import akka.actor.ActorSystem
import Pong.PongMessage

object ApplicationMain extends App {
  val system = ActorSystem("MyActorSystem")
  val pongActor = system.actorOf(Pong.props, "pongActor")

  pongActor ! PongMessage("Initial message")

  // waiting for JMX "end" message to terminate
  system.awaitTermination()
}
Ok! The hard work is done. Now you have to start your application passing these parameters to the JVM:

-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.port=1617 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false

At this point, the application is running and the actor Pong is waiting for requests. As you can see, the remote JMX connector is listening on port 1617.

Finally, run jmc from a console to open Java Mission Control, the JMX client shipped with the JDK. There you can use the MBean browser to find our registered MBean and, in the Operations tab, execute the operations defined by PongMBean. In the next video, you can see this last part clearly.
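
If you prefer to talk to the actor from code instead of the jmc UI, something along these lines should work with the standard javax.management.remote API (an untested sketch; the ObjectName assumes the actor is created as "pongActor", as in the main application above):

import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

object JmxPongClient extends App {
  // connects to the JMX connector opened by the -Dcom.sun.management.jmxremote.port=1617 flags
  val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1617/jmxrmi")
  val connector = JMXConnectorFactory.connect(url)
  val server = connector.getMBeanServerConnection

  val pongName = new ObjectName("hdbandit:name=/user/pongActor,type=InstrumentedPongActor")

  // invoke the operations exposed by PongMBean
  server.invoke(pongName, "sendMessage", Array[AnyRef]("hello from JMX"), Array("java.lang.String"))
  val last = server.invoke(pongName, "lastReceivedMessage", Array.empty[AnyRef], Array.empty[String])
  println(s"Last received message: $last")

  connector.close()
}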



Sunday, June 12, 2016

Distributed configuration with Spring Cloud

Moving from a monolithic architecture to a microservices architecture implies dealing with some challenges. One of them is the configuration of all your backend services. Every microservice requires its configuration to be injected, and if you don't centralize it, you will probably end up duplicating a lot of configuration across your microservices infrastructure.

To make this task easier, putting all your configuration in one place seems like a good idea, and in fact it is. When your services start up, one of the first things they do is ask for their configuration.

In the next picture you can see the basic workflow of this communication.



Configuration service

To do that, we are going to use Spring Cloud Config. First of all, we need to create a configuration service that exposes a REST API for the communication shown in the picture above. Achieving this is as simple as adding the spring-cloud-config-server dependency to your pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
And in your application.properties add the following properties:

# points to the git repository that contains all your configuration files
spring.cloud.config.server.git.uri=/Users/gerard/Documents/workspace_gerard/app_configuration
server.port=8888
Finally, you have to annotate your main class with @EnableConfigServer.

@EnableConfigServer
@SpringBootApplication
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
Now we have our configuration service running on port 8888, ready to serve configuration to the other backend services. The work is almost done.

Client service

Now we are going to build a demo service and configure it as a client of our configuration service.

First of all, to configure a client of the configuration service you have to add the following dependency to your pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
Now you have to tell the Spring context that the configuration is loaded remotely. To do this, replace application.properties with a bootstrap.properties file with the following contents:

spring.application.name=example-service
spring.cloud.config.uri=http://localhost:8888
Note that in the git repository the configuration is stored hierarchically. Each backend service has an identifier defined in its bootstrap.properties (spring.application.name); the configuration of a service with identifier 'service_1' is stored in the file service_1.properties. You can also define a common application.properties that is inherited by all services, and each service can override it in its own file.
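
As an illustration, the configuration repository for the demo service below could contain something like this (the file names follow the convention just described; the property value is only an example):

app_configuration/
    application.properties        # common configuration shared by every service
    example-service.properties    # configuration for spring.application.name=example-service

# example-service.properties
message.hello=Hello from the config server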

Now, when our service starts up, it asks the configuration service for its configuration before the Spring application context is configured.

Then we can create a REST service that reads a property (message.hello) provided by the configuration service.

@RestController
@RequestMapping("/example")
@RefreshScope
public class ExampleService {

    @Value("${message.hello}")
    private String message;

    @RequestMapping(method = RequestMethod.GET)
    public String getExample() {
        return message;
    }
}

Ok, the work is done, but we can improve our solution with Spring Boot Actuator. If we annotate our REST service with @RefreshScope, Actuator can refresh this bean, so you can reload the configuration without restarting your backend service. Let's see how.

Add the dependency :

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

After that, make sure your class is annotated with @RefreshScope. Finally, you can refresh your beans using the endpoint that Actuator exposes, like this:

curl -d {} http://localhost:8080/refresh

And that is all. I hope you found it interesting. See you in the next post.

You can find all the code in the listed links:

Tuesday, May 24, 2016

Multitenancy with Spring Boot

For those who are not familiar with the term multitenancy, let me start by answering some questions that may come to your mind.

What is multitenancy?

It is a software architecture where the same instance of the software serves multiple tenants. You can think of a tenant as an organization: for example, if I have a software platform to sell TVs, my tenants could be Samsung, Philips and Sony. The good thing is that with this approach you share resources, which can be attractive on cloud platforms. On the other hand, nothing is free: you have to take care of security, data isolation and other concerns. Before deciding whether this kind of architecture matches your needs, study your requirements in depth.

How to implement that?

I will not delve into the details that motivate the choice of one strategy over another. There are three common approaches:
  • Separate database per tenant.
  • Separate schema per tenant.
  • Share database and schema.

Ok, we are ready to start. In this example, I am going to implement the first option (separate database per tenant). Note that the tenants are added dynamically: adding a tenant's configuration to application.yml is enough for the application to become aware of the new tenant.

Remember that this example uses Spring Boot and Hibernate.

First of all, we have to add the multitenancy configuration we want to application.yml. In this case, I have two tenants.

multitenancy:
  tenants:
    -
      name: tenant_1
      default: true
      url: jdbc:mysql://localhost:3306/tenant_1?serverTimezone=UTC
      username: user
      password: pass
      driver-class-name: com.mysql.jdbc.Driver
    -
      name: tenant_2
      default: false
      url: jdbc:mysql://localhost:3306/tenant_2?serverTimezone=UTC
      username: user
      password: pass
      driver-class-name: com.mysql.jdbc.Driver
We also have to exclude the default data source auto-configuration that Spring Boot provides.

// we have to exclude the default configuration, because we want to provide
// our own multi-tenant data source configuration
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
The next step is to provide a mechanism to determine, at runtime, which tenant is accessing the application instance. For that, Hibernate provides the CurrentTenantIdentifierResolver interface to implement.

@Component
public class CurrentTenantIdentifierResolverImpl implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return RequestContextHolderUtils.getCurrentTenantIdentifier();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
RequestContextHolderUtils is a class that determines the tenant based on a pattern contained in the URI of the request (the tenant identifier is stored as a request attribute, for example by a request interceptor that is not shown here).

public class RequestContextHolderUtils {

    private static final String DEFAULT_TENANT_ID = "tenant_1";

    public static final String getCurrentTenantIdentifier() {
        RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
        if (requestAttributes != null) {
            String identifier = (String) requestAttributes.getAttribute(CustomRequestAttributes.CURRENT_TENANT_IDENTIFIER, RequestAttributes.SCOPE_REQUEST);
            if (identifier != null) {
                return identifier;
            }
        }
        return DEFAULT_TENANT_ID;
    }
}
Now we need a mechanism to select the correct database (DataSource) based on the tenant that is currently accessing the application. Once you have the tenant id (previous step), you only have to extend the class AbstractDataSourceBasedMultiTenantConnectionProviderImpl provided by Hibernate.

public class DataSourceBasedMultiTenantConnectionProviderImpl extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {

    private static final long serialVersionUID = 1L;

    private String defaultTenant;
    private Map<String, DataSource> map;

    public DataSourceBasedMultiTenantConnectionProviderImpl(String defaultTenant, Map<String, DataSource> map) {
        super();
        this.defaultTenant = defaultTenant;
        this.map = map;
    }

    @Override
    protected DataSource selectAnyDataSource() {
        return map.get(defaultTenant);
    }

    @Override
    protected DataSource selectDataSource(String tenantIdentifier) {
        return map.get(tenantIdentifier);
    }

    public DataSource getDefaultDataSource() {
        return map.get(defaultTenant);
    }
}
Note that the Map of data sources (a field of our DataSourceBasedMultiTenantConnectionProviderImpl) is built when the Spring context is loaded, reading the multitenancy properties from application.yml. This configuration is performed in a class annotated with @Configuration; you can see the code below.

@Configuration
@EnableConfigurationProperties(MultitenancyConfigurationProperties.class)
public class MultitenancyConfiguration {

    @Autowired
    private MultitenancyConfigurationProperties multitenancyProperties;

    @Bean(name = "multitenantProvider")
    public DataSourceBasedMultiTenantConnectionProviderImpl dataSourceBasedMultiTenantConnectionProvider() {
        HashMap<String, DataSource> dataSources = new HashMap<String, DataSource>();
        multitenancyProperties.getTenants().stream().forEach(tc -> dataSources.put(tc.getName(), DataSourceBuilder
                .create()
                .driverClassName(tc.getDriverClassName())
                .username(tc.getUsername())
                .password(tc.getPassword())
                .url(tc.getUrl()).build()));
        return new DataSourceBasedMultiTenantConnectionProviderImpl(multitenancyProperties.getDefaultTenant().getName(), dataSources);
    }

    @Bean
    @DependsOn("multitenantProvider")
    public DataSource defaultDataSource() {
        return dataSourceBasedMultiTenantConnectionProvider().getDefaultDataSource();
    }
}

The last step is to implement a REST controller to access the data. Note that the URI of the endpoint contains the 'tenantId' pattern (example requests are shown after the listing). Of course, there are other ways to identify the tenant, for example using domains or subdomains.

@RestController
@RequestMapping("/{tenantId}/invoice")
public class InvoiceController {

    @Autowired
    private InvoiceRepository invoiceRepository;

    @RequestMapping(method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Invoice> invoices() {
        return Lists.newArrayList(invoiceRepository.findAll());
    }
}
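
With the two tenants configured above, requests are routed to different databases depending on the URI (illustrative calls, assuming the application runs on the default port 8080):

curl http://localhost:8080/tenant_1/invoice   # invoices read from the tenant_1 database
curl http://localhost:8080/tenant_2/invoice   # invoices read from the tenant_2 database
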
Finally, you can find all the implementation in my GitHub profile here


Feel free to contact me with any questions or doubts.

Happy coding!

Sunday, May 22, 2016

Template method and Builder pattern using lambdas

Today I want to show you how easy it is to implement the Template Method pattern using lambdas and the Consumer functional interface in Java 8. To illustrate the example, we are going to develop software that sends commands to a robot.

 Before a command is sent, we need to establish the connection with the robot and set it to the ready status (ready to receive commands). After the command execution, the robot must be set back to the idle status. So each command execution requires the 4 steps listed below.

  1. Establish connection with the robot.
  2. Set the robot to ready status.
  3. Execute the command.
  4. Set the robot to idle status.
Take a look at how to implement this in Java 8 using lambdas.

RobotCommand.execute(command ->
    command.cmd("MOVE")
           .arg("-from 130")
           .arg("-to 200")
           .arg("-speed 5"));
As you can see in the code above, the client does not worry about how to obtain a robot connection; it only describes the command it wants to execute on the robot. At the same time, we are using the lambda to provide a fluent interface to configure the command (Builder pattern).

public class RobotCommand {

    private String command;
    private ArrayList<String> arguments;

    protected RobotCommand() {
        this.arguments = new ArrayList<String>();
    }

    public RobotCommand cmd(String command) {
        this.command = command;
        return this;
    }

    public RobotCommand arg(String argument) {
        this.arguments.add(argument);
        return this;
    }

    public String getCommand() {
        return command;
    }

    public List<String> getArguments() {
        return arguments;
    }

    public static void execute(Consumer<RobotCommand> request) {
        System.out.println("Establishing robot connection...");
        System.out.println("Set robot on status READY");
        RobotCommand robotCmd = new RobotCommand();
        request.accept(robotCmd);
        StringBuilder params = new StringBuilder();
        robotCmd.getArguments().stream().forEach(arg -> params.append(arg + " "));
        System.out.println(String.format("Execute robot command: %s %s", robotCmd.getCommand(), params));
        System.out.println("Set robot on status IDLE");
    }
}
The method 'execute' acts as the template method, and you can see how the RobotCommand object is instantiated locally. Then, using 'accept', the created RobotCommand is passed to the lambda.

You can find the complete code here

Saturday, May 14, 2016

Decorator pattern using lambdas

The patterns described in the Gang of Four book persist over time. Although new tools and frameworks appear every day, the underlying principles of software engineering don't change as often as it may seem (fortunately :P).

Most of the frameworks and tools you are using right now are abstractions over design patterns and concepts that may be older than you. They show you another face of the design pattern, usually intending to hide some complexity.

Today I want to show you what shape (face) the Decorator pattern takes when it is implemented with the lambdas provided by Java 8. I think the key is to start thinking of functions as objects.

Let's start. I like pizza a lot, especially on those days when you don't want to cook, which is quite often.
So you pick up the phone and order a pizza. Your pizza has a base price, and for each complement you add to your order, the price increases.

Take a look at this approach to implementing the pizza-ordering software.


// The client orders a Barbacoa pizza. The base price of the pizza is 15 dollars.
Pizza myOrder = Pizza.newPizza(15, baicon(), cheese(), tomato(), barbacoaSauce());

// The client adds some complements.
// We are decorating the pizza with complements.
myOrder.setComplements(
        Complements::chips,
        Complements::extraDrink,
        Complements::iceCream,
        Complements::cinemaDisccount);

// Finally get the price
System.out.println("Base price of the barbacoa pizza: " + myOrder.getBasePrice());
System.out.println("Total price to pay with the complements: " + myOrder.getTotalPrice());

Here is the interesting part. Note that the reduce method combines all the complements into a single function by chaining them with andThen (each complement in the Complements class, not shown here, is a static Pizza -> Pizza method that returns a pizza with the complement's price added).

public class Pizza {

    private Ingredient[] ingredients;
    private Function<Pizza, Pizza> complement;
    private int basePrice;

    public static Pizza newPizza(int price, Ingredient... ingredients) {
        return new Pizza(price, ingredients);
    }

    private Pizza(int basePrice, Ingredient... ingredients) {
        this.ingredients = ingredients;
        this.basePrice = basePrice;
    }

    public void setComplements(Function<Pizza, Pizza>... complements) {
        complement = Stream.of(complements).reduce(Function.identity(), Function::andThen);
    }

    public int getTotalPrice() {
        Pizza p = complement.apply(this);
        return p.getBasePrice();
    }

    public int getBasePrice() {
        return basePrice;
    }
}

private static final Ingredient TOMATO = new Ingredient("tomato", 2);
private static final Ingredient CHEESE = new Ingredient("cheese", 1);
private static final Ingredient EGG = new Ingredient("egg", 3);
private static final Ingredient BAICON = new Ingredient("baicon", 5);
private static final Ingredient BARBACOA_SAUCE = new Ingredient("barbacoa sauce", 2);

public static Ingredient tomato() {
    return TOMATO;
}

public static Ingredient baicon() {
    return BAICON;
}

public static Ingredient egg() {
    return EGG;
}

public static Ingredient cheese() {
    return CHEESE;
}

public static Ingredient barbacoaSauce() {
    return BARBACOA_SAUCE;
}

As you can see, this solution is more expressive and also avoids the boilerplate code of the traditional way of implementing the Decorator pattern.

You can find the complete code in my Github profile here
Thank you very much, we keep sharing knowledge.

Sunday, May 8, 2016

Avoiding denial of service with rate limits

There are a lot of guides to help us define good RESTful services. They mainly talk about the following points:

  • Restful URLs and actions
  • Usability
  • Security
  • Versioning
  • Hateoas
  • Documentation
  • Cache
  • Pagination
  • Filtering
  • Response envelopes
  • Rate limit
  • ...... and so on

In this post we are going to focus on the rate limit aspect. The next screencast presents an approach to implementing a rate limit per client IP, based on the RateLimiter class provided by the Guava library.
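
The core idea is to keep one RateLimiter per client IP and reject requests when no permit is available. A minimal sketch of that idea, written here in Scala with an illustrative limit of 10 requests per second (the screencast shows one way to plug this kind of check into a REST service):

import com.google.common.util.concurrent.RateLimiter
import scala.collection.concurrent.TrieMap

object IpRateLimiter {
  // one limiter per client IP, created lazily on the first request
  private val limiters = TrieMap.empty[String, RateLimiter]

  // returns false when the caller has exhausted its quota and should get an HTTP 429
  def allow(clientIp: String): Boolean =
    limiters.getOrElseUpdate(clientIp, RateLimiter.create(10.0)).tryAcquire()
}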



See you in the next post!

Thursday, April 7, 2016

Improve your performance with Spring Cache

Spring provides an abstraction layer to work with different cache providers. It is very useful for improving the performance of your software by avoiding recomputing data that has already been calculated and whose state has not changed. To learn more about this, you can check the Spring documentation here.

In the next screencast you can see a simple example of the different annotations supported by the Spring Cache abstraction.
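
As a quick taste of what those annotations look like, here is a minimal sketch (in Scala, the language this blog is using these days; the service and cache names are illustrative, and caching must be enabled with @EnableCaching in your configuration):

import org.springframework.cache.annotation.{CacheEvict, Cacheable}
import org.springframework.stereotype.Service

@Service
class PriceService {

  // the first call for a given product computes the value, later calls return the cached one
  @Cacheable(Array("prices"))
  def priceOf(productId: String): Double = {
    Thread.sleep(2000) // simulate an expensive computation
    productId.length * 10.0
  }

  // removes the cached entry so the next priceOf call recomputes it
  @CacheEvict(Array("prices"))
  def refreshPrice(productId: String): Unit = ()
}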


I hope you will find it interesting.

Monday, April 4, 2016

Serializing views with Jackson

As the title of this article says, you can define different views of the object which you want to serialize, typically the response of a REST service.

For example, you can define a view called Summary and another called Complete. Let me show you with a code example.

First of all, you have to define, in your DTO, the required views. In this case we have two views, MessageSummary and MessageEntire.


public class Message {

    @JsonView(MessageView.MessageSummary.class)
    private String content;

    @JsonView(MessageView.MessageSummary.class)
    private String author;

    @JsonView(MessageView.MessageEntire.class)
    private int likes;

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }

    public int getLikes() {
        return likes;
    }

    public void setLikes(int likes) {
        this.likes = likes;
    }
}

Note that the type used to define your views is an interface that contains other interfaces. It is also important to note that the MessageEntire view extends MessageSummary, so all fields marked with MessageSummary are included in the MessageEntire view by inheritance.


public interface MessageView {
    interface MessageSummary {}
    interface MessageEntire extends MessageSummary {}
}


Finally, from your REST controller you can choose the view required for each response. In our case, when we create a new message it doesn't make sense to return the likes field, because it is always zero, whereas it does make sense when the GET method is invoked (sample responses are shown after the listing).

@RestController
public class MessageController {

    @RequestMapping(method = RequestMethod.POST)
    @JsonView(MessageView.MessageSummary.class)
    public Message createMessage() {
        Message m = new Message();
        m.setAuthor("John Doe");
        m.setContent("I am John Doe");
        m.setLikes(0);
        return m;
    }

    @JsonView(MessageView.MessageEntire.class)
    @RequestMapping(method = RequestMethod.GET)
    public Message getMessage() {
        Message m = new Message();
        m.setAuthor("John Doe");
        m.setContent("I am John Doe");
        m.setLikes(23);
        return m;
    }
}
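
With these annotations in place, the serialized responses should look roughly like this (illustrative output):

// POST response, MessageSummary view: the likes field is hidden
{"content":"I am John Doe","author":"John Doe"}

// GET response, MessageEntire view: all fields are included
{"content":"I am John Doe","author":"John Doe","likes":23}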


Jackson Power!

Monday, March 28, 2016

Tracer , support to trace interactions with your Spring Beans.


Today I want to share with you an interesting utility called Tracer that helps you trace interactions with your Spring beans. In general, it is very useful for gathering information about the performance of your business logic.

It is also provided as a starter (following the Spring Boot philosophy): just by putting the Tracer jar on the classpath, a default configuration is loaded and it is ready to work without any configuration effort from the developer.

Take a look here.

Thursday, March 24, 2016

Focus Driven Development (FDD)

First came TDD (Test Driven Development), then BDD (Behavior Driven Development) and finally DDD (Domain Driven Design)... and now FDD. Are we crazy?... Probably ;).

 I will not explain the concepts of TDD, BDD and DDD in depth (you can google them), but note that each one relates to a different level of detail of the development. Let me explain with a simple example.

 If you have to implement a technical design, for example a simple class called HelloWorld, you can use TDD to ensure that your technical design is well implemented; in other words, that your HelloWorld class covers all the requirements specified in the technical design: parameter preconditions, exception handling, and so on. After that, if this class is used to implement some features, you can use BDD to ensure that all acceptance criteria are met. Finally, you can orchestrate the growth of your software using the paradigm proposed by DDD.

 Now that we have some context, we can ask the magic question: where does FDD fit into all of this? Sometimes an image says more than a thousand words.



Your development is FDD compliant when you ensure that your CTO, product owner, software architect, software analyst and software developers are working well focused, and that this focus is properly propagated through the different software stages.

FDD is like a good smoothie with TDD, BDD and DDD as ingredients.




Saturday, February 13, 2016

Polymorphic endpoints with Jackson

When you are designing an application model, one of the most useful techniques is polymorphism. It allows us to write more generic, more maintainable code that is sometimes OCP compliant (Open/Closed Principle, see SOLID).

The same concept applies when you are designing a REST API. In other words, the same endpoint can accept different JSON structures. With this technique, you avoid creating a different endpoint for each JSON format, and your REST API becomes more comfortable and friendly for your clients.

Imagine you have to design an endpoint to create tasks, but you have to be able to create both user tasks and group tasks. The first step is to design the request model for this endpoint.

public interface CreateTaskRequest {
    String getTitle();
    String getDescription();
    TaskType getType();
}

As you can see below, with the @JsonTypeInfo annotation we tell Jackson (the library used in this example to convert JSON into Java objects) which property to use to find the class to deserialize to. When the property in the incoming JSON is "group", Jackson deserializes to a GroupCreateTaskRequest object, and to a UserCreateTaskRequest when it is "user". With the @JsonSubTypes annotation we map each type property value to its child class (sample request bodies are shown after the listing).

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "taskType")
@JsonSubTypes({ @Type(value = UserCreateTaskRequest.class, name = "user"), @Type(value = GroupCreateTaskRequest.class, name = "group") })
public abstract class AbstractCreateTaskRequest implements CreateTaskRequest {

    private String title;
    private String description;
    private TaskType taskType;

    public AbstractCreateTaskRequest(TaskType taskType) {
        this.taskType = taskType;
    }

    @Override
    public String getTitle() {
        return this.title;
    }

    @Override
    public String getDescription() {
        return this.description;
    }

    @Override
    public TaskType getType() {
        return this.taskType;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}

public class UserCreateTaskRequest extends AbstractCreateTaskRequest {

    private String userName;

    public UserCreateTaskRequest() {
        super(TaskType.USER);
    }

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}

public class GroupCreateTaskRequest extends AbstractCreateTaskRequest {

    private String groupName;

    public GroupCreateTaskRequest() {
        super(TaskType.GROUP);
    }

    public String getGroupName() {
        return groupName;
    }

    public void setGroupName(String groupName) {
        this.groupName = groupName;
    }
}
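
For example, request bodies like these (the field values are illustrative) are deserialized into a UserCreateTaskRequest and a GroupCreateTaskRequest respectively:

{ "taskType": "user", "title": "Write report", "description": "Quarterly sales report", "userName": "john.doe" }

{ "taskType": "group", "title": "Deploy release", "description": "Deploy version 1.2", "groupName": "backend-team" }
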
Finally, our REST controller looks like this.

@RestController
@RequestMapping("/task")
public class TaskController {

    @RequestMapping(method = RequestMethod.POST)
    public void createTask(@RequestBody CreateTaskRequest createTaskRequest) {
        if (TaskType.GROUP.equals(createTaskRequest.getType())) {
            System.out.println("Creating group task");
        }
        if (TaskType.USER.equals(createTaskRequest.getType())) {
            System.out.print("Creating user task");
        }
    }
}


You can find the code of this example here.

Sunday, January 3, 2016

Make your application more resilient with Retry4j



Nowadays, it is very important to provide a quality service to your users. That means your application's uptime must be uninterrupted, or at least very close to it. There are patterns to help meet these requirements and, of course, libraries that wrap those patterns, such as the well-known Hystrix from Netflix.

But alongside big projects like Hystrix there are always more minimalistic or simpler projects like Retry4j. This project implements stability patterns like Timeout and Circuit Breaker in a very easy way using builders. It has no external dependencies, and you can integrate the Retry4j objects with your IoC container without problems.

Take a look at its documentation here.

Friday, January 1, 2016

Mapping beans with Orika library

Today I want to talk about one of those useful tools that sometimes make our lives easier :).

Normally, when you are designing an enterprise application, your domain model is contained in your application's service layer. Above this layer (controllers), the communication is performed using DTOs, so you only transfer the information required by your different clients (web user interface, another backend application, mobile applications and so on). This is where Orika can help us.

Let me show you an example. Imagine you have this domain object.

package com.hdbandit.orika_mapper;
public class Person {
private String name;
private String surname;
private int age;
private String address;
private String jobLocation;
private int salary;
private String jobCategory;
public String getJobLocation() {
return jobLocation;
}
public void setJobLocation(String jobLocation) {
this.jobLocation = jobLocation;
}
public int getSalary() {
return salary;
}
public void setSalary(int salary) {
this.salary = salary;
}
public String getJobCategory() {
return jobCategory;
}
public void setJobCategory(String jobCategory) {
this.jobCategory = jobCategory;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getSurname() {
return surname;
}
public void setSurname(String surname) {
this.surname = surname;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getAddress() {
return address;
}
public void setAddress(String address) {
this.address = address;
}
@Override
public String toString() {
return "Person{" +
"name='" + name + '\'' +
", surname='" + surname + '\'' +
", age=" + age +
", address='" + address + '\'' +
", jobLocation='" + jobLocation + '\'' +
", salary=" + salary +
", jobCategory='" + jobCategory + '\'' +
'}';
}
}

Imagine you need two different endpoints to serve information about persons. One of them must return basic person information, and the other must return the complete information about the person. As you can imagine, you need two DTOs to transfer the required information in the response body (using JSON or whatever you want).

Here are our DTOs.

public class PersonBasicInfoDTO {
private String name;
private String surname;
private int age;
private String address;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getSurname() {
return surname;
}
public void setSurname(String surname) {
this.surname = surname;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getAddress() {
return address;
}
public void setAddress(String address) {
this.address = address;
}
@Override
public String toString() {
return "PersonBasicInfoDTO{" +
"name='" + name + '\'' +
", surname='" + surname + '\'' +
", age=" + age +
", address='" + address + '\'' +
'}';
}
}

public class PersonCompleteInfoDTO {
private String name;
private String surname;
private int age;
private String address;
private String jobLocation;
private int salary;
private String jobCategory;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getSurname() {
return surname;
}
public void setSurname(String surname) {
this.surname = surname;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getAddress() {
return address;
}
public void setAddress(String address) {
this.address = address;
}
public String getJobLocation() {
return jobLocation;
}
public void setJobLocation(String jobLocation) {
this.jobLocation = jobLocation;
}
public int getSalary() {
return salary;
}
public void setSalary(int salary) {
this.salary = salary;
}
public String getJobCategory() {
return jobCategory;
}
public void setJobCategory(String jobCategory) {
this.jobCategory = jobCategory;
}
@Override
public String toString() {
return "PersonCompleteInfoDTO{" +
"name='" + name + '\'' +
", surname='" + surname + '\'' +
", age=" + age +
", address='" + address + '\'' +
", jobLocation='" + jobLocation + '\'' +
", salary=" + salary +
", jobCategory='" + jobCategory + '\'' +
'}';
}
}


Well, at this point you have to transform the response of our service layer, which returns a domain object, into the different DTOs required by the different endpoints. Orika can do that in a simple way. The key concepts are the MapperFactory and the MapperFacade. By default, Orika maps the properties with the same name and type.

Take a look.

MapperFactory mapperFactory = new DefaultMapperFactory.Builder().build();
// map Person-PersonBasicInfoDTO
mapperFactory.classMap(Person.class, PersonBasicInfoDTO.class)
.byDefault()
.register();
// map Person-PersonCompleteInfoDTO
mapperFactory.classMap(Person.class, PersonCompleteInfoDTO.class)
.byDefault()
.register();
MapperFacade mapper = mapperFactory.getMapperFacade();
Person person = new Person();
// set some field values
person.setAddress("New York");
person.setAge(32);
person.setName("Peter");
person.setSurname("Parker");
person.setJobLocation("New York");
person.setJobCategory("Super Hero");
person.setSalary(0);
// get basic info
PersonBasicInfoDTO basicInfoDTO = mapper.map(person, PersonBasicInfoDTO.class);
// get complete info
PersonCompleteInfoDTO completeInfoDTO = mapper.map(person, PersonCompleteInfoDTO.class);

This example is very simple, but Orika supports more complex mappings as well, for example mapping properties with different names via classMap(...).field("sourceProperty", "targetProperty"). You can find a complete Orika guide here.

Happy new year!