JPA-based Spring Boot with Docker Database that contains snapshot data

This example demonstrates a Spring Data JPA-based Spring Boot setup backed by Docker database images. The images contain initial data to ease local development or to quickly initialize any staging environment.

The core dependencies of the example are as follows:

  • Spring Boot 2.5.0
  • Spring 5.3.7
  • Hibernate 5.4.31.Final
  • PostgreSQL driver 42.2.20
  • MySQL connector 8.0.25 (Alternative Database Option)

We are going to follow the listed steps throughout this example:

  1. Introduce PostgreSQL database as the default database to the application
  2. Create and run a PostgreSQL docker image backed by initial data
  3. Add entities and repositories to the application
  4. Test the initial setup
  5. Introduce MySQL database as a secondary database option to the application
  6. Create and run a MySQL docker image backed by initial data
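The starting pom.xml of the example project is as follows: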
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.5.0</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>net.entrofi.spring</groupId>
	<artifactId>spring-boot-data-jpa</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>spring-boot-data-jpa</name>
	<description>Spring Boot with Spring Data JPA example</description>
	<properties>
		<java.version>11</java.version>
	</properties>
	<dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
		<!-- Database and JPA Dependencies -->
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
				<configuration>
					<excludes>
						<exclude>
							<groupId>org.projectlombok</groupId>
							<artifactId>lombok</artifactId>
						</exclude>
					</excludes>
				</configuration>
			</plugin>
		</plugins>
	</build>

</project>

Introduce PostgreSQL database to the Spring boot application

In order for our application to run against a PostgreSQL database, we need to add the PostgreSQL driver dependency to our pom.xml. Add the following snippet to the dependencies section of your pom file:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency> 

After adding the PostgreSQL driver, we need to configure the Spring datasource:

## Connection pool
spring.datasource.hikari.connectionTimeout=20000
spring.datasource.hikari.maximumPoolSize=5

## PostgreSQL Connection Properties
spring.datasource.url=jdbc:postgresql://localhost:5432/spring_data_jpa_postgres_db
spring.datasource.username=postgres
spring.datasource.password=password

#Let hibernate create the database tables using entity declarations. Of course one should disable this in production environments
#spring.jpa.hibernate.ddl-auto=create

Create and run a PostgreSQL docker image backed by initial data

Creating a PostgreSQL docker image with initial data in it is pretty easy. Add the following Dockerfile under the folder src/docker/databases/postgresql:

# Dockerfile for PostgreSQL
FROM postgres

ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB spring_data_jpa_postgres_db

COPY scripts/* /docker-entrypoint-initdb.d/

The last line in this Dockerfile is required to create our initial data. The PostgreSQL Docker image executes all of the files located under /docker-entrypoint-initdb.d/ in alphabetical order when the container starts up. Therefore, we are going to place all of our initialization scripts under the folder src/docker/databases/postgresql/scripts.

The next step is to create the schema and data initialization scripts. First, create a file named 0_create_tables.sql under src/docker/databases/postgresql/scripts/:

create sequence hibernate_sequence start 1 increment 1;
create table product
(
    id          int8 not null,
    description text,
    ean         varchar(255),
    name        varchar(255),
    primary key (id)
);

Then add the following data initialization script as src/docker/databases/postgresql/scripts/1_initial_data.sql:

INSERT INTO public.product (id, description, ean, name) VALUES (1, '<div id="feature-bullets" class="a-section a-spacing-medium a-spacing-top-small">
       <hr>
       <h1 class="a-size-base-plus a-text-bold">
       About this item
       </h1>
       <ul class="a-unordered-list a-vertical a-spacing-mini">
       <li><span class="a-list-item">
       A brushless motor for more power and a longer term than with a conventional carbon brush engine
       </span></li>
       <li><span class="a-list-item">
       Small and easy for convenient handling and at the same time powerful with a high torque for fast screwing. With Robust 1/2 "Outside Fiercant Recording
       </span></li>                                   
       </ul>
       </div>', 'B01M1RJU2O', 'Einhell cordless impact wrench TE-CW 18 Li BL Solo Power X-Change (lithium ion, 18 V, 215 Nm, LED light, bit adapter for screwing, without battery and charger)');

Now we can build and run the PostgreSQL docker image:

$ docker build -t spring-boot-postgresql-db .
$ docker run --name spring-boot-postgresql-db-ins  -d -p 5432:5432 spring-boot-postgresql-db

Add entities and repositories to the application

We declared a table named product in the previous section. Let's create the entity that represents this table in the JPA context:

package net.entrofi.spring.springbootdatajpa.data.entity;

import lombok.Getter;
import lombok.Setter;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
@Setter
@Getter
public class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    private String ean;

    private String description;
}

The next step is to create the Spring Data repository that provides basic CRUD functionality for this entity:

package net.entrofi.spring.springbootdatajpa.data.repository;

import net.entrofi.spring.springbootdatajpa.data.entity.Product;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface ProductRepository extends JpaRepository<Product, Long> {
}

As the last step of this section, we can add a controller for testing purposes:

import net.entrofi.spring.springbootdatajpa.data.entity.Product;
import net.entrofi.spring.springbootdatajpa.data.repository.ProductRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class ProductController {

    private final ProductRepository productRepository;

    @Autowired
    public ProductController(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @GetMapping("/products")
    List<Product> getProducts() {
        return productRepository.findAll();
    }
}

Test the initial setup of the JPA-based Spring Boot application with the snapshot PostgreSQL Docker image

We are now ready to test the application. Run it with mvn spring-boot:run and send the following request to check whether your database started as expected:

$  curl http://localhost:8080/products
[
    {
        "id": 1,
        "name": "Einhell cordless impact wrench TE-CW 18 Li BL Solo Power X-Change (lithium ion, 18 V, 215 Nm, LED light, bit adapter for screwing, without battery and charger)",
        "description": "<div id=\"feature-bullets\" class=\"a-section a-spacing-medium a-spacing-top-small\">.....                                   </div>",
        "ean": "B01M1RJU2O"
    }
]

Introduce MySQL database as a secondary database option to the Spring boot application

Similar to the PostgreSQL step above, we are going to introduce MySQL as the secondary RDBMS option to our application in this step.
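The dependency list at the top of the post includes MySQL Connector 8.0.25, so the MySQL driver also has to be added to the pom.xml. A minimal addition to the dependencies section could look like this (the version is managed by the Spring Boot parent and can therefore be omitted):

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>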

As we did in the PostgreSQL docker image section, create the folder src/docker/databases/mysql/ to collect MySQL related configurations and add the following files accordingly:

Dockerfile

#src/docker/databases/mysql/Dockerfile
FROM mysql:5.7
LABEL description="My Custom Mysql Docker Image"

# Add a database
ENV MYSQL_DATABASE spring_data_jpa_mysql_db

#Check out docker entry point for further configuration :
# https://github.com/docker-library/mysql
COPY ./scripts/ /docker-entrypoint-initdb.d/

Schema creation:

--0_create_tables.sql
create table product
(
    id          bigint not null,
    description text,
    ean         varchar(255),
    name        varchar(255),
    primary key (id)
);

Initial data:

INSERT INTO product (id, description, ean, name) VALUES (1, '<div id="feature-bullets" class="a-section a-spacing-medium a-spacing-top-small">
       <hr>
       <h1 class="a-size-base-plus a-text-bold">
       About this item
       </h1>
       <ul class="a-unordered-list a-vertical a-spacing-mini">
       <li><span class="a-list-item">
       A brushless motor for more power and a longer term than with a conventional carbon brush engine
       </span></li>
       </ul>
       </div>', 'B01M1RJU2O', 'MYSQL impact wrench');

Run MySQL docker container

We can now build the customized MySQL docker image and run it:

$ docker build -t spring-boot-mysql-db .
$ docker run -d -p 63306:3306 --name spring-boot-mysql-db-ins \
-e MYSQL_ROOT_PASSWORD=password spring-boot-mysql-db

Introduce a new environment-specific properties file

As you might already know, Spring has built-in support for targeting different environments: one can simply define an environment-specific file following the pattern application-[ENVIRONMENT].properties and activate it by enabling a Spring profile with the same environment name. This is what we are going to do here.

Add a file named src/main/resources/application-mysql.properties:

## MySQL
spring.datasource.url=jdbc:mysql://localhost:63306/spring_data_jpa_mysql_db
spring.datasource.username=root
spring.datasource.password=password

# The MySQL schema has no `hibernate_sequence` table, so fall back to the legacy id generators
spring.jpa.hibernate.use-new-id-generator-mappings=false

Finally, our Spring Boot JPA application with a Docker database is ready to run against a MySQL database as well. We can run the application with the mysql profile activated as follows:

mvn spring-boot:run -Dspring-boot.run.profiles=mysql

Conclusion

This tutorial summarized how to create a JPA-based Spring Boot setup with a Docker database that is initialized with snapshot data. You can reach the full code of the article here:

https://github.com/entrofi/spring/tree/master/spring-boot-data-jpa

Java Local Variable Type Inference

Introduction

Once written, a code block is read far more often than it is modified. Given this fact, it's important to keep it as readable as possible for future readers. To help developers with this, the Java team introduced a new feature called "Local Variable Type Inference" in Java 10.

The feature was introduced with JEP 286. Let's take a look at an excerpt from the goals section of the JEP:

Goals
We seek to improve the developer experience by reducing the ceremony associated with writing Java code, while maintaining Java’s commitment to static type safety, by allowing developers to elide the often-unnecessary manifest declaration of local variable types

JEP 286: Local-Variable Type Inference

As noted here, the primary purpose of this feature is to contribute to the language's readability, and that is what we are going to take a look at in this article.

History of Type Inference In Java

To start with, let's look at the history of type inference in Java. Type inference was initially introduced to Java with version 5, which made it possible to omit explicit type arguments when using generics. For instance, consider the following expression:

Set<Integer> integerSet = Collections.<Integer>emptySet();

Since Java 5, it's possible to write this expression in a simpler way:

Set<Integer> integerSet = Collections.emptySet();

The Java compiler infers the generic type of the set returned by the emptySet() method for you.

After this introduction, the Java team expanded the scope of type inference in Java 7 further by introducing the diamond operator. For example, instead of:

Map<Order, List<Product>> productsByOrder = new HashMap<Order, List<Product>>();

You can write:

Map<Order, List<Product>> productsByOrder = new HashMap<>();

Java 8 extended this further to infer the types in Lambda expressions:

Predicate<String> isNonEmptyString = (String x) -> x != null && x.length() > 0;
//OR with inference
Predicate<String> isNonEmptyString = x -> x != null && x.length() > 0;


Local Variable Type Inference

In order to aid readability in certain cases, Java 10 introduced a new feature called "local variable type inference". By using the word "var" in local contexts, we can now omit the explicit type while declaring a local variable. The Java compiler infers the type of the variable from the initializer on the right-hand side of the declaration.

Let’s look through some examples.

Available only for local variables

Local variable type inference is only supported for local variables. Therefore, it cannot be used for method parameters, return types, or member fields.

void someMethod() {
		Map<String, List<Order>> orderMap = new HashMap<>();
}

//Can now be written as

void someMethod() {
		var orderMap = new HashMap<String, List<Order>>();
}

You can only use it when the declaration has an initializer from which the compiler can determine the type, for example a constructor call or a method call that returns a concrete type.

var order = new Order(); // order type is inferred

public List<Order> getCustomerOrders(String customerId){...}

var orderList = getCustomerOrders("1234"); //List of orders is inferred

Furthermore, it can be used as the iteration variable of an enhanced for loop or in a try-with-resources statement.

var stringArray = new String[11];
Arrays.fill(stringArray, "String Array");

for (var stringInArray : stringArray) {
    System.out.println(stringInArray);
}

try (var bufferedReader = new BufferedReader(new FileReader(path))) {
    return bufferedReader.readLine();
}

Has Java become a dynamically typed language now?

No, Java is still a statically typed language: the types of variables declared with var are inferred at compile time, not at runtime.

var counter = 13;
counter = "13";
|  Error:
|  incompatible types: java.lang.String cannot be converted to int
|  counter = "13"

Is var a reserved keyword now?

No, it's not a reserved keyword. This means the following:

  • It's possible to use "var" as a method name.
  • It's possible to use "var" as a variable or field name.
  • Finally, using "var" as a package name is also possible.
//var as a method name is allowed
public void var() {
      System.out.println("Hello var!");
  }

public void someMethod() {
  //var as a variable name is allowed
  var var = 3;
}

public class AClass {
 //var as field name is allowed
  public int var; 
}

//package name is allowed
package var;

On the other hand, you can not use it as a type name.

public class var {

}
//Build output
java: 'var' not allowed here
  as of release 10, 'var' is a restricted type name and cannot
 be used for type declarations

How about the use of local variable inference with polymorphic types?

Assume that we have an interface (or a parent class) Mammal which is implemented by two other types, Leopard and Hippopotamus. For example, when we use a statement like the one below:

var mammal = new Leopard();

does mammal have the type Leopard or Mammal? As an illustration, consider the following statement:

var mammal = new Leopard();
mammal = new Hippopotamus();

Will this work?
No, the assignment in the above snippet won't compile. The compiler infers the concrete type Leopard for mammal, not the parent type Mammal; that is to say, local variable type inference does not play along with polymorphic assignments.
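If the variable really needs to hold any Mammal, the parent type has to be declared explicitly; the following sketch (reusing the example's Mammal, Leopard and Hippopotamus types) illustrates the difference:

Mammal polymorphicMammal = new Leopard();   // declared type: Mammal
polymorphicMammal = new Hippopotamus();     // compiles: any Mammal can be assigned

var inferredMammal = new Leopard();         // inferred type: Leopard
// inferredMammal = new Hippopotamus();     // would not compile: incompatible types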

Local Variable Type Inference Usage and Coding Style Recommendations

Although language features like local variable type inference are there to improve the readability of the code (and to ease writing it), misuse or unconscious use of such features can lead to the complete opposite. Therefore, paying special attention to the style guides and good-practice recommendations is extremely helpful to avoid such situations.

Luckily, java.net has a dedicated style guide page for this feature, "Local Variable Type Inference: Style Guidelines". Although we provide some caveats and summarize some important pieces from these recommendations below, it's definitely worth visiting the page itself.

Let’s have a look at these recommendations and caveats now.

Pay special attention to minimising the scope of local variables

It is a good practice to reduce the scope of local variables in general. This becomes particularly important when using var. For instance, consider the following snippet from java.net’s guide page:

var items = new ArrayList<Item>(...);
items.add(MUST_BE_PROCESSED_LAST);
for (var item : items) ...

Assume that the order of the items in the list matters for further processing in this code snippet. Given this requirement, what happens if a new request requires the removal of duplicate elements? A developer may achieve this goal by converting the list to a HashSet:

var items = new HashSet<Item>(...);
items.add(MUST_BE_PROCESSED_LAST);
for (var item : items) ...

Since sets do not have a defined iteration order, this code now has a bug. Nevertheless, the developer will probably notice and fix the error while making the change, because the scope of the local variable is limited enough to make the bug recognisable at first sight.

Now assume that this variable is used in a bloated method:

var items = new HashSet<Item>(...);
items.add(MUST_BE_PROCESSED_LAST);
//...
//...
//...hundreds of lines of code
//...
//...
for (var item : items) ...

The effect of changing the list is no longer easily recognisable in this case, because the collection is processed far away from its declaration, and the code is now likely to stay in this buggy state longer. Therefore, it's good practice to keep local variable scopes as limited as possible, and to be cautious with var when using it hides necessary type information.

Use var to break up chained or nested expressions

Consider the following code block, which builds a flights-by-destination map where each entry holds the maximum-duration flight for that destination.

Map<String, Flight> flightsByDestMaxDurMap = 
  flightList.stream()
     .collect(groupingBy(
        Flight::getDestination,
        collectingAndThen(
           maxBy(
             Comparator.comparing(Flight::getDuration)), 
             Optional::get)));

The intention is hard to read from this code block because of the nested and chained expressions in the stream processing. A developer can alternatively write this with var as follows:

var maxByDuration = maxBy(Comparator.comparing(Flight::getDuration));

var maxDurationFlightSelection = collectingAndThen(maxByDuration, Optional::get);

var flightsByDestMaxDurMap = flightList
       .stream()
       .collect(
         groupingBy(
          Flight::getDestination,maxDurationFlightSelection));

Of course, it's still justifiable to use the first approach in this case. Nevertheless, when we face longer and more complex chained or nested expressions, breaking up the expression using var can help a lot in terms of readability.

Approaching the conclusion, it's worth mentioning that although we have not covered all of the recommendations in the "Local Variable Type Inference: Style Guidelines" page of java.net, the remaining ones are just as important and valuable. It's definitely worth taking a look at that page.

Conclusion

We took a look at the local variable type inference feature of Java 10 and at some recommendations regarding its use. This new feature can help us make our code more readable as long as we stick to the style guides and recommendations that come along with it.

Happy coding!

Coupling Metrics – Afferent and Efferent Coupling

How do you help your code base stand the test of time using fan-in and fan-out metrics? A short revisit of the afferent and efferent coupling metrics.
There are different definitions of coupling types in software development, and each of them takes a different perspective. There is one shared concept among the definitions, though: coupling in software is about the dependency relationships between modules. This leads us to the generic definition of coupling: "Coupling is the degree of interdependence between software modules" [1]. Anyone who has dealt with coupling must have heard the widely known statement that it's crucial to seek low coupling and high cohesion between software modules to get well-structured, reliable, easy-to-change software. However, how can we know that our software design has the correct level of coupling? To answer this, we first revisit the afferent and efferent coupling concepts in this article. Secondly, we explain the instability index concept, which relates these metrics to each other.

Afferent and Efferent Coupling as the Metrics of Coupling

A software quantum (a module, a class, a component) is considered more stable if it is loosely coupled to the other quanta in the system, because it will remain undisturbed by changes introduced to the others. The afferent and efferent coupling metrics, which were initially defined by Robert C. Martin in his books Agile Software Development and Clean Architecture, help us understand the probability that a change in a software module will force a change in others. Similarly, they guide us to see the tendency of a module to be affected when an error occurs in other modules of the system.

Afferent Coupling – Fan-in

The afferent coupling metric defines the number of modules that depend on a specific module. A module with a high fan-in value is likely to induce changes in the components that depend on it. On the other hand, changes to those dependants are unlikely to induce changes in this component.

Figure: An example of afferent (fan-in) coupling between components.

For instance, the PageUrlGenerator class in the above diagram has three first-level dependants, and four dependants in total, so it is said to have high afferent coupling.

As we mentioned before, components with high afferent coupling have a smaller chance of being affected by changes introduced to their dependants; therefore, these components are also called "stable components". When we see a module (component, etc.) with high fan-in coupling, we can also assume that code reuse is high from the perspective of that module.

Relying on these statements, one might instinctively think that a high fan-in value is always a good thing. While this may be true in most cases, consider a "stable" component that depends on a highly flexible component: what happens then? Alternatively, assume that we have another so-called "stable" component with a high fan-in value that tries to handle many responsibilities; in fact, it acts as a Swiss army knife. Can we confidently say that we did a good job in either of these cases? Absolutely not! We will elaborate on this after we define the efferent coupling metric and the instability index.

Efferent Coupling – Fan-out

Efferent coupling, on the other hand, defines the number of components on which a certain component depends. Components with a high efferent coupling value are sensitive to changes introduced to their dependencies. In addition, the deficiencies of their dependencies naturally manifest themselves in these components.

Figure: An example of efferent (fan-out) coupling.

As an illustration, let's focus on the LiveUrlGenerator class in the diagram above; this class extends the GenericViewUrlGenerator class and has a "has-a" relation to the classes MarketingAgentUrlGenerator and LoginRedirectUrlGenerator. If any of its dependencies undergoes a change, this class needs to adapt to it as well. Similarly, if a bug manifests itself in one of these dependencies, LiveUrlGenerator suffers from it too. Therefore, we call this kind of component "unstable".

This leads to another question:

-Should we try to avoid high fan-out value all the time?

-No!

In fact, some components need to be "unstable" (or flexible) by nature. For instance, consider a "Backend for Frontend" microservice that uses other downstream services to collect and orchestrate the information the frontend application needs. If we try to reduce the fan-out metric of such a component, we might end up either with a completely irresponsible component that does nothing or with an overly responsible component that tries to do everything (see also: coincidental cohesion, the death star antipattern). That is to say, coupling, whether fan-in or fan-out, is inevitable and actually mandatory; the real question is: what is the correct level or style of coupling? We will try to explain this after touching on the concept of the "instability index".

Instability Index

The instability index relates the fan-out coupling of a component to its total incoming and outgoing coupling with the following formula:

I = Ce / (Ca + Ce), where Ca is the afferent coupling (fan-in) and Ce is the efferent coupling (fan-out).

The instability index ranges between 0 and 1 inclusive. A value tending towards 0 means the component is heading towards maximal stability, whereas the value 1 indicates maximal instability.
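As a quick worked example, the calculation can be expressed directly in Java; the coupling counts below are hypothetical illustrations for the classes from the diagrams above, not measured values:

// A minimal sketch of the instability calculation: I = Ce / (Ca + Ce).
public final class InstabilityExample {

    static double instability(int afferent, int efferent) {
        return (double) efferent / (afferent + efferent);
    }

    public static void main(String[] args) {
        // PageUrlGenerator: many dependants (Ca = 4), assumed to have no outgoing dependencies (Ce = 0).
        System.out.println("PageUrlGenerator I = " + instability(4, 0)); // 0.0 -> maximally stable

        // LiveUrlGenerator: depends on three classes (Ce = 3), assumed to have a single dependant (Ca = 1).
        System.out.println("LiveUrlGenerator I = " + instability(1, 3)); // 0.75 -> rather unstable
    }
}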

What is the correct level of instability?

We can answer this question with the famous software consultant's reply: "it depends!". It depends because the role of a component within the architectural constellation might require that component to be flexible, or as stable as possible. The goal, in general, is not to have individually stable components all over the system, but rather to have a stable system that is capable of responding to change in a seamless, adaptive manner. This leads us to the "Stable Dependencies Principle".

The “Stable Dependencies Principle” states that:

The instability metric of a component should be larger than the instability metrics of the components that it depends on. That is, the instability metrics should decrease in the direction of dependency.

(See [3, Chapter 14])

How to check the stability of a system?

Luckily, there are libraries that ease the implementation of this kind of architectural wellness check: JDepend for Java, NDepend for .NET, and PDepend for PHP are a few examples. In addition, there are many static code analysis tools, like Checkstyle, that make use of such libraries to provide configurable utilities for handling such concerns effortlessly.

References and Further Reading

  1. Coupling (computer programming). (2019, December 19). Retrieved January 3, 2020, from https://en.wikipedia.org/wiki/Coupling_(computer_programming).
  2. Martin, R. C. (2003, 2014). Agile software development: Principles, patterns, and practices.
  3. Martin, R. C. (2018). Clean Architecture: a craftsman’s guide to software structure and design

Github's Scientist as a helper to do large refactorings

Github's Scientist is a tool for creating fitness functions for critical-path refactorings in a project. It relies on the idea that, for a large enough system, the behavior or data complexity makes it hard to refactor the critical paths with the help of tests alone. If we can run the new path and the old path in parallel in production without affecting the current behavior, and compare the results, then we can decide more confidently on the best moment to switch to the new path.
I created a simple example to demonstrate the application of the concept in a Java project.

Prologue

Some software companies don't pay enough attention to the overall quality of their codebase; we might even say that this is a common pattern in the software business. The rationale for such behavior is often the claim that "fast delivery to market" is far more important than the code quality aspects of the product, and that what matters for the business is simply whether the functionality is there.

During the early stages of a project, this claim might have the (false) appearance of being true: your codebase has not grown that large yet, and you are delivering to your customers with "unbelievable" velocity. Since this is the case, there seems to be no point in caring about this technical nonsense. However, as time goes by, this kind of approach causes technical debt to pile up. It slowly starts to cripple your ability to maneuver in the market, makes your code harder to change, and degrades your developers' motivation.

Excuses don’t help your software to stay healthy

This situation resembles the self-apologia of a person who consumes junk food and does no exercise because she thinks she lacks the time; there is always the excuse of having something more important to do. But, you know, it might be too late for her when she starts to face life-paralyzing issues like vascular occlusion, heart failure, and reduced lung capacity.

Nevertheless, some people are capable of confronting such future-troubling practices in time. No matter how difficult it is to overcome the problems caused by a sloppy lifestyle, they show strong dedication to surmounting them.

A goal without a proper strategy is just a goldfish in a bowl!

This strong dedication is also valid for some companies. As a consequence of different kinds of technical debt and code smells (deep coupling, outdated design, death star classes, and many others), their codebases might have been going south for a while. However, they show a strong dedication to altering this situation.

Dedication is only the starting point, though. Refactoring under such circumstances is a tedious, long-running job. Thereby, if you do not have a proper strategy, not long after the beginning you start to feel like a goldfish in a bowl.

Implementing dependable test suites is a common first step of a refactoring strategy. The question is: are tests enough to refactor a considerably large legacy codebase?

The idea behind Github’s Scientist

The idea is simple: For a large enough system, it’s hard to cover all cases by tests. In order to refactor safely, implement a way to run the candidate and the old code path against production data, but do not let the candidate path have any effects on the final result. Compare the results of each execution and record the differences. When you’re confident that there is no mismatch anymore, switch to the new implementation.

Figure: Github's Scientist flow for experimenting with large refactorings.

Java alternatives of Github’s Scientist

Github's Scientist was originally implemented as a Ruby library; the implementations for other languages are mostly done by independent developers. For Java, there are currently two alternatives: Scientist4J and Poptchev's Kotlin-based scientist. Since the original library is relatively small, it's also possible to implement your own version of "Github's Scientist for Java".

Example

As I mentioned in the previous section, two Scientist alternatives exist for the Java platform at the moment. We are going to use Scientist4J in this example.

The scenario of the example

Although the actual application scenarios are far more complicated, let’s use a hello-world scenario to represent the idea of Github’s Scientist.

Our application will expose a REST endpoint to greet its consumers. The endpoint will use a path parameter to get the name of the caller and pass it to its backing business service.

The backing business service interface is called "GreeterService". In the beginning, "GreeterService" has only the old implementation, "OldGreeterService".

Because of ever-changing market conditions, we will have to implement a new version of the "GreeterService": the "NewGreeterService".

The greeting feature is a critical one for our business, so we do not want to change it directly after the implementation. We would like to check if the new service behaves exactly like the old one for a while. Therefore, we are going to introduce an additional abstraction layer to use the “Scientist” for our Java environment.

You can check out the example code from here.

Implementation of the Scenario

We stated that we are going to use Scientist4J for this example. The setup of the tool is easy: add the dependency to your pom or build.gradle file.

We will expose a REST endpoint to serve our clients:
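The controller code was embedded as a gist in the original post; a minimal sketch of such an endpoint, matching the /greet/{name} path used in the curl example later, is:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreeterController {

    private final GreeterService greeterService;

    public GreeterController(GreeterService greeterService) {
        this.greeterService = greeterService;
    }

    // Delegates the path variable to the backing business service.
    @GetMapping("/greet/{name}")
    public String greet(@PathVariable String name) {
        return greeterService.greet(name);
    }
}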

The backing service of our rest controller is the “GreeterService“. It returns “Hello <name>” for the provided name.

Our old version of the service:
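The original interface and old implementation were embedded as gists and are not part of this export; a plausible sketch, assuming a single greet(String) method, is:

public interface GreeterService {
    String greet(String name);
}

// In its own file: the battle-tested path that always greets with "Hello <name>".
public class OldGreeterService implements GreeterService {

    @Override
    public String greet(String name) {
        return "Hello " + name;
    }
}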

GreeterService” is defined in injection context as follows:
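The wiring code is likewise missing here; a minimal sketch, assuming a configuration class named ServiceConfigurer as referenced later in the post, could be:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServiceConfigurer {

    // Before the experiment is introduced, the old implementation backs the endpoint.
    @Bean
    public GreeterService greeterService() {
        return new OldGreeterService();
    }
}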

In order to simulate a random mismatch between new and old implementations, the “NewGreeterService” chooses a salutation word from an array of salutation words. This way we can demonstrate how Scientist works:
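The new implementation is also not included in this export; a sketch that picks a random salutation word, so that some calls differ from the old path and produce mismatches, might look like this:

import java.util.Random;

public class NewGreeterService implements GreeterService {

    private static final String[] SALUTATIONS = {"Hello", "Hi", "Hey"};
    private final Random random = new Random();

    // Occasionally returns a different salutation than the old path,
    // which lets the experiment report mismatches.
    @Override
    public String greet(String name) {
        return SALUTATIONS[random.nextInt(SALUTATIONS.length)] + " " + name;
    }
}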

Introduce the Experiment

The idea was to run both of these implementations in the production environment, right? To achieve this, we are going to use a simplified version of the “Branch by Abstraction” pattern. Our abstraction flow is shown in the figure below.

Figure: Experiment abstraction flow (Scientist for Java).

Let’s implement the abstraction layer “ExperimentingGreeterService” so that we can make use of Scientist for Java. Here is our experimenting service implementation:
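The implementation was embedded as a gist; the sketch below uses the plain Scientist4J Experiment class instead of the post's extended subclass (see the note that follows), and the package name and run(control, candidate) signature are taken from the Scientist4J README, so treat them as assumptions:

import com.github.rawls238.scientist4j.Experiment;

public class ExperimentingGreeterService implements GreeterService {

    private final Experiment<String> experiment = new Experiment<>("greet");
    private final GreeterService oldGreeterService;
    private final GreeterService newGreeterService;

    public ExperimentingGreeterService(GreeterService oldGreeterService,
                                       GreeterService newGreeterService) {
        this.oldGreeterService = oldGreeterService;
        this.newGreeterService = newGreeterService;
    }

    @Override
    public String greet(String name) {
        try {
            // Runs both paths, records timings and mismatches, and always returns the control (old) result.
            return experiment.run(() -> oldGreeterService.greet(name),
                                  () -> newGreeterService.greet(name));
        } catch (Exception e) {
            throw new IllegalStateException("Experiment execution failed", e);
        }
    }
}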

Note: An extended version of Scientist4J's Experiment class is used in the "ExperimentingGreeterService". This way, we are able to compare the service return values for reporting.

The critical section of this class is the implementation of the “greet” method. The actual experimentation is created via Supplier instances, which are passed into the “run” method of the “Experiment” class. The instance of the “Experiment” class executes both paths and returns the result of the old one.

Scientist4J uses Dropwizard metrics for reporting purposes. In order to make use of these metrics, we need to configure reporting manually; the "initReporter" and "reportAndStop" methods do the trick for us.

I used “ConsoleReporter”, which was enough for this example. However, for a real-life scenario, of course, it’s better to redirect your comparison results to your monitoring tools.

Run the experiment and get the reports

We are almost ready to see Scientist for Java in action. The last remaining step is to configure our "ExperimentingGreeterService" as the "GreeterService" provider. Update the "ServiceConfigurer" class as follows:
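A sketch of the updated configuration, reusing the hypothetical class names from the sketches above:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServiceConfigurer {

    // The experimenting facade now backs the endpoint; the old result is still the one returned to callers.
    @Bean
    public GreeterService greeterService() {
        return new ExperimentingGreeterService(new OldGreeterService(), new NewGreeterService());
    }
}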

Now we can run the application and see the results:

java -jar simple-scientist-example-1.0-SNAPSHOT.jar net.entrofi.examples.refactoring.scientist.ScientistExampleApplication

curl http://localhost:8080/greet/comak

At the top of the results, we see the gauge for the greet-method call mismatches. The rest shows the counters and timers for the candidate and control calls.

Further Reading

  1. Scientist: Measure Twice, Cut Over Once
  2. Ford, N. (2017). Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: Oreilly & Associates Inc.

CI/CD as Code Part IV – Stateless Jenkins Docker Container: Jobs as Code – Advanced

In the previous article of this example series, we created a stateless Jenkins Docker container that can be initialized solely by scripts. In that example, a seed job for the jobDsl plugin was also implemented; this job was later used to create a simple, inline-defined Jenkins job automatically. Now we are ready to shape our stateless Jenkins container to meet more advanced requirements: advanced jobs as code.

We will extend the previous seedJob implementation further to create more complex jobs programmatically. Our extended seedJob will poll a job definition repository and gather the information on how to create new jobs for other remote repositories.

Figure: Stateless Jenkins container, jobs-as-code process flow.

Summary of the steps:

  1. Extend jenkins/init.groovy/1-init-dsl-seed-job.groovy script to enable polling of remote repositories so that it can check these repositories for new job descriptions.
  2. Automate the installation of the tools, which will be needed by the builds (e.g. maven, JDK, etc.)
  3. Implement some job description scripts in the repository (folder), which will be polled by the seed job initialization script.
  4. Add Jenkinsfile(s) to the target projects which are referenced in the job definition script(s).
  5. Run and test the example.

1. Extend jobDsl script

The jobDsl script 1-init-dsl-seed-job.groovy in the previous example was implemented to create a simple "Hello world" job. We passed a DSL job script to the seed job's "scriptText" attribute to achieve this.

Implement “Job Configuration Repository” configuration

As we stated earlier, we are going to extend this script to define more advanced jobs via Groovy scripts. The extended seed job will poll a remote repository and then scan its contents for Groovy-based job descriptions. To achieve this goal, we need to define an SCM and pass it to the freestyle (seed) job:

In addition to the SCM definition, we also defined where and how to find the job definition scripts in that SCM: "jobScriptFile". The snippet below shows how the "jobScriptFile" variable is used: it defines a jobDsl executor (ExecuteDslScripts) instance and passes the jobScriptFile variable to the 'targets' attribute of this instance:

The final form of the seed job script and summary of the tasks

In order to save some space, I will not show the full script here. You can reach the final version from the example “CI as Code” repository.

Let’s summarize what this script is doing for our “Jenkins Jobs as Code” goal:

  1. It will poll the master branch of the repository ‘https://github.com/entrofi/oyunbahcesi.git
  2. Check for groovy scripts under the folder ci_cd/jenkins/configs/jobdsl/
  3. Execute found groovy scripts to create new Jenkins jobs. The scripts in this remote repository can poll any number of repositories and can define different types of Jenkins jobs or views.

2. Installing Tools

Every build job has its own tooling requirements, and in order to keep our "stateless CI/CD" goal, we should be able to automate this kind of tool installation. The example projects that we are going to define in the next section are Java projects with Maven build support. Let's showcase how to install these tools via Jenkins initialization scripts: the "2-install-tools.groovy" script installs two different versions of the JDK and Maven for us.

Java Installation

Maven Installation
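The Maven part of the script essentially registers a Maven installation together with an automatic installer. The following is an assumption-based reconstruction (tool name and version are illustrative; the actual script is in the linked repository):

import jenkins.model.Jenkins
import hudson.tasks.Maven
import hudson.tools.InstallSourceProperty

def descriptor = Jenkins.instance.getDescriptorByType(Maven.DescriptorImpl)
// Install Maven 3.6.3 automatically on first use and register it under the name "maven-3".
def installer = new Maven.MavenInstaller('3.6.3')
def sourceProperty = new InstallSourceProperty([installer])
def installation = new Maven.MavenInstallation('maven-3', '', [sourceProperty])
descriptor.setInstallations(installation)
descriptor.save()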

The final form of the tool installation script can be found here.

3. Implementing Job Descriptions

In this section, we are going to implement some example job descriptions in the remote repository. Go to the jenkins/configs/jobdsl folder and add a Groovy script file called createJobs.groovy:
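The createJobs.groovy content itself is not included in this export; a rough Job DSL sketch of the described approach could look like the following (the job name and repository URL are assumptions based on the surrounding text):

// Job DSL script executed by the seed job: one multibranch pipeline per listed repository.
def multibranchJobs = [
        [name: 'restassured-asciidoctor', repo: 'https://github.com/entrofi/spring.git']
]

multibranchJobs.each { jobConfig ->
    multibranchPipelineJob(jobConfig.name) {
        branchSources {
            git {
                id(jobConfig.name)
                remote(jobConfig.repo)
            }
        }
        // Build only the branches that actually contain a Jenkinsfile.
        factory {
            workflowBranchProjectFactory {
                scriptPath('Jenkinsfile')
            }
        }
    }
}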

This Groovy script uses the jobDsl API to create multibranch jobs for the remote repositories defined in the multibranchJobs array. The factory section of the implementation checks whether the remote repositories have Jenkinsfiles and creates the pipelines for each branch accordingly.
This snippet demonstrates how to create a multibranch job using the jobDsl API; however, we are not limited in terms of the variety of job descriptions. We can define any kind of Jenkins job, including build monitors, list views, freestyle jobs, simple pipeline jobs, etc., using this API.

4. Adding Jenkinsfile to the target projects

The rest is easy: in section three we defined a target project, "entrofi/spring/restassured-asciidoctor", in the multibranchJobs array. To create your own jobs, you can define further projects in a similar way and add a Jenkinsfile to each target project. After completing this, it's just a matter of running the stateless Jenkins Docker image.

CI/CD as Code Part III – Stateless Jenkins Docker Container – Jobs as Code

The purpose of this example series is to create a simple set of examples that showcases CI/CD as code using Jenkins. The main goal is to create a stateless Jenkins Docker container setup that can be bootstrapped from a set of configuration files and scripts, so that many problems related to maintenance of the infrastructure and other operational issues are reduced.

 

The previous article of the series summarizes the steps to install and set up Jenkins programmatically. To see how it’s done, please visit CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically

First Steps to “job as code”

Check out this example from here

In this step, we are going to introduce the plugins necessary to achieve the goal of programmatic job descriptions, and configure them via programmatic means.

The first plugin that we are going to introduce is the jobDsl plugin. The most common job creation mechanism in Jenkins is that users create jobs by cloning or copying an existing project. As most of us have experienced, when the number of jobs grows or the job descriptions get complicated, this user-interface-oriented method becomes more and more tedious. This is where the jobDsl plugin comes in handy: it provides the programmatic bridge to create jobs using scripts or configuration files.

In order to serve its purpose, the jobDsl plugin uses a freestyle Jenkins job called the "Seed Job". This is a standard freestyle job to which you add a "Process Job DSLs" step. This step uses the configured DSL and generates the jobs described in it. (For further information, please visit the official tutorial.)

That’s enough talking. Let’s move to the practical part.

Adding jobDsl support and creating a job using dslScript

We introduced automatic plugin installation support to our stateless Jenkins Docker container instance in the previous step(s). Now, we can use our plugin installation file to add our new plugin.

Open the configs/plugins file and add job-dsl as a new line to this file.

The next step is to configure a seed job by programmatic means. Since we already have support for "post-initialization scripts" in our container, we can add a Groovy script to create our "Seed Job". Let's add a new Groovy script file named 1-init-dsl-seed-job.groovy to our init scripts (init.groovy/1-init-dsl-seed-job.groovy).

Please remember that the reason a numbered prefix is used in the script file names is that these scripts are executed by Jenkins in alphabetical order.

As mentioned earlier, the primary goal of this script is to create the seed job for the plugin. Additionally, we will add a simple job DSL to the seed job using the "Use the provided DSL script" (useScriptText=true) option. The job that is going to be created by our seed job will only print "Hello World!" to stdout.

Note that there are more advanced ways to create jobs using the seed job, but a simple job definition serves the purpose of this step.

The example job dsl is as follows:

job('example') {
   steps {
       shell('echo Hello World!')
   }
}

The content of the init.groovy/1-init-dsl-seed-job.groovy script will be as follows:

Test what we did so far

  1. Run the docker-compose up --build command against your docker-compose file.
  2. Browse to the Jenkins instance at http://localhost:7080 and log in.
  3. Run the SeedJob job.
    The job will fail the first time you run it, because the jobDsl plugin
    limits execution of scripts without admin approval. In order to approve the execution, go to the configuration page of the job and click save.
  4. A job with the name example must now be in place. Check if it exists.
  5. Run the example job and check its output using the console output link on the interface.

Disable Script Security Checks for jobDsl Plugin

If you followed the execution steps in the previous section, you must have noticed the failure of the SeedJob and resolved it manually. Since our goal is to minimize manual work as much as possible and to get a stateless Jenkins instance, we are going to introduce one additional Jenkins init script to disable the script execution limitation for the jobDsl plugin. The script below does the job for us:
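The script in question is the widely used init snippet that turns off the Job DSL plugin's script security; a sketch of it (the property name follows the plugin's GlobalJobDslSecurityConfiguration, so verify it against your plugin version):

import javaposse.jobdsl.plugin.GlobalJobDslSecurityConfiguration
import jenkins.model.GlobalConfiguration

// Disable the "script approval" requirement of the Job DSL plugin so that
// the seed job can run its scripts without manual admin intervention.
def dslSecurity = GlobalConfiguration.all().get(GlobalJobDslSecurityConfiguration.class)
dslSecurity.useScriptSecurity = false
dslSecurity.save()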

After adding this script to our init.groovy folder, we can repeat the steps mentioned in the "Test what we did so far" section, this time without facing the problem mentioned in item 3 of the list there.

CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically

The purpose of this sample series is to create a simple set of examples, which showcases CI/CD as Code, using Jenkins. The primary goal is to create a stateless CI/CD setup, which can be bootstrapped from a set of configuration files and scripts so that many problems related to maintenance of the infrastructure and other operational issues are reduced.

We created a containerized Jenkins instance in the previous part of this example series, and we did most of the installation and configuration work via the user interface. In contrast, our target in this step is to automate most of that manual work so that we are one step closer to a stateless Jenkins Docker container.

Check out the example code from here

In summary, this basic setup will handle the following on behalf of an actual human being:

  1. Creation of the user(s) on Jenkins programmatically.
  2. Installation of basic plugins to get up and running with Jenkins.

Creating the initial admin user for Jenkins programmatically

Jenkins provides a set of scripted execution mechanisms, called "hook scripts", to allow automation of delivery processes at certain key Jenkins events. In this step, we are going to introduce "post-initialization scripts" from this set to our container.

The "post-initialization" scripts are located at /usr/share/jenkins/ref/init.groovy.d/ in a standard Jenkins setup. Accordingly, to use this feature in our container, we will add a folder named init.groovy to our codebase and map the contents of this folder to our container's post-initialization scripts directory:

Add the following command to your Dockerfile
ADD init.groovy/ /usr/share/jenkins/ref/init.groovy.d/

The next step is to implement the script that will create the initial admin user for us. Create a Groovy script file named "0-disable-login.groovy" under the init.groovy directory and paste the following code:
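The script body is not embedded in this export; a typical sketch for this step creates a local admin account. The credentials below match the admin_groovy:123456 pair used by the curl example later in this article, but treat them as placeholders:

import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def instance = Jenkins.instance

// Create a local user database with a single admin account.
def securityRealm = new HudsonPrivateSecurityRealm(false)
securityRealm.createAccount('admin_groovy', '123456')
instance.setSecurityRealm(securityRealm)

// Logged-in users can do anything; anonymous read access is disabled.
def authorizationStrategy = new FullControlOnceLoggedInAuthorizationStrategy()
authorizationStrategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(authorizationStrategy)

instance.save()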

Note that post-initialization scripts are executed in alphabetical order by Jenkins. This is why the prefix "0" is used for the user creation script.

Adding an initial set of necessary Jenkins plugins programmatically

In this step, we need to collect a list of the Jenkins plugins necessary for our setup. For the sake of simplicity, you can use the recommended plugins of a standard Jenkins installation. Getting the list is relatively easy, but we need to use the Jenkins interface once more: run the container we created in the previous step and complete the setup via the user interface. As soon as you finish the setup, you can either use the Jenkins scripting console located at "http://yourhost:port/script" or use the API provided by Jenkins.

Getting the list using a script:
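A short Groovy snippet in the script console is enough here; for example:

import jenkins.model.Jenkins

// Prints one plugin short name per line, ready to be pasted into the configs/plugins file.
Jenkins.instance.pluginManager.plugins
        .collect { it.shortName }
        .sort()
        .each { println it }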

Getting the list using the API:

curl --user admin_groovy:123456 "http://localhost:7080/pluginManager/api/json?depth=1"

Now that you have the plugin list at hand, create a file named plugins under the configs directory and paste the short names of the plugins into this file.

Excerpt of the plugins file:

ace-editor
ant
antisamy-markup-formatter
apache-httpcomponents-client-4-api
authentication-tokens
bouncycastle-api
branch-api
build-timeout
cloudbees-folder
... 

The next step is to add the necessary configuration to our Dockerfile to automate Jenkins plugin installation. The Jenkins Docker image already has a script to handle this job from the command line. Add the following configuration to your Dockerfile:

COPY configs/plugins /var/jenkins_init_config/plugins
RUN /usr/local/bin/install-plugins.sh < /var/jenkins_init_config/plugins

The container can be tested after completing this step. Build your container, run it, and check from the user interface whether everything works as expected.
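For example, assuming the image is tagged jenkins-ci (a placeholder name) and you keep the port mappings from Part I of this series:

$ docker build -t jenkins-ci .
$ docker run -d -p 7080:8080 -p 50005:50000 --name jenkins-ci jenkins-ci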

The post CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically first appeared on Çomak's Notes on Software.


CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container https://www.entrofi.net/ci-cd-as-code-part-i-introduce-a-stateful-jenkins-docker-container/?utm_source=rss&utm_medium=rss&utm_campaign=ci-cd-as-code-part-i-introduce-a-stateful-jenkins-docker-container Wed, 25 Dec 2019 07:14:35 +0000 https://entrofi.net///?p=36 The purpose of this sample series is to create a simple set of examples, which showcases CI/CD as Code, using Jenkins. The primary goal is to create a stateless CI/CD setup, which can be bootstrapped from a set of configuration files and scripts so that many problems related to maintenance of the infrastructure and other operational … Continue reading "CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container"

The post CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container first appeared on Çomak's Notes on Software.


Stateless Jenkins Container with Docker

The purpose of this sample series is to create a simple set of examples, which showcases CI/CD as Code, using Jenkins. The primary goal is to create a stateless CI/CD setup, which can be bootstrapped from a set of configuration files and scripts so that many problems related to maintenance of the infrastructure and other operational issues are reduced.

We are going to follow a step-by-step approach in this series. You can reach each step using the links below.

Sections of the article series – Fully Automated Stateless Jenkins Docker Container

  1. Introduce a stateful containerized Jenkins
  2. Installing and Setting up Jenkins Programmatically: The goal of this step is to complete the basic setup of Jenkins completely programmatically.
  3. First steps to ‘job as code’: In this step, we are going to head towards configuring jobs programmatically.
  4. Advanced jobs as code: In this part, we are going to finalize the setup and define some example jobs.

Introduce a stateful Jenkins Docker Container

You can check out the full sample from here.

This initial part tackles only a containerized version of Jenkins; the goal of the series is to head towards a stateless Jenkins setup. In this initial version, we will only have a dockerized Jenkins instance whose state is mapped to static folders on the host machine.

The advantage of this setup is that the state, which includes history, configurations, etc., is preserved on the host machine. Apart from that, it’s not very different from having Jenkins installed on a physical or virtual machine: you still need to complete the installation via Jenkins’ initial setup interface and configure your jobs, pipelines, etc. manually. Nevertheless, it’s the initial step towards the “… as code” approach.

Container Configuration

The docker-compose.yml maps

  • Jenkins’ default HTTP port 8080 to 7080 on the host machine.
  • Jenkins’ default JNLP port 50000 to 50005 on the host machine.

Check out docker-compose.yml for additional configuration.
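A minimal sketch of what such a compose file could look like, assuming the official jenkins/jenkins:lts image; the actual file in the sample repository may differ:

version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "7080:8080"     # Jenkins web UI
      - "50005:50000"   # JNLP agent port
    volumes:
      - ./jenkins/jenkins_home:/var/jenkins_home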

How to run the sample

  • First, go to the ci_cd root directory and run docker-compose up.
    The docker-compose.yml file maps the jenkins_home directory to ./jenkins/jenkins_home on the host machine. Thus, as soon as your container is up and running, you will see Jenkins-related files in this directory. Also note that for such setups Jenkins creates a temporary admin password in the file jenkins/jenkins_home/secrets/initialAdminPassword.
  • Next, navigate to http://localhost:7080 and use the temporary password defined in jenkins/jenkins_home/secrets/initialAdminPassword to start configuring your Jenkins instance.
  • Lastly, follow the steps defined in the setup interface of Jenkins.

The post CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container first appeared on Çomak's Notes on Software.


Creating a Custom MySQL Docker Image with initialization Scripts https://www.entrofi.net/custom-mysql-docker-image-with-initialisation-scripts/?utm_source=rss&utm_medium=rss&utm_campaign=custom-mysql-docker-image-with-initialisation-scripts Fri, 05 Apr 2019 05:49:40 +0000 https://entrofi.net///?p=34 This blog post aims to provide an introduction for creating a MySQL Docker image with predefined data in it. Therefore, it does not address explanations of basic docker commands and instructions.  Containerisation has many advantages.  One advantage among these is getting up and running quickly with a production-ready data source, which has predefined data in it. … Continue reading "Creating a Custom MySQL Docker Image with initialization Scripts"

The post Creating a Custom MySQL Docker Image with initialization Scripts first appeared on Çomak's Notes on Software.


Mysql Docker Image

This blog post aims to provide an introduction to creating a MySQL Docker image with predefined data in it. Therefore, it does not address explanations of basic Docker commands and instructions.

Containerisation has many advantages. One of them is getting up and running quickly with a production-ready data source that has predefined data in it. This way we can easily share and use the application’s data state in local development environments.

In this article, we are going to create such a customized data source container, in order to speed up our development activities. Even though MySQL is used as the database system in this blog post, similar concepts are applicable to other database systems as well.

The steps that we’re going to follow to reach our goal are as follows:

  1. Creating a Dockerfile for our base container
  2. Implementing the scripts for database/table creation and sample data insertion. 
  3. Building our customized MySQL docker image and verifying the container state

1. Creating the Dockerfile for a customized Mysql Docker Image

Initially, create the directory structure that will store the configuration files for your MySQL Docker container.

$ mkdir mysql-docker
$ cd mysql-docker
$ mkdir init-scripts

Now we can start writing the description of our custom MySQL Docker image. Create a Dockerfile under the root folder “mysql-docker”. We are going to use the official MySQL Docker image as our base image. Add the following content to the file that you’ve created:

FROM mysql:5.7
LABEL description="My Custom Mysql Docker Image"

# Add a database
ENV MYSQL_DATABASE CARDS

# Check out the docker entrypoint script for further configuration:
# https://github.com/docker-library/mysql
COPY ./init-scripts/ /docker-entrypoint-initdb.d/

As mentioned earlier, we have used the official MySQL Docker image for MySQL version 5.7 as our parent image. We also added some metadata to our image. On the line starting with ENV MYSQL_DATABASE CARDS, we informed the base MySQL container that we are going to create and use the “CARDS” database in this customized Docker image. MYSQL_DATABASE is an environment variable provided by the parent image. This step is not mandatory; we could have used our initialization scripts to achieve the same goal, yet for demonstration purposes it’s easier to do it like this.

The COPY ./init-scripts/ /docker-entrypoint-initdb.d/ instruction on the last line of our Dockerfile copies the initialization scripts, which we are going to create in the next step, to the filesystem of the container.
The important point here is the destination folder “/docker-entrypoint-initdb.d/”. The MySQL base image defines this folder as the startup-scripts folder and executes every SQL file under it while the container is starting up. It’s not clearly documented on the official image’s documentation page (at least at the time of writing); however, you can see that this is the case by checking out the docker entrypoint script implementation in the official MySQL Docker image repository.

We’re done with our image description; we can now add our initialisation scripts.

2. Implementing the scripts for table creation and sample data insertion

Add the following SQL scripts under the folder “init-scripts”, which we created before:

-- create_tables.sql
CREATE TABLE card (
  id bigint(20) NOT NULL,
  back varchar(255) NOT NULL,
  front varchar(255) NOT NULL,
  hint varchar(255) DEFAULT NULL,
  PRIMARY KEY (id)
);
-- insert_data.sql
INSERT INTO card (id, front, back, hint) values (1, 'Front 1', 'Back 1', 'Hint 1');
INSERT INTO card (id, front, back, hint) values (2, 'Front 2', 'Back 2', 'Hint 2');
INSERT INTO card (id, front, back, hint) values (3, 'Front 3', 'Back 3', 'Hint 3');
INSERT INTO card (id, front, back, hint) values (4, 'Front 4', 'Back 4', 'Hint 4');
INSERT INTO card (id, front, back, hint) values (5, 'Front 5', 'Back 5', 'Hint 5');

3. Building and running the container

The Dockerfile and database initialization scripts are ready for our image. It’s now time to build and run our container.   

In order to build the image, run the following command:

$ docker build -t mysql-docker .
Sending build context to Docker daemon   7.68kB
Step 1/4 : FROM mysql:5.7
 ---> 98455b9624a9
Step 2/4 : LABEL description="My Custom Mysql Docker Image"
 ---> Using cache
 ---> 6bb1abb1370e
Step 3/4 : ENV MYSQL_DATABASE CARDS
 ---> Using cache
 ---> fda7d70507cb
Step 4/4 : COPY ./init-scripts/ /docker-entrypoint-initdb.d/
 ---> Using cache
 ---> a5b2181dd613
Successfully built a5b2181dd613
Successfully tagged mysql-docker:latest

If the image is built successfully, you can run your container with the following command:

$ docker run -d -p 63306:3306 --name mysql-docker \
-e MYSQL_ROOT_PASSWORD=secret mysql-docker
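
The next paragraph assumes we have a shell inside the running container. One way to get there (this step is not shown in the original walkthrough; the container name matches the --name used above) is docker exec:

$ docker exec -it mysql-docker bash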

Now that we are inside the container, we can execute database commands and verify that the initial state is as expected:

root@ed27a9ae3b98:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.25 MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use CARDS
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-----------------+
| Tables_in_CARDS |
+-----------------+
| card            |
+-----------------+
1 row in set (0.00 sec)
mysql> select * from card;
+----+--------+---------+--------+
| id | back   | front   | hint   |
+----+--------+---------+--------+
|  1 | Back 1 | Front 1 | Hint 1 |
|  2 | Back 2 | Front 2 | Hint 2 |
|  3 | Back 3 | Front 3 | Hint 3 |
|  4 | Back 4 | Front 4 | Hint 4 |
|  5 | Back 5 | Front 5 | Hint 5 |
+----+--------+---------+--------+
5 rows in set (0.00 sec)

As you can see, we now have our initial database state in the container. As a next step, we can push this image to our Docker Hub repository to share it with our development team. This way, every member of the team can run the image locally and play around with the data without affecting other developers.
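For example, where <your-dockerhub-user> is a placeholder for your own Docker Hub account or organisation:

$ docker tag mysql-docker <your-dockerhub-user>/mysql-docker:latest
$ docker push <your-dockerhub-user>/mysql-docker:latest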

The post Creating a Custom MySQL Docker Image with initialization Scripts first appeared on Çomak's Notes on Software.


Testing Java Projects with Groovy https://www.entrofi.net/testing-java-projects-with-groovy/?utm_source=rss&utm_medium=rss&utm_campaign=testing-java-projects-with-groovy Tue, 26 Mar 2019 18:12:35 +0000 https://entrofi.net///?p=32 This example demonstrates how we can implement our tests using groovy in a java based project. You might be asking why. Well, implementing tests with groovy has some advantages like easier mock creation, expressive business readable test method names, seamless adaptation with BDD frameworks, and so on. If you want to investigate testing java projects … Continue reading "Testing Java Projects with Groovy"

The post Testing Java Projects with Groovy first appeared on Çomak's Notes on Software.


Testing Java projects with Groovy

This example demonstrates how we can implement our tests using Groovy in a Java-based project. You might be asking why. Well, implementing tests with Groovy has some advantages, like easier mock creation, expressive business-readable test method names, seamless adaptation with BDD frameworks, and so on. If you want to investigate testing Java projects with Groovy, this article will provide you with a starting point for your investigation.

Source code for the article: https://github.com/entrofi/spring/tree/master/groovy-intengration-tests

A few advantages of writing our tests with Groovy

Even though I will not discuss them in depth, there is no harm in listing a few advantages of testing Java projects with Groovy:

  1. The creation of mocks and stubs is easier in Groovy: using language features like closures and simple maps, you can avoid implementing anonymous inner classes and express your mock objects concisely. Further information regarding this topic can be found in the official documentation.
  2. Expressive, human-readable test method name declarations,
  3. String interpolation,
  4. Easier to adapt with BDD frameworks in an expressive manner,
  5. Groovy provides many syntactic sugars for data creation,
  6. Test outputs can be more expressive, especially by using frameworks like Spock.

Let’s create our example setup now!

Setting up the project – Adding groovy support to unit tests

Although this example is not necessarily related to Spring Boot, we are going to use Spring Boot here to get up and running quickly. Go to the Spring Initializr page and create a Gradle-backed Spring Boot Java project with the single dependency spring-boot-starter-web.
Our initial build.gradle script will be similar to the following snippet:

plugins {
 id 'org.springframework.boot' version '2.1.3.RELEASE'
 id 'java'
}
 
apply plugin: 'io.spring.dependency-management'
 
group = 'net.entrofi.testing'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'
 
repositories {
 mavenCentral()
}
 
dependencies {
 implementation 'org.springframework.boot:spring-boot-starter-web'
 testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

Simple, isn’t it? Our first job is to add Groovy support to our unit tests. Afterwards, we are going to extend this support to our Spring Boot integration tests.

Add Groovy support to the unit tests

Steps for adding groovy support to unit tests are as follows:

  1. Add groovy-all as a dependency with testImplementation scope to our build script:
     dependencies {
         ...
         testImplementation 'org.codehaus.groovy:groovy-all:2.4.15'
     }
  2. Inform Gradle that we are going to put our Groovy-based test implementations under the source folder src/test/groovy:
     sourceSets {
         test {
             groovy {
                 srcDir file('src/test/groovy')
             }
         }
     }
  3. Apply the groovy plugin so that Gradle is able to process Groovy files:
     apply plugin: 'groovy'

As soon as these steps are applied, our build script will be like the following snippet:

plugins {
    id 'org.springframework.boot' version '2.1.3.RELEASE'
    id 'java'
}

apply plugin: 'io.spring.dependency-management'
apply plugin: 'groovy'

group = 'net.entrofi.testing'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

sourceSets {
    test {
        groovy {
            srcDir file('src/test/groovy')
        }
    }
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.codehaus.groovy:groovy-all:2.4.15'
}

Add a utility class to test with Groovy

We can now add a trivial utility class and create a unit test for it using Groovy. Create a class called MathUtil under the package net.entrofi.testing.groovyintengrationtests.util, and implement a method that sums its arguments and returns the result.

package net.entrofi.testing.groovyintengrationtests.util;

public final class MathUtil {

    public static int sum(int a, int b) {
        return a + b;
    }
}

Now create a Groovy unit test file (MathUtilTest.groovy) for this trivial utility class under the src/test/groovy folder.

package net.entrofi.testing.groovyintengrationtests.util

import org.junit.Test

import static org.junit.Assert.assertEquals

class MathUtilTest {

    @Test
    void "sum of 2 and 3 should return 5"() {
        int result = MathUtil.sum(2, 3)
        assertEquals(5, result)
    }
}

Run the test, and if everything goes fine, you can move on to the integration test configuration.
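
If you use the Gradle wrapper, the unit tests can be run from the command line as follows:

$ ./gradlew test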

Setting up Groovy-backed Spring Boot integration tests

This section is similar to the previous one, with only slight differences. We will inform Gradle that our Groovy-based integration tests reside under the src/integration/groovy folder, and extend our previous test configuration so that the dependencies we already declared for unit tests can be reused for the integration tests as well.

In addition, we are going to configure the flow that our integration tests will be run in.

Configure groovy test sourceSets

Add the following lines to sourceSets section of the build.gradle file:

sourceSets {
    test {
        groovy {
            srcDir file('src/test/groovy')
        }
    }

    integration {
        groovy {
            compileClasspath += main.output + test.output
            runtimeClasspath += main.output + test.output
            srcDir file('src/integration/groovy')
        }
        java.srcDir project.file('src/integration/java')
        resources.srcDir project.file('src/integration/resources')
    }
}

“sourceSets” in Gradle are instances of NamedDomainObjectContainer, which means that named extensions of dependency configurations will be supported by the relevant plugins. For instance, Gradle defines “implementation” (previously “compile”), “runtime”, “testImplementation”, etc., as predefined dependency scopes (configurations) by default. These configurations are further extended (or new ones are provided) by plugins like the Java plugin. That is to say, we are going to get independent dependency scopes for our new named configuration integration, such as integrationImplementation, integrationRuntime, and so on.

Configure integration-test related dependency scopes

We can now configure our integration-related scopes as extensions of the test configurations, so that they can use the same dependencies as the test scope:

configurations {
    integrationRuntime.extendsFrom testRuntime
    integrationImplementation.extendsFrom testImplementation
}
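
With these extensions in place, dependencies that only the integration tests need could be declared in the new scopes. The snippet below is purely illustrative; the sample project does not necessarily declare this dependency:

dependencies {
    // visible only to the integration source set; rest-assured is just an example library
    integrationImplementation 'io.rest-assured:rest-assured:3.3.0'
}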

Define tasks to run integration tests

The next item in our build script configuration is defining the task that runs our integration tests and introducing it to our build lifecycle. The task definition is as follows:

task('integrationTest', type: Test, description: 'Runs the integration tests.', group: 'Verification') {
    testClassesDirs = project.sourceSets.integration.output.classesDirs
    classpath = project.sourceSets.integration.runtimeClasspath
}

In this task definition, we named our task integrationTest and marked it as a test task using the type: Test parameter. After defining the name and the type, we informed Gradle that our integration test classes reside in the output directory of the integration source set and added that source set’s runtime classpath to our task.

Introduce “integrationTest” task to build cycle

It’s now time to introduce this task to our build lifecycle. We would like to hook it into the check stage, where the unit tests also run.

As you know, integration tests are long-running, and we prefer a short feedback loop: if a unit test fails, it’s better to stop the build before running the integration tests. Therefore, we make sure that our integration tests run after the unit tests:

check.dependsOn integrationTest
integrationTest.mustRunAfter test

Implement a REST endpoint and its integration test

Now that we’re done with the build script configuration, let’s implement a trivial integration test in our project. In order to do this, we will add a HelloWorldController class to our project:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloWorldController {

    @GetMapping("/hello")
    public String sayHello(@RequestParam(name = "name") String name) {
        return "Hello " + name;
    }
}

The next step is to add the corresponding spring boot integration test configuration and implementation under src/integration/groovy:

import net.entrofi.testing.groovyintengrationtests.GroovyIntengrationTestsApplication

import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.beans.factory.annotation.Value
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.context.junit4.SpringRunner
import org.springframework.test.web.servlet.MockMvc
import org.springframework.util.StringUtils

import static org.springframework.boot.test.context.SpringBootTest.WebEnvironment.RANDOM_PORT
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

@RunWith(SpringRunner.class)
@SpringBootTest(classes = [GroovyIntengrationTestsApplication.class], webEnvironment = RANDOM_PORT)
@AutoConfigureMockMvc
class HelloWorldControllerTest {

    @Value('${local.server.port}')
    private int port

    @Value('${deployment.environment.host:http://localhost}')
    private String host

    @Autowired
    private MockMvc mvc

    @Test
    void "given name is hasan hello should return 'hello hasan'"() {
        mvc.perform(
                get(getUrl("/hello"))
                        .param('name', 'hasan')
        ).andExpect(status().isOk())
         .andExpect(content().string('Hello hasan'))
    }

    private String getUrl(String uri) {
        String url = host + ":" + port
        url = StringUtils.isEmpty(uri) ? url : url + uri
        return url
    }
}

After implementing our test with groovy, we can run our integration tests using the following command:

$ ./gradlew integrationTest

Further Reading

  1. Gradle User Guide – Chapter 6. Build Script Basics
  2. Gradle Reference – sourceSets{ }
  3. Gradle User Guide – Dependency Management for Java Projects
  4. Gradle Reference – configurations{ }

The post Testing Java Projects with Groovy first appeared on Çomak's Notes on Software.

