
Appointment Booking System - Data Access Layer

In this chapter, we’ll create a comprehensive database schema with initial data, map it to entity models, and implement operations for entity management. This forms the foundation of our appointment booking system’s data persistence layer.


1. Understanding the Domain Model

After completing the Appointment Booking System Setup, we’ll create our domain model based on the requirements outlined in the Appointment Booking System Specification.

1.1. Core Functionalities

Our system must support these essential operations:

  • Adding new Treatments associated with Specialists

  • Retrieving available Treatments, optionally filtered by Specialist or Treatment name

  • Retrieving detailed Treatment information including connected Specialist data

  • Creating Appointments as a Client for specific dates and times

  • Validating Appointment scheduling conflicts (preventing double-booking of Specialists)

  • Allowing Clients to view their appointments

  • Enabling Appointment cancellation by Clients

  • Marking Appointments as completed by Specialists

  • Optional: Retrieving visit history for Clients and Specialists

Note

Think about it: Why is preventing double-booking crucial for a medical appointment system? What could happen if two patients were scheduled with the same specialist at the same time?
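The double-booking rule we will enforce later boils down to a simple interval-overlap test: two appointments for the same specialist conflict if each one starts before the other one ends. Here is a minimal, self-contained sketch of that rule (the class and method names are ours for illustration; the real validation will live in the service layer built on top of this data access layer):

```java
import java.time.LocalDateTime;

// Illustrative helper showing the interval-overlap rule behind
// double-booking validation; names are hypothetical, not part of the system.
public class OverlapCheck {

    // Two time intervals conflict if each one starts before the other ends.
    static boolean overlaps(LocalDateTime start1, LocalDateTime end1,
                            LocalDateTime start2, LocalDateTime end2) {
        return start1.isBefore(end2) && start2.isBefore(end1);
    }

    public static void main(String[] args) {
        LocalDateTime nineAm = LocalDateTime.of(2024, 5, 10, 9, 0);
        // 9:00-10:00 vs 9:30-10:30 -> conflict
        System.out.println(overlaps(nineAm, nineAm.plusHours(1),
                nineAm.plusMinutes(30), nineAm.plusMinutes(90)));
        // 9:00-10:00 vs 10:00-11:00 -> back-to-back appointments do not conflict
        System.out.println(overlaps(nineAm, nineAm.plusHours(1),
                nineAm.plusHours(1), nineAm.plusHours(2)));
    }
}
```

Note that the strict `isBefore` comparisons deliberately allow back-to-back appointments, which is usually the desired scheduling behavior.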

1.2. Entity Overview

We’ll work with five core entities: UserEntity, ClientEntity, SpecialistEntity, TreatmentEntity, and AppointmentEntity.

[Diagram: data access entities UML]

1.2.1. Common Entity Attributes

Each entity includes these standard fields:

  • id - Generated using database sequences

  • version - Optimistic locking version number

  • created - Entity creation timestamp

  • lastUpdated - Last modification timestamp

1.2.2. Entity Specifications

  • UserEntity: Contains email (unique), passwordHash, firstName, and lastName

  • ClientEntity: Links to a UserEntity and maintains a collection of AppointmentEntities

  • SpecialistEntity: Includes specialization (enum), links to a UserEntity, and maintains TreatmentEntities

  • TreatmentEntity: Contains name, description, duration (minutes), and links to a SpecialistEntity

  • AppointmentEntity: Includes dateTime, status (enum), and links to ClientEntity and TreatmentEntity

1.3. Relationship Mapping

Understanding these relationships is crucial for proper database design:

1.3.1. One-to-One Relationships

  • User ↔ Client: Each user can optionally be a client

  • User ↔ Specialist: Each user can optionally be a specialist

Important

A user can be both a client and a specialist simultaneously. For example, a doctor (specialist) can also book appointments with other specialists as a client. This flexible design supports real-world scenarios where medical professionals may need services from other specialists.

1.3.2. One-to-Many Relationships

  • Specialist → Treatments: One specialist provides multiple treatments

  • Client → Appointments: One client can book multiple appointments

  • Treatment → Appointments: One treatment can have multiple appointments

1.3.3. Bidirectional Relationships

Only Client ↔ Appointment and Specialist ↔ Treatment relationships are bidirectional, allowing navigation in both directions.

Note

Think about it: Why might we choose to make some relationships bidirectional while keeping others unidirectional? What are the trade-offs in terms of performance and complexity?

Consider this scenario: What happens when a user who is both a client and a specialist tries to book an appointment with themselves? How should the system handle this edge case?

2. Adding Database and Utility Dependencies

Before we can work with databases and entities, we need to add the required dependencies to our project.

2.1. Required Dependencies

Add the following dependencies to your pom.xml inside the <dependencies> section:

<!-- Spring Data JPA provides integration with Hibernate and simplifies database persistence through repositories -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

<!-- H2 is an in-memory database used for lightweight development and testing -->
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

<!-- Flyway manages database schema migrations automatically on application startup -->
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>

2.2. Why These Dependencies?

  • spring-boot-starter-data-jpa — Integrates Spring Data JPA and Hibernate, making persistence and repository use easy

  • H2 — Provides an in-memory relational database, perfect for demos and testing where no external DB is needed

  • Flyway — Enables controlled, versioned database schema migrations to manage changes over time

2.3. Verify Integration

After adding dependencies, reload your Maven project (e.g., "Reimport Maven Project" in IntelliJ IDEA). Spring Boot will auto-configure the JPA EntityManager, H2 data source, and Flyway migration mechanism without extra setup.

Note

About Dependency Versions: You don’t see explicit <version> tags because the parent project spring-boot-starter-parent manages tested dependency versions via its <dependencyManagement> section. This approach ensures:

  • Consistent, compatible versions across all Spring Boot dependencies

  • Simplified maintenance during upgrades

  • Clean, shorter pom.xml with fewer version conflicts

If you need to override a dependency’s version (for example, to test a newer Flyway), you can add the <version> element explicitly — but keep this only for special cases.

Important

After adding these dependencies, your application will have access to:

  • JPA annotations (@Entity, @Table, @Id, etc.)

  • Repository interfaces (JpaRepository, CrudRepository)

  • H2 database console at /h2-console

  • Automatic Flyway migration execution on startup

  • Support for JPA persistence tests

3. Database Configuration

3.1. Technology Stack

We’ll use:

  • H2 Database: In-memory database perfect for development and testing

  • Flyway: Database migration tool for version control of schema changes

Warning

H2 is excellent for development but should never be used in production. Production systems require persistent databases like PostgreSQL, MySQL, or Oracle.

3.2. Database Setup

Add these configurations to your application.properties file:

3.2.1. H2 Database Configuration

spring.h2.console.enabled=true

spring.datasource.url=jdbc:h2:mem:appointmentbooking
spring.datasource.username=sa
spring.datasource.password=password

By design, the in-memory database is volatile: all data is lost when the application restarts.

To keep data across restarts, switch to file-based storage by updating the spring.datasource.url property:

spring.datasource.url=jdbc:h2:file:/data/demo

3.2.2. Flyway Configuration

# If you are using default location (classpath:db/migration), this property can be omitted:
spring.flyway.locations=classpath:db/migration

# Enable Flyway migrations on application startup.
# spring.flyway.enabled=true is not strictly required if Flyway migrations should always run in your environment.
# By default, Spring Boot activates Flyway migrations when a DataSource is present and Flyway is on the classpath
spring.flyway.enabled=true
Note

Think about it: We’re using Spring Boot’s auto-configuration for Flyway, which runs migrations automatically on application startup. In which scenarios might you need to add the flyway-maven-plugin to your pom.xml instead? Consider CI/CD pipelines, manual database operations, and production deployment strategies.

3.2.3. JPA and Hibernate Configuration

# Hibernate DDL-auto is recommended as 'none' when using Flyway for migrations
spring.jpa.hibernate.ddl-auto=none

# To avoid potential lazy loading issues in the view layer, disable OSIV (Open Session in View) which is enabled by default in Spring Boot:
spring.jpa.open-in-view=false

# Show SQL queries in logs for easier debugging (just for development purposes)
spring.jpa.show-sql=true
# To beautify or pretty-print the logged SQL (see above), we can add:
spring.jpa.properties.hibernate.format_sql=true

Understanding DDL-auto Configuration

With ddl-auto=none and Flyway active, you rely entirely on Flyway scripts to define and evolve the schema safely and reproducibly. This approach:

  • Prevents conflicts between Hibernate’s automatic schema generation and Flyway migrations

  • Ensures consistency across different environments (dev, test, production)

  • Provides version control for all database changes through migration files

  • Enables rollback capabilities and change tracking

Warning

Never use ddl-auto=create or ddl-auto=create-drop with Flyway in production! This could lead to data loss and schema conflicts.

Disable spring.jpa.open-in-view to avoid the OSIV anti-pattern

Warning

Explicit spring.jpa.open-in-view=false is the best practice in most cases. More details: how to overcome The OSIV Anti-Pattern

SQL Logging

The show-sql=true setting displays all generated SQL queries in your application logs, which is invaluable for:

  • Debugging query performance issues

  • Understanding what Hibernate generates from your JPA queries

  • Learning how different JPA annotations translate to SQL

  • Optimizing database interactions

Note

Optional: Consider other ways to show Hibernate/JPA SQL Statements. Take a look here: https://www.baeldung.com/sql-logging-spring-boot

3.3. Accessing the H2 Console

  1. Start your application

  2. Navigate to http://localhost:8080/h2-console/

  3. Use the connection details configured above:

[Screenshot: H2 Console login]

After successful login:

[Screenshot: H2 Console content]

You should see this in your application logs:

... : H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:appointmentbooking'
... : Exposing 15 endpoints beneath base path '/actuator'
... : Tomcat started on port 8080 (http) with context path ''
Note

Think about it: Why is having a database console useful during development? How might this help you debug data-related issues?

3.4. Database Schema Design

Our schema is shown below. All tables have an ID, a VERSION column for optimistic locking, and the audit columns CREATED and LAST_UPDATED; some of these are omitted from some tables for readability:

@startuml
entity APPLICATION_USER {
  *ID : BIGINT
  *VERSION : INTEGER
  *EMAIL : VARCHAR(254)
  *PASSWORD_HASH : VARCHAR(128)
  *FIRST_NAME : VARCHAR(255)
  *LAST_NAME : VARCHAR(255)
  *CREATED : TIMESTAMP
  *LAST_UPDATED : TIMESTAMP
  --
  UNIQUE_USER_EMAIL
}

entity CLIENT {
  *ID : BIGINT
  *USER_ID : BIGINT
  ...
  --
  FK_USER_ID
}

entity SPECIALIST {
  *ID : BIGINT
  *USER_ID : BIGINT
  *SPECIALIZATION : VARCHAR(128)
  ...
  --
  FK_USER_ID
}

entity TREATMENT {
  *ID : BIGINT
  *NAME : VARCHAR(255)
  *DESCRIPTION : TEXT
  *DURATION_MINUTES : INTEGER
  *SPECIALIST_ID : BIGINT
  ...
  --
  FK_SPECIALIST_ID
}

entity APPOINTMENT {
  *ID : BIGINT
  *DATE_TIME : TIMESTAMP
  *END_DATE_TIME : TIMESTAMP
  *STATUS : VARCHAR(32)
  *CLIENT_ID : BIGINT
  *TREATMENT_ID : BIGINT
  ...
  --
  FK_CLIENT_ID
  FK_TREATMENT_ID
}

APPLICATION_USER ||--|| CLIENT : is a
APPLICATION_USER ||--|| SPECIALIST : is a
SPECIALIST ||--o{ TREATMENT : provides
CLIENT ||--o{ APPOINTMENT : books
TREATMENT ||--o{ APPOINTMENT : is scheduled as
@enduml

4. Creating Database Tables

Create the migration file: src/main/resources/db/migration/1.0/V0001__Create_schema.sql

Important

Flyway migration files must follow the naming convention: V<version>__<description>.sql. The version number determines execution order.

4.1. APPLICATION_USER Table

CREATE TABLE APPLICATION_USER (
    ID BIGINT NOT NULL PRIMARY KEY,
    VERSION INTEGER NOT NULL,
    EMAIL VARCHAR(254) NOT NULL,
    PASSWORD_HASH VARCHAR(128) NOT NULL,
    FIRST_NAME VARCHAR(255) NOT NULL,
    LAST_NAME VARCHAR(255) NOT NULL,
    CREATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    LAST_UPDATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT UNIQUE_USER_EMAIL UNIQUE (EMAIL)
);

4.1.1. Field Explanations

  • ID: Primary key, auto-incremented via sequence

  • VERSION: JPA optimistic locking version

  • EMAIL: Unique identifier following RFC 5321 (max 254 characters)

  • PASSWORD_HASH: Secure password storage (never store plain text!)

  • CREATED/LAST_UPDATED: Audit timestamps with automatic defaults

Warning

We use APPLICATION_USER instead of USER because USER is a reserved word in most SQL databases.

Note

Think about it: Why do we store password hashes instead of plain text passwords? What security risks would plain text passwords introduce?

4.2. CLIENT Table

CREATE TABLE CLIENT (
    ID BIGINT NOT NULL,
    VERSION INTEGER NOT NULL,
    USER_ID BIGINT NOT NULL,
    CREATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    LAST_UPDATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (ID),
    FOREIGN KEY (USER_ID) REFERENCES APPLICATION_USER(ID) ON DELETE CASCADE
);

The ON DELETE CASCADE ensures that when a user is deleted, their client record is automatically removed, maintaining referential integrity.

4.3. SPECIALIST Table

Exercise: Create the SPECIALIST table following this specification:

  • ID: Primary key (BIGINT)

  • VERSION: Optimistic locking (INTEGER)

  • USER_ID: Foreign key to APPLICATION_USER (BIGINT)

  • SPECIALIZATION: Specialist’s field of expertise (VARCHAR(128))

  • CREATED/LAST_UPDATED: Audit timestamps

Remember to include:

  • Primary key constraint

  • Foreign key constraint with ON DELETE CASCADE

  • Proper NOT NULL constraints

Tip

Use the CLIENT table structure as a reference. The SPECIALIST table follows the same pattern with one additional column for specialization.

4.4. TREATMENT Table

CREATE TABLE TREATMENT (
    -- TODO: Complete the TREATMENT table creation script here:
    -- ... add ID, VERSION, SPECIALIST_ID foreign key and rest of the fields (in total 8 fields + one FK constraint) ...
    NAME VARCHAR(512) NOT NULL,
    DESCRIPTION TEXT,
    DURATION_MINUTES INTEGER NOT NULL
);

4.4.1. Selected Field Explanations

  • NAME: Name of the treatment. Real treatment names in healthcare systems can be very specific and long.

  • DESCRIPTION: TEXT is PostgreSQL’s native unbounded text type (preferable to CLOB). Descriptions of actual treatments in healthcare systems often exceed 400 characters and vary significantly in length; extra space is needed for contraindications, procedures, and aftercare instructions.

Note

Think about it: Why might we want to cascade delete treatments when a specialist is removed? What are the implications for existing appointments?

4.5. APPOINTMENT Table

Exercise: Create the APPOINTMENT table with these fields:

  • ID: Primary key (BIGINT)

  • VERSION: Optimistic locking (INTEGER)

  • DATE_TIME: Appointment start time (TIMESTAMP)

  • END_DATE_TIME: Appointment end time (TIMESTAMP)

  • STATUS: Appointment status (VARCHAR(32), default 'SCHEDULED')

  • CLIENT_ID: Foreign key to CLIENT (BIGINT)

  • TREATMENT_ID: Foreign key to TREATMENT (BIGINT)

  • CREATED/LAST_UPDATED: Audit timestamps

Status values: SCHEDULED, CANCELLED, COMPLETED

Constraints:

  • Primary key on ID

  • Foreign keys with ON DELETE CASCADE for both CLIENT_ID and TREATMENT_ID

  • Check constraint: END_DATE_TIME > DATE_TIME (see Constraint Checks section)

Tip

Notice that APPOINTMENT has TWO foreign keys. This is different from the one-to-one relationships you’ve seen so far. Study the schema diagram to understand the relationships.

4.6. Database Sequences

CREATE SEQUENCE USER_SEQ START WITH 1 INCREMENT BY 100 NO CYCLE;
CREATE SEQUENCE CLIENT_SEQ START WITH 1 INCREMENT BY 100 NO CYCLE;
-- Create the next required sequences. How many should there be in total?
Note

Think about it: Why do we increment by 100 instead of 1?

This is related to JPA’s sequence allocation optimization. When using @SequenceGenerator in JPA (which will be covered in the next section), the allocationSize parameter defines how many IDs are preallocated and cached in memory. This reduces the number of database round-trips and improves performance, especially under high load.

If the database sequence increments by 100, and JPA is configured with allocationSize = 100, each call to the sequence returns a new block of 100 IDs (e.g., 101–200, 201–300, etc.). JPA then uses these IDs in memory, one by one, without querying the database again until the block is exhausted.

Consider this: What happens if the allocationSize in JPA is set to 1, but the database sequence increments by 100? You would see IDs like 101, 201, 301…​ — skipping 99 values each time. This is inefficient and leads to gaps.

Why must allocationSize match the DB sequence increment? To ensure consistency and avoid ID gaps or collisions, especially in distributed systems with multiple application instances. If they don’t match, JPA may:

  • Skip IDs unnecessarily

  • Reuse IDs (in worst-case scenarios)

  • Cause confusion when debugging or analyzing data

What if the application crashes before using all allocated IDs? Some IDs may be lost, but this is acceptable in most cases since primary keys are meant to be unique, not necessarily sequential.

What if you have multiple server instances sharing the same database? Each instance will request its own block of IDs. Matching allocationSize and sequence increment ensures that each instance gets a distinct, non-overlapping range, avoiding collisions.
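The allocation mechanics described above can be illustrated with a toy generator. This is a deliberately simplified sketch of the idea only, not Hibernate’s actual sequence optimizer: each simulated database round-trip returns the next sequence value (incrementing by 100), and the application then hands out 100 consecutive IDs from memory before asking the database again:

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of pooled sequence allocation. Illustrative only;
// Hibernate's real optimizer differs in implementation details.
public class SequenceAllocationDemo {

    static final int INCREMENT = 100;   // matches INCREMENT BY 100 in the SQL
    private long nextSequenceValue = 1; // matches START WITH 1
    private long nextId = 0;
    private long blockEnd = 0;          // exclusive upper bound of cached block

    // Simulates one database round-trip: SELECT NEXT VALUE FOR ..._SEQ
    private long fetchFromDatabase() {
        long value = nextSequenceValue;
        nextSequenceValue += INCREMENT;
        return value;
    }

    // Hands out IDs from the in-memory block; hits the "database" only
    // when the current block of 100 IDs is exhausted.
    long nextId() {
        if (nextId >= blockEnd) {
            long base = fetchFromDatabase();
            nextId = base;
            blockEnd = base + INCREMENT;
        }
        return nextId++;
    }

    public static void main(String[] args) {
        SequenceAllocationDemo gen = new SequenceAllocationDemo();
        List<Long> ids = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            ids.add(gen.nextId()); // consecutive IDs, single DB round-trip
        }
        System.out.println(ids);
    }
}
```

Running this hands out IDs 1, 2, 3 after a single simulated round-trip; the 101st request triggers the next round-trip and starts the block beginning at 101.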

What does NO CYCLE mean? In PostgreSQL, NO CYCLE is the default behavior when creating a sequence. It means that once the sequence reaches its maximum (MAXVALUE) or minimum (MINVALUE), it will not wrap around and restart. Instead, it stops generating new values, which helps ensure uniqueness (especially important for primary keys).

Although NO CYCLE is the default, specifying it explicitly is considered good practice for clarity, maintainability, and to avoid surprises during migrations or when using tools that might override defaults. In Oracle, the concept is the same but the syntax differs slightly: the keyword is written as NOCYCLE (without a space). As in PostgreSQL, it is the default there too, but is often included explicitly for readability and safety.

In the next section, Entity Implementation, we will configure JPA entities to use these sequences properly using @SequenceGenerator.

4.7. Constraint Checks

Exercise: Add a check constraint to ensure appointment end time is after start time.

Tip

Syntax hint, see documentation for reference:

ALTER TABLE ... ADD CONSTRAINT CHK_..._TIME
CHECK (...);
Note

Think about it: Why is this constraint important at the database level, even though it could also be enforced in Java code? Consider the defense-in-depth principle.
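Defense in depth means the same rule is typically enforced in application code as well as in the database. A minimal Java-side guard might look like the following (an illustrative sketch with hypothetical names; the real check would belong to the booking logic, with the database check constraint as the last line of defense):

```java
import java.time.LocalDateTime;

// Illustrative application-level guard mirroring the database check
// constraint: an appointment must end strictly after it starts.
public class AppointmentTimeGuard {

    static void requireValidTimes(LocalDateTime dateTime, LocalDateTime endDateTime) {
        if (!endDateTime.isAfter(dateTime)) {
            throw new IllegalArgumentException(
                    "END_DATE_TIME must be after DATE_TIME");
        }
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2024, 5, 10, 9, 0);
        requireValidTimes(start, start.plusMinutes(30)); // valid: passes silently
        try {
            requireValidTimes(start, start); // zero-length appointment: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```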

4.8. Database Indexes

Create indexes on foreign key columns for better query performance:

CREATE INDEX IDX_CLIENT_USER ON CLIENT(USER_ID);
CREATE INDEX IDX_SPECIALIST_USER ON SPECIALIST(USER_ID);
-- write next indexes, in total should be 5
Important

Indexes on foreign keys are crucial for:

  • Faster JOIN operations

  • Preventing table locks during parent record updates/deletes

  • Improved query performance on filtered results

4.9. Sample Data

To provide realistic test data for our application, create the migration file src/main/resources/db/migration/1.0/V0002__Create_mockdata.sql.

Tip

You can download the complete sample data from the file V0002__Create_mockdata.sql or use the following structure:

  • 8 Users: 4 clients and 4 specialists

  • 4 Clients: Linked to the first 4 users

  • 4 Specialists: With different specializations (Dentist, Cardiologist, Pediatrician, Orthopaedist)

  • 12 Treatments: Medical procedures with realistic durations (30-180 minutes)

  • 20 Appointments: Various statuses (SCHEDULED, COMPLETED, CANCELLED) for testing

Important

Use negative IDs for test data (e.g., -1, -2, -3). This prevents conflicts with sequence-generated positive IDs and makes test data easily identifiable.

The END_DATE_TIME should be calculated as: DATE_TIME + DURATION_MINUTES. This ensures data consistency with actual treatment durations.

After creating the migration file, run your application and verify the data is inserted correctly using the H2 Console.
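The END_DATE_TIME rule above is simple date arithmetic; when preparing the mock data, each end time can be derived like this (an illustrative helper, not part of the migration script itself):

```java
import java.time.LocalDateTime;

// Sketch of the END_DATE_TIME rule used for the mock data:
// end = start + treatment duration in minutes.
public class EndTimeDemo {

    static LocalDateTime endOf(LocalDateTime start, int durationMinutes) {
        return start.plusMinutes(durationMinutes);
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2024, 5, 10, 9, 0);
        // a 90-minute treatment starting at 09:00 ends at 10:30
        System.out.println(endOf(start, 90));
    }
}
```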

5. Entity Implementation

5.1. Lombok Setup

Add Lombok dependency to the pom.xml to reduce boilerplate code:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>

Lombok automatically generates constructors, getters, setters, and other methods at compile time, improving code readability and reducing repetitive code. Its version should be managed by Spring Boot.

Important

Install the Lombok plugin in your IDE. This is required for proper annotation processing and code completion. Without it, your IDE may show false errors even though compilation succeeds.

Tip

Since Lombok is a compile-time tool, its JAR is not needed at runtime. Excluding it from the final application artifact (such as the executable JAR or WAR file) is considered a best practice: it reduces the final JAR size and avoids having unnecessary libraries on the runtime classpath. Add this configuration to your pom.xml for spring-boot-maven-plugin:

<configuration>
    <excludes>
        <exclude>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </exclude>
    </excludes>
</configuration>

For details about the difference between optional and provided configuration, see Appendix: Lombok Dependency Configuration.

5.2. Package Structure

Create these packages under com.capgemini.training.appointmentbooking:

  • dataaccess.entity - Entity classes

  • dataaccess.converter - JPA attribute converters

  • common.datatype - Enums and common types

5.3. BaseEntity Implementation

Notice that attributes version, lastUpdated and created are repeated in every entity. To make the structure cleaner and avoid duplicated code, extract a @MappedSuperclass that each entity will extend.

Create a base class in package com.capgemini.training.appointmentbooking.dataaccess.entity to eliminate duplicate audit fields:

@MappedSuperclass
@Getter
public class BaseEntity {

    @Version
    @Setter
    private int version;

    @Column(insertable = true, updatable = false)
    private Instant created;

    @Column(name = "LAST_UPDATED")
    private Instant lastUpdated;

    @PrePersist
    public void prePersist() {
        Instant now = Instant.now();
        this.created = now;
        this.lastUpdated = now;
    }

    @PreUpdate
    public void preUpdate() {
        this.lastUpdated = Instant.now();
    }
}
Note

Lifecycle methods explained:

  • @PrePersist - Called just before the entity is saved for the first time (INSERT)

  • @PreUpdate - Called just before an existing entity is updated (UPDATE)

Why no setters for created and lastUpdated? Because they should only be set by the entity lifecycle callbacks, never modified directly. The @PrePersist and @PreUpdate methods enforce this immutability through the framework.

Why does version have a @Setter while the audit fields do not? Because JPA needs to update the version field automatically on every modification, whereas the audit fields are set by the lifecycle methods instead.

5.4. UserEntity Implementation

@Entity
@Table(name = "APPLICATION_USER")
@Getter
@Setter
public class UserEntity extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "USER_SEQ_GEN")
    @SequenceGenerator(sequenceName = "USER_SEQ", name = "USER_SEQ_GEN",
                       allocationSize = 100, initialValue = 1)
    private Long id;

    private String email;

    @Column(name = "PASSWORD_HASH")
    private String passwordHash;

    // TODO: Add firstName and lastName fields
    // Remember: use @Getter and @Setter from Lombok (already on class)

}

5.4.1. Key Points

  • The @Table(name = "APPLICATION_USER") explicitly maps to our database table

  • The @SequenceGenerator links to the USER_SEQ sequence we created in SQL

  • The allocationSize = 100 matches our sequence increment to optimize performance

  • The @Column(name = "PASSWORD_HASH") is needed because Java naming conventions differ from SQL

Note

Think about it: Why map the @Table name explicitly when it could default to lowercase table names? Consider what happens when your database uses uppercase names or reserved words.

5.5. Specialization Enum

Create the enum in common.datatype package:

public enum Specialization {

    DENTIST("Dentist"),
    CARDIOLOGIST("Cardiologist"),
    PEDIATRICIAN("Pediatrician"),
    UROLOGIST("Urologist"),
    NEUROLOGIST("Neurologist"),
    ORTHOPAEDIST("Orthopaedist");

    private final String name;

    Specialization(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }

    public static Specialization getByName(String name) {
        for (Specialization s : Specialization.values()) {
            if (s.getName().equals(name)) {
                return s;
            }
        }
        return null;
    }
}
Note

Think about it: Why define a custom name field instead of using the enum constant name directly (e.g., DENTIST.name())? Consider the database values in your migration scripts - they don’t match the default enum names perfectly.
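The difference is easy to see side by side. The following abbreviated enum (shortened to two constants for illustration; the full Specialization enum above behaves the same way) contrasts the built-in constant name with the custom display name:

```java
// Abbreviated copy of the Specialization pattern, contrasting the
// built-in enum constant name with the custom display name.
public class EnumNameDemo {

    enum Specialization {
        DENTIST("Dentist"),
        CARDIOLOGIST("Cardiologist");

        private final String name;

        Specialization(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    public static void main(String[] args) {
        System.out.println(Specialization.DENTIST.name());    // built-in constant name
        System.out.println(Specialization.DENTIST.getName()); // custom display name
    }
}
```

The migration scripts store values like "Dentist", so the custom getName() matches the database, while the built-in name() would not.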

5.6. AppointmentStatus Enum

Create alongside Specialization in common.datatype package:

public enum AppointmentStatus {
    SCHEDULED, CANCELLED, COMPLETED
}

This enum represents the possible appointment states. It will be stored in the database using @Enumerated(EnumType.STRING) to maintain readability.

5.7. Attribute Converter

Create SpecializationConverter in dataaccess.converter package:

@Converter
public class SpecializationConverter implements AttributeConverter<Specialization, String> {

    @Override
    public String convertToDatabaseColumn(Specialization specialization) {
        return specialization != null ? specialization.getName() : null;
    }

    @Override
    public Specialization convertToEntityAttribute(String dbData) {
        return dbData != null ? Specialization.getByName(dbData) : null;
    }
}
Note

Think about it: Why use a custom converter instead of @Enumerated?

With @Enumerated, you’re limited to storing either the enum constant name or ordinal value. Custom converters let you:

  • Store the display name (e.g., "Dentist" instead of "DENTIST")

  • Control database format independently of Java enum changes

  • Implement custom serialization logic for complex transformations

However, @Enumerated is simpler for straightforward cases. Choose converters when you need custom control.

5.8. Entity Relationships & Implementations

5.8.1. Understanding Relationships

Before implementing entities, understand the relationship types:

  • One-to-One (1:1): User ↔ Client (each user has max one client record)

  • One-to-Many (1:n): Specialist → Treatments (one specialist provides many treatments)

  • Many-to-One (n:1): Treatments → Specialist (many treatments from one specialist)

  • Many-to-Many (n:m): Not used in this schema

Relationships can be unidirectional (one side knows the other) or bidirectional (both sides know each other).

Note

Which side owns the relationship?

In bidirectional relationships, one side is the "owner" (holds the foreign key) and the other is "mapped by" (read-only). The owner side determines what gets persisted to the database. Always put @JoinColumn on the owner side.

5.8.2. ClientEntity with Bidirectional Relationships

@Entity
@Table(name = "CLIENT")
@Getter
@Setter
public class ClientEntity extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "CLIENT_SEQ_GEN")
    @SequenceGenerator(sequenceName = "CLIENT_SEQ", name = "CLIENT_SEQ_GEN",
                       allocationSize = 100, initialValue = 1)
    private Long id;

    // Unidirectional one-to-one: ClientEntity -> UserEntity
    // This is the owning side (holds the foreign key)
    @OneToOne(optional = false, fetch = FetchType.LAZY, cascade = {CascadeType.PERSIST})
    @JoinColumn(name = "USER_ID", referencedColumnName = "ID")
    private UserEntity user;

    // Bidirectional one-to-many: ClientEntity <- AppointmentEntity
    // This is the non-owning side (AppointmentEntity is the owner)
    @OneToMany(mappedBy = "client", fetch = FetchType.LAZY, orphanRemoval = true,
               cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
    private List<AppointmentEntity> appointments = new ArrayList<>();
}
Tip

Key parameters explained:

  • optional = false - Client must have a User (enforces 1:1 constraint)

  • fetch = FetchType.LAZY - Don’t load user until explicitly accessed (performance)

  • cascade = {CascadeType.PERSIST} - When saving client, also save new user

  • orphanRemoval = true - Delete appointments if removed from the list

  • mappedBy = "client" - The AppointmentEntity side owns this relationship

5.8.3. SpecialistEntity Exercise

Exercise: Implement SpecialistEntity following the pattern of ClientEntity, but with these requirements:

@Entity
@Table(name = "SPECIALIST")
@Getter
@Setter
public class SpecialistEntity extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SPECIALIST_SEQ_GEN")
    @SequenceGenerator(sequenceName = "SPECIALIST_SEQ", name = "SPECIALIST_SEQ_GEN",
                       allocationSize = 100, initialValue = 1)
    private Long id;

    // TODO: Add one-to-one relationship with UserEntity (same pattern as ClientEntity)

    // TODO: Add specialization field with @Convert annotation
    // Hint: Use SpecializationConverter

    // TODO: Add bidirectional one-to-many relationship with TreatmentEntity
    // Hint: Use cascade {PERSIST, REMOVE} and orphanRemoval = true
}
Tip

Reference the ClientEntity implementation above. The pattern is nearly identical, with one difference: you’ll add a bidirectional relationship with TreatmentEntity instead of AppointmentEntity.

5.8.4. TreatmentEntity with Many-to-One Relationship

Exercise: Implement TreatmentEntity with the following requirements:

@Entity
@Table(name = "TREATMENT")
@Getter
@Setter
public class TreatmentEntity extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "TREATMENT_SEQ_GEN")
    @SequenceGenerator(sequenceName = "TREATMENT_SEQ", name = "TREATMENT_SEQ_GEN",
                       allocationSize = 100, initialValue = 1)
    private Long id;

    // TODO: Add name field (VARCHAR(128))
    // TODO: Add description field (TEXT)
    // TODO: Add durationMinutes field (INTEGER)

    // Many-to-one: TreatmentEntity -> SpecialistEntity (owning side)
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "SPECIALIST_ID")
    private SpecialistEntity specialist;
}
Note

Why is the owning side different here than in ClientEntity?

In @ManyToOne, the owning side is always the many side (the side whose table holds the foreign key). The SpecialistEntity side will be non-owning, via the @OneToMany(mappedBy = "specialist", …​) mapping that you’ll add to SpecialistEntity.

5.8.5. AppointmentEntity Exercise

Exercise: Implement AppointmentEntity with relationships to both ClientEntity and TreatmentEntity:

@Entity
@Table(name = "APPOINTMENT")
@Getter
@Setter
public class AppointmentEntity extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "APPOINTMENT_SEQ_GEN")
    @SequenceGenerator(sequenceName = "APPOINTMENT_SEQ", name = "APPOINTMENT_SEQ_GEN",
                       allocationSize = 100, initialValue = 1)
    private Long id;

    // TODO: Add dateTime field (TIMESTAMP)
    // TODO: Add endDateTime field (TIMESTAMP)

    // TODO: Add status field with @Enumerated(EnumType.STRING) and default "SCHEDULED"

    // Many-to-one: AppointmentEntity -> ClientEntity (owning side)
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "CLIENT_ID")
    private ClientEntity client;

    // TODO: Add many-to-one relationship with TreatmentEntity
    // Hint: Similar to client relationship above
    // Use @ManyToOne with @JoinColumn("TREATMENT_ID")
}
Note

Think about it: Why does AppointmentEntity have TWO many-to-one relationships while ClientEntity only has one-to-one? Consider the business logic: can a client have multiple appointments? Can each appointment reference multiple treatments?

5.9. Testing Entity Mappings

Now that entity mappings are in place, let’s verify that they load correctly from the database. Create a new test class named EntitySmokeIT in src/test/java, under com.capgemini.training.appointmentbooking.dataaccess.entity. Annotate it with @DataJpaTest(bootstrapMode = BootstrapMode.LAZY) to configure only the JPA-related components.

Inside the class, inject an instance of EntityManager using @PersistenceContext.

Now, implement a test method that validates that the database contains the expected number of records for each entity.

    @Test
    void loadAllClasses() {

        // given
        Map<Class<? extends BaseEntity>, Integer> classMap = Map.of(
                UserEntity.class, 8,
                ClientEntity.class, 4,
                SpecialistEntity.class, 4,
                TreatmentEntity.class, 12,
                AppointmentEntity.class, 20
        );

        // when //then
        classMap.forEach((entityType, expectedCount) ->
                assertThat(em.createQuery("from " + entityType.getSimpleName()).getResultList()).hasSize(expectedCount));
    }

For each entity type, we define the expected number of records in a map.

Using EntityManager, we execute a simple query "from <EntityName>" to fetch all records of a given entity type.

We then validate that the number of retrieved records matches the expected count.

Note

Think about it: Why is this test valuable even though it seems simple? It validates that:

  • All entity mappings are correct

  • The database schema matches entity definitions

  • Sample data loaded successfully

  • Relationships are properly configured

This is a smoke test - it doesn’t test business logic, just that the foundation is solid.

Important

About transaction behavior in @DataJpaTest:

By default, @DataJpaTest wraps each test in a transaction and rolls back after completion. This means:

  • Your test data is never committed to the database

  • Tests don’t interfere with each other

  • Tests run faster (no I/O)

However, this also means database-specific constraints (like CHECK constraints) may not be validated properly in tests. Always add constraints to BOTH database AND entities for defense-in-depth validation.

6. End of Exercise Block 1

Important

Checkpoint: Save Your Progress

Congratulations! You’ve completed the first exercise block covering database setup, schema creation, and entity mapping. This is a good time to save your work:

  1. Commit your changes to your working branch with a descriptive message

  2. Push your changes to the remote repository

  3. Create a Pull Request from your working branch to your solution branch

  4. Compare your implementation with the reference solution available in the solution repository on branch feature/2-dataaccess (your trainer will provide access)

This allows you to:

  • Secure your progress before moving to the next section

  • Review differences between your approach and the reference implementation

  • Learn alternative solutions and best practices

  • Get feedback from your team during code review

7. Repositories

Repositories provide data access operations for entities. Spring Data JPA offers multiple approaches for implementing them.

7.1. Why Custom Repository Infrastructure?

For simple CRUD operations, Spring’s JpaRepository is sufficient. However, advanced queries often require direct EntityManager access for:

  • Criteria API queries

  • QueryDSL queries

  • Complex native SQL

  • Dynamic query construction

Creating a custom base repository gives us this access pattern throughout the application.

7.2. Custom Repository Infrastructure

Create a base repository interface for EntityManager access:

@NoRepositoryBean
public interface BaseJpaRepository<T, ID> extends JpaRepository<T, ID> {
    EntityManager getEntityManager();
}
Note

The @NoRepositoryBean annotation tells Spring not to create a proxy for this interface - it’s only meant to be extended, not used directly.

Implement the base repository:

public class BaseJpaRepositoryImpl<T, ID> extends SimpleJpaRepository<T, ID>
        implements BaseJpaRepository<T, ID> {

    private final EntityManager entityManager;

    BaseJpaRepositoryImpl(JpaEntityInformation<T, ?> entityInformation,
                         EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityManager = entityManager;
    }

    @Override
    public EntityManager getEntityManager() {
        return this.entityManager;
    }
}

Configure Spring to use custom repositories:

@Configuration
@EnableJpaRepositories(
    repositoryBaseClass = BaseJpaRepositoryImpl.class,
    basePackages = "com.capgemini.training.appointmentbooking.dataaccess.repository")
public class DataaccessConfiguration {}
Tip

Add this configuration to com.capgemini.training.appointmentbooking.dataaccess.config package. The basePackages setting tells Spring where to scan for repository interfaces.

7.3. Repository Implementation

Create the AppointmentRepository in dataaccess.repository package:

public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {
    // Basic CRUD operations inherited:
    // - save(entity)
    // - findById(id)
    // - findAll()
    // - delete(entity)
    // - deleteById(id)
    // etc.

    // TODO: Add custom query methods here
}
Note

By extending BaseJpaRepository, AppointmentRepository automatically inherits all CRUD methods from JpaRepository, plus access to EntityManager for custom queries.

Create similar repositories for other entities:

public interface UserRepository extends BaseJpaRepository<UserEntity, Long> {}

public interface ClientRepository extends BaseJpaRepository<ClientEntity, Long> {}

public interface SpecialistRepository extends BaseJpaRepository<SpecialistEntity, Long> {}

public interface TreatmentRepository extends BaseJpaRepository<TreatmentEntity, Long> {}

7.4. Testing Infrastructure

Create base test class with AssertJ support:

public class BaseTest implements WithAssertions {

    protected Instant toInstant(String date) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        return LocalDateTime.parse(date, formatter)
                           .atZone(ZoneId.systemDefault())
                           .toInstant();
    }
}
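A quick usage sketch of this helper (the class name ToInstantDemo is illustrative, not part of the project): because the same system time zone is used for parsing and for formatting, the conversion round-trips cleanly.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class ToInstantDemo {

    // Same helper as in BaseTest: parse a "yyyy-MM-dd HH:mm:ss" string
    // in the system default time zone and convert it to an Instant
    static Instant toInstant(String date) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        return LocalDateTime.parse(date, formatter)
                            .atZone(ZoneId.systemDefault())
                            .toInstant();
    }

    public static void main(String[] args) {
        Instant start = toInstant("2024-03-15 09:30:00");
        // Formatting back in the same zone yields the original string
        String roundTrip = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.systemDefault())
                .format(start);
        System.out.println(roundTrip); // prints 2024-03-15 09:30:00
    }
}
```

This makes timestamp-based assertions in repository tests readable: compare entity fields against toInstant("…") values instead of hand-built Instant objects.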

Create base class for JPA integration tests:

@DataJpaTest
@Import(DataaccessConfiguration.class)
public class BaseDataJpaTest extends BaseTest {
    // Provides:
    // - JPA testing context via @DataJpaTest
    // - Automatic transaction support (and rollback)
    // - Custom repository configuration via @Import
    // - H2 in-memory database (pre-configured)
}
Important

@DataJpaTest provides:

  • Simplified context: Only JPA/database components loaded (faster startup)

  • Automatic transactions: Each test wrapped in transaction, rolled back after

  • Isolated tests: No test data leaks between tests

  • H2 database: Pre-configured in-memory database (no setup needed)

  • Performance: Tests run much faster than full @SpringBootTest

7.5. Repository Testing

Create AppointmentRepositoryIT in src/test/java:

public class AppointmentRepositoryIT extends BaseDataJpaTest {

    @Autowired
    private AppointmentRepository appointmentRepository;

    @Test
    void testFindAll() {
        // when
        List<AppointmentEntity> result = appointmentRepository.findAll();

        // then
        assertThat(result).isNotEmpty().hasSize(20);
    }
}
Note

We test repository configuration and basic operations, but not Spring framework code itself. The focus is on verifying:

  • Your entity mappings work with Spring Data

  • Custom repository configuration is correct

  • Entities load from database properly

8. Advanced Querying

Spring Data JPA offers multiple approaches for custom queries. Choose based on complexity and team preference.

8.1. Query Method Types Comparison

| Method | Complexity | Type-Safety | Best For |
| --- | --- | --- | --- |
| Spring Query Methods | Low | Medium | Simple filters, sorting |
| @Query Annotation | Medium | High | Well-defined queries |
| Named Queries | Medium | High | Reusable, testable queries |
| Criteria API | High | Very High | Complex, dynamic queries |
| QueryDSL | Medium | Very High | Complex queries with IDE support, fluent API, and code generation |

8.2. Spring Query Methods

Method names parsed automatically to generate queries:

// File: TreatmentRepository.java
public interface TreatmentRepository extends BaseJpaRepository<TreatmentEntity, Long> {

    // Returns treatments with name containing search term (case-insensitive)
    List<TreatmentEntity> findAllByNameContainingIgnoreCase(String name);

    // Returns treatments by specialist, ordered by name
    List<TreatmentEntity> findAllBySpecialistOrderByName(SpecialistEntity specialist);
}
Note

Think about it: Spring Query Methods are convenient but become unwieldy with complexity. Method names like findAllByStatusAndSpecialistIdAndDateTimeBetweenOrderByDateTimeDesc are hard to read. When does it make sense to switch to @Query?

8.3. @Query Annotation

Explicit JPQL queries with parameter binding:

// File: AppointmentRepository.java
public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {

    @Query("""
            SELECT a FROM AppointmentEntity a
            JOIN a.treatment t
            WHERE t.specialist.id = :specialistId
            AND a.dateTime < :date
            ORDER BY a.dateTime DESC
            """)
    List<AppointmentEntity> findPastAppointmentsBySpecialist(
        @Param("specialistId") Long specialistId,
        @Param("date") Instant date);
}
Tip
  • Use text blocks (triple-quoted strings, Java 15+) for multi-line readability

  • @Param annotation maps method parameters to query placeholders

  • Use JPQL entity names and field names, not table/column names

  • For reference on JPQL syntax: https://thorben-janssen.com/jpql/

8.4. Named Queries

Defined on entity class, referenced by interface method:

// File: SpecialistEntity.java
@Entity
@NamedQuery(name = "SpecialistEntity.findBySpecialization",
    query = "SELECT s FROM SpecialistEntity s WHERE s.specialization = :specialization")
public class SpecialistEntity extends BaseEntity {
    // ...
}
// File: SpecialistRepository.java
public interface SpecialistRepository extends BaseJpaRepository<SpecialistEntity, Long> {

    // Method name must match query name suffix (after entity name)
    List<SpecialistEntity> findBySpecialization(Specialization specialization);
}
Note

Named queries are useful for:

  • Keeping queries close to entities

  • Reusing same query across multiple repositories

  • Separating complex query logic from repository interface

However, they’re less discoverable than @Query annotations in the repository.

8.5. Criteria API

Type-safe, programmatic query construction:

// File: AppointmentCriteria.java (create in dataaccess.criteria package)
public record AppointmentCriteria(
    LocalDate startDate,
    LocalDate endDate,
    AppointmentStatus status,
    Long clientId
) {}
// File: AppointmentRepository.java
public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {

    default List<AppointmentEntity> findByCriteria(AppointmentCriteria criteria) {
        CriteriaBuilder cb = getEntityManager().getCriteriaBuilder();
        CriteriaQuery<AppointmentEntity> cq = cb.createQuery(AppointmentEntity.class);
        Root<AppointmentEntity> root = cq.from(AppointmentEntity.class);

        List<Predicate> predicates = new ArrayList<>();

        if (criteria.status() != null) {
            predicates.add(cb.equal(root.get("status"), criteria.status()));
        }

        if (criteria.clientId() != null) {
            predicates.add(cb.equal(root.get("client").get("id"), criteria.clientId()));
        }

        // Add more predicates based on other criteria fields

        cq.where(predicates.toArray(new Predicate[0]));
        return getEntityManager().createQuery(cq).getResultList();
    }
}
Note

Think about it: Criteria API is verbose and type-safe. When would you choose it over @Query? Consider: dynamic query building, avoiding string concatenation, compile-time safety vs readability.

8.6. QueryDSL

QueryDSL is a framework that enables the construction of type-safe SQL-like queries for multiple backends including JPA, MongoDB, and SQL in Java. It provides a fluent API that allows developers to write queries in a more readable and maintainable way compared to string-based queries.

Note

The original QueryDSL project has been relatively inactive in recent years, with the last major release being over 2 years old. However, the community has created several active forks, such as https://github.com/OpenFeign/querydsl, which continue to maintain and improve the library. For production use, consider evaluating these community-maintained versions.

QueryDSL offers a fluent, type-safe query builder with excellent IDE support. For comprehensive documentation, see: https://querydsl.com/static/querydsl/5.0.0/reference/html_single/

8.6.1. Setup

Understanding QueryDSL Code Generation

QueryDSL works by generating special "Q-classes" from your JPA entities at compile time. These Q-classes provide type-safe access to entity fields and relationships, enabling IDE autocompletion and compile-time error checking.

What happens during compilation:

  1. Annotation Processing: The querydsl-apt (Annotation Processing Tool) scans your @Entity classes

  2. Code Generation: For each entity like UserEntity, it generates a corresponding QUserEntity class

  3. Q-Class Structure: Generated classes contain static field references (e.g., QUserEntity.userEntity.firstName)

  4. Type Safety: These references are strongly typed, preventing field name typos and type mismatches

Example of what gets generated:

From your UserEntity class:

@Entity
public class UserEntity {
    private String firstName;
    private String lastName;
    // ...
}

QueryDSL generates QUserEntity (simplified):

public class QUserEntity extends EntityPathBase<UserEntity> {
    public static final QUserEntity userEntity = new QUserEntity("userEntity");
    public final StringPath firstName = createString("firstName");
    public final StringPath lastName = createString("lastName");
    // ...
}

Why do we need both dependencies?

  • querydsl-jpa: Runtime library containing query execution classes (JPAQueryFactory, query builders)

  • querydsl-apt: Compile-time tool that generates Q-classes from your entities

Why do we need annotation processing?

The maven-compiler-plugin with annotation processors tells Maven:

  • When to run: During the compile phase (integrated with compilation)

  • What processor to use: JPAAnnotationProcessor to scan JPA entities

  • Where to put generated code: target/generated-sources/annotations directory

What happens without proper setup?

  • Missing querydsl-jpa: Runtime errors when executing queries

  • Missing querydsl-apt: No Q-classes generated, compilation fails

  • Wrong classifier: Package mismatch errors (javax vs jakarta)

  • Missing annotation processor: Q-classes not generated during Maven build

  • Missing Jakarta Persistence API: NoClassDefFoundError during Q-class generation

Add dependencies to pom.xml:

<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-jpa</artifactId>
    <version>5.1.0</version>
    <classifier>jakarta</classifier>
</dependency>
<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-apt</artifactId>
    <version>5.1.0</version>
    <classifier>jakarta</classifier>
    <scope>provided</scope>
</dependency>

The provided scope of the querydsl-apt dependency means that this jar should be made available only at build time, but not included in the application artifact.

Add plugin to <plugins> section in your pom.xml (modern approach):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <!-- QueryDSL annotation processor -->
            <path>
                <groupId>com.querydsl</groupId>
                <artifactId>querydsl-apt</artifactId>
                <version>5.1.0</version>
                <classifier>jakarta</classifier>
            </path>
            <!-- Jakarta Persistence API needed by QueryDSL processor -->
            <path>
                <groupId>jakarta.persistence</groupId>
                <artifactId>jakarta.persistence-api</artifactId>
                <version>3.1.0</version>
            </path>
            <!-- Lombok annotation processor - needed because entities use @Data, @Getter, etc.
                 Both QueryDSL and Lombok processors must run together during compilation -->
            <path>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <version>${lombok.version}</version>
            </path>
        </annotationProcessorPaths>
        <generatedSourcesDirectory>target/generated-sources/annotations</generatedSourcesDirectory>
    </configuration>
</plugin>

Why this approach works:

  • Modern: Uses standard Maven compiler plugin instead of deprecated apt-maven-plugin

  • Integrated: Annotation processing happens during compilation, not separate phase

  • Complete classpath: Includes all dependencies needed by annotation processors

  • Multiple processors: Supports both QueryDSL and Lombok in same configuration

  • No duplicate generation: Avoids "Attempt to recreate a file" errors

Note

Why Jakarta classifier? Spring Boot 3.x uses Jakarta EE (jakarta.) instead of Java EE (javax.). The jakarta classifier ensures QueryDSL generates code compatible with Jakarta EE APIs. Without it, you’ll get compilation errors due to package mismatches.

Minimal setup: Only querydsl-jpa and querydsl-apt are needed for JPA queries. Additional artifacts like querydsl-sql are only required for direct SQL operations.

Complete Setup Workflow:

  1. Add dependencies and plugin to your pom.xml

  2. Run Maven compilation: mvn clean compile (or mvn clean install)

  3. Verify Q-class generation: Check the target/generated-sources/annotations directory (as configured in the compiler plugin)

  4. IDE integration: Most IDEs automatically recognize generated sources

What you should see after successful setup:

target/generated-sources/annotations/
└── com/capgemini/training/appointmentbooking/dataaccess/entity/
    ├── QUserEntity.java
    ├── QClientEntity.java
    ├── QSpecialistEntity.java
    ├── QTreatmentEntity.java
    └── QAppointmentEntity.java

Common Setup Issues for Beginners:

  • "Cannot resolve symbol QUserEntity": Q-classes not generated - run mvn clean compile

  • "Package jakarta.persistence does not exist": Missing jakarta classifier

  • "Generated sources not found": IDE not recognizing generated sources - refresh/reimport project

  • "Processor not found": Wrong processor class name in plugin configuration

Tip

IDE Setup: After running mvn compile, your IDE should automatically detect generated sources. If not:

  • IntelliJ IDEA: Right-click target/generated-sources/annotations → "Mark Directory as" → "Generated Sources Root"

  • Eclipse: Right-click project → "Properties" → "Java Build Path" → "Source" → "Add Folder" → select target/generated-sources/annotations

8.6.2. Usage

// File: ClientRepository.java
public interface ClientRepository extends BaseJpaRepository<ClientEntity, Long> {

    default List<ClientEntity> findByName(String firstName, String lastName) {
        JPAQueryFactory queryFactory = new JPAQueryFactory(getEntityManager());

        QClientEntity client = QClientEntity.clientEntity;
        QUserEntity user = QUserEntity.userEntity;

        return queryFactory
                .selectFrom(client)
                .leftJoin(client.user, user)
                .where(user.firstName.eq(firstName)
                        .and(user.lastName.eq(lastName)))
                .fetch();
    }
}
// File: ClientRepositoryIT.java - Testing QueryDSL
public class ClientRepositoryIT extends BaseDataJpaTest {

    @Autowired
    private ClientRepository clientRepository;

    @Test
    void testFindByQueryDSL() {
        // when
        List<ClientEntity> clients = clientRepository.findByName("Stefan", "Kowalski");

        // then
        assertThat(clients).isNotEmpty().hasSize(1);
        assertThat(clients.getFirst().getUser().getFirstName()).isEqualTo("Stefan");
        assertThat(clients.getFirst().getUser().getLastName()).isEqualTo("Kowalski");
    }
}
Important

QueryDSL advantages:

  • IDE autocompletion for all query methods

  • Type-safe (compile-time checking)

  • Fluent, readable API

  • SQL-like structure with Java syntax

Setup overhead is higher than for the other approaches. Choose QueryDSL for complex queries where type-safety and readability matter most.

9. Exercises

9.1. Exercise 1: Basic Queries

Implement and test these queries:

1.1 Find treatments by partial name (case-insensitive)

  • Use Spring Query Methods

  • Expected method signature: List<TreatmentEntity> findAllByNameContainingIgnoreCase(String name)

  • Write test: should find treatments containing "Konsultacja"

1.2 Find past appointments for a specialist

  • Use @Query annotation with JOIN

  • Include only appointments before the current date (or any other date)

  • Exclude cancelled appointments

  • Parameters: specialistId (Long), currentDate (Instant)

  • Write test: should verify that all past appointments belong to the correct specialist and fall before the given date

9.2. Exercise 2: Advanced Queries

Tip

When implementing exercises:

  1. Write the query/method first

  2. Write a test that calls it

  3. Run test to verify it works

  4. Commit before moving to next exercise

This test-first approach catches errors immediately rather than discovering them later.

Implement these queries and write tests for each:

2.1 Find appointments by time period and status - Use Spring Query Methods

  • Parameters: startDateTime, endDateTime, status

  • Hint: Method name could be findByDateTimeBetweenAndStatus

2.2 Find conflicting appointments - Use @Query

  • Find appointments for a specialist in given time range

  • Exclude CANCELLED status

  • Parameters: specialistId, appointmentStartTime, appointmentEndTime

  • Hint: You’ll need to JOIN through Treatment to Specialist

2.3 Find treatment by name - Use Named Query

  • Single parameter: treatment name (exact match)

  • Implement: annotation on TreatmentEntity + method in TreatmentRepository

2.4 Find treatments by specialization - Use QueryDSL

  • Parameters: specialization

  • Must join with SpecialistEntity

  • Expected: treatments provided by specialists in given field

2.5 Find appointment history for a client - Use Criteria API

  • Create AppointmentHistoryCriteria record with optional filters

  • Parameters: clientId (required), status (optional), fromDate (optional)

  • Return: all matching appointments ordered by date descending

10. Troubleshooting Common Issues

10.1. IllegalStateException: Failed to load ApplicationContext

Symptom: Test fails immediately on startup; mentions query errors

Cause: Query compilation errors detected at application startup

Solution:

  • Check JPQL syntax carefully (entity names, field names, keywords)

  • Verify parameter names match @Param annotations

  • Ensure relationships exist and are properly mapped

  • Check query in IDE for syntax highlighting errors

Tip

Read the error messages in the application logs carefully - they often point to the exact JPQL syntax issue.

10.2. LazyInitializationException

Symptom: Exception: could not initialize proxy - no Session

Cause: Accessing lazy-loaded associations outside transaction context

Solution:

  • Add @Transactional to service methods that need to load associations

  • Explicitly fetch required associations in query: LEFT JOIN FETCH a.user

  • Consider eagerly loading frequently-needed associations (carefully!)

// Wrong - association not loaded
AppointmentEntity appt = appointmentRepository.findById(id).orElse(null);
String clientName = appt.getClient().getUser().getFirstName(); // Fails!

// Right - fetch association in query
@Query("SELECT a FROM AppointmentEntity a LEFT JOIN FETCH a.client c LEFT JOIN FETCH c.user WHERE a.id = :id")
Optional<AppointmentEntity> findByIdWithClient(@Param("id") Long id);

10.3. NoSuchElementException: Unable to parse binding parameter

Symptom: Test runs but @Query fails with binding parameter error

Cause: Parameter name in query doesn’t match @Param annotation

Solution:

  • Check spelling: @Param("specialistId") must match :specialistId in query

  • Query parameters are case-sensitive

  • All placeholders must have corresponding @Param annotations

10.4. Performance - N+1 Query Problem

Symptom: For 100 appointments, you see 101 SQL queries in logs

Cause: Iterating over a loaded collection (e.g., appointments) and touching a lazy association triggers a separate query per item

Solution:

  • Use JOIN FETCH in queries to load associations eagerly

  • Set appropriate fetch = FetchType.LAZY on relationships (default)

  • Use Criteria API to explicitly fetch needed data

  • Add proper indexes on foreign key columns
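As a sketch (the method name is illustrative), a fetch join collapses the 1 + N pattern into a single SQL statement by loading each appointment together with its treatment and specialist:

```java
// One query instead of 1 + N: the associations are fetched eagerly
// for this query only, without changing the entities' LAZY defaults
@Query("""
        SELECT a FROM AppointmentEntity a
        JOIN FETCH a.treatment t
        JOIN FETCH t.specialist
        """)
List<AppointmentEntity> findAllWithTreatmentAndSpecialist();
```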

Warning

JPA queries are validated at runtime, not compile-time. Always test custom queries thoroughly to catch errors early. Enable SQL logging in tests to verify queries are efficient.

11. Best Practices Summary

11.1. Entity Design

  • Use @MappedSuperclass for common audit fields

  • Choose fetch strategies deliberately (LAZY by default)

  • Implement equals()/hashCode() using ID fields

  • Avoid circular bidirectional relationships if possible
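The equals()/hashCode() guideline can be made concrete with a minimal sketch (class names here are illustrative, not part of the project): entities compare by assigned ID only, and hashCode stays constant per class so the contract survives ID generation on persist.

```java
import java.util.Objects;

public class EntityEqualityDemo {

    // Sketch of ID-based equality for JPA-style entities
    abstract static class IdEntity {
        abstract Long getId();

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            // Transient entities (null ID) are never equal to anything else
            return getId() != null && Objects.equals(getId(), ((IdEntity) o).getId());
        }

        @Override
        public int hashCode() {
            // Constant per class: stable before and after the ID is generated,
            // so the entity can live safely in a HashSet across persist()
            return getClass().hashCode();
        }
    }

    static class Client extends IdEntity {
        private final Long id;
        Client(Long id) { this.id = id; }
        @Override Long getId() { return id; }
    }

    public static void main(String[] args) {
        System.out.println(new Client(1L).equals(new Client(1L)));     // true
        System.out.println(new Client(null).equals(new Client(null))); // false
    }
}
```

The constant hashCode trades hash-distribution quality for correctness; for typical entity collections this is the safer default.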

11.2. Repository Design

  • Extend custom base repository for EntityManager access

  • Use appropriate query method for each use case

  • Write comprehensive integration tests with @DataJpaTest

  • Consider team expertise when choosing query style

11.3. Performance Considerations

  • Create indexes on all foreign key columns

  • Use lazy loading by default, eager only when necessary

  • Write efficient queries (avoid SELECT *)

  • Test with realistic data volumes

11.4. Testing Best Practices

  • Use @DataJpaTest for repository tests (faster than @SpringBootTest)

  • Test both positive and negative scenarios

  • Verify query performance with realistic data

  • Use toInstant() helper for timestamp testing

Note

Think about it: How does a well-designed data access layer contribute to overall application quality? Consider impact on testing, debugging, maintenance, and future feature development.

What architectural patterns have you noticed emerging from this design? How would you extend this to handle more complex scenarios like auditing, soft deletes, or partitioning?

12. End of Exercise Block 2

Important

Checkpoint: Save Your Progress

  1. Commit & push your changes to your working branch with a descriptive message

  2. Create a Pull Request from your working branch to your solution branch

  3. Compare your implementation with the reference solution available in the solution repository on branch feature/2-dataaccess-repositories (your trainer will provide access)

13. Appendix: Lombok Dependency Configuration

Lombok is a compile-time annotation processor. It should not be included in the final runtime package (JAR/WAR), nor should it be propagated to other projects that depend on yours. That’s why using the optional=true configuration is recommended instead of scope=provided.

| Feature | optional=true | scope=provided |
| --- | --- | --- |
| Available during compilation | ✅ Yes | ✅ Yes |
| Included in runtime classpath | 🚫 No | 🚫 No |
| Propagated to dependent projects (transitive) | 🚫 No | ✅ Yes |
| Semantic meaning | “Local-only dependency — don’t export it further” | “Runtime environment will provide it” |
| Typical examples | Lombok, annotation processors | jakarta.servlet-api, jakarta.annotation-api |
| Recommended for Lombok | ✅ Yes | ⚠️ No (works, but semantically misleading) |

In summary, use:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>

This configuration:

  • keeps the dependency available for your local compilation,

  • prevents unnecessary propagation,

  • and aligns with Maven’s intended semantics for annotation processors.