In this chapter, we’ll create a comprehensive database schema with initial data, map it to entity models, and implement operations for entity management. This forms the foundation of our appointment booking system’s data persistence layer.
- 1. Understanding the Domain Model
- 2. Adding Database and Utility Dependencies
- 3. Database Configuration
- 4. Creating Database Tables
- 5. Entity Implementation
- 6. End of Exercise Block 1
- 7. Repositories
- 8. Advanced Querying
- 9. Exercises
- 10. Troubleshooting Common Issues
- 11. Best Practices Summary
- 12. End of Exercise Block 2
- 13. Appendix: Lombok Dependency Configuration
- 14. Navigation
After completing the Appointment Booking System Setup, we’ll create our domain model based on the requirements outlined in the Appointment Booking System Specification.
Our system must support these essential operations:
- Adding new Treatments associated with Specialists
- Retrieving available Treatments, optionally filtered by Specialist or Treatment name
- Retrieving detailed Treatment information including connected Specialist data
- Creating Appointments as a Client for specific dates and times
- Validating Appointment scheduling conflicts (preventing double-booking of Specialists)
- Allowing Clients to view their appointments
- Enabling Appointment cancellation by Clients
- Marking Appointments as completed by Specialists
- Optional: Retrieving visit history for Clients and Specialists
Note: Think about it: Why is preventing double-booking crucial for a medical appointment system? What could happen if two patients were scheduled with the same specialist at the same time?
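A core part of the double-booking validation is deciding when two time ranges overlap. As a minimal plain-Java sketch (the class and method names are illustrative, not part of the project code), appointments can be treated as half-open intervals [start, end):

```java
import java.time.Instant;

// Illustrative helper: two bookings for the same specialist conflict
// when their half-open time ranges [start, end) overlap.
public class OverlapCheck {
    public static boolean overlaps(Instant start1, Instant end1,
                                   Instant start2, Instant end2) {
        // Overlap exists unless one range ends before (or exactly when) the other begins.
        return start1.isBefore(end2) && start2.isBefore(end1);
    }
}
```

With half-open intervals, an appointment ending at 10:30 does not conflict with one starting at 10:30, which is usually the desired behavior for back-to-back bookings.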
We’ll work with five core entities: UserEntity, ClientEntity, SpecialistEntity, TreatmentEntity, and AppointmentEntity.
Each entity includes these standard fields:
- id - Generated using database sequences
- version - Optimistic locking version number
- created - Entity creation timestamp
- lastUpdated - Last modification timestamp
- UserEntity: Contains email (unique), passwordHash, firstName, and lastName
- ClientEntity: Links to a UserEntity and maintains a collection of AppointmentEntities
- SpecialistEntity: Includes specialization (enum), links to a UserEntity, and maintains TreatmentEntities
- TreatmentEntity: Contains name, description, duration (minutes), and links to a SpecialistEntity
- AppointmentEntity: Includes dateTime, status (enum), and links to ClientEntity and TreatmentEntity
Understanding these relationships is crucial for proper database design:
- User ↔ Client: Each user can optionally be a client
- User ↔ Specialist: Each user can optionally be a specialist
Important: A user can be both a client and a specialist simultaneously. For example, a doctor (specialist) can also book appointments with other specialists as a client. This flexible design supports real-world scenarios where medical professionals may need services from other specialists.
- Specialist → Treatments: One specialist provides multiple treatments
- Client → Appointments: One client can book multiple appointments
- Treatment → Appointments: One treatment can have multiple appointments
Only Client ↔ Appointment and Specialist ↔ Treatment relationships are bidirectional, allowing navigation in both directions.
Note: Think about it: Why might we choose to make some relationships bidirectional while keeping others unidirectional? What are the trade-offs in terms of performance and complexity? Consider this scenario: What happens when a user who is both a client and a specialist tries to book an appointment with themselves? How should the system handle this edge case?
Before we can work with databases and entities, we need to add the required dependencies to our project.
Add the following dependencies to your pom.xml inside the <dependencies> section:
<!-- Spring Data JPA provides integration with Hibernate and simplifies database persistence through repositories -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- H2 is an in-memory database used for lightweight development and testing -->
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<!-- Flyway manages database schema migrations automatically on application startup -->
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>

- spring-boot-starter-data-jpa — Integrates Spring Data JPA and Hibernate, making persistence and repository use easy
- H2 — Provides an in-memory relational database, perfect for demos and testing where no external DB is needed
- Flyway — Enables controlled, versioned database schema migrations to manage changes over time
After adding dependencies, reload your Maven project (e.g., "Reimport Maven Project" in IntelliJ IDEA). Spring Boot will auto-configure the JPA EntityManager, H2 data source, and Flyway migration mechanism without extra setup.
Note: About Dependency Versions: You don't see explicit <version> tags above because the Spring Boot parent POM manages compatible versions for these dependencies. If you need to override a dependency's version (for example, to test a newer Flyway), you can add the corresponding version property (such as flyway.version) to the <properties> section of your pom.xml.
Important: After adding these dependencies, your application will have access to Spring Data JPA repositories (backed by Hibernate), an embedded H2 database, and automatic Flyway schema migrations.
We’ll use:
- H2 Database: In-memory database perfect for development and testing
- Flyway: Database migration tool for version control of schema changes
Warning: H2 is excellent for development but should never be used in production. Production systems require persistent databases like PostgreSQL, MySQL, or Oracle.
Add these configurations to your application.properties file:
spring.h2.console.enabled=true
spring.datasource.url=jdbc:h2:mem:appointmentbooking
spring.datasource.username=sa
spring.datasource.password=password

By design, the in-memory database is volatile and results in data loss after an application restart.
We can change that behavior by using file-based storage. To do this we need to update the spring.datasource.url property:
spring.datasource.url=jdbc:h2:file:/data/demo

# If you are using the default location (classpath:db/migration), this property can be omitted:
spring.flyway.locations=classpath:db/migration
# Enable Flyway migrations on application startup.
# spring.flyway.enabled=true is not strictly required if Flyway migrations should always run in your environment.
# By default, Spring Boot activates Flyway migrations when a DataSource is present and Flyway is on the classpath
spring.flyway.enabled=true

Note: Think about it: We're using Spring Boot's auto-configuration for Flyway, which runs migrations automatically on application startup. In which scenarios might you need to set the spring.flyway.enabled property explicitly?
# Hibernate DDL-auto is recommended as 'none' when using Flyway for migrations
spring.jpa.hibernate.ddl-auto=none
# To avoid potential lazy loading issues in the view layer, disable OSIV (Open Session in View) which is enabled by default in Spring Boot:
spring.jpa.open-in-view=false
# Show SQL queries in logs for easier debugging (just for development purposes)
spring.jpa.show-sql=true
# To beautify or pretty-print the logged SQL (see above), we can add:
spring.jpa.properties.hibernate.format_sql=true

Understanding DDL-auto Configuration
With ddl-auto=none and Flyway active, you rely entirely on Flyway scripts to define and evolve the schema safely and reproducibly. This approach:
- Prevents conflicts between Hibernate's automatic schema generation and Flyway migrations
- Ensures consistency across different environments (dev, test, production)
- Provides version control for all database changes through migration files
- Enables rollback capabilities and change tracking
Warning: Never use ddl-auto values such as create, create-drop, or update together with Flyway. Hibernate's automatic schema generation would conflict with your migration scripts and can destroy data.
Disable spring.jpa.open-in-view to avoid the Open Session in View anti-pattern:
Warning: Explicitly setting spring.jpa.open-in-view=false is best practice in most cases. More details: how to overcome The OSIV Anti-Pattern.
SQL Logging
The show-sql=true setting displays all generated SQL queries in your application logs, which is invaluable for:
- Debugging query performance issues
- Understanding what Hibernate generates from your JPA queries
- Learning how different JPA annotations translate to SQL
- Optimizing database interactions
Note: Optional: Consider other ways to show Hibernate/JPA SQL statements. Take a look here: https://www.baeldung.com/sql-logging-spring-boot
- Start your application
- Navigate to http://localhost:8080/h2-console/
- Use the connection details configured above (JDBC URL jdbc:h2:mem:appointmentbooking, username sa)

After a successful login, you can browse the tables and run SQL queries.
You should see this in your application logs:
... : H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:appointmentbooking'
... : Exposing 15 endpoints beneath base path '/actuator'
... : Tomcat started on port 8080 (http) with context path ''

Note: Think about it: Why is having a database console useful during development? How might this help you debug data-related issues?
Our schema is shown below. All tables have ID, VERSION (for optimistic locking), and the audit columns CREATED and LAST_UPDATED; some of these are omitted from some tables in the diagram for readability:
@startuml
entity APPLICATION_USER {
*ID : BIGINT
*VERSION : INTEGER
*EMAIL : VARCHAR(254)
*PASSWORD_HASH : VARCHAR(128)
*FIRST_NAME : VARCHAR(255)
*LAST_NAME : VARCHAR(255)
*CREATED : TIMESTAMP
*LAST_UPDATED : TIMESTAMP
--
UNIQUE_USER_EMAIL
}
entity CLIENT {
*ID : BIGINT
*USER_ID : BIGINT
...
--
FK_USER_ID
}
entity SPECIALIST {
*ID : BIGINT
*USER_ID : BIGINT
*SPECIALIZATION : VARCHAR(128)
...
--
FK_USER_ID
}
entity TREATMENT {
*ID : BIGINT
*NAME : VARCHAR(255)
*DESCRIPTION : TEXT
*DURATION_MINUTES : INTEGER
*SPECIALIST_ID : BIGINT
...
--
FK_SPECIALIST_ID
}
entity APPOINTMENT {
*ID : BIGINT
*DATE_TIME : TIMESTAMP
*END_DATE_TIME : TIMESTAMP
*STATUS : VARCHAR(32)
*CLIENT_ID : BIGINT
*TREATMENT_ID : BIGINT
...
--
FK_CLIENT_ID
FK_TREATMENT_ID
}
APPLICATION_USER ||--|| CLIENT : is a
APPLICATION_USER ||--|| SPECIALIST : is a
SPECIALIST ||--o{ TREATMENT : provides
CLIENT ||--o{ APPOINTMENT : books
TREATMENT ||--o{ APPOINTMENT : is scheduled as
@enduml
Create the migration file: src/main/resources/db/migration/1.0/V0001__Create_schema.sql
Important: Flyway migration files must follow the naming convention V<version>__<description>.sql, for example V0001__Create_schema.sql. Note the double underscore between the version and the description.
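As a quick illustration of the convention, a filename check could look like the following. The regex is a simplification of Flyway's actual parsing rules, and the class is hypothetical, not part of the project:

```java
import java.util.regex.Pattern;

// Illustrative check of Flyway's default versioned-migration naming:
// prefix "V", a version (digits separated by '.' or '_'), a double
// underscore, then a description, ending in ".sql".
public class MigrationNameCheck {
    private static final Pattern VERSIONED =
        Pattern.compile("V\\d+([._]\\d+)*__\\w+\\.sql");

    public static boolean isValid(String filename) {
        return VERSIONED.matcher(filename).matches();
    }
}
```

A single underscore (V0001_Create_schema.sql) silently fails this convention, which is one of the most common Flyway mistakes.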
CREATE TABLE APPLICATION_USER (
ID BIGINT NOT NULL PRIMARY KEY,
VERSION INTEGER NOT NULL,
EMAIL VARCHAR(254) NOT NULL,
PASSWORD_HASH VARCHAR(128) NOT NULL,
FIRST_NAME VARCHAR(255) NOT NULL,
LAST_NAME VARCHAR(255) NOT NULL,
CREATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
LAST_UPDATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT UNIQUE_USER_EMAIL UNIQUE (EMAIL)
);

- ID: Primary key, auto-incremented via sequence
- VERSION: JPA optimistic locking version
- EMAIL: Unique identifier following RFC 5321 (max 254 characters)
- PASSWORD_HASH: Secure password storage (never store plain text!)
- CREATED/LAST_UPDATED: Audit timestamps with automatic defaults
Note: Think about it: Why do we store password hashes instead of plain text passwords? What security risks would plain text passwords introduce?
CREATE TABLE CLIENT (
ID BIGINT NOT NULL,
VERSION INTEGER NOT NULL,
USER_ID BIGINT NOT NULL,
CREATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
LAST_UPDATED TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (ID),
FOREIGN KEY (USER_ID) REFERENCES APPLICATION_USER(ID) ON DELETE CASCADE
);

The ON DELETE CASCADE ensures that when a user is deleted, their client record is automatically removed, maintaining referential integrity.
Exercise: Create the SPECIALIST table following this specification:
- ID: Primary key (BIGINT)
- VERSION: Optimistic locking (INTEGER)
- USER_ID: Foreign key to APPLICATION_USER (BIGINT)
- SPECIALIZATION: Specialist's field of expertise (VARCHAR(128))
- CREATED/LAST_UPDATED: Audit timestamps
Remember to include:
- Primary key constraint
- Foreign key constraint with ON DELETE CASCADE
- Proper NOT NULL constraints
Tip: Use the CLIENT table structure as a reference. The SPECIALIST table follows the same pattern with one additional column for specialization.
CREATE TABLE TREATMENT (
-- TODO: Complete the TREATMENT table creation script here:
-- ... add ID, VERSION, SPECIALIST_ID foreign key and rest of the fields (in total 8 fields + one FK constraint) ...
NAME VARCHAR(512) NOT NULL,
DESCRIPTION TEXT,
DURATION_MINUTES INTEGER NOT NULL
);

- NAME: Name of the treatment. Real treatment names in healthcare systems can be very specific and long.
- DESCRIPTION: TEXT is PostgreSQL's native unbounded text type (better than CLOB). Descriptions of actual treatments in healthcare systems often exceed 400 characters and vary significantly in length; they need room for contraindications, procedures, and aftercare instructions.
Note: Think about it: Why might we want to cascade delete treatments when a specialist is removed? What are the implications for existing appointments?
Exercise: Create the APPOINTMENT table with these fields:
- ID: Primary key (BIGINT)
- VERSION: Optimistic locking (INTEGER)
- DATE_TIME: Appointment start time (TIMESTAMP)
- END_DATE_TIME: Appointment end time (TIMESTAMP)
- STATUS: Appointment status (VARCHAR(32), default 'SCHEDULED')
- CLIENT_ID: Foreign key to CLIENT (BIGINT)
- TREATMENT_ID: Foreign key to TREATMENT (BIGINT)
- CREATED/LAST_UPDATED: Audit timestamps
Status values: SCHEDULED, CANCELLED, COMPLETED
Constraints:
- Primary key on ID
- Foreign keys with ON DELETE CASCADE for both CLIENT_ID and TREATMENT_ID
- Check constraint: END_DATE_TIME > DATE_TIME (see Constraint Checks section)
Tip: Notice that APPOINTMENT has TWO foreign keys. This is different from the one-to-one relationships you've seen so far. Study the schema diagram to understand the relationships.
CREATE SEQUENCE USER_SEQ START WITH 1 INCREMENT BY 100 NO CYCLE;
CREATE SEQUENCE CLIENT_SEQ START WITH 1 INCREMENT BY 100 NO CYCLE;
-- Create the next required sequences. How many should there be in total?
Note: Think about it: Why do we increment by 100 instead of 1? This is related to JPA's sequence allocation optimization. When using sequence-based ID generation, JPA can reserve a whole block of IDs at once instead of querying the database for every insert. If the database sequence increments by 100, and JPA is configured with allocationSize = 100, a single sequence call yields a block of 100 IDs that the application assigns locally.

Consider this: What happens if the allocationSize in JPA is set to 1, but the database sequence increments by 100? You would see IDs like 101, 201, 301… — skipping 99 values each time. This is inefficient and leads to gaps.

Why must allocationSize match the DB sequence increment? To ensure consistency and avoid ID gaps or collisions, especially in distributed systems with multiple application instances. If they don't match, JPA may either:
- Skip IDs unnecessarily.
- Reuse IDs (in worst-case scenarios).
- Cause confusion when debugging or analyzing data.

What if the application crashes before using all allocated IDs? Some IDs may be lost, but this is acceptable in most cases since primary keys are meant to be unique, not necessarily sequential.

What if you have multiple server instances sharing the same database? Each instance will request its own block of IDs. Matching allocationSize and sequence increment ensures that each instance gets a distinct, non-overlapping range, avoiding collisions.

What does NO CYCLE mean? In PostgreSQL, NO CYCLE is the default behavior when creating a sequence. This means that once the sequence reaches its maximum (MAXVALUE) or minimum (MINVALUE), it will not wrap around and restart. Instead, it stops generating new values, which helps ensure uniqueness — especially important for primary keys.

Although it's the default, explicitly specifying NO CYCLE is considered good practice for clarity, maintainability, and to avoid surprises during migrations or when using tools that might override defaults.

In Oracle, the concept is the same, but the syntax differs slightly — the keyword is written as NOCYCLE (without a space). As in PostgreSQL, it's also the default, but often included explicitly for readability and safety.

In the next section, Entity Implementation, we will configure JPA entities to use these sequences properly using @SequenceGenerator with a matching allocationSize.
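The block-allocation idea discussed above can be modeled in a few lines of plain Java. This is a simplified sketch in the spirit of Hibernate's pooled optimizers, not its actual implementation: each simulated "database round-trip" reserves a whole block of allocationSize IDs that are then handed out locally.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of block-based ID allocation: each "sequence call"
// hands out a whole block of `allocationSize` IDs that the application
// then consumes locally, without further round-trips.
public class BlockIdAllocator {
    private final int allocationSize;
    private final AtomicLong dbSequence;   // stands in for the DB sequence
    private long next;                     // next ID to hand out
    private long blockEnd;                 // exclusive end of current block

    public BlockIdAllocator(long sequenceStart, int allocationSize) {
        this.allocationSize = allocationSize;
        this.dbSequence = new AtomicLong(sequenceStart);
        this.next = 0;
        this.blockEnd = 0;
    }

    public long nextId() {
        if (next >= blockEnd) {
            // one "round-trip": the sequence advances by allocationSize
            long base = dbSequence.getAndAdd(allocationSize);
            next = base;
            blockEnd = base + allocationSize;
        }
        return next++;
    }
}
```

With allocationSize = 100, inserting 100 entities costs a single sequence call instead of 100 round-trips.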
Exercise: Add a check constraint to ensure appointment end time is after start time.
Tip: Syntax hint (see the documentation for reference):

ALTER TABLE ... ADD CONSTRAINT CHK_..._TIME
CHECK (...);
Note: Think about it: Why is this constraint important at the database level, even though it could also be enforced in Java code? Consider the defense-in-depth principle.
Create indexes on foreign key columns for better query performance:
CREATE INDEX IDX_CLIENT_USER ON CLIENT(USER_ID);
CREATE INDEX IDX_SPECIALIST_USER ON SPECIALIST(USER_ID);
-- Write the next indexes; in total there should be 5

Important: Indexes on foreign key columns are crucial for fast joins, efficient foreign-key constraint checks, and cascade delete performance.
To provide realistic test data for our application, create the migration file src/main/resources/db/migration/1.0/V0002__Create_mockdata.sql.
Tip: You can download the complete sample data from the file V0002__Create_mockdata.sql.
Important: Use negative IDs for test data (e.g., -1, -2, -3). This prevents conflicts with sequence-generated positive IDs and makes test data easily identifiable. The END_DATE_TIME should be calculated as DATE_TIME + DURATION_MINUTES. This ensures data consistency with actual treatment durations.
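The rule END_DATE_TIME = DATE_TIME + DURATION_MINUTES translates directly to java.time arithmetic. A small sketch (the class name is illustrative):

```java
import java.time.Duration;
import java.time.Instant;

// END_DATE_TIME is derived from the start time plus the treatment's
// duration in minutes, keeping mock data consistent with treatment durations.
public class EndTimeCalculator {
    public static Instant endDateTime(Instant dateTime, int durationMinutes) {
        return dateTime.plus(Duration.ofMinutes(durationMinutes));
    }
}
```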
After creating the migration file, run your application and verify the data is inserted correctly using the H2 Console.
Add Lombok dependency to the pom.xml to reduce boilerplate code:
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>

Lombok automatically generates constructors, getters, setters, and other methods at compile time, improving code readability and reducing repetitive code. Its version is managed by Spring Boot.
Important: Install the Lombok plugin in your IDE. This is required for proper annotation processing and code completion. Without it, the IDE may show false errors despite successful compilation.
Tip: Since Lombok is a compile-time tool, its JAR is not needed at runtime. Excluding it from the final application artifact (such as the executable JAR or WAR file) is considered a best practice: it reduces the final JAR size and avoids having unnecessary libraries on the runtime classpath. Add this configuration to the spring-boot-maven-plugin in your pom.xml:

<configuration>
  <excludes>
    <exclude>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
    </exclude>
  </excludes>
</configuration>
For details about the difference between optional and provided configuration, see Appendix: Lombok Dependency Configuration.
Create these packages under com.capgemini.training.appointmentbooking:
- dataaccess.entity - Entity classes
- dataaccess.converter - JPA attribute converters
- common.datatype - Enums and common types
Notice that attributes version, lastUpdated and created are repeated in every entity. To make the structure cleaner and avoid duplicated code, extract a @MappedSuperclass that each entity will extend.
Create a base class in package com.capgemini.training.appointmentbooking.dataaccess.entity to eliminate duplicate audit fields:
@MappedSuperclass
@Getter
public class BaseEntity {
@Version
@Setter
private int version;
@Column(insertable = true, updatable = false)
private Instant created;
@Column(name = "LAST_UPDATED")
private Instant lastUpdated;
@PrePersist
public void prePersist() {
Instant now = Instant.now();
this.created = now;
this.lastUpdated = now;
}
@PreUpdate
public void preUpdate() {
this.lastUpdated = Instant.now();
}
}

Note: Lifecycle methods explained: @PrePersist runs once before the entity is first inserted and initializes both timestamps; @PreUpdate runs before every update and refreshes lastUpdated.

Why no setters for created and lastUpdated? These fields are managed entirely by the lifecycle callbacks, so application code should never set them directly. Why is version different? It carries a @Setter because optimistic locking sometimes requires setting the version explicitly, for example when mapping incoming data back onto an entity.
@Entity
@Table(name = "APPLICATION_USER")
@Getter
@Setter
public class UserEntity extends BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "USER_SEQ_GEN")
@SequenceGenerator(sequenceName = "USER_SEQ", name = "USER_SEQ_GEN",
allocationSize = 100, initialValue = 1)
private Long id;
private String email;
@Column(name = "PASSWORD_HASH")
private String passwordHash;
// TODO: Add firstName and lastName fields
// Remember: use @Getter and @Setter from Lombok (already on class)
}

- The @Table(name = "APPLICATION_USER") explicitly maps to our database table
- The @SequenceGenerator links to the USER_SEQ sequence we created in SQL
- The allocationSize = 100 matches our sequence increment to optimize performance
- The @Column(name = "PASSWORD_HASH") is needed because Java naming conventions differ from SQL
Note: Think about it: Why map the entity to a table named APPLICATION_USER rather than simply USER? (Hint: USER is a reserved word in many databases.)
Create the enum in common.datatype package:
public enum Specialization {
DENTIST("Dentist"),
CARDIOLOGIST("Cardiologist"),
PEDIATRICIAN("Pediatrician"),
UROLOGIST("Urologist"),
NEUROLOGIST("Neurologist"),
ORTHOPAEDIST("Orthopaedist");
private final String name;
Specialization(String name) {
this.name = name;
}
public String getName() {
return this.name;
}
public static Specialization getByName(String name) {
for (Specialization s : Specialization.values()) {
if (s.getName().equals(name)) {
return s;
}
}
return null;
}
}

Note: Think about it: Why define a custom name for each constant and a getByName lookup instead of relying on the enum constant names directly? Consider how a value like "Dentist" is stored in the database compared to the constant DENTIST.
Create alongside Specialization in common.datatype package:
public enum AppointmentStatus {
SCHEDULED, CANCELLED, COMPLETED
}

This enum represents the possible appointment states. It will be stored in the database using @Enumerated(EnumType.STRING) to maintain readability.
Create SpecializationConverter in dataaccess.converter package:
@Converter
public class SpecializationConverter implements AttributeConverter<Specialization, String> {
@Override
public String convertToDatabaseColumn(Specialization specialization) {
return specialization != null ? specialization.getName() : null;
}
@Override
public Specialization convertToEntityAttribute(String dbData) {
return dbData != null ? Specialization.getByName(dbData) : null;
}
}

Note: Think about it: Why use a custom converter instead of @Enumerated? With @Enumerated(EnumType.STRING), the database would store the Java constant name (e.g., DENTIST). However, the converter stores the human-readable display name (e.g., "Dentist"), decoupling the database representation from the Java constant names.
Before implementing entities, understand the relationship types:
- One-to-One (1:1): User ↔ Client (each user has max one client record)
- One-to-Many (1:n): Specialist → Treatments (one specialist provides many treatments)
- Many-to-One (n:1): Treatments → Specialist (many treatments from one specialist)
- Many-to-Many (n:m): Not used in this schema
Relationships can be unidirectional (one side knows the other) or bidirectional (both sides know each other).
Note: Which side owns the relationship? In bidirectional relationships, one side is the "owner" (holds the foreign key) and the other is "mapped by" (read-only). The owner side determines what gets persisted to the database. Always put mappedBy on the non-owning side.
@Entity
@Table(name = "CLIENT")
@Getter
@Setter
public class ClientEntity extends BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "CLIENT_SEQ_GEN")
@SequenceGenerator(sequenceName = "CLIENT_SEQ", name = "CLIENT_SEQ_GEN",
allocationSize = 100, initialValue = 1)
private Long id;
// Unidirectional one-to-one: ClientEntity -> UserEntity
// This is the owning side (holds the foreign key)
@OneToOne(optional = false, fetch = FetchType.LAZY, cascade = {CascadeType.PERSIST})
@JoinColumn(name = "USER_ID", referencedColumnName = "ID")
private UserEntity user;
// Bidirectional one-to-many: ClientEntity <- AppointmentEntity
// This is the non-owning side (AppointmentEntity is the owner)
@OneToMany(mappedBy = "client", fetch = FetchType.LAZY, orphanRemoval = true,
cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
private List<AppointmentEntity> appointments = new ArrayList<>();
}

Tip: Key parameters explained: optional = false makes the user reference mandatory; fetch = FetchType.LAZY defers loading the related entity until it is accessed; cascade = PERSIST saves a new UserEntity together with the client; mappedBy = "client" marks the appointments side as non-owning; orphanRemoval = true deletes appointments that are removed from the collection.
Exercise: Implement SpecialistEntity following the pattern of ClientEntity, but with these requirements:
@Entity
@Table(name = "SPECIALIST")
@Getter
@Setter
public class SpecialistEntity extends BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SPECIALIST_SEQ_GEN")
@SequenceGenerator(sequenceName = "SPECIALIST_SEQ", name = "SPECIALIST_SEQ_GEN",
allocationSize = 100, initialValue = 1)
private Long id;
// TODO: Add one-to-one relationship with UserEntity (same pattern as ClientEntity)
// TODO: Add specialization field with @Convert annotation
// Hint: Use SpecializationConverter
// TODO: Add bidirectional one-to-many relationship with TreatmentEntity
// Hint: Use cascade {PERSIST, REMOVE} and orphanRemoval = true
}

Tip: Reference the ClientEntity implementation above. The pattern is nearly identical, with one difference: you'll add a bidirectional relationship with TreatmentEntity instead of AppointmentEntity.
Exercise: Implement TreatmentEntity with the following requirements:
@Entity
@Table(name = "TREATMENT")
@Getter
@Setter
public class TreatmentEntity extends BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "TREATMENT_SEQ_GEN")
@SequenceGenerator(sequenceName = "TREATMENT_SEQ", name = "TREATMENT_SEQ_GEN",
allocationSize = 100, initialValue = 1)
private Long id;
// TODO: Add name field (VARCHAR(128))
// TODO: Add description field (TEXT)
// TODO: Add durationMinutes field (INTEGER)
// Many-to-one: TreatmentEntity -> SpecialistEntity (owning side)
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "SPECIALIST_ID")
private SpecialistEntity specialist;
}

Note: Why is this owning side different from ClientEntity? In TreatmentEntity, the @ManyToOne side holds the SPECIALIST_ID foreign key, so the treatment is the owning side, while the specialist's treatments collection is the mappedBy (non-owning) side.
Exercise: Implement AppointmentEntity with relationships to both ClientEntity and TreatmentEntity:
@Entity
@Table(name = "APPOINTMENT")
@Getter
@Setter
public class AppointmentEntity extends BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "APPOINTMENT_SEQ_GEN")
@SequenceGenerator(sequenceName = "APPOINTMENT_SEQ", name = "APPOINTMENT_SEQ_GEN",
allocationSize = 100, initialValue = 1)
private Long id;
// TODO: Add dateTime field (TIMESTAMP)
// TODO: Add endDateTime field (TIMESTAMP)
// TODO: Add status field with @Enumerated(EnumType.STRING) and default "SCHEDULED"
// Many-to-one: AppointmentEntity -> ClientEntity (owning side)
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "CLIENT_ID")
private ClientEntity client;
// TODO: Add many-to-one relationship with TreatmentEntity
// Hint: Similar to client relationship above
// Use @ManyToOne with @JoinColumn("TREATMENT_ID")
}

Note: Think about it: Why does AppointmentEntity have TWO many-to-one relationships while ClientEntity only has one-to-one? Consider the business logic: can a client have multiple appointments? Can each appointment reference multiple treatments?
Now that entity mappings are in place, let’s verify whether they are correctly loaded from the database.
Create a new test class named EntitySmokeIT in src/test/java, under com.capgemini.training.appointmentbooking.dataaccess.entity. Annotate it with @DataJpaTest(bootstrapMode = BootstrapMode.LAZY) to configure JPA-related components.
Inside the class, inject an instance of EntityManager using @PersistenceContext.
Now, implement a test method that will validate the database contains the expected number of records for each entity.
@Test
void loadAllClasses() {
// given
Map<Class<? extends BaseEntity>, Integer> classMap = Map.of(
UserEntity.class, 8,
ClientEntity.class, 4,
SpecialistEntity.class, 4,
TreatmentEntity.class, 12,
AppointmentEntity.class, 20
);
// when //then
classMap.forEach((entityType, expectedCount) ->
assertThat(em.createQuery("from " + entityType.getSimpleName()).getResultList()).hasSize(expectedCount));
}

For each entity type, we define the expected number of records in a map.
Using EntityManager, we execute a simple query "from <EntityName>" to fetch all records of a given entity type.
We then validate that the number of retrieved records matches the expected count.
Note: Think about it: Why is this test valuable even though it seems simple? It validates that the Flyway migrations ran, that the entity mappings match the schema, and that the mock data was inserted as expected. This is a smoke test - it doesn't test business logic, just that the foundation is solid.
Important: About transaction behavior in @DataJpaTest: By default, each test method runs inside a transaction that is rolled back when the test finishes, so tests never pollute the database. However, this also means database-specific constraints (like our CHECK constraint) are only verified when pending changes are actually flushed to the database.
Important: Checkpoint: Save Your Progress. Congratulations! You've completed the first exercise block covering database setup, schema creation, and entity mapping. This is a good time to save your work, for example by committing it to version control, so you can return to a known-good state before starting the next block.
Repositories provide data access operations for entities. Spring Data JPA offers multiple approaches for implementing them.
For simple CRUD operations, Spring’s JpaRepository is sufficient. However, advanced queries often require direct EntityManager access for:
- Criteria API queries
- QueryDSL queries
- Complex native SQL
- Dynamic query construction
Creating a custom base repository gives us this access pattern throughout the application.
Create a base repository interface for EntityManager access:
@NoRepositoryBean
public interface BaseJpaRepository<T, ID> extends JpaRepository<T, ID> {
EntityManager getEntityManager();
}

Note: The @NoRepositoryBean annotation tells Spring Data JPA not to create a repository instance for this interface itself; it only serves as a shared base for the concrete repository interfaces.
Implement the base repository:
public class BaseJpaRepositoryImpl<T, ID> extends SimpleJpaRepository<T, ID>
implements BaseJpaRepository<T, ID> {
private final EntityManager entityManager;
BaseJpaRepositoryImpl(JpaEntityInformation<T, ?> entityInformation,
EntityManager entityManager) {
super(entityInformation, entityManager);
this.entityManager = entityManager;
}
@Override
public EntityManager getEntityManager() {
return this.entityManager;
}
}

Configure Spring to use custom repositories:
@Configuration
@EnableJpaRepositories(
repositoryBaseClass = BaseJpaRepositoryImpl.class,
basePackages = "com.capgemini.training.appointmentbooking.dataaccess.repository")
public class DataaccessConfiguration {}

Tip: Add this configuration class anywhere under your application's base package so that component scanning picks it up.
Create the AppointmentRepository in dataaccess.repository package:
public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {
// Basic CRUD operations inherited:
// - save(entity)
// - findById(id)
// - findAll()
// - delete(entity)
// - deleteById(id)
// etc.
// TODO: Add custom query methods here
}

Note: By extending BaseJpaRepository, this repository inherits all the standard CRUD operations and additionally exposes the EntityManager for advanced queries.
Create similar repositories for other entities:
public interface UserRepository extends BaseJpaRepository<UserEntity, Long> {}
public interface ClientRepository extends BaseJpaRepository<ClientEntity, Long> {}
public interface SpecialistRepository extends BaseJpaRepository<SpecialistEntity, Long> {}
public interface TreatmentRepository extends BaseJpaRepository<TreatmentEntity, Long> {}

Create base test class with AssertJ support:
public class BaseTest implements WithAssertions {
protected Instant toInstant(String date) {
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
return LocalDateTime.parse(date, formatter)
.atZone(ZoneId.systemDefault())
.toInstant();
}
}

Create base class for JPA integration tests:
@DataJpaTest
@Import(DataaccessConfiguration.class)
public class BaseDataJpaTest extends BaseTest {
// Provides:
// - JPA testing context via @DataJpaTest
// - Automatic transaction support (and rollback)
// - Custom repository configuration via @Import
// - H2 in-memory database (pre-configured)
}
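As a side note, the toInstant helper defined in BaseTest above can be tried in isolation. The variant below is pinned to UTC so its result does not depend on the machine's default time zone (the original uses ZoneId.systemDefault()):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Standalone variant of BaseTest#toInstant, pinned to UTC for a
// deterministic result regardless of the machine's default time zone.
public class InstantParsing {
    private static final DateTimeFormatter FORMATTER =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public static Instant toInstant(String date) {
        return LocalDateTime.parse(date, FORMATTER)
            .atZone(ZoneId.of("UTC"))
            .toInstant();
    }
}
```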
Create AppointmentRepositoryIT in src/test/java:
public class AppointmentRepositoryIT extends BaseDataJpaTest {
@Autowired
private AppointmentRepository appointmentRepository;
@Test
void testFindAll() {
// when
List<AppointmentEntity> result = appointmentRepository.findAll();
// then
assertThat(result).isNotEmpty().hasSize(20);
}
}

Note: We test repository configuration and basic operations, but not Spring framework code itself. The focus is on verifying that our custom base repository is wired correctly and that the mock data is accessible through it.
Spring Data JPA offers multiple approaches for custom queries. Choose based on complexity and team preference.
| Method | Complexity | Type-Safety | Best For |
|---|---|---|---|
| Spring Query Methods | Low | Medium | Simple filters, sorting |
| @Query Annotation | Medium | High | Well-defined queries |
| Named Queries | Medium | High | Reusable, testable queries |
| Criteria API | High | Very High | Complex, dynamic queries |
| QueryDSL | Medium | Very High | Complex queries with IDE support, fluent API and code generation support |
Method names parsed automatically to generate queries:
// File: TreatmentRepository.java
public interface TreatmentRepository extends BaseJpaRepository<TreatmentEntity, Long> {
// Returns treatments with name containing search term (case-insensitive)
List<TreatmentEntity> findAllByNameContainingIgnoreCase(String name);
// Returns treatments by specialist, ordered by name
List<TreatmentEntity> findAllBySpecialistOrderByName(SpecialistEntity specialist);
}
|
Note
|
Think about it: Spring Query Methods are convenient but become unwieldy as complexity grows. Method names like findAllByNameContainingIgnoreCase are already long, and each additional condition makes them harder to read. At what point would you switch to @Query? |
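To see the trade-off concretely, here is a hypothetical filter on treatment name, specialist, and price range, first as a derived method name and then as an equivalent @Query. The price field and both method names are illustrative assumptions, not part of the chapter's repositories:

```java
// Derived name: every extra condition lengthens the method name (price field is assumed).
List<TreatmentEntity> findAllByNameContainingIgnoreCaseAndSpecialistAndPriceBetweenOrderByName(
        String name, SpecialistEntity specialist, BigDecimal min, BigDecimal max);

// The same filter as explicit JPQL: the method name stays readable.
@Query("""
        SELECT t FROM TreatmentEntity t
        WHERE LOWER(t.name) LIKE LOWER(CONCAT('%', :name, '%'))
          AND t.specialist = :specialist
          AND t.price BETWEEN :min AND :max
        ORDER BY t.name
        """)
List<TreatmentEntity> findMatchingTreatments(
        @Param("name") String name,
        @Param("specialist") SpecialistEntity specialist,
        @Param("min") BigDecimal min,
        @Param("max") BigDecimal max);
```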
Explicit JPQL queries with parameter binding:
// File: AppointmentRepository.java
public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {
@Query("""
SELECT a FROM AppointmentEntity a
JOIN a.treatment t
WHERE t.specialist.id = :specialistId
AND a.dateTime < :date
ORDER BY a.dateTime DESC
""")
List<AppointmentEntity> findPastAppointmentsBySpecialist(
@Param("specialistId") Long specialistId,
@Param("date") Instant date);
}
|
Tip
|
|
Defined on entity class, referenced by interface method:
// File: SpecialistEntity.java
@Entity
@NamedQuery(name = "SpecialistEntity.findBySpecialization",
query = "SELECT s FROM SpecialistEntity s WHERE s.specialization = :specialization")
public class SpecialistEntity extends BaseEntity {
// ...
}
// File: SpecialistRepository.java
public interface SpecialistRepository extends BaseJpaRepository<SpecialistEntity, Long> {
// Method name must match query name suffix (after entity name)
List<SpecialistEntity> findBySpecialization(Specialization specialization);
}
|
Note
|
Named queries are useful for keeping JPQL close to the entity definition, for reusing the same query in several places, and because they are validated when the persistence unit starts.
However, they're less discoverable than @Query annotations defined directly on repository methods. |
Type-safe, programmatic query construction:
// File: AppointmentCriteria.java (create in dataaccess.criteria package)
public record AppointmentCriteria(
LocalDate startDate,
LocalDate endDate,
AppointmentStatus status,
Long clientId
) {}
// File: AppointmentRepository.java
public interface AppointmentRepository extends BaseJpaRepository<AppointmentEntity, Long> {
default List<AppointmentEntity> findByCriteria(AppointmentCriteria criteria) {
CriteriaBuilder cb = getEntityManager().getCriteriaBuilder();
CriteriaQuery<AppointmentEntity> cq = cb.createQuery(AppointmentEntity.class);
Root<AppointmentEntity> root = cq.from(AppointmentEntity.class);
List<Predicate> predicates = new ArrayList<>();
if (criteria.status() != null) {
predicates.add(cb.equal(root.get("status"), criteria.status()));
}
if (criteria.clientId() != null) {
predicates.add(cb.equal(root.get("client").get("id"), criteria.clientId()));
}
// Add more predicates based on other criteria fields
cq.where(predicates.toArray(new Predicate[0]));
return getEntityManager().createQuery(cq).getResultList();
}
}
|
Note
|
Think about it: Criteria API is verbose but fully type-safe. When would you choose it over @Query or query methods? Consider queries whose combination of filters is only known at runtime. |
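The "Add more predicates" comment above glosses over one wrinkle: AppointmentCriteria carries LocalDate values, while appointments store an Instant (the dateTime field used in the @Query example). A minimal, runnable sketch of that conversion, pinned to UTC here so the output is deterministic (the chapter's tests use ZoneId.systemDefault()):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class DateBounds {
    // Inclusive lower bound: the first instant of startDate.
    static Instant lowerBound(LocalDate startDate, ZoneId zone) {
        return startDate.atStartOfDay(zone).toInstant();
    }

    // Exclusive upper bound: the first instant of the day AFTER endDate,
    // so appointments on endDate itself still match a < comparison.
    static Instant upperBound(LocalDate endDate, ZoneId zone) {
        return endDate.plusDays(1).atStartOfDay(zone).toInstant();
    }

    public static void main(String[] args) {
        System.out.println(lowerBound(LocalDate.of(2024, 3, 15), ZoneOffset.UTC)); // 2024-03-15T00:00:00Z
        System.out.println(upperBound(LocalDate.of(2024, 3, 16), ZoneOffset.UTC)); // 2024-03-17T00:00:00Z
    }
}
```

Inside findByCriteria, these bounds would feed cb.greaterThanOrEqualTo(root.get("dateTime"), lowerBound(...)) and cb.lessThan(root.get("dateTime"), upperBound(...)).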
QueryDSL is a framework for constructing type-safe, SQL-like queries in Java against multiple backends, including JPA, MongoDB, and plain SQL. It provides a fluent API that makes queries more readable and maintainable than string-based alternatives.
|
Note
|
The original QueryDSL project has been relatively inactive in recent years, with the last major release being over 2 years old. However, the community has created several active forks, such as https://github.com/OpenFeign/querydsl, which continue to maintain and improve the library. For production use, consider evaluating these community-maintained versions. |
QueryDSL offers a fluent, type-safe query builder with excellent IDE support. For comprehensive documentation, see: https://querydsl.com/static/querydsl/5.0.0/reference/html_single/
Understanding QueryDSL Code Generation
QueryDSL works by generating special "Q-classes" from your JPA entities at compile time. These Q-classes provide type-safe access to entity fields and relationships, enabling IDE autocompletion and compile-time error checking.
What happens during compilation:
-
Annotation Processing: The
querydsl-apt (Annotation Processing Tool) scans your @Entity classes
Code Generation: For each entity like
UserEntity, it generates a correspondingQUserEntityclass -
Q-Class Structure: Generated classes contain static field references (e.g.,
QUserEntity.userEntity.firstName) -
Type Safety: These references are strongly typed, preventing field name typos and type mismatches
Example of what gets generated:
From your UserEntity class:
@Entity
public class UserEntity {
private String firstName;
private String lastName;
// ...
}
QueryDSL generates QUserEntity (simplified):
public class QUserEntity extends EntityPathBase<UserEntity> {
public static final QUserEntity userEntity = new QUserEntity("userEntity");
public final StringPath firstName = createString("firstName");
public final StringPath lastName = createString("lastName");
// ...
}
Why do we need both dependencies?
-
querydsl-jpa: Runtime library containing query execution classes (JPAQueryFactory, query builders) -
querydsl-apt: Compile-time tool that generates Q-classes from your entities
Why do we need annotation processing?
The maven-compiler-plugin with annotation processors tells Maven:
-
When to run: During the
compilephase (integrated with compilation) -
What processor to use:
JPAAnnotationProcessor to scan JPA entities
Where to put generated code:
target/generated-sources/annotations directory
What happens without proper setup?
-
❌ Missing
querydsl-jpa: Runtime errors when executing queries -
❌ Missing
querydsl-apt: No Q-classes generated, compilation fails -
❌ Wrong classifier: Package mismatch errors (javax vs jakarta)
-
❌ Missing annotation processor: Q-classes not generated during Maven build
-
❌ Missing Jakarta Persistence API:
NoClassDefFoundError during Q-class generation
Add dependencies to pom.xml:
<dependency>
<groupId>com.querydsl</groupId>
<artifactId>querydsl-jpa</artifactId>
<version>5.1.0</version>
<classifier>jakarta</classifier>
</dependency>
<dependency>
<groupId>com.querydsl</groupId>
<artifactId>querydsl-apt</artifactId>
<version>5.1.0</version>
<classifier>jakarta</classifier>
<scope>provided</scope>
</dependency>
The provided scope of the querydsl-apt dependency means that this jar should be made available only at build time, but not included in the application artifact.
Add plugin to <plugins> section in your pom.xml (modern approach):
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessorPaths>
<!-- QueryDSL annotation processor -->
<path>
<groupId>com.querydsl</groupId>
<artifactId>querydsl-apt</artifactId>
<version>5.1.0</version>
<classifier>jakarta</classifier>
</path>
<!-- Jakarta Persistence API needed by QueryDSL processor -->
<path>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
<version>3.1.0</version>
</path>
<!-- Lombok annotation processor - needed because entities use @Data, @Getter, etc.
Both QueryDSL and Lombok processors must run together during compilation -->
<path>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
</path>
</annotationProcessorPaths>
<generatedSourcesDirectory>target/generated-sources/annotations</generatedSourcesDirectory>
</configuration>
</plugin>
Why this approach works:
-
✅ Modern: Uses standard Maven compiler plugin instead of deprecated
apt-maven-plugin -
✅ Integrated: Annotation processing happens during compilation, not separate phase
-
✅ Complete classpath: Includes all dependencies needed by annotation processors
-
✅ Multiple processors: Supports both QueryDSL and Lombok in same configuration
-
✅ No duplicate generation: Avoids "Attempt to recreate a file" errors
|
Note
|
Why the Jakarta classifier? Spring Boot 3.x uses Jakarta EE packages (jakarta.*) instead of Java EE (javax.*). The jakarta classifier selects the QueryDSL artifacts built against the Jakarta Persistence API, so the generated Q-classes use the same packages as your entities. |
Complete Setup Workflow:
-
Add dependencies and plugin to your
pom.xml -
Run Maven compilation:
mvn clean compile (or mvn clean install)
Verify Q-classes generation: Check the
target/generated-sources/annotations directory (the location configured in the compiler plugin)
IDE integration: Most IDEs automatically recognize generated sources
What you should see after successful setup:
target/generated-sources/annotations/
└── com/capgemini/training/appointmentbooking/dataaccess/entity/
├── QUserEntity.java
├── QClientEntity.java
├── QSpecialistEntity.java
├── QTreatmentEntity.java
    └── QAppointmentEntity.java
Common Setup Issues for Beginners:
-
"Cannot resolve symbol QUserEntity": Q-classes not generated - run
mvn clean compile -
"Package jakarta.persistence does not exist": Missing
jakarta classifier
"Generated sources not found": IDE not recognizing generated sources - refresh/reimport project
-
"Processor not found": Wrong processor class name in plugin configuration
|
Tip
|
IDE Setup: After running mvn clean compile, refresh or reimport the project so your IDE picks up the generated sources under target/generated-sources.
|
// File: ClientRepository.java
public interface ClientRepository extends BaseJpaRepository<ClientEntity, Long> {
default List<ClientEntity> findByName(String firstName, String lastName) {
JPAQueryFactory queryFactory = new JPAQueryFactory(getEntityManager());
QClientEntity client = QClientEntity.clientEntity;
QUserEntity user = QUserEntity.userEntity;
return queryFactory
.selectFrom(client)
.leftJoin(client.user, user)
.where(user.firstName.eq(firstName)
.and(user.lastName.eq(lastName)))
.fetch();
}
}
// File: ClientRepositoryIT.java - Testing QueryDSL
public class ClientRepositoryIT extends BaseDataJpaTest {
@Autowired
private ClientRepository clientRepository;
@Test
void testFindByQueryDSL() {
// when
List<ClientEntity> clients = clientRepository.findByName("Stefan", "Kowalski");
// then
assertThat(clients).isNotEmpty().hasSize(1);
assertThat(clients.getFirst().getUser().getFirstName()).isEqualTo("Stefan");
assertThat(clients.getFirst().getUser().getLastName()).isEqualTo("Kowalski");
}
}
|
Important
|
QueryDSL advantages: compile-time type-safety, IDE autocompletion on entity fields, and a fluent, readable API.
Setup overhead is higher than for the other approaches. Choose it for complex queries where type-safety and readability matter most. Reference: https://querydsl.com/ |
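QueryDSL also handles dynamic filtering, the problem the Criteria API example solved, with less ceremony, via BooleanBuilder. A sketch reusing the AppointmentCriteria record; the method itself is an illustration, not part of the chapter's repository code:

```java
// Sketch: dynamic filtering with QueryDSL (could live in AppointmentRepository)
default List<AppointmentEntity> findByCriteriaQueryDsl(AppointmentCriteria criteria) {
    QAppointmentEntity appointment = QAppointmentEntity.appointmentEntity;
    BooleanBuilder where = new BooleanBuilder();  // starts empty; matches everything
    if (criteria.status() != null) {
        where.and(appointment.status.eq(criteria.status()));
    }
    if (criteria.clientId() != null) {
        where.and(appointment.client.id.eq(criteria.clientId()));
    }
    return new JPAQueryFactory(getEntityManager())
            .selectFrom(appointment)
            .where(where)
            .fetch();
}
```

Compared to the Criteria API version, the predicates read almost like the JPQL they generate, and every field reference is checked at compile time.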
Implement and test these queries:
1.1 Find treatments by partial name (case-insensitive)
-
Use Spring Query Methods
-
Expected method signature:
List<TreatmentEntity> findAllByNameContainingIgnoreCase(String name) -
Write test: should find treatments containing "Konsultacja"
1.2 Find past appointments for a specialist
-
Use
@Query annotation with JOIN
Include only appointments before the current date (or any other date)
-
Exclude cancelled appointments
-
Parameters: specialistId (Long), currentDate (Instant)
-
Write test: should verify that the returned appointments belong to the correct specialist and occur before the given date
|
Tip
|
When implementing the exercises, write the test first, run it to see it fail, then implement the query until the test passes.
This test-first approach catches errors immediately rather than discovering them later. |
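To make that loop concrete, a skeleton for exercise 1.1 could look like the following; the class name and the "Konsultacja" search term come from the exercise text, and the rest mirrors AppointmentRepositoryIT:

```java
public class TreatmentRepositoryIT extends BaseDataJpaTest {

    @Autowired
    private TreatmentRepository treatmentRepository;

    @Test
    void shouldFindTreatmentsByPartialNameIgnoringCase() {
        // when - a lower-case search term exercises the IgnoreCase part
        List<TreatmentEntity> result =
                treatmentRepository.findAllByNameContainingIgnoreCase("konsultacja");

        // then - every hit contains the term, regardless of case
        assertThat(result).isNotEmpty()
                .allSatisfy(t -> assertThat(t.getName()).containsIgnoringCase("konsultacja"));
    }
}
```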
Implement these queries and write tests for each:
2.1 Find appointments by time period and status - Use Spring Query Methods
-
Parameters: startDateTime, endDateTime, status
-
Hint: Method name could be
findByDateTimeBetweenAndStatus
2.2 Find conflicting appointments - Use @Query
-
Find appointments for a specialist in given time range
-
Exclude CANCELLED status
-
Parameters: specialistId, appointmentStartTime, appointmentEndTime
-
Hint: You’ll need to JOIN through Treatment to Specialist
2.3 Find treatment by name - Use Named Query
-
Single parameter: treatment name (exact match)
-
Implement: annotation on TreatmentEntity + method in TreatmentRepository
2.4 Find treatments by specialization - Use QueryDSL
-
Parameters: specialization
-
Must join with SpecialistEntity
-
Expected: treatments provided by specialists in given field
2.5 Find appointment history for a client - Use Criteria API
-
Create
AppointmentHistoryCriteria record with optional filters
Parameters: clientId (required), status (optional), fromDate (optional)
-
Return: all matching appointments ordered by date descending
Symptom: Test fails immediately on startup; mentions query errors
Cause: Query compilation errors detected at application startup
Solution:
-
Check JPQL syntax carefully (entity names, field names, keywords)
-
Verify parameter names match
@Param annotations
Ensure relationships exist and are properly mapped
-
Check query in IDE for syntax highlighting errors
|
Tip
|
Read the error messages in the application logs carefully when a query fails; they often point to the exact JPQL syntax issue. |
Symptom: Exception: could not initialize proxy - no Session
Cause: Accessing lazy-loaded associations outside transaction context
Solution:
-
Add
@Transactional to service methods that need to load associations
Explicitly fetch required associations in query:
LEFT JOIN FETCH a.user -
Consider eagerly loading frequently-needed associations (carefully!)
// Wrong - association not loaded
AppointmentEntity appt = appointmentRepository.findById(id).orElse(null);
String clientName = appt.getClient().getUser().getFirstName(); // Fails!
// Right - fetch association in query
@Query("SELECT a FROM AppointmentEntity a LEFT JOIN FETCH a.client c LEFT JOIN FETCH c.user WHERE a.id = :id")
Optional<AppointmentEntity> findByIdWithClient(@Param("id") Long id);
Symptom: Test runs but @Query fails with binding parameter error
Cause: Parameter name in query doesn’t match @Param annotation
Solution:
-
Check spelling:
@Param("specialistId") must match :specialistId in the query
Query parameters are case-sensitive
-
All placeholders must have corresponding
@Param annotations
Symptom: For 100 appointments, you see 101 SQL queries in logs
Cause: Loading collection (e.g., appointments) triggers separate query per item
Solution:
-
Use
JOIN FETCH in queries to load associations eagerly
Set appropriate
fetch = FetchType.LAZY on relationships (default)
Use Criteria API to explicitly fetch needed data
-
Add proper indexes on foreign key columns
|
Warning
|
JPA queries are validated at runtime, not compile-time. Always test custom queries thoroughly to catch errors early. Enable SQL logging in tests to verify queries are efficient. |
-
Use
@MappedSuperclass for common audit fields
Choose fetch strategies deliberately (LAZY by default)
-
Implement
equals()/hashCode() using ID fields
Avoid circular bidirectional relationships if possible
-
Extend custom base repository for EntityManager access
-
Use appropriate query method for each use case
-
Write comprehensive integration tests with
@DataJpaTest -
Consider team expertise when choosing query style
-
Create indexes on all foreign key columns
-
Use lazy loading by default, eager only when necessary
-
Write efficient queries (avoid SELECT *)
-
Test with realistic data volumes
-
Use
@DataJpaTest for repository tests (faster than @SpringBootTest)
Test both positive and negative scenarios
-
Verify query performance with realistic data
-
Use
toInstant() helper for timestamp testing
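The toInstant() helper is plain java.time plumbing and can be verified in isolation. A standalone sketch, pinned to UTC so the expected value is deterministic (BaseTest itself uses ZoneId.systemDefault()):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class ToInstantDemo {
    // Mirrors BaseTest.toInstant, but with an explicit zone parameter.
    static Instant toInstant(String date, ZoneId zone) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        return LocalDateTime.parse(date, formatter).atZone(zone).toInstant();
    }

    public static void main(String[] args) {
        System.out.println(toInstant("2024-03-15 09:30:00", ZoneOffset.UTC)); // 2024-03-15T09:30:00Z
    }
}
```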
|
Note
|
Think about it: How does a well-designed data access layer contribute to overall application quality? Consider impact on testing, debugging, maintenance, and future feature development. What architectural patterns have you noticed emerging from this design? How would you extend this to handle more complex scenarios like auditing, soft deletes, or partitioning? |
|
Important
|
Checkpoint: Save Your Progress
|
Lombok is a compile-time annotation processor. It should not be included in the final runtime package (JAR/WAR), nor should it be propagated to other projects that depend on yours.
That’s why using the optional=true configuration is recommended instead of scope=provided.
| Feature | optional=true | scope=provided |
|---|---|---|
| Available during compilation | ✅ Yes | ✅ Yes |
| Included in runtime classpath | 🚫 No | 🚫 No |
| Propagated to dependent projects (transitive) | 🚫 No | ✅ Yes |
| Semantic meaning | “Local-only dependency — don’t export it further” | “Runtime environment will provide it” |
| Typical examples | Lombok, annotation processors | |
| Recommended for Lombok | ✅ Yes | |
In summary, use:
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
This configuration:
-
keeps the dependency available for your local compilation,
-
prevents unnecessary propagation,
-
and aligns with Maven’s intended semantics for annotation processors.


