FDK SPARQL Service

This application provides a SPARQL API for the SPARQL page on data.norge.no and for other backend applications.

The application manages an embedded Fuseki server instance with a persistent TDB2 database. A scheduled job checks for updated resources every 15 minutes and regenerates the associated Fuseki graph when it finds that resources have been updated.

The service listens for Kafka events; the relevant events are *_REASONED and *_REMOVED, where * is the associated resource type. Each of these events updates a PostgreSQL database and tags the relevant resource type as updated. The Fuseki graph for a tagged type is then regenerated from the data in PostgreSQL the next time the 15-minute schedule runs.

Each update creates a new graph and drops the old one, but the old graph still occupies disk space. A compaction process therefore runs after each update to free the now-unused disk space.

For a broader understanding of the system’s context, refer to the architecture documentation wiki. For more specific context on this application, see the Portal subsystem section.

Getting Started

These instructions will give you a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

Ensure you have the following installed:

  • Java 17
  • Maven
  • Docker

Running locally

Clone the repository

git clone https://github.com/Informasjonsforvaltning/fdk-sparql-service.git
cd fdk-sparql-service

Generate sources

Kafka messages are serialized using Avro. The Avro schemas are located in kafka/schemas. To generate sources from the Avro schemas, run the following command:

mvn generate-sources

Start PostgreSQL database, Kafka cluster and set up topics/schemas

Topics and schemas are set up automatically when the Kafka cluster starts; Docker Compose uses the scripts create-topics.sh and create-schemas.sh to create them.

docker-compose up -d

If you have problems starting Kafka, check that all health checks pass, and make sure the number at the end of the health-check command (after 'grep') matches the number of desired topics.

Start application

mvn spring-boot:run -Dspring-boot.run.profiles=dev

Produce messages

Check that the schema id in the script is correct. It should be 1 if there is only one schema in your registry.

sh ./kafka/produce-messages.sh

Running tests

mvn verify
