
5-minute guide

Crash Course

Liquigraph executes migrations defined in a single changelog. These migrations are actually called changesets.

Each changeset must define at least 1 migration query in the Cypher query language.

Changesets are immutable (i.e. not allowed to change) and incremental (i.e. executed only once) by default. Liquigraph persists the executed changesets in the target graph database every time it runs.

See the detailed section to understand the Liquigraph model in more detail.

The Migration File

Open your favourite editor and define a changelog with two changesets as follows:

<?xml version="1.0" encoding="UTF-8"?>
<changelog>
    <changeset id="hello-world" author="you">
        <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
    </changeset>
    <changeset id="hello-world-fixed" author="you">
        <query>MATCH (n:Sentence {text:'Hello monde!'}) SET n.text='Hello world!' RETURN n</query>
    </changeset>
</changelog>

These migrations can be run using the Java API directly, Spring Boot, the Maven plugin or simply using the command line. Choose the way that fits your project.

Java API

Save the migration file at ${your_project}/${root_classpath_folder}/changelog.xml.

Then, include Liquigraph in your pom.xml:
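A minimal dependency declaration looks like the following sketch — the coordinates are org.liquigraph:liquigraph-core; substitute the latest released version:

```xml
<dependency>
    <groupId>org.liquigraph</groupId>
    <artifactId>liquigraph-core</artifactId>
    <!-- replace with the latest released version -->
    <version>LATEST_RELEASED_VERSION</version>
</dependency>
```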


If you want to consume a SNAPSHOT version, you need to add the following repository declaration to your POM:

<repositories>
    <repository>
        <id>sonatype-snapshots</id>
        <name>Sonatype Snapshots</name>
        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

Finally, running the migrations can be done as follows:

import org.liquigraph.core.api.Liquigraph;
import org.liquigraph.core.configuration.Configuration;
import org.liquigraph.core.configuration.ConfigurationBuilder;

// [...]

Configuration configuration = new ConfigurationBuilder()
        .withMasterChangelogLocation("changelog.xml")
        .withUri("jdbc:neo4j:http://localhost:7474/") // adjust URI and credentials to your setup
        .withUsername("neo4j")
        .withPassword("password")
        .withRunMode()
        .build();

Liquigraph liquigraph = new Liquigraph();
liquigraph.runMigrations(configuration);

If you want to dry-run your migrations, replace withRunMode() with withDryRunMode(Paths.get(outputDirectory)), where outputDirectory specifies the path of the directory where output.cypher will be written.

Spring Boot

  1. Include the Liquigraph Spring Boot starter in your pom.xml (see instructions to consume the SNAPSHOT version):

  2. Include a Neo4j JDBC driver in your pom.xml:

  3. Expose a DataSource Spring bean, which can be used by Liquigraph. This can be done in a few ways. The most idiomatic Spring Boot way is:

    Include spring-boot-starter-jdbc in your pom.xml:


    and provide values for spring.datasource.url, spring.datasource.driver-class-name, spring.datasource.username and spring.datasource.password using any configuration source.

    Another solution is to provide your own @Configuration class which exposes a DataSource like:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    import javax.sql.DataSource;
    import java.sql.SQLException;

    @Configuration
    public class DataSourceConfiguration {

        @Bean
        public DataSource dataSource() throws SQLException {
            final HikariConfig config = new HikariConfig();
            // connection settings: adjust the URL and credentials to your setup
            config.setJdbcUrl("jdbc:neo4j:http://localhost:7474/");
            config.setUsername("neo4j");
            config.setPassword("password");
            return new HikariDataSource(config);
        }
    }

  4. You are now automatically running Liquigraph migrations at startup!
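The first option in step 3 can be satisfied, for instance, with an application.properties file. The values below are placeholders for a local Neo4j instance, and the driver class name is an assumption based on the Neo4j JDBC driver artifact — verify it against your driver version:

```properties
# placeholder values for a local Neo4j instance
spring.datasource.url=jdbc:neo4j:http://localhost:7474/
# assumed driver class from the Neo4j JDBC driver; check your driver's documentation
spring.datasource.driver-class-name=org.neo4j.jdbc.Driver
spring.datasource.username=neo4j
spring.datasource.password=secret
```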

By default the migration file is read from ${your_project}/src/main/resources/db/liquigraph/changelog.xml. This can be changed using the liquigraph.change-log property in any supported configuration source.
Other settings can be found in LiquigraphProperties.

For more details, have a look at the Liquigraph Spring Boot sample application to see how to set things up.

The Maven way

Save the migration file at ${your_project}/src/main/resources/changelog.xml.

Then, include the Liquigraph plugin in your pom.xml (see instructions to consume the SNAPSHOT version):

<plugin>
    <groupId>org.liquigraph</groupId>
    <artifactId>liquigraph-maven-plugin</artifactId>
    <configuration>
        <changelog>changelog.xml</changelog><!-- classpath location -->
    </configuration>
</plugin>

Connection settings (such as the JDBC URI and credentials) also belong in the configuration block; see the full documentation.

And finally, run:

$> mvn clean package

Full documentation is here.

Highway to shell

Save the migration file at ${your_project}/migrations/changelog.xml.

Homebrew users, skip the following installation steps altogether. Specific instructions are here.


  1. make sure at least a JRE 8 is set up and java is included in your PATH
  2. decompress the Liquigraph zip, tar.gz or tar.bz2 in LIQUIGRAPH_DIR
  3. add LIQUIGRAPH_DIR/liquigraph-cli/ to your PATH
  4. Unix users: make sure the scripts in LIQUIGRAPH_DIR/liquigraph-cli/ are executable

You can now check your installation by executing the following command (reminder to non-Unix users: replace .sh with .bat):

Running Liquigraph

Interactive mode

$> cd LIQUIGRAPH_DIR/liquigraph-cli
$> ./liquigraph.sh --help

A description should be displayed. Then, simulate the changes (replace /tmp with whatever floats your boat):

$> ./liquigraph.sh --changelog "${your_project}/migrations/changelog.xml" \
    --username neo4j \
    --password \
    --graph-db-uri jdbc:neo4j:http://localhost:7474/ \
    --dry-run-output-directory /tmp
# --password is left without a value: a password prompt will appear
# check contents
$> less /tmp/output.cypher

And finally, run:

$> ./liquigraph.sh --changelog "${your_project}/migrations/changelog.xml" \
    --username neo4j \
    --password \
    --graph-db-uri jdbc:neo4j:http://localhost:7474/
# --password is left without a value: a password prompt will appear

Non-interactive mode

The above methods will prompt for your Neo4j password. If you wish to run Liquigraph without a password prompt (i.e. non-interactively):

$> touch my_password.txt
# edit my_password.txt to contain your password and chmod it appropriately
$> ./liquigraph.sh --changelog "${your_project}/migrations/changelog.xml" \
    --username neo4j \
    --graph-db-uri jdbc:neo4j:http://localhost:7474/ \
    --password < my_password.txt

Detailed section

Supported JDBC URLs

HTTP

The URL pattern is: jdbc:neo4j:http://<host>:<port>/

For instance: jdbc:neo4j:http://localhost:7474

HTTPS

The URL pattern is: jdbc:neo4j:https://<host>:<port>/

For instance: jdbc:neo4j:https://localhost:7474

Bolt

The URL pattern is: jdbc:neo4j:bolt://<host>:<port>/

For instance: jdbc:neo4j:bolt://localhost:7687

Bolt+Routing (Neo4j 3.1+)

The URL pattern is: jdbc:neo4j:bolt+routing://<host>:<port>/

For instance: jdbc:neo4j:bolt+routing://localhost:7687



Changelog

A Liquigraph changelog defines a set of migrations (changesets) to be performed. There can be only one changelog as entry point per project.

<?xml version="1.0" encoding="UTF-8"?>
<changelog>
    <!-- import "sub-changelogs" -->
    <import resource="version_1/sub_changelog.xml" />
    <import resource="version_2/sub_changelog.xml" />
    <!-- and/or define changesets directly -->
    <changeset [...] />
    <!-- [...] -->
</changelog>
Both sub_changelog.xml files could import changelogs and/or define changeset elements. Their root element is also changelog.


Changeset

A Liquigraph changeset describes one or more create or update statements. These statements must be written in the Cypher query language and are wrapped in a single transaction (1 transaction per changeset). By default, a changeset can be run only once (incremental) and cannot be altered (immutable) against the same graph database instance. Finally, a changeset is uniquely identified within the changelog by its mandatory id and author attributes.

<changeset id="unique_identifier" author="team_or_individual_name">
    <!-- 1 to n queries: all executed in 1 transaction -->
    <query>CREATE (m:MyAwesomeNode) RETURN m</query>
    <query>CREATE (m:MyOtherAwesomeNode) RETURN m</query>
</changeset>


Execution context

An execution context is a simple string, defined at changeset level. A changeset can have 0, 1 or more execution contexts (in the latter case, they're comma-separated). For instance:

<changeset id="hello-world" author="you" contexts="foo,bar">
   <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
</changeset>

If no execution contexts are specified at runtime, all changesets will match.
If one or more execution contexts are specified at runtime, changesets will be selected:

  • if they do not declare any execution contexts
  • or if one of their declared contexts matches one of the runtime contexts

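The selection rules above can be sketched as a small predicate (an illustrative model, not Liquigraph's actual implementation; the class and method names are made up):

```java
import java.util.Collection;
import java.util.Set;

public class ContextMatcher {

    // A changeset is selected when no runtime contexts are given,
    // when it declares no contexts itself, or when at least one of
    // its declared contexts is also a runtime context.
    public static boolean matches(Collection<String> declared, Collection<String> runtime) {
        if (runtime.isEmpty()) {
            return true; // no runtime contexts: every changeset matches
        }
        if (declared.isEmpty()) {
            return true; // context-free changesets always match
        }
        return declared.stream().anyMatch(runtime::contains);
    }

    public static void main(String[] args) {
        System.out.println(matches(Set.of("foo", "bar"), Set.of("bar"))); // true
        System.out.println(matches(Set.of(), Set.of("prod")));           // true
        System.out.println(matches(Set.of("foo"), Set.of("prod")));      // false
    }
}
```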
Changeset immutability

As previously mentioned, Liquigraph changesets are immutable by default: an error is thrown if they have been altered. That said, there may be situations where changesets should be re-run whenever their contents have changed. In that case, when the contents change, the changeset's computed checksum changes and Liquigraph executes the changeset queries again.

To allow such a scenario, you just need to add an extra attribute to the changeset element:

<changeset id="hello-world" author="you" run-on-change="true">
   <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
</changeset>

Changeset incrementality

Liquigraph changesets are incremental by default (they will be executed only once). That said, there may be situations where changesets should be run at every execution. To achieve this, you just need to define one extra attribute:

<changeset id="hello-world" author="you" run-always="true">
   <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
</changeset>

Combining mutability and non-incrementality

A mutable changeset (run-on-change="true") is not re-run unless its contents change.
Similarly, a non-incremental changeset (run-always="true") always runs, as long as its contents never change (otherwise an error occurs, since changesets are immutable by default).
If you need to run a changeset every time and allow its contents to change, you need to combine both attributes:

<changeset id="hello-world" author="you" run-always="true" run-on-change="true">
   <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
</changeset>

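Putting immutability and incrementality together, the default execution decision can be modeled as follows (an interpretation of the rules above, not Liquigraph's internals; all names are illustrative):

```java
public class ExecutionDecision {

    enum Decision { RUN, SKIP, FAIL }

    // alreadyRun: the changeset was persisted during a previous execution
    // changed: its computed checksum differs from the persisted one
    static Decision decide(boolean alreadyRun, boolean changed,
                           boolean runOnChange, boolean runAlways) {
        if (!alreadyRun) {
            return Decision.RUN;                      // first execution: always run
        }
        if (changed) {
            return runOnChange ? Decision.RUN         // mutable: re-run on change
                               : Decision.FAIL;       // immutable by default: error
        }
        return runAlways ? Decision.RUN               // non-incremental: run every time
                         : Decision.SKIP;             // incremental by default: run once
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, false, false));  // FAIL
        System.out.println(decide(true, false, false, true));  // RUN
        System.out.println(decide(true, false, false, false)); // SKIP
    }
}
```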
Changeset precondition

Changeset preconditions act like guards. They make sure that the changeset queries will be executed if and only if the precondition is met.

A precondition can be simple (1 Cypher query) or compound (with boolean AND/OR operators).

In any case, the simple query (or each of the subqueries) has to return exactly one column named result, of boolean type (true|false).

Note that a precondition cannot modify the database itself: all changes will be rolled back.

If the precondition fails, an error policy has to be selected amongst the following choices:

  • CONTINUE: ignores the precondition error and skips the associated changeset execution
  • MARK_AS_EXECUTED: ignores the precondition error and marks the changeset as executed (without actually executing it)
  • FAIL: halts the whole execution, reporting the precondition error

Here is a basic precondition example (you are right: this is the dumbest precondition ever):

<changeset id="hello-world" author="you">
   <precondition if-not-met="MARK_AS_EXECUTED">
      <query>RETURN true AS result</query>
   </precondition>
   <query>CREATE (n:Sentence {text:'Hello monde!'}) RETURN n</query>
</changeset>

Here comes a slightly more complicated example:

<changeset id="contest-winner-selection" author="futuroscope-engineering">
   <precondition if-not-met="FAIL">
      <or>
         <and>
            <query>MATCH (p:User {twitter:'@fbiville'}) RETURN NOT (p.underAge) AS result</query>
            <query><![CDATA[MATCH (p:User {twitter:'@fbiville'}) OPTIONAL MATCH (p)-[:SUFFERS_FROM]->(d:NEURO_DISORDER {name:'photosensitive epilepsy'}) RETURN (d IS NULL) AS result]]></query>
         </and>
         <query><![CDATA[MATCH (p:User {twitter:'@fbiville'}) OPTIONAL MATCH (p)<-[:HAS_PARENTAL_CONTROL]-(parent:User) RETURN NOT (parent IS NULL) AS result]]></query>
      </or>
   </precondition>
   <query><![CDATA[MATCH (p:User {twitter:'@fbiville'}) CREATE (p)-[:IS_OFFERED_FREE_PASS_TO]->(:Location {name:'Futuroscope'})]]></query>
</changeset>

Changeset postcondition

Changeset postconditions allow a single changeset to be applied as many times as necessary to complete the migration, by repeating the changeset queries as long as the postcondition is met.

A postcondition can be simple (1 Cypher query) or compound (with boolean AND/OR operators).

In any case, the simple query (or each of the subqueries) has to return exactly one column named result, of boolean type (true|false).

Note that a postcondition cannot modify the database itself: all changes will be rolled back.

Once the postcondition returns false, the changeset is considered complete.

A repeatable changeset can be used to perform a migration on a large database without risking an OutOfMemoryError because of the large number of nodes or relationships impacted, by splitting the migration into smaller batches.

Here is a postcondition example, deleting all relationships in the database by batches of 10:

<changeset id="delete-relationships" author="you">
   <query><![CDATA[MATCH ()-[r]->() WITH r LIMIT 10 DELETE r]]></query>
   <postcondition>
      <query><![CDATA[OPTIONAL MATCH ()-[r]->() WITH r LIMIT 1 RETURN (r IS NOT NULL) AS result]]></query>
   </postcondition>
</changeset>
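The repeat-until-the-postcondition-fails loop amounts to the following simulation (an illustrative model of the batching behavior, not Liquigraph code; all names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedMigration {

    // Simulates a repeatable changeset: run the query once, then re-check the
    // postcondition; repeat while the postcondition still returns true.
    static int runBatched(List<Integer> relationships, int batchSize) {
        int executions = 0;
        do {
            // the changeset "query": delete up to batchSize relationships
            relationships.subList(0, Math.min(batchSize, relationships.size())).clear();
            executions++;
        } while (!relationships.isEmpty()); // "postcondition": anything left to delete?
        return executions;
    }

    public static void main(String[] args) {
        List<Integer> rels = new ArrayList<>();
        for (int i = 0; i < 25; i++) {
            rels.add(i);
        }
        System.out.println(runBatched(rels, 10)); // 3 executions: 10 + 10 + 5
    }
}
```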


Feel free to ping @fbiville if you have any questions.

Bug reports and improvement suggestions belong here.