This action is designed to make it easy to run SQL on the GitHub Actions runner. You can persist data through caching by configuring your database engine as SQLite3.
| name | description | required | default |
|---|---|---|---|
| command | The command to run the query, e.g. `write` or `read`. | true | "" |
| engine | The relational database engine name: `sqlite3` (default), `mysql`, `postgres`, or `mssql`. | false | sqlite3 |
| datasource | The datasource string of the relational database, e.g. `/path/to/sqlite.db` or `user:password@tcp(localhost:3306)/dbname`. | true | "" |
| sql-file | The path of the SQL config file. | false | "" |
| sql | The SQL config string in YAML format. | false | "" |
| name | description |
|---|---|
| query-result | JSON array of query results. |
This action is a Docker action.
```yaml
- uses: nandemo-ya/sql-execution-action@main
  with:
    command:
      # The command to run the query. e.g. write or read.
      #
      # Required: true
      # Default: ""
    engine:
      # The relational database engine name, e.g. sqlite3 (default), mysql, postgres, and mssql.
      #
      # Required: false
      # Default: sqlite3
    datasource:
      # The datasource string of the relational database. e.g. /path/to/sqlite.db, user:password@tcp(localhost:3306)/dbname
      #
      # Required: true
      # Default: ""
    sql-file:
      # The path of the SQL config file.
      #
      # Required: false
      # Default: ""
    sql:
      # The SQL config string in YAML format.
      #
      # Required: false
      # Default: ""
```
The query to be executed is set with the `sql` or `sql-file` input. The SQL configuration is written in YAML. The write operation schema is as follows:
```yaml
queries:
  - sql: |
      CREATE TABLE IF NOT EXISTS users (
        "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
        "name" TEXT,
        "age" INTEGER
      );
    params: []
```
In addition, you can set multiple INSERT statements as follows:
```yaml
queries:
  - sql:
      INSERT INTO users (name, age) VALUES (?, ?);
    params:
      - hoge
      - 20
  - sql:
      INSERT INTO users (name, age) VALUES (?, ?);
    params:
      - fuga
      - 30
  - sql:
      INSERT INTO users (name, age) VALUES (?, ?);
    params:
      - piyo
      - 40
```
The read operation schema is as follows:
```yaml
query:
  sql:
    SELECT id, name, age FROM users WHERE name = ?;
  params:
    - fuga
```
The query result is written in JSON format to the `query-result` output.
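If individual fields are needed rather than the raw string, the JSON output can be parsed with `fromJSON` in a later step. A minimal sketch, assuming the output is a JSON array of row objects keyed by column name (the step id `execute_query` matches the read examples below):

```yaml
# Sketch: parse query-result with fromJSON (assumes a JSON array of row
# objects, e.g. [{"id": 2, "name": "fuga", "age": 30}]).
- name: Use the first row
  run: |
    echo "first name: ${{ fromJSON(steps.execute_query.outputs.query-result)[0].name }}"
```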
Examples of usage with SQLite3, MySQL, and DuckDB are shown below. SQLite3 data files are persisted via actions/cache.
```yaml
# Setup the database file as a cache
- name: Cache database file
  uses: actions/cache@v4
  with:
    path: sqlite3.db
    key: ${{ runner.os }}-database-
    restore-keys: |
      ${{ runner.os }}-database-
- uses: nandemo-ya/sql-execution-action@main
  with:
    command: write
    engine: sqlite3
    datasource: sqlite3.db
    sql: |
      queries:
        - sql: |
            CREATE TABLE IF NOT EXISTS users (
              "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
              "name" TEXT,
              "age" INTEGER
            );
          params: []
- uses: nandemo-ya/sql-execution-action@main
  with:
    command: write
    engine: sqlite3
    datasource: sqlite3.db
    sql: |
      queries:
        - sql:
            INSERT INTO users (name, age) VALUES (?, ?);
          params:
            - hoge
            - 20
        - sql:
            INSERT INTO users (name, age) VALUES (?, ?);
          params:
            - fuga
            - 30
        - sql:
            INSERT INTO users (name, age) VALUES (?, ?);
          params:
            - piyo
            - 40
- uses: nandemo-ya/sql-execution-action@main
  # Set the "id" to reference the result in a later step.
  id: execute_query
  with:
    command: read
    engine: sqlite3
    datasource: sqlite3.db
    sql: |
      query:
        sql:
          SELECT id, name, age FROM users WHERE name = ?;
        params:
          - fuga
- name: Show query result
  run: |
    echo "query result is: ${{ steps.execute_query.outputs.query-result }}"
```
For MySQL, start the mysql service first.
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping --silent"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3
        env:
          MYSQL_ROOT_PASSWORD: root_password
          MYSQL_DATABASE: test_db
          MYSQL_USER: test_user
          MYSQL_PASSWORD: test_password
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Wait for MySQL to be ready
        run: |
          while ! mysqladmin ping --host="127.0.0.1" --user="root" --password="root_password" --silent; do
            echo "Waiting for MySQL..."
            sleep 3
          done
      - name: Execute create statement
        uses: ./
        with:
          command: write
          engine: mysql
          datasource: test_user:test_password@tcp(mysql:3306)/test_db
          sql: |
            queries:
              - sql: |
                  CREATE TABLE IF NOT EXISTS users (
                    `id` INT NOT NULL AUTO_INCREMENT,
                    `name` TEXT,
                    `age` INT,
                    PRIMARY KEY (`id`)
                  );
                params: []
      - name: Execute insert statement
        uses: ./
        with:
          command: write
          engine: mysql
          datasource: test_user:test_password@tcp(mysql:3306)/test_db
          sql: |
            queries:
              - sql:
                  INSERT INTO users (name, age) VALUES (?, ?);
                params:
                  - hoge
                  - 20
              - sql:
                  INSERT INTO users (name, age) VALUES (?, ?);
                params:
                  - fuga
                  - 30
              - sql:
                  INSERT INTO users (name, age) VALUES (?, ?);
                params:
                  - piyo
                  - 40
      - name: Execute select statement
        uses: ./
        id: execute_query
        with:
          command: read
          engine: mysql
          datasource: test_user:test_password@tcp(mysql:3306)/test_db
          sql: |
            query:
              sql:
                SELECT id, name, age FROM users WHERE name = ?;
              params:
                - fuga
      - name: Show query result
        run: |
          echo "query result is: ${{ steps.execute_query.outputs.query-result }}"
```
The datasource refers to the MySQL database by its service name (`mysql`) because the sql-execution-action itself runs in a Docker container on the same network as the service.
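By contrast, a plain `run:` step executes on the runner host and reaches the service through the port mapping on `127.0.0.1`, as the "Wait for MySQL" step above does. A sketch of such a host-side query (the mysql client invocation is illustrative and assumes the client is available on the runner):

```yaml
# Sketch: a host-side run step uses the mapped port on 127.0.0.1,
# not the Docker service name.
- name: Query from the runner host (illustrative)
  run: |
    mysql --host=127.0.0.1 --port=3306 --user=test_user --password=test_password test_db \
      -e "SELECT COUNT(*) FROM users;"
```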
DuckDB can be used as an in-memory analytical database. Below is an example of using DuckDB:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Execute create statement
        uses: ./
        with:
          command: write
          engine: duckdb
          datasource: test.duckdb
          sql: |
            queries:
              - sql: |
                  CREATE TABLE IF NOT EXISTS users (
                    id INTEGER PRIMARY KEY,
                    name VARCHAR,
                    age INTEGER
                  );
                params: []
      - name: Execute insert statement
        uses: ./
        with:
          command: write
          engine: duckdb
          datasource: test.duckdb
          sql: |
            queries:
              - sql:
                  INSERT INTO users (id, name, age) VALUES (?, ?, ?);
                params:
                  - 1
                  - hoge
                  - 20
              - sql:
                  INSERT INTO users (id, name, age) VALUES (?, ?, ?);
                params:
                  - 2
                  - fuga
                  - 30
              - sql:
                  INSERT INTO users (id, name, age) VALUES (?, ?, ?);
                params:
                  - 3
                  - piyo
                  - 40
      - name: Execute select statement
        uses: ./
        id: execute_query
        with:
          command: read
          engine: duckdb
          datasource: test.duckdb
          sql: |
            query:
              sql:
                SELECT id, name, age FROM users WHERE name = ?;
              params:
                - fuga
      - name: Show query result
        run: |
          echo "query result is: ${{ steps.execute_query.outputs.query-result }}"
```
DuckDB can be used both as a file-based database and as an in-memory database by specifying `:memory:` as the datasource.
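An in-memory datasource might be used like this (a minimal sketch; since each action invocation opens its own database, `:memory:` data would not persist between separate steps):

```yaml
# Sketch: a self-contained in-memory DuckDB query. The empty params list
# mirrors the write schema; whether read queries accept it is an assumption.
- uses: nandemo-ya/sql-execution-action@main
  id: in_memory_query
  with:
    command: read
    engine: duckdb
    datasource: ":memory:"
    sql: |
      query:
        sql:
          SELECT 1 + 1 AS answer;
        params: []
```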
Please note that the DuckDB driver has additional dependencies:

- Apache Arrow C++ Libraries: The DuckDB Go driver depends on the Apache Arrow C++ libraries. They are installed automatically in the Docker container used by this action.
- Local Development: To build or test this action locally with DuckDB support, install the Apache Arrow C++ libraries:
  - On Ubuntu/Debian: `apt-get install libarrow-dev libarrow-dataset-dev libarrow-acero-dev`
  - On macOS: `brew install apache-arrow apache-arrow-glib`
- CI/CD Environment: The GitHub Actions workflow includes all necessary DuckDB dependencies in the Docker container.
- Building Without DuckDB: If you don't need DuckDB support, you can build without CGO by setting `CGO_ENABLED=0`.
This project is distributed under the MIT license.