graphql-engine: Migration fails because the enum needs at least one row, but my data has rows?

I’m trying to migrate data from Heroku to Docker, following the guide here: https://docs.hasura.io/1.0/graphql/manual/migrations/existing-database.html.

This is what I get when I run hasura migrate apply --endpoint http://localhost:8080:

FATA[0002] apply failed: [constraint-violation] the table "frequency" cannot be used as an enum because the table must have at least one row ($.args[2].args.tables[4])

Additional information:

  1. The table frequency doesn’t have zero rows; it actually has 10 rows, at least according to the original Heroku database that I ran hasura migrate create against.
  2. frequency isn’t the only enum table, nor was it the first one created. I have another enum table, archive_types, which was created earlier and comes before frequency alphabetically. That table also has values, yet the error doesn’t mention it. Why would Hasura complain that something is wrong with frequency but not with archive_types?
  3. The only thing I did after hasura migrate create "init" was create a test table, as stated in the guide.

Also, I checked the files generated by the hasura migrate create "init" command, and they only seem to contain the GraphQL/Postgres schema itself (both the YAML and SQL files). They don’t contain a single actual row, i.e. my test data. Is there a way to migrate the data itself too? Migrations don’t seem that useful if they can’t do this.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 17 (4 by maintainers)

Most upvoted comments

@joshuarobs This should work (while we fix this with something better):

  1. hasura migrate create "init" --from-server
  2. pg_dump <postgres-url> --column-inserts --data-only -t frequency -t archive_types >> 15xxxx_init.sql
  3. hasura migrate apply --endpoint prod-endpoint.com

You can add multiple -t flags for all the enum tables. Hope this helps!
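Strung together, those three steps might look roughly like this (the connection string, the 15xxxx version prefix, and the endpoint are placeholders to replace with your own values):

# 1. Export the current schema from the running server into a new migration
hasura migrate create "init" --from-server
# 2. Append the rows of the enum tables to the generated SQL migration
#    (15xxxx stands for the version number the CLI prints)
pg_dump "postgres://user:password@host:5432/dbname" \
  --column-inserts --data-only \
  -t frequency -t archive_types >> 15xxxx_init.sql
# 3. Apply the migration to the target Hasura instance
hasura migrate apply --endpoint https://prod-endpoint.com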

This really needs a fix. It’s troublesome if you create the migration after making the enum tables, or when recreating the migration from scratch.

I’m not sure if this is the best approach, but I’m leaving it here in case others run into this issue, and for further discussion of potentially better options.

Currently I use the shell script below to generate a single migration that consolidates all migrations, with the enum data appended at the end. The result is that when applying it to a fresh Hasura server, everything works fine and the server has the bare minimum data it needs to operate. User-generated data is not included.

I’m still working on making this approach fit a production setting alongside CI/CD, and I’m unsure whether it’s optimal.

# hasura-create-prod-migration-with-seeds.sh
# Creates a Hasura migration folder with the seed/enum rows included, so that
# it can be applied to another server.
#
# (Note) This is required as there's no built-in functionality for Hasura to
#        migrate seed data alongside the database schema
#        Issue tracking: https://github.com/hasura/graphql-engine/issues/2431
#
# Summary:
# 1. Obtains seed data from tables defined in `seed-tables`
# 2. Generates a migration in Hasura
# 3. Adds the seed data to the migration

# The name of the docker container running the Postgres database
# Usually it's named after this repo folder's name, so we get that
DOCKER_CONTAINER=${PWD##*/}_postgres_cms_1

# Go inside the hasura/ directory, or exit if there isn't one
cd hasura || exit
# Get a pg_dump of the whole Postgres database running locally on Docker...
# ...whilst getting only the enum tables from the "seed-tables" file
# and removing the comments in that file
docker exec $DOCKER_CONTAINER pg_dump -U postgres -d postgres \
--column-inserts --data-only $(< "seed-tables" sed '/^#/ d' | \
# Add `-t ` in front of every table name, to build pg_dump's table flags
sed -e's/^/-t /' | \
# Add a trailing space to each line...
sed 's/$/ /' | \
# ...then delete the newlines so all the -t flags end up on one line
tr -d '\n') | \
# Obtain only the INSERT INTO statements, i.e. the enum data
# (ignoring the schema) and put it in a temporary dump file
sed -n -e '/^INSERT INTO public/p' > .temp-dump
# Run the Hasura migration code as usual and save the result in the variable
SQL=$(hasura migrate create "init" --from-server --skip-update-check --database-name default |& \
# Pull the numbers out of the CLI output...
grep -E -o '[0-9]+' | \
# ...and keep only the last one, which is the migration version (the output
# contains other numbers before it that we don't want)
tail -1 | \
# Append `_init/up.sql` to the version to build the path of the up.sql file
# that `hasura migrate create` just generated
sed -e 's/$/_init\/up.sql/')
# Go to the migrations folder and into the default database folder
# (as of migrations config v3)
cd migrations/default || exit
# Print the target file, then append the enum rows we dumped earlier
# to the end of that new SQL file
echo "SQL: $SQL"
cat ../../.temp-dump >> "$SQL"

# Clean up by deleting the temporary dump file we made
rm ../../.temp-dump

# Finally, print to the console indicating the migration creation was successful
echo "Successfully created new migration: $SQL"

You’ll need a seed-tables file in the same folder as config.yaml in the hasura/ folder. An example I have is:

colours
countries

It’s just a text file with a list of the tables that I consider enums, one name per line. Most of these tables are actual enum tables within Hasura, but some are enum-like: not technically enums under Hasura, but they essentially act like one. To cater to that flexibility, I put all the table names that are "enum enough" into this seed-tables file.
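Since the script strips lines beginning with #, the seed-tables file can also contain comments. A slightly larger, purely illustrative example:

# true Hasura enum tables
colours
countries
# enum-like table that isn't registered as an enum in Hasura
subscription_tiers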

The script should be run from my project directory, which has a project/hasura folder with all my Hasura files in it. If it runs without errors, a new migration is created in the migrations folder, with all the seed data the server needs appended to the end of its SQL file. When applied to a fresh instance of the server, this loads all the essential data the server needs to work.
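For orientation, this is roughly the layout the script assumes (the project folder name below is just an example; the Postgres container name is derived from it as <folder>_postgres_cms_1):

# my-project/
# ├── hasura-create-prod-migration-with-seeds.sh
# └── hasura/
#     ├── config.yaml
#     ├── seed-tables
#     └── migrations/
#         └── default/
#
# Run from the project root (here the Postgres container would be
# my-project_postgres_cms_1):
bash hasura-create-prod-migration-with-seeds.sh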