Small standalone application for periodic Grist backups (sqlite, xlsx, csv)

I’ve built a small standalone application that downloads/exports
your Grist documents for periodic backups.
It works with hosted and self-hosted Grist.

The reason I built it is that we need to take “tamper-proof periodic snapshots” of our documents,
to be archived for 10+/30+ years (genetically modified organisms database). The tool therefore downloads the documents in all three formats, in the hope that at least one of them will still be readable in 30+ years.


Thank you for this! Very cool.

Looks great, need to try it out!
Thanks very much for sharing!

That’s super cool :+1:.
What does this part of the config file do? Is it the names of the tables in Grist we want to download?

[csvtables]
Table1
Table2

In contrast to the sqlite and xlsx exports, a csv export only covers one specific table.
So in the [csvtables] section you tell the tool which tables you want to download as csv.
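
A complete config.ini then looks roughly like this (the section and key names for server, docId and apiKey below are only illustrative; the config.ini generated by the tool is the authoritative reference):

[grist]
server = https://docs.getgrist.com
docId = yourDocId
apiKey = yourApiKey

[csvtables]
Table1
Table2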


I’ve created a new release:

  • v0.3.0
    • Added logging
    • Do not create files in case of an error
    • Create config.ini in case it is missing
    • In case of an error the application quits with a return code of 1
    • Added Linux builds
      • amd64
      • arm
      • arm-linux-gnueabi
      • arm-linux-gnueabihf

I’ve created a new release (fixing some ugly bugs):

  • v0.3.1
    • Fixed a bug that prevented v0.3.0 from running (apiKey was always empty)
    • Use the platform-specific newline character in the generated config
      • Older versions of Notepad on Windows now display the config correctly
    • Print the version in the logs
    • Fixed a bug that kept csv tables out of the log

I’ve created a new release:

  • v0.3.2
    • Added wildcard downloads for csv; for example (see the sketch after this list):
      • * fetches all tables as csv
      • MyTable* fetches all tables that start with “MyTable”
      • *Expanses* fetches all tables that contain “Expanses” in the name
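
A simplified Nim sketch of the idea behind the matching (not the exact code from the release):

import std/strutils

proc matchesPattern(tableName, pattern: string): bool =
  ## Covers the three documented forms: "*", "Prefix*" and "*Infix*".
  if pattern == "*":
    true
  elif pattern.startsWith("*") and pattern.endsWith("*"):
    pattern[1 .. ^2] in tableName
  elif pattern.endsWith("*"):
    tableName.startsWith(pattern[0 .. ^2])
  else:
    tableName == pattern

when isMainModule:
  assert matchesPattern("MyTable2", "MyTable*")
  assert matchesPattern("TravelExpanses2024", "*Expanses*")
  assert not matchesPattern("Other", "MyTable*")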

@enthus1ast Thanks for a very useful app.
But what if you need to back up several documents with different docIds?

For version v0.3.2 the only option is to copy the
executable to a new folder and create another config.ini there.
Then call the executable from that folder.

However, I’ve already written a new version that uses config dirs and supports multiple configs. It is done and working, but not released yet.
I could release it later (that version also comes with .deb packages).

Edit:
I’ve uploaded the changes to GitHub (GitHub - enthus1ast/nimDownloadGrist at devel),
but currently you must build it yourself.
Before I create a new release, I want to document the changes properly, fix the Linux man page creation, and build the .deb packages for all targets I support.


I need help with a timeout issue while using nimDownloadGrist (downloadGrist_0.3.2_x86_64-linux-gnu).

I have Grist 1.6.0 installed in a Proxmox LXC container, using this script.

I’m trying to back up a document with the following details:

  • Total rows: 1,505
  • Total Data Size: 0.30 MB
  • Size of Attachments: 1.01 GB
  • Number of Tables: 15

I am able to successfully back up the document via a browser using
https://www.example.com/o/docs/api/docs/docid/download?template=false&nohistory=false (URL and docId redacted)

I’m able to successfully back up smaller documents with nimDownloadGrist, but it triggers a timeout error for this document.

The error I get is:

[09:38:21] - INFO: Version: 0.3.2 Server: 'https://www.example.com/' docId: 'aaaa' apiKey: 'bbbb'
[09:38:21] - INFO: Download sqlite
[09:39:21] - ERROR: Could not download sqlite: 'Timeout was reached'

The corresponding Grist logs, with redactions, are:

Jun 10 09:38:21 grist yarn[163]: 2025-06-10 09:38:21.261 - debug: Auth[GET]: www.example.com /docs/docid/download customHostSession=, method=GET, host=www.example.com, path=/docs/docid/download, org=, email=email@example.com, userId=6, altSessionId=altSessionId
Jun 10 09:38:21 grist yarn[163]: 2025-06-10 09:38:21.267 - debug: backupSqliteDatabase: starting copy of /opt/grist/docs/docid.grist (xxx-xxx-xxx-xxx-xxx) docId=docid
Jun 10 09:38:21 grist yarn[163]: 2025-06-10 09:38:21.274 - info: backupSqliteDatabase: copying /opt/grist/docs/docid.grist (xxx-xxx-xxx-xxx-xxx) using source connection docId=docid
Jun 10 09:39:11 grist yarn[163]: 2025-06-10 09:39:11.093 - info: backupSqliteDatabase: copy of /opt/grist/docs/docid.grist (xxx-xxx-xxx-xxx-xxx) completed successfully docId=docid
Jun 10 09:39:11 grist yarn[163]: 2025-06-10 09:39:11.093 - debug: backupSqliteDatabase: stopped copy of /opt/grist/docs/docid.grist (xxx-xxx-xxx-xxx-xxx) docId=docid, finalStepTimeMs=9752, maxStepTimeMs=9752, maxNonFinalStepTimeMs=927, numSteps=259
Jun 10 09:39:21 grist yarn[163]: 2025-06-10 09:39:21.194 - warn: Download request aborted for doc docid Error: Request aborted
Jun 10 09:39:21 grist yarn[163]: at onaborted (/opt/grist/node_modules/express/lib/response.js:1062:15)
Jun 10 09:39:21 grist yarn[163]: at onfinish (/opt/grist/node_modules/express/lib/response.js:1098:50)
Jun 10 09:39:21 grist yarn[163]: at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
Jun 10 09:39:21 grist yarn[163]: at listener (/opt/grist/node_modules/on-finished/index.js:170:15)
Jun 10 09:39:21 grist yarn[163]: at onFinish (/opt/grist/node_modules/on-finished/index.js:101:5)
Jun 10 09:39:21 grist yarn[163]: at callback (/opt/grist/node_modules/ee-first/index.js:55:10)
Jun 10 09:39:21 grist yarn[163]: at Socket.onevent (/opt/grist/node_modules/ee-first/index.js:93:5)
Jun 10 09:39:21 grist yarn[163]: at Socket.emit (node:events:530:35)
Jun 10 09:39:21 grist yarn[163]: at emitErrorNT (node:internal/streams/destroy:169:8)
Jun 10 09:39:21 grist yarn[163]: at emitErrorCloseNT (node:internal/streams/destroy:128:3)
Jun 10 09:39:21 grist yarn[163]: at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
Jun 10 09:39:21 grist yarn[163]: code: 'ECONNABORTED'
Jun 10 09:39:21 grist yarn[163]: }

I’m not sure if it’s nimDownloadGrist or my Grist instance that is timing out and aborting the connection.

Thanks for creating this great tool.

Sorry for my late reply, but you might be running into this timeout:

result.timeout = 240'f32

so ~4 minutes?

In the current version, the timeout is hardcoded (chosen by a fair dice roll).
I could make this configurable.
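
Something along these lines could work (an untested sketch; the timeout key name and its section are not final):

import std/[httpclient, parsecfg, strutils]

proc clientWithTimeout(configPath: string): HttpClient =
  let cfg = loadConfig(configPath)
  # Fall back to the current hardcoded 240 seconds when the key is missing.
  let timeoutSec = cfg.getSectionValue("", "timeout", "240").parseFloat()
  # std/httpclient takes the timeout in milliseconds.
  newHttpClient(timeout = int(timeoutSec * 1000))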

Any chance you could make this a configuration option? I love your solution, but as it fails for one of my documents, I currently back up everything via a different mechanism.

Thank you for taking the time to respond.
