No Metadata For Imported Grist Document

Some hash mismatches don’t matter anymore after the change tracked in this issue:

In principle, snapshots could work without Redis, but in practice it is not a configuration we test. Do you have a Redis instance available? If so, you can set REDIS_URL to whatever your provider says to use for that instance. The /N can be omitted. By default, Redis instances offer 16 distinct “databases” identified by a small number, but that isn’t important for your use case. Omitting the /N uses the default Redis “database”.
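
As an illustrative sketch (the hostname is a placeholder):

```sh
# Point Grist at a Redis instance; "my-redis-host" is a placeholder.
export REDIS_URL="redis://my-redis-host:6379/0"

# The /N can be dropped; the default database is used instead:
export REDIS_URL="redis://my-redis-host:6379"
```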

@paul-grist, in S3, the location is under docs/. Even importing a small test document of about 175 KB causes the same error.

We still seem to have this problem of not being able to import files with S3 enabled. In the meantime we placed our big document inside the directory where the .grist file should be and then enabled S3, so that is working fine, but we won’t be able to import documents straight from the website itself due to the metadata issue. We would basically have to SSH into the instance and manually put the .grist file there every time a .grist import is necessary.
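
Concretely, the manual step looks something like this (the host and data directory are placeholders for our setup):

```sh
# Copy the document onto the instance by hand instead of importing through the UI.
scp ourdoc.grist ubuntu@our-grist-host:/path/to/grist-data/docs/
```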

I tried running Grist with S3 and without Redis and was able to replicate the problem you are seeing @Tazwar_Belal.

Do you think you can get Redis running, and get Grist to find it, or do you need a solution without Redis?

To run Redis, do I just download the installation package from the website and then set the database number on the instance?

Would you be comfortable running AWS’s hosted Redis option? (ElastiCache)
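
(If you want to experiment locally first, a throwaway Redis is also an option; this sketch assumes Docker is available:)

```sh
# Run a disposable local Redis on the default port 6379.
docker run -d --name grist-redis -p 6379:6379 redis
```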

It’s definitely worth a try. I will set something up on my personal account with ElastiCache enabled, configure S3, and try the procedure again.


Great, let me know how it goes. ElastiCache is what we use for the Grist SaaS so it is a well tested configuration.

For REDIS_URL="redis://hostname/N", is the hostname just the name of the Redis ElastiCache instance that I created? What would the database number (N) be?

The database number can just be 0. Every redis instance has a collection of numbered databases available, with the first one (0) being the default.

The hostname should be whatever AWS says it is; I would guess it is what you are seeing in the Endpoint section. Include any port number (you could omit it if it is 6379, I think, since that is the default for Redis). You may need to tweak security group settings to make it accessible to the Grist instance.

Overall it might look something along the lines of redis://xxxxxx.0001.use1.cache.amazonaws.com:6379/0
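
To sanity-check connectivity from the Grist instance before pointing Grist at it, something along these lines should work (same placeholder endpoint as above):

```sh
# A reachable, healthy Redis answers PONG; the hostname is a placeholder.
redis-cli -h xxxxxx.0001.use1.cache.amazonaws.com -p 6379 ping
```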

So I tried that and it is not allowing my website to show up. This is what is showing up in the logs:

I don’t know if I have to change something in my security group settings, but my inbound rules are as follows:

My outbound rules are these:

Yes, the error looks like the Grist instance doesn’t yet have the right to access the Redis endpoint. There are some things to check here:
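
One common culprit, sketched here with placeholder IDs: the Redis security group needs an inbound rule allowing TCP on port 6379 from the Grist instance’s security group. With the AWS CLI that looks roughly like:

```sh
# Allow the Grist instance's security group to reach Redis on its default port.
# Both security group IDs below are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0redis000000000000 \
  --protocol tcp \
  --port 6379 \
  --source-group sg-0grist000000000000
```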

I configured the inbound security group to allow my Grist instance to access the Redis endpoint, but I am still getting the same issue unfortunately. Not sure where I am going wrong. I tried to follow the link you provided as well.

There’s a pull request to fix the original issue here:

Once that’s merged, the fix will be in the next release we do.

The issue was that the code managing the S3 upload expected to be able to query Redis. Without Redis, it thought the imported file was out of date and deleted it.

It’s worth noting, though, that running S3 / MinIO without Redis isn’t something we (currently) test, so you might run into other issues later on when running S3 alone. Getting Redis working would definitely be a more resilient and reliable solution!
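
For reference, a minimal sketch of the combined configuration, using placeholder values (the variable names are the ones documented in the grist-core README):

```sh
# Placeholder values throughout; adjust to your own bucket and endpoint.
export GRIST_DOCS_S3_BUCKET="my-grist-bucket"    # bucket for document snapshots
export GRIST_DOCS_S3_PREFIX="docs/"              # the docs/ location mentioned earlier
export REDIS_URL="redis://xxxxxx.0001.use1.cache.amazonaws.com:6379/0"
```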

Thank you. I have not been able to get Redis working, as it did not connect to my instance, but I will try to look into it more. It is probably some type of configuration change needed within the security groups.

I see this has been merged into the core version. Will this be incorporated into Omnibus by chance?

grist-omnibus updates every Thursday, but I kicked off an early update so the change should be incorporated now.
