Re: data limit over 100k?

We were thinking of moving more functionality to the hosted version of Grist, but are worried about the limit of 100K rows per document. Any decently-sized small business can hit this limit relatively quickly (within ~12 months) when using Grist for customer service/CRM type use cases (I don't want to get into the exact numbers here, but I've worked through the scenarios several times). So my question is: how would we deal with this limit?

My thinking was that we would archive the document once it reaches 100K records, take it out of use, and then duplicate the document (just the structure) to use for the next round of records. In this way, we'd basically be creating a new document every 12 months or so. Is this a reasonable approach, is there a better way, or will Grist eventually provide a workaround for the 100K row limit (charge extra??)? Thanks.
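For anyone wanting to script that rollover, here's a minimal sketch against the Grist REST API. It relies on the documented document-download endpoint and its `template=true` parameter (which strips data and history but keeps tables, columns, formulas, and layout); the workspace import route, host URL, and IDs below are assumptions to verify against your own instance's API docs:

```python
import os
import requests

API_KEY = os.environ["GRIST_API_KEY"]   # personal API key from Profile Settings
HOST = "https://docs.getgrist.com"      # or your self-hosted URL
DOC_ID = "xxxxxxxxxxxx"                 # the document approaching the row limit
WORKSPACE_ID = 42                       # workspace to hold the fresh copy
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Download a full snapshot to keep as the historical archive.
archive = requests.get(f"{HOST}/api/docs/{DOC_ID}/download", headers=HEADERS)
archive.raise_for_status()
with open("crm-2024-archive.grist", "wb") as f:
    f.write(archive.content)

# 2. Download again with template=true, which omits the rows but
#    keeps the document's structure.
template = requests.get(
    f"{HOST}/api/docs/{DOC_ID}/download",
    headers=HEADERS,
    params={"template": "true"},
)
template.raise_for_status()

# 3. Upload the empty template as next period's working document.
#    (Endpoint assumed from the workspace import route; verify it.)
upload = requests.post(
    f"{HOST}/api/workspaces/{WORKSPACE_ID}/import",
    headers=HEADERS,
    files={"upload": ("crm-2025.grist", template.content)},
)
upload.raise_for_status()
print("New document:", upload.json())
```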


@ddsgadget your approach sounds reasonable given the current limits. We are thinking about ways to scale documents gracefully, but don’t have an ETA for larger document support.


@paul-grist My two cents, but how about something like phiresky/sqlite-zstd (https://github.com/phiresky/sqlite-zstd), transparent dictionary-based row-level compression for SQLite?
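In case it helps evaluation, here's roughly what using it looks like on a plain SQLite file, based on the project's README. The extension path, table, and column names are hypothetical, and Grist itself would have to ship and load the extension for this to apply to .grist documents:

```python
import sqlite3

conn = sqlite3.connect("records.db")
conn.enable_load_extension(True)
conn.load_extension("./libsqlite_zstd")  # path to the built extension

# Mark one column for transparent row-level compression. The JSON
# config and the dict_chooser expression follow the project's README;
# the target table needs a primary key.
conn.execute(
    """SELECT zstd_enable_transparent(
        '{"table": "records", "column": "payload",
          "compression_level": 19, "dict_chooser": "''a''"}')"""
)

# Compression happens lazily in the background; this call runs the
# maintenance work now. Reads and writes keep using the original
# table and column names unchanged.
conn.execute("SELECT zstd_incremental_maintenance(null, 1)")
conn.commit()
```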