UUIDs are not unique?

We followed the instructions to generate UUIDs. It worked for a while, but once we got to around 4,900 records in the table, the UUIDs were no longer unique per record. We were able to catch this because we set up an isDup function on the UUID column (i.e. “DUP” if len(Email_subscribers.lookupRecords(UUID=$UUID)) > 1 else “”). Now, every single new record we enter into the table simply duplicates a prior UUID.

This is quite concerning. How can UUID() generate non-unique values in one specific table? That means it’s not really a UUID. So how do we generate a UUID that is actually unique per record in a table? I’ve never seen this happen in a Postgres database, so maybe this is a limitation of SQLite? Very strange behavior.

BTW, our code for the table is below. This fires as a trigger on every new record. I have attempted to enter many new records and I can confirm that every new record created now simply duplicates a prior UUID, so UUID() does not generate anything unique. We are on the hosted version of Grist.

  def _default_UUID(rec, table, value, user):
    return UUID()
  UUID = grist.Text()

I just tried this with 60,000 rows, and there were no duplicates. What you are describing is clearly a bug, but I wonder if the bug is something about how the column is set up, or something else specific to your document.

In Grist, the UUID() function is implemented in Python. For a true UUID, duplicates would be so rare as to never happen in practice. In the case of Grist, because the Python runs in a sandbox, it may not have access to “good” randomness, and the UUID uses a pseudo-random number generator. It’s not perfect (e.g. it is not suitable for cryptographic purposes), but it should be good enough for duplicates to never happen in practice.
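To illustrate why a PRNG-backed UUID can collide while a proper one essentially cannot: if the generator is ever re-initialized with the same state (e.g. a sandbox restart with non-random seeding), it emits the exact same byte stream, and therefore the exact same “UUID”. Here is a minimal sketch, assuming a hypothetical fallback that draws bits from Python’s `random` module instead of `os.urandom` — this is not Grist’s actual implementation, just a demonstration of the failure mode:

```python
import random
import uuid

def prng_uuid4(rng):
    """Build a version-4-shaped UUID from a pseudo-random generator
    (a sketch of a sandboxed fallback, NOT Grist's real code)."""
    return uuid.UUID(int=rng.getrandbits(128), version=4)

# Two "sandbox restarts" that happen to initialize from the same seed:
a = prng_uuid4(random.Random(42))
b = prng_uuid4(random.Random(42))

# Both look like perfectly valid v4 UUIDs, yet they are identical --
# the uniqueness guarantee comes from the randomness, not the format.
assert a == b
assert a.version == 4
```

The takeaway: the values still parse as valid UUIDs, so the duplication is invisible until you check for it, exactly as the DUP column in this thread did.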

Could there be some confusion about which column you are counting duplicates in? (E.g. if there is another UUID column which is actually pulling UUID values from another referenced record?)

Another possibility: could you check (perhaps in Code View) whether any formulas call the Python random.seed() or random.setstate() functions? Those reset the state of the random number generator that UUID() relies on, and could explain such duplication.
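For anyone unfamiliar with why a stray seed() call matters: Python’s module-level random functions share one global generator, so re-seeding it replays the same sequence from the start. A minimal sketch:

```python
import random

# First run: seed the shared generator and draw some values.
random.seed(123)
first = [random.getrandbits(32) for _ in range(3)]

# A formula calling random.seed() with the same value resets the
# shared state, so the "random" stream repeats exactly.
random.seed(123)
second = [random.getrandbits(32) for _ in range(3)]

assert first == second
```

Any value derived from that generator after the reset, including a PRNG-based UUID, would be a repeat of an earlier one.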

If none of these help, would you be able to share your document with support@getgrist.com (view-only is fine), so that we can investigate?

hi there,
I also tried with 50,000 rows in another database and there were no duplicates. However, in my first database there were. There is no confusion. I have tested this dozens of times on the affected database, constantly verifying the duplicate UUID, because I’m shocked that a UUID function could ever produce a duplicate in real life (you’d only expect it maybe on a billion rows, and then maybe only once). And it’s not just one duplicate: every single new record we enter in the first database produces a duplicate UUID! Yes, this is certainly a bug. Please let me know how to share the table.

update: I shared the table with support and also sent a video link demonstrating the bug. Thanks

BTW, about other formulas: no, we don’t have any other Python formulas. The only other thing we have is a DUP field to check for duplicate UUIDs (did that to make sure the UUID actually worked). Below are the affected columns.

  def _default_UUID(rec, table, value, user):
    return UUID()
  UUID = grist.Text()

  def _default_is_uuid_dup(rec, table, value, user):
    return "DUP" if len(Email_subscribers.lookupRecords(UUID=rec.UUID)) > 1 else ""
  is_uuid_dup = grist.Text()
  is_hasura = grist.Bool()

Confirming that there is a bug, and it’s serious. So far, it looks like it has to do with the configuration of the sandboxing of the data engine, which results in non-random initialization of the random number generator (in a different sandboxing configuration, it works correctly).

I am escalating, and we will prioritize a fix ASAP. Thank you for reporting!


thanks for the quick reply. Yeah, we had some strange errors with sandboxes yesterday, but I didn’t think it was related, and the errors were somewhat cryptic, so I thought it was just a connection issue or something. Anyway, good to know you spotted the bug. Please let me know when you want us to test any fixes. Thanks.


Hi there, any idea when this will be fixed? Thanks.

We have the fix, and expect it to be rolled out to production tonight.

@dmitry-grist rolled out yet?

@ddsgadget yes the fix is in production now. Sorry that this problem happened!
