string codec and escaping

#2
by maxidl - opened

When I load the text, there appears to be a Unicode-escaping issue to be aware of.
For example:

import datasets as hfds
d = hfds.load_dataset("allenai/madlad-400", streaming=True, languages=["en"], split="clean")
for x in d:
    break
print(x) # {'text': "Alisha // Twenty two // California\\nThis is my personal blog where I post whatever. I can't follow back on this sideblog, sorry.\\nI run ....

The issue is that the newline character comes through as the two-character sequence \\n rather than an actual \n. Not sure if it is a big deal, but I think people should be aware of this.

It is the same for \\t, and maybe other escapes as well. This would really be useful to have documented.
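To make the artifact concrete, here is a minimal, self-contained reproduction (the JSON line is a made-up example, not an actual record from the dataset): a field that was escaped twice ends up containing a literal backslash + "n" instead of a newline.

```python
import json

# Hypothetical line imitating the dataset's serialization: "\\n" in the
# raw JSON decodes to a literal backslash + "n", not a newline character.
line = '{"text": "line one\\\\nline two"}'
text = json.loads(line)["text"]

assert "\n" not in text  # no real newline present
assert "\\n" in text     # the two-character "\n" sequence is
```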


Bumping this because I've been trying to figure out the best way to recover the original data.

I found that something like the following helps with the common cases I saw (\n, \t):
newtext = text.encode('raw-unicode-escape').decode('unicode-escape')
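For the common cases, that round trip behaves like this sketch (the sample string is made up, not from the dataset):

```python
# Doubly-escaped text as loaded: literal backslash-n / backslash-t sequences.
raw = "Alisha // Twenty two\\nCalifornia\\tblog"

# Encode back to bytes, then decode the escape sequences into real characters.
fixed = raw.encode('raw-unicode-escape').decode('unicode-escape')
assert fixed == "Alisha // Twenty two\nCalifornia\tblog"  # real newline and tab
```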

but this still has an edge case: the raw sequence "\\\\_", which json.loads turns into "\\_".

  line                      # "... C : \\\\_ Windows \\\\_ System32 \\\\_ drivers "
  json.loads(line)['text']  # "... C : \\_ Windows \\_ System32 \\_ drivers"
  json.loads(line)['text'].encode('raw-unicode-escape').decode('unicode-escape')
                            # DeprecationWarning: invalid escape sequence '\_' (text is unchanged)

I did some digging into some instances and found their original pages. It looks like this is supposed to be a single plaintext backslash (as evidenced by the instance above).

Therefore I'm using something like this.

import json

# f is the open .jsonl file
for line in f:
    x = json.loads(line)
    x['text'] = (x['text']
                 .encode('raw-unicode-escape')
                 .decode('unicode-escape')
                 .replace("\\_", "\\"))
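Putting the decode step and the backslash fix together, a small helper (my own wrapper around the snippet above, not an official fix) might look like this; the sample lines are made up to illustrate both the common case and the edge case:

```python
import json

def clean_text(text: str) -> str:
    # Undo the double escaping, then collapse the leftover "\_" back into
    # the single plaintext backslash seen on the original pages.
    return (text.encode('raw-unicode-escape')
                .decode('unicode-escape')
                .replace("\\_", "\\"))

# Made-up JSON lines mimicking the two cases discussed above.
common = json.loads('{"text": "a\\\\nb"}')["text"]                # literal "\n"
edge   = json.loads('{"text": "C : \\\\\\\\_ Windows"}')["text"]  # "\\_"

assert clean_text(common) == "a\nb"             # real newline restored
assert clean_text(edge) == "C : \\ Windows"     # plain backslash restored
```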

I hope this helps people encountering the issue, but I can't guarantee that there are no more hidden edge cases.
