umarbutler committed · verified
Commit 1e971b9 · Parent(s): 1070605

fix: typos

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -12,6 +12,6 @@ With a vocabulary of only 65,536 tokens, documents compressed with the tokenizer
 
 The Kanon tokenizer is already being used in production by all of [Isaacus](https://isaacus.com/)' currently available [Kanon](https://isaacus.com/blog/introducing-kanon/) models.
 
-The Kanon tokenizer was trained on [Isaacus](https://isaacus.com/)' Blackstone Corpus, one of the world’s largest private repositories of contracts, decisions, legislation and other legal and government documents, covering a wide range of jurisdictions, including the U.S., U.K., Canada, Australia, New Zealand, Ireland, the entire European Union, the United Nations and the International Court of Justice, to name a few.
+The Kanon tokenizer was trained on [Isaacus](https://isaacus.com/)' Blackstone Corpus, one of the world’s largest private repositories of contracts, decisions, legislation, and other legal and government documents, covering a wide range of jurisdictions, including the U.S., U.K., Canada, Australia, New Zealand, Ireland, the entire European Union, the United Nations, and the International Court of Justice, to name a few.
 
-The Kanon tokenizer is licensed freely, including for commercial usage, under the Apache 2.0 license. We actively encourage legal AI practioners, including our own competitors, to take advantage of the Kanon tokenizer when training their legal AI models to promote better interoperability between models while also improving their space efficiency.
+The Kanon tokenizer is licensed freely, including for commercial usage, under the Apache 2.0 license. We actively encourage legal AI practitioners, including our own competitors, to take advantage of the Kanon tokenizer when training their legal AI models to promote better interoperability between models while also improving their space efficiency.
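The space-efficiency point in the README follows from the vocabulary size: 65,536 = 2^16, so every token ID fits in an unsigned 16-bit integer. A minimal Python sketch of that packing, using made-up token IDs in place of real tokenizer output:

```python
from array import array

# Hypothetical token IDs; a 65,536-token vocabulary guarantees every
# ID is in the range 0..65535, so each fits in 2 bytes ("H" = uint16).
ids = [17, 42, 65535]
packed = array("H", ids)

assert packed.itemsize == 2
assert len(packed.tobytes()) == 2 * len(ids)  # half the size of 32-bit IDs
```

Storing token IDs as uint16 rather than the usual 32-bit integers halves the memory and disk footprint of pretokenized corpora.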