| gem_id | id | title | context | question | target | references | answers |
|---|---|---|---|---|---|---|---|
| gem-squad_v2-train-103000 | 57280fad3acd2414000df367 | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What was the two-byte precursor to UTF-16? | What was the two-byte precursor to UTF-16? | ["What was the two-byte precursor to UTF-16? "] | {"text": ["UCS-2"], "answer_start": [264]} |
| gem-squad_v2-train-103001 | 57280fad3acd2414000df368 | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What is used almost exclusively for building new information processing systems? | What is used almost exclusively for building new information processing systems? | ["What is used almost exclusively for building new information processing systems? "] | {"text": ["Unicode"], "answer_start": [151]} |
| gem-squad_v2-train-103002 | 5acd2c8407355d001abf37ce | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What are legacy encodings used exclusively for? | What are legacy encodings used exclusively for? | ["What are legacy encodings used exclusively for?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103003 | 5acd2c8407355d001abf37cf | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What was the precursor to the UCS-2? | What was the precursor to the UCS-2? | ["What was the precursor to the UCS-2?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103004 | 5acd2c8407355d001abf37d0 | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What descended from Mac OS X? | What descended from Mac OS X? | ["What descended from Mac OS X?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103005 | 5acd2c8407355d001abf37d1 | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What are Mac OS X and KDE called? | What are Mac OS X and KDE called? | ["What are Mac OS X and KDE called?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103006 | 5acd2c8407355d001abf37d2 | Unicode | Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista and Windows 7), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and KDE also use it for internal representation. Unicode is available on Windows 95 through Microsoft Layer for Unicode, as well as on its descendants, Windows 98 and Windows ME. | What is fixed-width one byte? | What is fixed-width one byte? | ["What is fixed-width one byte?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103007 | 572810c62ca10214002d9d00 | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | What is recommended for email transmission of Unicode? | What is recommended for email transmission of Unicode? | ["What is recommended for email transmission of Unicode? "] | {"text": ["the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding"], "answer_start": [323]} |
| gem-squad_v2-train-103008 | 572810c62ca10214002d9d01 | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | Where are the details of the two mechanisms for email transmission specified? | Where are the details of the two mechanisms for email transmission specified? | ["Where are the details of the two mechanisms for email transmission specified? "] | {"text": ["MIME standards"], "answer_start": [557]} |
| gem-squad_v2-train-103009 | 572810c62ca10214002d9d02 | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | How many different mechanisms does MIME define for encoding Unicode in email? | How many different mechanisms does MIME define for encoding Unicode in email? | ["How many different mechanisms does MIME define for encoding Unicode in email? "] | {"text": ["two different mechanisms"], "answer_start": [13]} |
| gem-squad_v2-train-103010 | 5acd2f3507355d001abf382e | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | How many ways can ASCII characters be encoded in an email? | How many ways can ASCII characters be encoded in an email? | ["How many ways can ASCII characters be encoded in an email?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103011 | 5acd2f3507355d001abf382f | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | What is another way to refer to the text body? | What is another way to refer to the text body? | ["What is another way to refer to the text body?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103012 | 5acd2f3507355d001abf3830 | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | What specifications are visible to the user? | What specifications are visible to the user? | ["What specifications are visible to the user?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103013 | 5acd2f3507355d001abf3831 | Unicode | MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII-characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. | When is Base64 not recommended? | When is Base64 not recommended? | ["When is Base64 not recommended?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103014 | 572812d44b864d19001643c8 | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | How many fonts support the majority of Unicode's character repertoire? | How many fonts support the majority of Unicode's character repertoire? | ["How many fonts support the majority of Unicode's character repertoire? "] | {"text": ["fewer than a dozen fonts"], "answer_start": [44]} |
| gem-squad_v2-train-103015 | 572812d44b864d19001643c9 | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | What are the fonts that support Unicode referred to as? | What are the fonts that support Unicode referred to as? | ["What are the fonts that support Unicode referred to as? "] | {"text": ["\"pan-Unicode\" fonts"], "answer_start": [92]} |
| gem-squad_v2-train-103016 | 572812d44b864d19001643ca | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | Unicode-based fonts are normally focused on supporting what? | Unicode-based fonts are normally focused on supporting what? | ["Unicode-based fonts are normally focused on supporting what? "] | {"text": ["basic ASCII and particular scripts or sets of characters or symbols"], "answer_start": [243]} |
| gem-squad_v2-train-103017 | 5acd309b07355d001abf3888 | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | What are most of the fonts on the market called? | What are most of the fonts on the market called? | ["What are most of the fonts on the market called?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103018 | 5acd309b07355d001abf3889 | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | Pan unicode fonts only support what? | Pan unicode fonts only support what? | ["Pan unicode fonts only support what?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103019 | 5acd309b07355d001abf388a | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | What normally need more than one or two writing systems? | What normally need more than one or two writing systems? | ["What normally need more than one or two writing systems?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103020 | 5acd309b07355d001abf388b | Unicode | Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. | What is another name for computing environments? | What is another name for computing environments? | ["What is another name for computing environments?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103021 | 572813b04b864d19001643e2 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What is the code for separating lines? | What is the code for separating lines? | ["What is the code for separating lines? "] | {"text": ["U+2028"], "answer_start": [44]} |
| gem-squad_v2-train-103022 | 572813b04b864d19001643e3 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What is the code for separating paragraphs? | What is the code for separating paragraphs? | ["What is the code for separating paragraphs? "] | {"text": ["U+2029"], "answer_start": [70]} |
| gem-squad_v2-train-103023 | 572813b04b864d19001643e4 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | How is newline normalization accomplished in Mac OS X? | How is newline normalization accomplished in Mac OS X? | ["How is newline normalization accomplished in Mac OS X? "] | {"text": ["Cocoa text system"], "answer_start": [602]} |
| gem-squad_v2-train-103024 | 572813b04b864d19001643e5 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | How does the newliine normallization format work? | How does the newliine normallization format work? | ["How does the newliine normallization format work? "] | {"text": ["every possible newline character is converted internally to a common newline"], "answer_start": [697]} |
| gem-squad_v2-train-103025 | 5acd431a07355d001abf3af0 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What widely adopted solution did Unicode provide? | What widely adopted solution did Unicode provide? | ["What widely adopted solution did Unicode provide?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103026 | 5acd431a07355d001abf3af1 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What uses HTML recommendations? | What uses HTML recommendations? | ["What uses HTML recommendations?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103027 | 5acd431a07355d001abf3af2 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | Why does it matter which newline character is chosen? | Why does it matter which newline character is chosen? | ["Why does it matter which newline character is chosen?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103028 | 5acd431a07355d001abf3af3 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What is the name of the Cocoa text system? | What is the name of the Cocoa text system? | ["What is the name of the Cocoa text system?"] | {"text": [], "answer_start": []} |
| gem-squad_v2-train-103029 | 5acd431a07355d001abf3af4 | Unicode | In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. | What are common newlines converted into? | What are common newlines converted into? | ["What are common newlines converted into?"] | {"text": [], "answer_start": []} |
gem-squad_v2-train-103030
|
572814a54b864d190016440e
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
Why has Unicode been criticized for not separately encoding forms of kanji?
|
Why has Unicode been criticized for not separately encoding forms of kanji?
|
[
"Why has Unicode been criticized for not separately encoding forms of kanji?"
] |
{
"text": [
"complicates the processing of ancient Japanese and uncommon Japanese names"
],
"answer_start": [
120
]
}
|
gem-squad_v2-train-103031
|
572814a54b864d190016440f
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What is TRON?
|
What is TRON?
|
[
"What is TRON? "
] |
{
"text": [
"alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters"
],
"answer_start": [
580
]
}
|
gem-squad_v2-train-103032
|
572814a54b864d1900164410
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What perception does the unification of glyphs cause?
|
What perception does the unification of glyphs cause?
|
[
"What perception does the unification of glyphs cause? "
] |
{
"text": [
"the languages themselves, not just the basic character representation, are being merged"
],
"answer_start": [
426
]
}
|
gem-squad_v2-train-103033
|
5acd478d07355d001abf3bf8
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What does Unicode encode instead of characters?
|
What does Unicode encode instead of characters?
|
[
"What does Unicode encode instead of characters?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103034
|
5acd478d07355d001abf3bf9
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What are characters defined as?
|
What are characters defined as?
|
[
"What are characters defined as?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103035
|
5acd478d07355d001abf3bfa
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What alternative has become popular in Japan?
|
What alternative has become popular in Japan?
|
[
"What alternative has become popular in Japan?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103036
|
5acd478d07355d001abf3bfb
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What languages do not suffer from the unification of glyphs?
|
What languages do not suffer from the unification of glyphs?
|
[
"What languages do not suffer from the unification of glyphs?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103037
|
5acd478d07355d001abf3bfc
|
Unicode
|
Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged.[clarification needed] There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it).
|
What are being merged alongside just the glyphs?
|
What are being merged alongside just the glyphs?
|
[
"What are being merged alongside just the glyphs?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103038
|
572815a04b864d190016443a
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
What tables of OpenType permit the selection of alternative glyph representations?
|
What tables of OpenType permit the selection of alternative glyph representations?
|
[
"What tables of OpenType permit the selection of alternative glyph representations?"
] |
{
"text": [
"Advanced Typographic"
],
"answer_start": [
240
]
}
|
gem-squad_v2-train-103039
|
572815a04b864d190016443b
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
Where is information provided to designate which character form to select?
|
Where is information provided to designate which character form to select?
|
[
"Where is information provided to designate which character form to select? "
] |
{
"text": [
"plain text"
],
"answer_start": [
460
]
}
|
gem-squad_v2-train-103040
|
572815a04b864d190016443c
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
How does modern font technology address the issue of depicting a Han character in alternate glyph representations?
|
How does modern font technology address the issue of depicting a Han character in alternate glyph representations?
|
[
"How does modern font technology address the issue of depicting a Han character in alternate glyph representations? "
] |
{
"text": [
"Unicode variation sequences"
],
"answer_start": [
194
]
}
|
gem-squad_v2-train-103041
|
5acd4aa407355d001abf3c28
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
What has worsened the practical aspects of unification?
|
What has worsened the practical aspects of unification?
|
[
"What has worsened the practical aspects of unification?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103042
|
5acd4aa407355d001abf3c29
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
What prevents OpenType from having alternative glyphs?
|
What prevents OpenType from having alternative glyphs?
|
[
"What prevents OpenType from having alternative glyphs?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103043
|
5acd4aa407355d001abf3c2a
|
Unicode
|
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select.
|
What designates which alternate character to select?
|
What designates which alternate character to select?
|
[
"What designates which alternate character to select?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103044
|
57281627ff5b5019007d9cd4
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
Unicode was designed for a round trip format conversion to and from what?
|
Unicode was designed for a round trip format conversion to and from what?
|
[
"Unicode was designed for a round trip format conversion to and from what? "
] |
{
"text": [
"preexisting character encodings"
],
"answer_start": [
102
]
}
|
gem-squad_v2-train-103045
|
57281627ff5b5019007d9cd5
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
How many encoding forms are there for Korean Hangul?
|
How many encoding forms are there for Korean Hangul?
|
[
"How many encoding forms are there for Korean Hangul?"
] |
{
"text": [
"three different encoding forms"
],
"answer_start": [
478
]
}
|
gem-squad_v2-train-103046
|
57281627ff5b5019007d9cd6
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
Since what version can already existing characters no longer be added to the standard?
|
Since what version can already existing characters no longer be added to the standard?
|
[
"Since what version can already existing characters no longer be added to the standard? "
] |
{
"text": [
"version 3.0"
],
"answer_start": [
534
]
}
|
gem-squad_v2-train-103047
|
5acd55da07355d001abf3d98
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
What is Unicode converted into?
|
What is Unicode converted into?
|
[
"What is Unicode converted into?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103048
|
5acd55da07355d001abf3d99
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
How many encoding forms does Unicode have?
|
How many encoding forms does Unicode have?
|
[
"How many encoding forms does Unicode have?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103049
|
5acd55da07355d001abf3d9a
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
After what version did it become possible to add preexisting characters?
|
After what version did it become possible to add preexisting characters?
|
[
"After what version did it become possible to add preexisting characters?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103050
|
5acd55da07355d001abf3d9b
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
What type of characters can be added in versions post 3.0?
|
What type of characters can be added in versions post 3.0?
|
[
"What type of characters can be added in versions post 3.0? "
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103051
|
5acd55da07355d001abf3d9c
|
Unicode
|
Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be naïvely converted to Unicode, and then back and get back the same file. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode.
|
What does adding characters help to preserve?
|
What does adding characters help to preserve?
|
[
"What does adding characters help to preserve?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103052
|
572816943acd2414000df433
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What kind of mappings must be provided between characters in existing legacy character sets and those in Unicode?
|
What kind of mappings must be provided between characters in existing legacy character sets and those in Unicode?
|
[
"What kind of mappings must be provided between characters in existing legacy character sets and those in Unicode?"
] |
{
"text": [
"Injective mappings"
],
"answer_start": [
0
]
}
|
gem-squad_v2-train-103053
|
572816943acd2414000df434
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
A lack of consistency between what earlier Japanese encodings and Unicode led to mismatches?
|
A lack of consistency between what earlier Japanese encodings and Unicode led to mismatches?
|
[
"A lack of consistency between what earlier Japanese encodings and Unicode led to mismatches?"
] |
{
"text": [
"Shift-JIS or EUC-JP"
],
"answer_start": [
283
]
}
|
gem-squad_v2-train-103054
|
572816943acd2414000df435
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What is the fullwidth tilde character code in Microsoft Windows?
|
What is the fullwidth tilde character code in Microsoft Windows?
|
[
"What is the fullwidth tilde character code in Microsoft Windows?"
] |
{
"text": [
"U+FF5E"
],
"answer_start": [
487
]
}
|
gem-squad_v2-train-103055
|
5acd56b907355d001abf3db6
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What helps to convert characters out of Unicode?
|
What helps to convert characters out of Unicode?
|
[
"What helps to convert characters out of Unicode?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103056
|
5acd56b907355d001abf3db7
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
WAVE DASH is only found in what version of OS?
|
WAVE DASH is only found in what version of OS?
|
[
"WAVE DASH is only found in what version of OS?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103057
|
5acd56b907355d001abf3db8
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What is the Windows code for EUC-JP?
|
What is the Windows code for EUC-JP?
|
[
"What is the Windows code for EUC-JP?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103058
|
5acd56b907355d001abf3db9
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What are the names of the two forms of Unicode?
|
What are the names of the two forms of Unicode?
|
[
"What are the names of the two forms of Unicode?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103059
|
5acd56b907355d001abf3dba
|
Unicode
|
Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E ~ FULLWIDTH TILDE (in Microsoft Windows) or U+301C 〜 WAVE DASH (other vendors).
|
What does a lack of consistency prevent?
|
What does a lack of consistency prevent?
|
[
"What does a lack of consistency prevent?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103060
|
5728171b4b864d1900164452
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
How many code points are Tamil and Devanagari allocated?
|
How many code points are Tamil and Devanagari allocated?
|
[
"How many code points are Tamil and Devanagari allocated? "
] |
{
"text": [
"128"
],
"answer_start": [
67
]
}
|
gem-squad_v2-train-103061
|
5728171b4b864d1900164453
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
What is another word for ligatures?
|
What is another word for ligatures?
|
[
"What is another word for ligatures? "
] |
{
"text": [
"conjuncts"
],
"answer_start": [
267
]
}
|
gem-squad_v2-train-103062
|
5728171b4b864d1900164454
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
What is the ISCII standard?
|
What is the ISCII standard?
|
[
"What is the ISCII standard? "
] |
{
"text": [
"128 code points"
],
"answer_start": [
67
]
}
|
gem-squad_v2-train-103063
|
5acd579107355d001abf3dec
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
How many points is Arabic given?
|
How many points is Arabic given?
|
[
"How many points is Arabic given?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103064
|
5acd579107355d001abf3ded
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
Who argued Indic scripts should follow the practice of other writing systems?
|
Who argued Indic scripts should follow the practice of other writing systems?
|
[
"Who argued Indic scripts should follow the practice of other writing systems?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103065
|
5acd579107355d001abf3dee
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
Why are more ligatures in Unicode likely to happen?
|
Why are more ligatures in Unicode likely to happen?
|
[
"Why are more ligatures in Unicode likely to happen?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103066
|
5acd579107355d001abf3def
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
Which organization successfully argued for the Tibetan script?
|
Which organization successfully argued for the Tibetan script?
|
[
"Which organization successfully argued for the Tibetan script?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103067
|
5acd579107355d001abf3df0
|
Unicode
|
Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for Tibetan script[citation needed] (the Chinese National Standard organization failed to achieve a similar change).
|
What is the visual order transformed into?
|
What is the visual order transformed into?
|
[
"What is the visual order transformed into?"
] |
{
"text": [],
"answer_start": []
}
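As the context above notes, Indic conjuncts are formed by the font out of components rather than being encoded as single code points. A quick Python check (the variable name `kssa` is illustrative) shows that even normalization does not compose the Devanagari kṣa conjunct, consistent with Unicode's policy of not encoding new ligatures:

```python
import unicodedata

# Devanagari KA + VIRAMA + SSA; a shaping engine renders this as the single
# conjunct glyph क्ष, but it remains a sequence of three code points.
kssa = "\u0915\u094d\u0937"

assert len(kssa) == 3
# NFC leaves it unchanged: no precomposed code point exists for this ligature.
assert unicodedata.normalize("NFC", kssa) == kssa
```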
|
gem-squad_v2-train-103068
|
572817d93acd2414000df455
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
What standard did Unicode inherit involving a Thai language?
|
What standard did Unicode inherit involving a Thai language?
|
[
"What standard did Unicode inherit involving a Thai language? "
] |
{
"text": [
"Thai Industrial Standard 620"
],
"answer_start": [
317
]
}
|
gem-squad_v2-train-103069
|
572817d93acd2414000df456
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
Why has Thai alphabet support been criticized?
|
Why has Thai alphabet support been criticized?
|
[
"Why has Thai alphabet support been criticized? "
] |
{
"text": [
"its ordering of Thai characters"
],
"answer_start": [
46
]
}
|
gem-squad_v2-train-103070
|
572817d93acd2414000df457
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
How are the Thai characters ordered incorrectly?
|
How are the Thai characters ordered incorrectly?
|
[
"How are the Thai characters ordered incorrectly? "
] |
{
"text": [
"Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order"
],
"answer_start": [
62
]
}
|
gem-squad_v2-train-103071
|
5acd58ac07355d001abf3e14
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
How are Thai characters written similarly to other Unicode scripts?
|
How are Thai characters written similarly to other Unicode scripts?
|
[
"How are Thai characters written similarly to other Unicode scripts?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103072
|
5acd58ac07355d001abf3e15
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
What standard did Unicode create for the Thai language?
|
What standard did Unicode create for the Thai language?
|
[
"What standard did Unicode create for the Thai language?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103073
|
5acd58ac07355d001abf3e16
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
Who criticized the order of Thai characters?
|
Who criticized the order of Thai characters?
|
[
"Who criticized the order of Thai characters?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103074
|
5acd58ac07355d001abf3e17
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
What kind of machine was the Thai Industrial Standard 620 not implemented on?
|
What kind of machine was the Thai Industrial Standard 620 not implemented on?
|
[
"What kind of machine was the Thai Industrial Standard 620 not implemented on?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103075
|
5acd58ac07355d001abf3e18
|
Unicode
|
Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง [sa dɛːŋ] "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"), the vowel แ-, in spoken order would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
|
What are written to the right of the preceding consonant?
|
What are written to the right of the preceding consonant?
|
[
"What are written to the right of the preceding consonant?"
] |
{
"text": [],
"answer_start": []
}
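The "table lookups to reorder Thai characters" mentioned in the context above can be sketched naively: move each of the five prefixed vowels after the character that follows it before comparing. `THAI_PREFIX_VOWELS` and `collation_key` are illustrative names, and real collation (e.g. the Unicode Collation Algorithm as implemented in ICU) handles consonant clusters and tone marks far more carefully than this sketch does.

```python
# The five Thai vowels written to the left of their consonant.
THAI_PREFIX_VOWELS = set("\u0e40\u0e41\u0e42\u0e43\u0e44")  # เ แ โ ใ ไ

def collation_key(word: str) -> str:
    """Naive reorder: swap each prefixed vowel with the next character."""
    chars = list(word)
    i = 0
    while i < len(chars) - 1:
        if chars[i] in THAI_PREFIX_VOWELS:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return "".join(chars)
```

For แสดง "perform", this key moves the vowel แ after the initial consonant ส, giving a string that sorts by consonant first, as Thai dictionaries expect.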
|
gem-squad_v2-train-103076
|
572818e3ff5b5019007d9d30
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
How are characters with diacritical marks represented?
|
How are characters with diacritical marks represented?
|
[
"How are characters with diacritical marks represented? "
] |
{
"text": [
"either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks"
],
"answer_start": [
63
]
}
|
gem-squad_v2-train-103077
|
572818e3ff5b5019007d9d31
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What encoding does Charis SIL use?
|
What encoding does Charis SIL use?
|
[
"What encoding does Charis SIL use? "
] |
{
"text": [
"Graphite, OpenType, or AAT technologies"
],
"answer_start": [
905
]
}
|
gem-squad_v2-train-103078
|
572818e3ff5b5019007d9d32
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What is the issue with underdots and their placement?
|
What is the issue with underdots and their placement?
|
[
"What is the issue with underdots and their placement? "
] |
{
"text": [
"often be placed incorrectly"
],
"answer_start": [
607
]
}
|
gem-squad_v2-train-103079
|
572818e3ff5b5019007d9d33
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
Characters with what marks can be displayed as a single character or a decomposed sequence?
|
Characters with what marks can be displayed as a single character or a decomposed sequence?
|
[
"Characters with what marks can be displayed as a single character or a decomposed sequence? "
] |
{
"text": [
"diacritical marks"
],
"answer_start": [
16
]
}
|
gem-squad_v2-train-103080
|
572818e3ff5b5019007d9d34
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
How should the characters with the macron and acute be displayed?
|
How should the characters with the macron and acute be displayed?
|
[
"How should the characters with the macron and acute be displayed? "
] |
{
"text": [
"identically"
],
"answer_start": [
340
]
}
|
gem-squad_v2-train-103081
|
5acd59fb07355d001abf3e58
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What is the maximum number of non-spacing marks?
|
What is the maximum number of non-spacing marks?
|
[
"What is the maximum number of non-spacing marks?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103082
|
5acd59fb07355d001abf3e59
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What romanization mark is rarely visually incorrect?
|
What romanization mark is rarely visually incorrect?
|
[
"What romanization mark is rarely visually incorrect?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103083
|
5acd59fb07355d001abf3e5a
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What is the name of the rendering technology OpenType uses?
|
What is the name of the rendering technology OpenType uses?
|
[
"What is the name of the rendering technology OpenType uses?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103084
|
5acd59fb07355d001abf3e5b
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
What are rendered identically in practice?
|
What are rendered identically in practice?
|
[
"What are rendered identically in practice?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103085
|
5acd59fb07355d001abf3e5c
|
Unicode
|
Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic, will often be placed incorrectly[citation needed]. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
|
Macrons are used only in what form of Idic?
|
Macrons are used only in what form of Idic?
|
[
"Macrons are used only in what form of Idic?"
] |
{
"text": [],
"answer_start": []
}
|
gem-squad_v2-train-103086
|
571a2ab710f8ca1400304f21
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
When was the first federal population census taken in the US?
|
When was the first federal population census taken in the US?
|
[
"When was the first federal population census taken in the US?"
] |
{
"text": [
"1790"
],
"answer_start": [
3
]
}
|
gem-squad_v2-train-103087
|
571a2ab710f8ca1400304f22
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
What were the two categories for race in the census?
|
What were the two categories for race in the census?
|
[
"What were the two categories for race in the census?"
] |
{
"text": [
"white or \"other.\""
],
"answer_start": [
135
]
}
|
gem-squad_v2-train-103088
|
571a2ab710f8ca1400304f23
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
Who all was identified by name in the house holds?
|
Who all was identified by name in the house holds?
|
[
"Who all was identified by name in the house holds?"
] |
{
"text": [
"Only the heads of households"
],
"answer_start": [
153
]
}
|
gem-squad_v2-train-103089
|
571a2ab710f8ca1400304f24
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
What were Indians categorized as after they were included as others?
|
What were Indians categorized as after they were included as others?
|
[
"What were Indians categorized as after they were included as others?"
] |
{
"text": [
"\"Free people of color\""
],
"answer_start": [
327
]
}
|
gem-squad_v2-train-103090
|
571a94b810f8ca1400305179
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
When was the first US federal population census taken?
|
When was the first US federal population census taken?
|
[
"When was the first US federal population census taken?"
] |
{
"text": [
"In 1790, the first federal population census was taken in the United States"
],
"answer_start": [
0
]
}
|
gem-squad_v2-train-103091
|
571a94b810f8ca140030517a
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
How were enumerators instructed to classify residents?
|
How were enumerators instructed to classify residents?
|
[
"How were enumerators instructed to classify residents?"
] |
{
"text": [
". Enumerators were instructed to classify free residents as white or \"other.\""
],
"answer_start": [
75
]
}
|
gem-squad_v2-train-103092
|
571a94b810f8ca140030517b
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
Was every resident listed by name?
|
Was every resident listed by name?
|
[
"Was every resident listed by name?"
] |
{
"text": [
"Only the heads of households were identified by name in the federal census until 1850."
],
"answer_start": [
153
]
}
|
gem-squad_v2-train-103093
|
571a94b810f8ca140030517c
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
Were all residents counted together or separately?
|
Were all residents counted together or separately?
|
[
"Were all residents counted together or separately?"
] |
{
"text": [
"Slaves were counted separately from free persons"
],
"answer_start": [
398
]
}
|
gem-squad_v2-train-103094
|
571a94b810f8ca140030517d
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
when did any changes to counting procedures happen?
|
when did any changes to counting procedures happen?
|
[
"when did any changes to counting procedures happen?"
] |
{
"text": [
"until the Civil War and end of slavery."
],
"answer_start": [
467
]
}
|
gem-squad_v2-train-103095
|
571dd9b9b64a571400c71d9a
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
When did the US begin to take census?
|
When did the US begin to take census?
|
[
"When did the US begin to take census?"
] |
{
"text": [
"1790"
],
"answer_start": [
3
]
}
|
gem-squad_v2-train-103096
|
571dd9b9b64a571400c71d9b
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
At what point were all members of the household named on a census?
|
At what point were all members of the household named on a census?
|
[
"At what point were all members of the household named on a census?"
] |
{
"text": [
"1850"
],
"answer_start": [
234
]
}
|
gem-squad_v2-train-103097
|
571dd9b9b64a571400c71d9c
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
Who were considered "free people of color"?
|
Who were considered "free people of color"?
|
[
"Who were considered \"free people of color\"?"
] |
{
"text": [
"Native Americans"
],
"answer_start": [
240
]
}
|
gem-squad_v2-train-103098
|
571dd9b9b64a571400c71d9d
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
What does mulatto mean?
|
What does mulatto mean?
|
[
"What does mulatto mean?"
] |
{
"text": [
"visible European ancestry in addition to African"
],
"answer_start": [
611
]
}
|
gem-squad_v2-train-103099
|
571dd9b9b64a571400c71d9e
|
Multiracial_American
|
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black.
|
Where would a Native American live to not be counted in the census?
|
Where would a Native American live to not be counted in the census?
|
[
"Where would a Native American live to not be counted in the census?"
] |
{
"text": [
"Indian reservations"
],
"answer_start": [
377
]
}
|
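The `answers` fields in the records above pair each extracted `text` span with an `answer_start` character offset into the `context` string. A minimal sketch (assuming SQuAD-v2-style records like those shown, with the context abbreviated here) shows how the offset recovers the span by slicing:

```python
# Assumed SQuAD-v2-style record, modeled on the rows above (context truncated).
# Each answer pairs a `text` span with its `answer_start` character offset
# into `context`, so the span can be recovered by slicing.

context = (
    "In 1790, the first federal population census was taken in the "
    "United States."
)

answers = {"text": ["1790"], "answer_start": [3]}

for text, start in zip(answers["text"], answers["answer_start"]):
    # The offset must index the exact answer span within the context;
    # unanswerable questions simply carry empty `text`/`answer_start` lists,
    # so this loop body never runs for them.
    assert context[start:start + len(text)] == text
```

This offset-based representation is why the dump stores the full context with every record: the answer is meaningless without the exact string it indexes into.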