parquet-converter committed
Commit d4401b0 · 1 parent: a8ea5b9

Update parquet files
.gitattributes DELETED
@@ -1,53 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
-
- *.jsonl filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,13 +0,0 @@
- ---
- license: cc-by-nc-sa-4.0
- ---
-
- # ECHR Cases
-
- The original data from [Chalkidis et al.](https://arxiv.org/abs/1906.02059), sourced from [archive.org](https://archive.org/details/ECHR-ACL2019).
-
- ## Preprocessing
-
- * Order is shuffled
- * Fact numbers preceding each fact are removed (using the Python regex `^[0-9]+\. `), since some cases didn't have fact numbers to begin with
- * Everything else is the same
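A minimal sketch of the fact-number stripping the deleted README describes, using the exact pattern it quotes (`^[0-9]+\. `); the sample strings below are illustrative, not taken from the dataset:

```python
import re

# The pattern quoted in the deleted README: a leading fact number such as "12. "
FACT_NUMBER = re.compile(r"^[0-9]+\. ")

facts = ["12. The applicant was arrested.", "No leading number here."]
# Stripping is safe for facts that never had a number: the regex simply
# fails to match and the line comes back unchanged.
cleaned = [FACT_NUMBER.sub("", fact) for fact in facts]
print(cleaned)  # ['The applicant was arrested.', 'No leading number here.']
```

(The deleted process.ipynb, further down, applies a broader variant, `^([0-9]+|CARDINAL)\s?\. `, which also catches anonymised fact numbers.)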
data_anon/test.jsonl → anon/echr-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:de6ec0c0cc97bb2c15e0fd3d09537807f95046068ea497ec31e7ecd126f43947
- size 33869151
+ oid sha256:7980f882a8f607d5d1b7cf49d272228cde4df5637150179ca9534e4a9f708baf
+ size 14811845
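Note that these hunks diff Git LFS pointer files, not the data itself: each tracked file is represented by three "key value" lines (version, oid, size). A small illustrative parser, with a hypothetical helper name:

```python
# Hypothetical helper: parse a Git LFS pointer file (like those diffed
# here) into a dict of its "key value" fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7980f882a8f607d5d1b7cf49d272228cde4df5637150179ca9534e4a9f708baf
size 14811845"""

print(parse_lfs_pointer(pointer)["size"])  # 14811845
```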
data_anon/dev.jsonl → anon/echr-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e860b89584bd88e27da41139204a05a96f6d64aa06331051d420c307d2822fcc
- size 20379694
+ oid sha256:fe1f21695e3302110582be777ec7349e8d932c2d176fbf2a70608d53db7376fa
+ size 44905255
data/dev.jsonl → anon/echr-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b4f71bd11bb0a27021dcd88f2eca9f0400c42f435ec291f7895b4627b63b29f2
- size 21816052
+ oid sha256:a1036987966f91c1a53e38ae922b240a214f7910b8c0bde1ce15e38694086d74
+ size 9332975
data/train.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:171f91a4c82b362878d06fecc5c8d7a15aa1009b0d84c4427753d8458b7d399e
- size 105354910
data_anon/train.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0b735134e63256e822ef0115735c59ccfaefe44488ffbfac8f25d80a3385073f
- size 98259925
echr.py DELETED
@@ -1,146 +0,0 @@
- import datasets
- import json
- import os
- from datasets import Value, Sequence
-
- _CITATION = """\
- @inproceedings{chalkidis-etal-2019-neural,
-     title = "Neural Legal Judgment Prediction in {E}nglish",
-     author = "Chalkidis, Ilias and
-       Androutsopoulos, Ion and
-       Aletras, Nikolaos",
-     booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
-     month = jul,
-     year = "2019",
-     address = "Florence, Italy",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/P19-1424",
-     doi = "10.18653/v1/P19-1424",
-     pages = "4317--4323",
- }
- """
-
- _HOMEPAGE = "https://archive.org/details/ECHR-ACL2019"
- _DESCRIPTION = """\
- The ECHR Cases dataset is designed for experimentation of neural judgment prediction, as in the original 2019 ACL paper "Neural Legal Judgment Prediction in English".
- """
-
-
- ARTICLES = {
-     "2": "Right to life",
-     "3": "Prohibition of torture",
-     "4": "Prohibition of slavery and forced labour",
-     "5": "Right to liberty and security",
-     "6": "Right to a fair trial",
-     "7": "No punishment without law",
-     "8": "Right to respect for private and family life",
-     "9": "Freedom of thought, conscience and religion",
-     "10": "Freedom of expression",
-     "11": "Freedom of assembly and association",
-     "12": "Right to marry",
-     "13": "Right to an effective remedy",
-     "14": "Prohibition of discrimination",
-     "15": "Derogation in time of emergency",
-     "16": "Restrictions on political activity of aliens",
-     "17": "Prohibition of abuse of rights",
-     "18": "Limitation on use of restrictions on rights",
-     "34": "Individual applications",
-     "38": "Examination of the case",
-     "39": "Friendly settlements",
-     "46": "Binding force and execution of judgments",
-     "P1-1": "Protection of property",
-     "P1-2": "Right to education",
-     "P1-3": "Right to free elections",
-     "P3-1": "Right to free elections",
-     "P4-1": "Prohibition of imprisonment for debt",
-     "P4-2": "Freedom of movement",
-     "P4-3": "Prohibition of expulsion of nationals",
-     "P4-4": "Prohibition of collective expulsion of aliens",
-     "P6-1": "Abolition of the death penalty",
-     "P6-2": "Death penalty in time of war",
-     "P6-3": "Prohibition of derogations",
-     "P7-1": "Procedural safeguards relating to expulsion of aliens",
-     "P7-2": "Right of appeal in criminal matters",
-     "P7-3": "Compensation for wrongful conviction",
-     "P7-4": "Right not to be tried or punished twice",
-     "P7-5": "Equality between spouses",
-     "P12-1": "General prohibition of discrimination",
-     "P13-1": "Abolition of the death penalty",
-     "P13-2": "Prohibition of derogations",
-     "P13-3": "Prohibition of reservations",
- }
-
-
- class Echr(datasets.GeneratorBasedBuilder):
-     """ECHR dataset."""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="non-anon", data_dir="data"),
-         datasets.BuilderConfig(name="anon", data_dir="data_anon"),
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "itemid": Value(dtype="string"),
-                 "languageisocode": Value(dtype="string"),
-                 "respondent": Value(dtype="string"),
-                 "branch": Value(dtype="string"),
-                 "date": Value(dtype="int64"),
-                 "docname": Value(dtype="string"),
-                 "importance": Value(dtype="int64"),
-                 "conclusion": Value(dtype="string"),
-                 "judges": Value(dtype="string"),
-                 "text": Sequence(feature=Value(dtype="string")),
-                 "violated_articles": Sequence(feature=Value(dtype="string")),
-                 "violated_paragraphs": Sequence(feature=Value(dtype="string")),
-                 "violated_bulletpoints": Sequence(feature=Value(dtype="string")),
-                 "non_violated_articles": Sequence(feature=Value(dtype="string")),
-                 "non_violated_paragraphs": Sequence(feature=Value(dtype="string")),
-                 "non_violated_bulletpoints": Sequence(feature=Value(dtype="string")),
-                 "violated": Value(dtype="bool"),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             features=features,
-             homepage=_HOMEPAGE,
-             description=_DESCRIPTION,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         path_prefix = self.config.data_dir
-         data_dir = dl_manager.download([os.path.join(path_prefix, f"{f}.jsonl") for f in ["train", "test", "dev"]])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir[0],
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir[1],
-                     "split": "test",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir[2],
-                     "split": "dev",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         with open(filepath, encoding="utf-8") as f:
-             for id_, row in enumerate(f):
-                 data = json.loads(row)
-                 yield id_, data
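With this loading script deleted, the dataset is served from the auto-converted parquet files instead. The config names below come from the script's BUILDER_CONFIGS; the exact loading behavior after conversion depends on the Hub, so treat this as a sketch rather than a guaranteed API surface:

```python
import datasets

# "anon" and "non-anon" are the config names the deleted echr.py defined;
# loading from the Hub should now resolve to the converted parquet splits.
echr = datasets.load_dataset("jonathanli/echr", "anon")

print(echr)                          # DatasetDict with train/validation/test
print(echr["train"][0]["violated"])  # binary judgment label
```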
jsonlify.py DELETED
@@ -1,12 +0,0 @@
- # Script used to put the files into jsonl format (originals downloaded from the web archive link in the readme)
- import glob
-
- for folder in glob.glob("*/"):
-     a = []
-     for file in glob.glob(f"./{folder}*.json"):
-         contents = open(file, "r").read()
-         a.append(contents)
-
-     with open(f"{folder[:-1]}.jsonl", "w+") as f:
-         f.write("\n".join(a))
-     a.clear()
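For reference, a behavior-equivalent sketch of the deleted script in more idiomatic form (pathlib plus sorted globbing for determinism, a small change from the original). It assumes the same layout: one directory per split, each containing single-line JSON case files:

```python
from pathlib import Path

# Concatenate each split directory's per-case JSON files into <split>.jsonl.
# Assumes every *.json file holds one case serialized on a single line.
for folder in Path(".").iterdir():
    if not folder.is_dir():
        continue
    cases = [p.read_text() for p in sorted(folder.glob("*.json"))]
    Path(f"{folder.name}.jsonl").write_text("\n".join(cases))
```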
data/test.jsonl → non-anon/echr-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8fe8c5a1076064f118aafbb4497594e2f1aea8ba67e3f4ca15ecf47b60b860cd
- size 37589183
+ oid sha256:c1a7639ded81bd94565029e780d39175c0494defaffc929565ac8ab59c36bcce
+ size 17060492
non-anon/echr-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49cb5ad1276d181d4832afea606d5ebbf8ad95350c1a1dc037b1f74dcc71eea0
+ size 49763834
non-anon/echr-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edef70981b27af6df62d184dbdc48ec622bcf9a3992365740da62876781272f9
+ size 10307717
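Since the converted splits are ordinary parquet, they can presumably be inspected without the `datasets` library at all; the paths below follow the layout in this diff and assume local copies downloaded from the Hub:

```python
import pandas as pd

# Read the converted splits directly (after downloading them from the Hub).
train = pd.read_parquet("non-anon/echr-train.parquet")
validation = pd.read_parquet("non-anon/echr-validation.parquet")
test = pd.read_parquet("non-anon/echr-test.parquet")

print(len(train), len(validation), len(test))
print(train.columns.tolist())  # itemid, text, violated_articles, ...
```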
process.ipynb DELETED
@@ -1,134 +0,0 @@
- {
-  "cells": [
-   {
-    "cell_type": "code",
-    "execution_count": 2,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "import pandas as pd"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 2,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "_dev = pd.read_json(\"EN_dev_Anon.jsonl\", lines=True)\n",
-     "_test = pd.read_json(\"EN_test_Anon.jsonl\", lines=True)\n",
-     "_train = pd.read_json(\"EN_train_Anon.jsonl\", lines=True)"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 3,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "def process(df):\n",
-     "    df = df.copy()\n",
-     "    df.columns = df.columns.str.lower()\n",
-     "    df[\"violated\"] = df.violated_articles.str.len() != 0\n",
-     "    return df\n",
-     "\n",
-     "dev = process(_dev)\n",
-     "test = process(_test)\n",
-     "train = process(_train)"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 4,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "d = {k: list(v) for k, v in train.groupby(\"violated\").indices.items()}\n",
-     "dist = dict(dev.violated.value_counts().items())\n",
-     "d_train = {k: v[0:dist[k]] for k, v in d.items()}\n",
-     "d_remaining = {k: v[dist[k]:] for k, v in d.items()}\n",
-     "\n",
-     "new_rows = []\n",
-     "for i in range(len(dev[\"violated\"])):\n",
-     "    label = i % 2 == 0\n",
-     "    new_rows.append(train.iloc[d_train[label].pop()])\n",
-     "\n",
-     "new_train = pd.concat([pd.DataFrame(new_rows), pd.DataFrame(train.iloc[i] for l in d_remaining.values() for i in l).sample(frac=1, random_state=42)])"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 5,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "new_train.to_json(\"train.jsonl\", lines=True, orient=\"records\")\n",
-     "test.to_json(\"test.jsonl\", lines=True, orient=\"records\")\n",
-     "dev.to_json(\"dev.jsonl\", lines=True, orient=\"records\")"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 3,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "train = pd.read_json(\"data/train.jsonl\", lines=True, orient=\"records\")\n",
-     "test = pd.read_json(\"data/test.jsonl\", lines=True, orient=\"records\")\n",
-     "dev = pd.read_json(\"data/dev.jsonl\", lines=True, orient=\"records\")"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 4,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "import re\n",
-     "\n",
-     "number = re.compile(\"^([0-9]+|CARDINAL)\\\\s?\\\\. \")\n",
-     "train[\"text\"] = train[\"text\"].map(lambda r: [re.sub(number, \"\", line) for line in r])\n",
-     "test[\"text\"] = test[\"text\"].map(lambda r: [re.sub(number, \"\", line) for line in r])\n",
-     "dev[\"text\"] = dev[\"text\"].map(lambda r: [re.sub(number, \"\", line) for line in r])"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 5,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "train.to_json(\"data/train.jsonl\", lines=True, orient=\"records\")\n",
-     "test.to_json(\"data/test.jsonl\", lines=True, orient=\"records\")\n",
-     "dev.to_json(\"data/dev.jsonl\", lines=True, orient=\"records\")"
-    ]
-   }
-  ],
-  "metadata": {
-   "kernelspec": {
-    "display_name": "Python 3.10.5 64-bit",
-    "language": "python",
-    "name": "python3"
-   },
-   "language_info": {
-    "codemirror_mode": {
-     "name": "ipython",
-     "version": 3
-    },
-    "file_extension": ".py",
-    "mimetype": "text/x-python",
-    "name": "python",
-    "nbconvert_exporter": "python",
-    "pygments_lexer": "ipython3",
-    "version": "3.10.5"
-   },
-   "orig_nbformat": 4,
-   "vscode": {
-    "interpreter": {
-     "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a"
-    }
-   }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 2
- }
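The densest cell above reorders the train split so that its head mirrors the dev split's label counts before the rest follows in a fixed shuffle. A simplified sketch of that step (hypothetical `reorder_train` helper; the notebook inlines this logic and additionally alternates the two labels row by row within the head block):

```python
import pandas as pd

# Simplified restatement of the notebook's rebalancing cell.
def reorder_train(train: pd.DataFrame, dev: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    counts = dev["violated"].value_counts().to_dict()
    head, rest = [], []
    for label, n in counts.items():
        rows = train[train["violated"] == label]
        head.append(rows.iloc[:n])   # first n rows per label form the head
        rest.append(rows.iloc[n:])   # everything else goes to the tail
    tail = pd.concat(rest).sample(frac=1, random_state=seed)  # fixed shuffle
    return pd.concat([pd.concat(head), tail], ignore_index=True)
```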
tests.ipynb DELETED
@@ -1,112 +0,0 @@
- {
-  "cells": [
-   {
-    "cell_type": "code",
-    "execution_count": 2,
-    "metadata": {},
-    "outputs": [
-     {
-      "name": "stderr",
-      "output_type": "stream",
-      "text": [
-       "Using custom data configuration jonathanli--echr-8f6bd4e68e0f7714\n"
-      ]
-     },
-     {
-      "name": "stdout",
-      "output_type": "stream",
-      "text": [
-       "Downloading and preparing dataset json/jonathanli--echr to /home/jonathan/.cache/huggingface/datasets/jonathanli___json/jonathanli--echr-8f6bd4e68e0f7714/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253...\n"
-      ]
-     },
-     {
-      "name": "stderr",
-      "output_type": "stream",
-      "text": [
-       "Downloading data: 100%|██████████| 106M/106M [00:01<00:00, 66.6MB/s]\n",
-       "Downloading data: 100%|██████████| 37.9M/37.9M [00:00<00:00, 56.8MB/s]\n",
-       "Downloading data: 100%|██████████| 21.9M/21.9M [00:00<00:00, 42.3MB/s]\n",
-       "Downloading data files: 100%|██████████| 3/3 [00:05<00:00, 1.98s/it]\n",
-       "Extracting data files: 100%|██████████| 3/3 [00:00<00:00, 2462.41it/s]\n",
-       " \r"
-      ]
-     },
-     {
-      "name": "stdout",
-      "output_type": "stream",
-      "text": [
-       "Dataset json downloaded and prepared to /home/jonathan/.cache/huggingface/datasets/jonathanli___json/jonathanli--echr-8f6bd4e68e0f7714/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253. Subsequent calls will reuse this data.\n"
-      ]
-     },
-     {
-      "name": "stderr",
-      "output_type": "stream",
-      "text": [
-       "100%|██████████| 3/3 [00:00<00:00, 189.83it/s]\n"
-      ]
-     }
-    ],
-    "source": [
-     "import datasets\n",
-     "echr = datasets.load_dataset(\"jonathanli/echr\")"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 19,
-    "metadata": {},
-    "outputs": [
-     {
-      "name": "stdout",
-      "output_type": "stream",
-      "text": [
-       "5. The applicant was born in 1961 and is currently detained in İzmit Ftype prison.\n",
-       "6. On 6 May 2000 the applicant was arrested and taken into police custody on suspicion of membership of a criminal profit-making organisation and carrying out illegal activities on its behalf.\n",
-       "7. On 8 May 2000 the investigating judge at the İstanbul State Security Court ordered the applicant’s pre-trial detention.\n",
-       "8. On 19 June 2000 a bill of indictment was filed against the applicant and four other persons with the İstanbul State Security Court, accusing them of forming a criminal profit-making organisation and of being involved in incidents of murder, extortion and fraud.\n",
-       "9. On 30 January 2003 the first-instance court acquitted the applicant and the other accused of the former charge on the ground that the mental elements of the crime had not been established on their parts. It followed that it lacked jurisdiction to examine the other charges brought against them and transferred the proceedings to the Kartal Assize Court.\n",
-       "10. On 20 December 2004 the Court of Cassation quashed the judgment of the first-instance court, noting that the latter had erroneously acquitted the applicant and his co-accused of the charge concerned. In its decision, the court held that all components of forming a criminal profit-making organisation had been sufficiently established against the accused.\n",
-       "11. Subsequently, the case was remitted to the first-instance court.\n",
-       "12. Following the abolition of the State Security Courts by Law no. 5190, the İstanbul Assize Court resumed the criminal proceedings.\n",
-       "13. During the proceedings, the İstanbul Assize Court reviewed the lawfulness of the applicant’s continued detention regularly at the end of each hearing or, at the latest, every thirty days, of its own motion, without holding any oral hearing.\n",
-       "14. At the hearing on 3 March 2010 the İstanbul Assize Court decided, once more, to extend the applicant’s continued detention on account of the reasonable grounds of suspicion that he had committed the offences with which he was charged, and the state of the evidence in the case file.\n",
-       "15. On 5 January 2011 having regard to the period he had spent in detention, the İstanbul Assize Court released the applicant.\n",
-       "16. On the basis of the range of evidence in the case file, on 6 December 2011 the İstanbul Assize Court convicted the applicant of a number of crimes; including forming a criminal profit-making organisation, murder, abduction and extortion. Subsequently, the court sentenced the applicant to life imprisonment.\n",
-       "17. According to the information in the case file, the applicant lodged an appeal with the Court of Cassation, before which the proceedings are currently pending.\n",
-       "18. The relevant sections of the Turkish Code of Criminal Procedure (Law no.5271) can be found in the judgment of Araz v. Turkey (no. 44319/04, §§ 15-16, 20 May 2010).\n"
-      ]
-     }
-    ],
-    "source": [
-     "print(\"\\n\".join(echr[\"train\"][110][\"text\"]))"
-    ]
-   }
-  ],
-  "metadata": {
-   "kernelspec": {
-    "display_name": "Python 3.10.5 64-bit",
-    "language": "python",
-    "name": "python3"
-   },
-   "language_info": {
-    "codemirror_mode": {
-     "name": "ipython",
-     "version": 3
-    },
-    "file_extension": ".py",
-    "mimetype": "text/x-python",
-    "name": "python",
-    "nbconvert_exporter": "python",
-    "pygments_lexer": "ipython3",
-    "version": "3.10.5"
-   },
-   "orig_nbformat": 4,
-   "vscode": {
-    "interpreter": {
-     "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a"
-    }
-   }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 2
- }