The following is a partial list of the "G" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM). This list continues the information at List of MeSH codes (G13). Codes following these are found at List of MeSH codes (H01). For other MeSH codes, see List of MeSH codes. The source for this content is the set of 2006 MeSH Trees from the NLM.

== MeSH G14 – genetic structures ==

=== MeSH G14.080 – base sequence ===

==== MeSH G14.080.040 – at rich sequence ====

==== MeSH G14.080.380 – gc rich sequence ====
* MeSH G14.080.380.160 – cpg islands

==== MeSH G14.080.534 – matrix attachment regions ====

==== MeSH G14.080.689 – regulatory sequences, nucleic acid ====
* MeSH G14.080.689.330 – enhancer elements (genetics)
* MeSH G14.080.689.330.240 – e-box elements
* MeSH G14.080.689.330.400 – hiv enhancer
* MeSH G14.080.689.330.700 – response elements
* MeSH G14.080.689.330.700.800 – serum response element
* MeSH G14.080.689.330.700.920 – vitamin d response element
* MeSH G14.080.689.390 – insulator elements
* MeSH G14.080.689.450 – locus control region
* MeSH G14.080.689.650 – operator regions (genetics)
* MeSH G14.080.689.675 – promoter regions (genetics)
* MeSH G14.080.689.675.700 – response elements
* MeSH G14.080.689.675.700.800 – serum response element
* MeSH G14.080.689.675.700.920 – vitamin d response element
* MeSH G14.080.689.675.850 – TATA box
* MeSH G14.080.689.687 – regulatory sequences, ribonucleic acid
* MeSH G14.080.689.687.249 – rna 3' polyadenylation signals
* MeSH G14.080.689.687.490 – rna splice sites
* MeSH G14.080.689.687.500 – rna 5' terminal oligopyrimidine sequence
* MeSH G14.080.689.755 – silencer elements, transcriptional
* MeSH G14.080.689.810 – terminator regions (genetics)

==== MeSH G14.080.708 – repetitive sequences, nucleic acid ====
* MeSH G14.080.708.330 – interspersed repetitive sequences
* MeSH G14.080.708.330.200 – dna transposable elements
* MeSH G14.080.708.330.330 – genomic islands
* MeSH G14.080.708.330.800 – retroelements
* MeSH G14.080.708.330.800.175 – endogenous retroviruses
* MeSH G14.080.708.330.800.200 – genes, intracisternal a-particle
* MeSH G14.080.708.330.800.400 – long interspersed nucleotide elements
* MeSH G14.080.708.330.800.800 – short interspersed nucleotide elements
* MeSH G14.080.708.330.800.800.050 – alu elements
* MeSH G14.080.708.800 – tandem repeat sequences
* MeSH G14.080.708.800.074 – dna repeat expansion
* MeSH G14.080.708.800.074.865 – trinucleotide repeat expansion
* MeSH G14.080.708.800.150 – dna, satellite
* MeSH G14.080.708.800.500 – microsatellite repeats
* MeSH G14.080.708.800.500.150 – dinucleotide repeats
* MeSH G14.080.708.800.500.850 – trinucleotide repeats
* MeSH G14.080.708.800.500.850.200 – trinucleotide repeat expansion
* MeSH G14.080.708.800.550 – minisatellite repeats
* MeSH G14.080.708.850 – terminal repeat sequences
* MeSH G14.080.708.850.400 – hiv long terminal repeat
* MeSH G14.080.708.850.400.400 – hiv enhancer

=== MeSH G14.160 – chromosome structures ===

==== MeSH G14.160.165 – centromere ====
* MeSH G14.160.165.500 – kinetochores

==== MeSH G14.160.175 – chromatids ====

==== MeSH G14.160.180 – chromatin ====
* MeSH G14.160.180.270 – euchromatin
* MeSH G14.160.180.383 – heterochromatin
* MeSH G14.160.180.383.800 – sex chromatin
* MeSH G14.160.180.625 – nucleosomes

==== MeSH G14.160.650 – nucleolus organizer region ====

==== MeSH G14.160.830 – synaptonemal complex ====

==== MeSH G14.160.845 – telomere ====

=== MeSH G14.162 – chromosomes ===

==== MeSH G14.162.167 – chromosomes, archaeal ====

==== MeSH G14.162.178 – chromosomes, artificial ====
* MeSH G14.162.178.170 – chromosomes, artificial, bacterial
* MeSH G14.162.178.190 – chromosomes, artificial, mammalian
* MeSH G14.162.178.190.117 – chromosomes, artificial, human
* MeSH G14.162.178.195 – chromosomes, artificial, p1 bacteriophage
* MeSH G14.162.178.200 – chromosomes, artificial, yeast

==== MeSH G14.162.190 – chromosomes, bacterial ====
* MeSH G14.162.190.170 – chromosomes, artificial, bacterial

==== MeSH G14.162.360 – chromosomes, fungal ====
* MeSH G14.162.360.800 – chromosomes, artificial, yeast

==== MeSH G14.162.520 – chromosomes, mammalian ====
* MeSH G14.162.520.190 – chromosomes, artificial, mammalian
* MeSH G14.162.520.190.117 – chromosomes, artificial, human
* MeSH G14.162.520.300 – chromosomes, human
* MeSH G14.162.520.300.117 – chromosomes, artificial, human
* MeSH G14.162.520.300.235 – chromosomes, human, 1-3
* MeSH G14.162.520.300.235.240 – chromosomes, human, pair 1
* MeSH G14.162.520.300.235.245 – chromosomes, human, pair 2
* MeSH G14.162.520.300.235.250 – chromosomes, human, pair 3
* MeSH G14.162.520.300.280 – chromosomes, human, 4-5
* MeSH G14.162.520.300.280.285 – chromosomes, human, pair 4
* MeSH G14.162.520.300.280.290 – chromosomes, human, pair 5
* MeSH G14.162.520.300.325 – chromosomes, human, 6-12 and x
* MeSH G14.162.520.300.325.330 – chromosomes, human, pair 6
* MeSH G14.162.520.300.325.335 – chromosomes, human, pair 7
* MeSH G14.162.520.300.325.340 – chromosomes, human, pair 8
* MeSH G14.162.520.300.325.345 – chromosomes, human, pair 9
* MeSH G14.162.520.300.325.345.700 – philadelphia chromosome
* MeSH G14.162.520.300.325.350 – chromosomes, human, pair 10
* MeSH G14.162.520.300.325.355 – chromosomes, human, pair 11
* MeSH G14.162.520.300.325.360 – chromosomes, human, pair 12
* MeSH G14.162.520.300.325.680 – chromosomes, human, x
* MeSH G14.162.520.300.370 – chromosomes, human, 13-15
* MeSH G14.162.520.300.370.375 – chromosomes, human, pair 13
* MeSH G14.162.520.300.370.380 – chromosomes, human, pair 14
* MeSH G14.162.520.300.370.385 – chromosomes, human, pair 15
* MeSH G14.162.520.300.415 – chromosomes, human, 16-18
* MeSH G14.162.520.300.415.420 – chromosomes, human, pair 16
* MeSH G14.162.520.300.415.425 – chromosomes, human, pair 17
* MeSH G14.162.520.300.415.430 – chromosomes, human, pair 18
* MeSH G14.162.520.300.460 – chromosomes, human, 19-20
* MeSH G14.162.520.300.460.465 – chromosomes, human, pair 19
* MeSH G14.162.520.300.460.470 – chromosomes, human, pair 20
* MeSH G14.162.520.300.505 – chromosomes, human, 21-22 and y
* MeSH G14.162.520.300.505.510 – chromosomes, human, pair 21
* MeSH G14.162.520.300.505.515 – chromosomes, human, pair 22
* MeSH G14.162.520.300.505.515.700 – philadelphia chromosome
* MeSH G14.162.520.300.505.757 – chromosomes, human, y

==== MeSH G14.162.560 – chromosomes, plant ====

==== MeSH G14.162.570 – isochromosomes ====

==== MeSH G14.162.788 – ring chromosomes ====

==== MeSH G14.162.865 – sex chromosomes ====
* MeSH G14.162.865.800 – sex chromatin
* MeSH G14.162.865.982 – x chromosome
* MeSH G14.162.865.982.500 – chromosomes, human, x
* MeSH G14.162.865.983 – y chromosome
* MeSH G14.162.865.983.500 – chromosomes, human, y

=== MeSH G14.325 – gene library ===

==== MeSH G14.325.425 – genomic library ====

==== MeSH G14.325.640 – peptide library ====

=== MeSH G14.335 – genetic code ===

==== MeSH G14.335.060 – anticodon ====

==== MeSH G14.335.355 – codon ====
* MeSH G14.335.355.225 – codon, initiator
* MeSH G14.335.355.250 – codon, terminator
* MeSH G14.335.355.250.235 – codon, nonsense

==== MeSH G14.335.760 – reading frames ====
* MeSH G14.335.760.640 – open reading frames

=== MeSH G14.337 – genetic vectors ===

==== MeSH G14.337.249 – chromosomes, artificial ====
* MeSH G14.337.249.170 – chromosomes, artificial, bacterial
* MeSH G14.337.249.190 – chromosomes, artificial, mammalian
* MeSH G14.337.249.190.117 – chromosomes, artificial, human
* MeSH G14.337.249.195 – chromosomes, artificial, p1 bacteriophage
* MeSH G14.337.249.200 – chromosomes, artificial, yeast

==== MeSH G14.337.500 – cosmids ====

=== MeSH G14.340 – genome ===

==== MeSH G14.340.024 – genome components ====
* MeSH G14.340.024.079 – attachment sites, microbiological
* MeSH G14.340.024.159 – cpg islands
* MeSH G14.340.024.189 – dna sequence, unstable
* MeSH G14.340.024.189.220 – dna repeat expansion
* MeSH G14.340.024.189.220.865 – trinucleotide repeat expansion
* MeSH G14.340.024.189.610 – chromosome fragile sites
* MeSH G14.340.024.220 – dna, intergenic
* MeSH G14.340.024.220.150 – dna, satellite
* MeSH G14.340.024.220.280 – 3' flanking region
* MeSH G14.340.024.220.282 – 5' flanking region
* MeSH G14.340.024.220.400 – introns
* MeSH G14.340.024.220.760 – replication origin
* MeSH G14.340.024.220.880 – untranslated regions
* MeSH G14.340.024.220.880.880 – 3' untranslated regions
* MeSH G14.340.024.220.880.885 – 5' untranslated regions
* MeSH G14.340.024.340 – genes
* MeSH G14.340.024.340.077 – alleles
* MeSH G14.340.024.340.137 – gene components
* MeSH G14.340.024.340.137.190 – codon
* MeSH G14.340.024.340.137.190.225 – codon, initiator
* MeSH G14.340.024.340.137.190.250 – codon, terminator
* MeSH G14.340.024.340.137.232 – exons
* MeSH G14.340.024.340.137.232.459 – hinge exons
* MeSH G14.340.024.340.137.232.920 – vdj exons
* MeSH G14.340.024.340.137.275 – expressed sequence tags
* MeSH G14.340.024.340.137.290 – 3' flanking region
* MeSH G14.340.024.340.137.295 – 5' flanking region
* MeSH G14.340.024.340.137.430 – immunoglobulin switch region
* MeSH G14.340.024.340.137.515 – introns
* MeSH G14.340.024.340.137.650 – open reading frames
* MeSH G14.340.024.340.137.750 – regulatory elements, transcriptional
* MeSH G14.340.024.340.137.750.249 – enhancer elements (genetics)
* MeSH G14.340.024.340.137.750.249.240 – e-box elements
* MeSH G14.340.024.340.137.750.249.400 – hiv enhancer
* MeSH G14.340.024.340.137.750.249.765 – response elements
* MeSH G14.340.024.340.137.750.249.765.800 – serum response element
* MeSH G14.340.024.340.137.750.249.765.920 – vitamin d response element
* MeSH G14.340.024.340.137.750.680 – promoter regions (genetics)
* MeSH G14.340.024.340.137.750.680.765 – response elements
* MeSH G14.340.024.340.137.750.680.765.800 – serum response element
* MeSH G14.340.024.340.137.750.680.765.920 – vitamin d response element
* MeSH G14.340.024.340.137.750.680.850 – TATA box
* MeSH G14.340.024.340.137.750.830 – terminator regions (genetics)
* MeSH G14.340.024.340.137.750.840 – transcription initiation site
* MeSH G14.340.024.340.137.775 – rna 3' polyadenylation signals
* MeSH G14.340.024.340.137.780 – rna splice sites
* MeSH G14.340.024.340.137.785 – rna 5' terminal oligopyrimidine sequence
* MeSH G14.340.024.340.137.910 – untranslated regions
* MeSH G14.340.024.340.137.910.880 – 3' untranslated regions
* MeSH G14.340.024.340.137.910.885 – 5' untranslated regions
* MeSH G14.340.024.340.198 – genes, archaeal
* MeSH G14.340.024.340.201 – genes, bacterial
* MeSH G14.340.024.340.250 – genes, cdc
* MeSH G14.340.024.340.277 – genes, developmental
* MeSH G14.340.024.340.277.500 – genes, homeobox
* MeSH G14.340.024.340.304 – genes, dominant
* MeSH G14.340.024.340.308 – genes, duplicate
* MeSH G14.340.024.340.312 – genes, essential
* MeSH G14.340.024.340.320 – genes, fungal
* MeSH G14.340.024.340.320.089 – genes, mating type, fungal
* MeSH G14.340.024.340.328 – genes, helminth
* MeSH G14.340.024.340.345 – genes, immediate-early
* MeSH G14.340.024.340.351 – genes, immunoglobulin
* MeSH G14.340.024.340.351.300 – genes, immunoglobulin heavy chain
* MeSH G14.340.024.340.351.300.249 – hinge exons
* MeSH G14.340.024.340.351.300.500 – immunoglobulin switch region
* MeSH G14.340.024.340.351.310 – genes, immunoglobulin light chain
* MeSH G14.340.024.340.353 – genes, insect
* MeSH G14.340.024.340.359 – genes, lethal
* MeSH G14.340.024.340.361 – genes, mdr
* MeSH G14.340.024.340.376 – genes, mitochondrial
* MeSH G14.340.024.340.383 – genes, neoplasm
* MeSH G14.340.024.340.383.249 – genes, tumor suppressor
* MeSH G14.340.024.340.383.249.050 – genes, apc
* MeSH G14.340.024.340.383.249.100 – genes, brca1
* MeSH G14.340.024.340.383.249.105 – genes, brca2
* MeSH G14.340.024.340.383.249.200 – genes, dcc
* MeSH G14.340.024.340.383.249.320 – genes, mcc
* MeSH G14.340.024.340.383.249.340 – genes, neurofibromatosis 1
* MeSH G14.340.024.340.383.249.345 – genes, neurofibromatosis 2
* MeSH G14.340.024.340.383.249.375 – genes, p16
* MeSH G14.340.024.340.383.249.385 – genes, p53
* MeSH G14.340.024.340.383.249.400 – genes, retinoblastoma
* MeSH G14.340.024.340.383.249.420 – genes, wilms tumor
* MeSH G14.340.024.340.383.500 – oncogenes
* MeSH G14.340.024.340.383.500.791 – proto-oncogenes
* MeSH G14.340.024.340.383.500.791.100 – genes, abl
* MeSH G14.340.024.340.383.500.791.148 – genes, bcl-1
* MeSH G14.340.024.340.383.500.791.150 – genes, bcl-2
* MeSH G14.340.024.340.383.500.791.290 – genes, erba
* MeSH G14.340.024.340.383.500.791.295 – genes, erbb
* MeSH G14.340.024.340.383.500.791.295.300 – genes, erbb-1
* MeSH G14.340.024.340.383.500.791.295.305 – genes, erbb-2
* MeSH G14.340.024.340.383.500.791.325 – genes, fms
* MeSH G14.340.024.340.383.500.791.330 – genes, fos
* MeSH G14.340.024.340.383.500.791.365 – genes, jun
* MeSH G14.340.024.340.383.500.791.400 – genes, mos
* MeSH G14.340.024.340.383.500.791.418 – genes, myb
* MeSH G14.340.024.340.383.500.791.420 – genes, myc
* MeSH G14.340.024.340.383.500.791.550 – genes, ras
* MeSH G14.340.024.340.383.500.791.552 – genes, rel
* MeSH G14.340.024.340.383.500.791.560 – genes, sis
* MeSH G14.340.024.340.383.500.791.570 – genes, src
* MeSH G14.340.024.340.391 – genes, overlapping
* MeSH G14.340.024.340.391.600 – nested genes
* MeSH G14.340.024.340.393 – genes, plant
* MeSH G14.340.024.340.395 – genes, protozoan
* MeSH G14.340.024.340.400 – genes, rag-1
* MeSH G14.340.024.340.415 – genes, recessive
* MeSH G14.340.024.340.415.400 – genes, tumor suppressor
* MeSH G14.340.024.340.415.400.050 – genes, apc
* MeSH G14.340.024.340.415.400.100 – genes, brca1
* MeSH G14.340.024.340.415.400.105 – genes, brca2
* MeSH G14.340.024.340.415.400.200 – genes, dcc
* MeSH G14.340.024.340.415.400.320 – genes, mcc
* MeSH G14.340.024.340.415.400.340 – genes, neurofibromatosis 1
* MeSH G14.340.024.340.415.400.345 – genes, neurofibromatosis 2
* MeSH G14.340.024.340.415.400.375 – genes, p16
* MeSH G14.340.024.340.415.400.385 – genes, p53
* MeSH G14.340.024.340.415.400.400 – genes, retinoblastoma
* MeSH G14.340.024.340.415.400.420 – genes, wilms tumor
* MeSH G14.340.024.340.470 – genes, regulator
* MeSH G14.340.024.340.470.412 – genes, arac
* MeSH G14.340.024.340.470.413 – genes, nef
* MeSH G14.340.024.340.470.416 – genes, px
* MeSH G14.340.024.340.470.418 – genes, rev
* MeSH G14.340.024.340.470.420 – genes, switch
* MeSH G14.340.024.340.470.560 – genes, tat
* MeSH G14.340.024.340.470.575 – genes, vif
* MeSH G14.340.024.340.470.578 – genes, vpr
* MeSH G14.340.024.340.470.580 – genes, vpu
* MeSH G14.340.024.340.495 – genes, reporter
* MeSH G14.340.024.340.520 – genes, sry
* MeSH G14.340.024.340.540 – genes, suppressor
* MeSH G14.340.024.340.582 – genes, synthetic
* MeSH G14.340.024.340.585 – genes, t-cell receptor
* MeSH G14.340.024.340.585.050 – genes, t-cell receptor alpha
* MeSH G14.340.024.340.585.080 – genes, t-cell receptor beta
* MeSH G14.340.024.340.585.240 – genes, t-cell receptor delta
* MeSH G14.340.024.340.585.400 – genes, t-cell receptor gamma
* MeSH G14.340.024.340.605 – genes, viral
* MeSH G14.340.024.340.605.172 – genes, env
* MeSH G14.340.024.340.605.258 – genes, gag
* MeSH G14.340.024.340.605.345 – genes, immediate-early
* MeSH G14.340.024.340.605.360 – genes, intracisternal a-particle
* MeSH G14.340.024.340.605.600 – genes, nef
* MeSH G14.340.024.340.605.667 – genes, pol
* MeSH G14.340.024.340.605.735 – genes, px
* MeSH G14.340.024.340.605.775 – genes, rev
* MeSH G14.340.024.340.605.850 – genes, tat
* MeSH G14.340.024.340.605.890 – genes, vif
* MeSH G14.340.024.340.605.897 – genes, vpr
* MeSH G14.340.024.340.605.900 – genes, vpu
* MeSH G14.340.024.340.610 – major histocompatibility complex
* MeSH G14.340.024.340.610.595 – genes, mhc class i
* MeSH G14.340.024.340.610.600 – genes, mhc class ii
* MeSH G14.340.024.340.620 – minor histocompatibility loci
* MeSH G14.340.024.340.623 – minor lymphocyte stimulatory loci
* MeSH G14.340.024.340.645 – multigene family
* MeSH G14.340.024.340.645.500 – genes, mdr
* MeSH G14.340.024.340.645.750 – genes, rrna
* MeSH G14.340.024.340.700 – pseudogenes
* MeSH G14.340.024.340.720 – quantitative trait loci
* MeSH G14.340.024.340.825 – transgenes
* MeSH G14.340.024.340.825.500 – genes, transgenic, suicide
* MeSH G14.340.024.420 – insulator elements
* MeSH G14.340.024.425 – interspersed repetitive sequences
* MeSH G14.340.024.425.200 – dna transposable elements
* MeSH G14.340.024.425.500 – genomic islands
* MeSH G14.340.024.425.800 – retroelements
* MeSH G14.340.024.425.800.175 – endogenous retroviruses
* MeSH G14.340.024.425.800.200 – genes, intracisternal a-particle
* MeSH G14.340.024.425.800.400 – long interspersed nucleotide elements
* MeSH G14.340.024.425.800.800 – short interspersed nucleotide elements
* MeSH G14.340.024.425.800.800.050 – alu elements
* MeSH G14.340.024.430 – isochores
* MeSH G14.340.024.470 – locus control region
* MeSH G14.340.024.630 – nucleolus organizer region
* MeSH G14.340.024.686 – operon
* MeSH G14.340.024.686.545 – lac operon
* MeSH G14.340.024.686.645 – operator regions (genetics)
* MeSH G14.340.024.686.817 – rrna operon
* MeSH G14.340.024.742 – regulon
* MeSH G14.340.024.745 – replicon
* MeSH G14.340.024.745.725 – replication origin
* MeSH G14.340.024.810 – sequence tagged sites
* MeSH G14.340.024.815 – silencer elements, transcriptional
* MeSH G14.340.024.850 – tandem repeat sequences
* MeSH G14.340.024.850.140 – dna repeat expansion
* MeSH G14.340.024.850.150 – dna, satellite
* MeSH G14.340.024.850.500 – microsatellite repeats
* MeSH G14.340.024.850.500.150 – dinucleotide repeats
* MeSH G14.340.024.850.500.850 – trinucleotide repeats
* MeSH G14.340.024.850.500.850.200 – trinucleotide repeat expansion
* MeSH G14.340.024.850.550 – minisatellite repeats

==== MeSH G14.340.050 – genome, archaeal ====
* MeSH G14.340.050.500 – genes, archaeal

==== MeSH G14.340.300 – genome, bacterial ====
* MeSH G14.340.300.249 – genes, bacterial
* MeSH G14.340.300.500 – operon
* MeSH G14.340.300.500.545 – lac operon
* MeSH G14.340.300.500.645 – operator regions (genetics)
* MeSH G14.340.300.500.817 – rrna operon

==== MeSH G14.340.325 – genome, fungal ====
* MeSH G14.340.325.500 – genes, fungal
* MeSH G14.340.325.500.089 – genes, mating type, fungal

==== MeSH G14.340.337 – genome, helminth ====
* MeSH G14.340.337.500 – genes, helminth

==== MeSH G14.340.350 – genome, human ====

==== MeSH G14.340.357 – genome, insect ====
* MeSH G14.340.357.500 – genes, insect

==== MeSH G14.340.365 – genome, plant ====
* MeSH G14.340.365.500 – genes, plant

==== MeSH G14.340.375 – genome, protozoan ====
* MeSH G14.340.375.500 – genes, protozoan

==== MeSH G14.340.400 – genome, viral ====
* MeSH G14.340.400.500 – genes, viral
* MeSH G14.340.400.500.172 – genes, env
* MeSH G14.340.400.500.258 – genes, gag
* MeSH G14.340.400.500.345 – genes, immediate-early
* MeSH G14.340.400.500.360 – genes, intracisternal a-particle
* MeSH G14.340.400.500.600 – genes, nef
* MeSH G14.340.400.500.667 – genes, pol
* MeSH G14.340.400.500.735 – genes, px
* MeSH G14.340.400.500.775 – genes, rev
* MeSH G14.340.400.500.850 – genes, tat
* MeSH G14.340.400.500.890 – genes, vif
* MeSH G14.340.400.500.897 – genes, vpr
* MeSH G14.340.400.500.900 – genes, vpu

==== MeSH G14.340.425 – genomic library ====

=== MeSH G14.360 – histone code ===

=== MeSH G14.600 – plasmids ===

==== MeSH G14.600.080 – bacteriocin plasmids ====

==== MeSH G14.600.250 – cosmids ====

==== MeSH G14.600.300 – f factor ====

==== MeSH G14.600.430 – hemolysin factors ====

==== MeSH G14.600.500 – lactose factors ====

==== MeSH G14.600.550 – plant tumor-inducing plasmids ====

==== MeSH G14.600.600 – r factors ====

=== MeSH G14.840 – templates, genetic ===

The list continues at List of MeSH codes (H01).
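The dotted tree numbers in the list above encode the hierarchy directly: each code's parent is obtained by dropping its last dotted segment. A minimal Python sketch of that relationship (the helper name `mesh_ancestors` is illustrative, not part of any NLM tooling; the example code is taken from the list above):

```python
def mesh_ancestors(tree_number: str) -> list[str]:
    """Return the chain of ancestor tree numbers, nearest first,
    obtained by repeatedly dropping the last dotted segment."""
    parts = tree_number.split(".")
    return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

# "cpg islands" (G14.080.380.160) sits under "gc rich sequence"
# (G14.080.380), "base sequence" (G14.080), and "genetic structures" (G14).
print(mesh_ancestors("G14.080.380.160"))
# → ['G14.080.380', 'G14.080', 'G14']
```

Note that a single MeSH descriptor can carry several tree numbers (e.g. "hiv enhancer" appears at more than one position above), so the tree number, not the descriptor name, identifies a node.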
Water testing is a broad description for various procedures used to analyze water quality. Millions of water quality tests are carried out daily to fulfill regulatory requirements and to maintain safety. Testing may be performed to evaluate: ambient or environmental water quality β the ability of a surface water body to support aquatic life as an ecosystem. See Environmental monitoring, Freshwater environmental quality parameters and Bioindicator. wastewater β characteristics of polluted water (domestic sewage or industrial waste) before treatment or after treatment. See Environmental chemistry and Wastewater quality indicators. "raw water" quality β characteristics of a water source prior to treatment for domestic consumption (drinking water). See Bacteriological water analysis and specific tests such as turbidity and hard water. "finished" water quality β water treated at a municipal water purification plant. See Bacteriological water analysis and Category:Water quality indicators. suitability of water for industrial uses such as laboratory, manufacturing or equipment cooling. See purified water. == Government regulation == Government regulations related to water testing and water quality for some major countries is given below. === China === ==== Ministry of Environmental Protection ==== The Ministry of Environmental Protection of the People's Republic of China is the nation's environmental protection department charged with the task of protecting China's air, water, and land from pollution and contamination. Directly under the State Council, it is empowered and required by law to implement environmental policies and enforce environmental laws and regulations. Complementing its regulatory role, it funds and organizes research and development. See Ministry of Environmental Protection of the People's Republic of China. 
==== Regulatory challenges and debates ==== In late 2009, a survey was carried out by China Ministry of Housing and Urban-Rural Development to assess the water quality of urban supplies in China's cities, which revealed that "at least 1,000" water
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
treatment plants out of more than 4,000 plants surveyed at the county level and above failed to comply with government requirements. The survey results were never formally released to the public, but in 2012, China's Century Weekly published the leaked survey data. In response, Wang Xuening, a health ministry official, released figures derived from a pilot monitoring scheme in 2011 and suggested that 80% of China's urban tap water was up to standard. China's new drinking water standards involve 106 indicators. Of China's 35 major cities, only 40% of cities have the capacity to test for all 106 indicators. The department in charge of local water and the health administration department will enter into a discussion to determine results for more than 60 of the new measures; hence it is not required to test the water using every indicator. The grading of water quality is based on an overall average of 95% to fulfill government requirements. The frequency of water quality inspections at water treatment plants is twice yearly. === Pakistan === ==== Pakistan Council of Research in Water Resources ==== Established in 1964, the Pakistan Council of Research in Water Resources aims to conduct, organize, coordinate and promote research in all aspects of water resources. As a national research organization, it undertakes and promotes applied and basic research in different disciplines of water sector. ==== Recent developments ==== In March 2013, Minister for Science and Technology Mir Changez Khan Jamali notified the National Assembly that groundwater samples collected revealed that only 15-18% samples were deemed safe for drinking both in urban and rural areas in Pakistan. The Ministry has created 24 Water Quality Testing Laboratories across Pakistan, developed and commercialized water quality test kits, water filters, water disinfection tablets and drinking water treatment sachets, conducted training for 2,660 professionals
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
of water supply agencies and surveyed 10,000 water supply schemes out of a grand total of 12,000 schemes. === United Kingdom === ==== Drinking Water Inspectorate ==== The Drinking Water Inspectorate is a section of Department for Environment, Food and Rural Affairs set up to regulate the public water supply companies in England and Wales. Water testing in England and Wales can be conducted at the environmental health office at the local authority. See Drinking Water Inspectorate. === United States === ==== Department of Homeland Security ==== The U.S. Department of Homeland Security is a cabinet department of the United States federal government, created in response to the September 11 attacks, and with the primary responsibilities of protecting the United States of America and U.S. territories (including protectorates) from and responding to terrorist attacks, man-made accidents, and natural disasters. See United States Department of Homeland Security. The Homeland Security Presidential Directive 7 designates the Environmental Protection Agency as the sector-specific agency for the water sector's critical infrastructure protection activities. All Environmental Protection Agency activities related to water security are carried out in consultation with the Department of Homeland Security. Possible threats to water quality include contamination with deadly agents, such as cyanide, and physical attacks like the release of toxic gaseous chemicals. ==== Environmental Protection Agency ==== The principal U.S. federal laws governing water testing are the Safe Drinking Water Act (SDWA) and the Clean Water Act. The U.S. Environmental Protection Agency (EPA) issues regulations under each law specifying analytical test methods. EPA's annual Regulatory Agenda sets a schedule for specific objectives on improving its oversight of water testing. 
Drinking water analysis Under the Safe Drinking Water Act, public water systems are required to regularly monitor their treated water for contaminants. Water samples must be analyzed using EPA-approved testing methods,
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
by laboratories that are certified by EPA or a state agency. The 2013 revised total coliform rule and the 1989 total coliform rule are the only microbial drinking water regulations that apply to all public water systems. The revised rule highlights the frequency and timing of microbial testing by water systems based on population served, system type, and source water type. It also places a legal limit on the level for Escherichia coli. Potential health threats must be disclosed to EPA or the appropriate state agency, and public notification is required in some circumstances. Methods for measuring acute toxicity usually take between 24 and 96 hours to identify contaminants in water supplies. Wastewater analysis All facilities in the United States that discharge wastewater to surface waters (e.g. rivers, lakes or coastal waters) must obtain a permit under the National Pollutant Discharge Elimination System, a Clean Water Act program administered by EPA and state agencies. The facilities covered include sewage treatment plants, industrial and commercial plants, military bases and other facilities. Most permittees are required to regularly collect wastewater samples and analyze them for compliance with permit requirements, and report the results either to EPA or the state agency. ==== Private wells ==== Private wells are not regulated by the federal government. In general, private well owners are responsible for testing their wells. Some state or local governments regulate well construction and may require well testing. Generally well testing required by local governments is limited to a handful of contaminants including coliform and E. Coli bacteria and perhaps a few predominant local contaminants such as nitrates or arsenic. EPA publishes test methods for contaminants that it regulates under the SDWA. ==== Publication of test methods ==== Peer-reviewed test methods have been published by government agencies, private research organizations and international standards
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
organizations for ambient water, wastewater and drinking water. Approved published methods must be used when testing to demonstrate compliance with regulatory requirements. ==== Regulatory challenges and debates ==== ===== Hydraulic fracturing ===== The Energy Policy Act of 2005 created a loophole that exempts companies drilling for natural gas from disclosing the chemicals involved in fracturing operations that would normally be required under federal clean water laws. The loophole is commonly known as the "Halliburton loophole" because Dick Cheney, the former chief executive officer of Halliburton, was reportedly instrumental in its passage. Although the Safe Drinking Water Act excludes hydraulic fracturing from the Underground Injection Control regulations, the use of diesel fuel during hydraulic fracturing is still regulated. State oil and gas agencies may issue additional regulations for hydraulic fracturing. States or EPA have the authority under the Clean Water Act to regulate discharge of produced waters from hydraulic fracturing operations. In December 2011, federal environment officials scientifically linked underground water pollution with hydraulic fracturing for the first time in central Wyoming. EPA stated that the water supply contained at least 10 compounds known to be used in fracking fluids. The findings in the report contradicted arguments by the drilling industry on the safety of the fracturing process, such as the hydrologic pressure that naturally forces fluids downwards instead of upwards. EPA also commented that the pollution from 33 abandoned oil and gas waste pits were responsible for some degree of minor groundwater pollution in the vicinity. 
In January 2013, the Alaska Oil and Gas Conservation Commission, which is responsible for overseeing oil and gas production in Alaska, proposed new rules for regulating hydraulic fracturing in the state, which contains over two billion barrels of shale oil (second only to the Bakkan) and over 80 trillion cubic feet of natural gas.
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
Companies will be required to conduct water testing at least 90 days prior to and up to 120 days after hydraulically fracturing a well, which includes analysis of pH, alkalinity, total dissolved solids, and total petroleum hydrocarbons. The proposed rules necessitate disclosure of the identity and volume of chemicals used in fracturing fluid. See Alaska Oil and Gas Conservation Commission. In February 2013, the state of Illinois introduced the Illinois Hydraulic Fracturing Regulatory Act, H.B. 2615, which imposes strict controls on fracturing companies, such as chemical disclosure requirements and water testing requirements. The bill includes baseline and periodic post-frack testing of potentially affected waters, such as surface water and groundwater sources near fracturing wells, to identify contamination associated with hydraulic fracturing. Fracturing wells will be closed if fracturing fluid is released outside of the shale rock formation being fractured. ===== Pharmaceuticals and personal care products ===== Detectable levels of pharmaceuticals and personal care products, in the parts per trillion, are found in many public drinking water systems in the US as many water testing plants lack the technological know-how to remove these chemical compounds from raw water. There are now increasing worries about how these compounds degrade and react in the environment, during the treatment process, inside our bodies, and the long-term exposure to multiple contaminants at low levels. Out of over 80,000 chemicals registered with the EPA, the US federal drinking water rules mandate testing for only 83 chemicals, which calls for increased monitoring of pharmaceuticals on the presence and concentrations of chemical compounds in rivers, streams, and treated tap water. As traditional waste water regulations and treatment systems target microorganisms and nutrients, there are no federal standards for pharmaceuticals in drinking water or waste water. 
==== Recent developments ==== In May 2012, the Environmental Protection Agency released a
new list of contaminants, known as the Unregulated Contaminant Monitoring Regulation 3 (UCMR3), that will be part of municipal water system testing starting this year and continuing through 2015. The UCMR3 testing will help municipal water system operators measure the occurrence of and exposure to contamination levels that may endanger human health. The State Hygienic Laboratory at the University of Iowa is the only state environmental public health laboratory certified and approved to test for all 28 chemical contaminants on the new list. In March 2013, the Environmental Protection Agency developed a new rapid water quality test that provides accurate same-day results of contamination levels, a significant improvement over current tests that require at least 24 hours to obtain results. The new test will help authorities determine whether beaches are safe for swimming, protecting public health and helping to prevent unnecessary beach closures. === International organizations === The International Maritime Organization, known as the Inter-Governmental Maritime Consultative Organization until 1982, was established in Geneva in 1948; its founding convention came into force ten years later, and the organization met for the first time in 1959. See International Maritime Organization. The International Maritime Organization has been at the forefront of the international community in addressing the transfer of aquatic invasive species through shipping. On 13 February 2004, the International Convention for the Control and Management of Ships' Ballast Water and Sediments was adopted by consensus at a diplomatic conference held at the International Maritime Organization headquarters in London. According to the convention, all ships are required to implement a ballast water and sediments management plan. All ships will have to carry a Ballast Water Record Book and will be required to carry out ballast water management procedures to a given standard. Parties
to the convention are given the option to take additional measures, subject to criteria set out in the convention and to International Maritime Organization guidelines. Ballast water management is subject to the ballast water exchange standard and the ballast water performance standard. Ships performing ballast water exchange shall do so with an efficiency of 95 per cent volumetric exchange of ballast water, and ships using a ballast water management system (BWMS) shall meet a performance standard based on agreed numbers of organisms per unit of volume. The convention will enter into force 12 months after ratification by 30 states representing 35 per cent of world merchant shipping tonnage. See Ballast water discharge and the environment. == Water test initiatives == === EarthEcho Water Challenge === The [[World Water Monitoring Day|EarthEcho Water Challenge]] is an international education and outreach program that builds public awareness and involvement in protecting water resources worldwide by engaging citizens to test local water bodies. Participants learn how to conduct simple water quality tests and analyze common indicators of water health: dissolved oxygen, pH, temperature, and turbidity. The program, established in 2003, was originally called "World Water Monitoring Day" and later "World Water Monitoring Challenge". EarthEcho International encourages participants to conduct their monitoring activities as part of the EarthEcho Water Challenge during any period between March 22 (World Water Day) and December of each year. == Water test market == === Market size and structure === As of 2009, the global water test market, which includes in-house, small commercial, and large laboratory groups, is approximately US$3.6 billion. The global market for low-end test equipment is roughly $300–400 million, and the global market for in-line monitors is approximately $100–130 million. 
=== Product offering === Key products include analytical systems, instrumentation, and reagents
for water quality and safety analysis. Reagents are chemical testing compounds that identify the presence of chlorine, pH, alkalinity, turbidity, and other metrics. The equipment market comprises low-end, onsite field testing equipment, in-line monitors, and high-end laboratory instruments. High-end laboratory equipment includes mass spectrometry devices that perform organic analysis, using gas chromatography and liquid chromatography, or metals analysis, using inductively coupled plasma. ==== New developments ==== Trends to monitor include digital plug-and-play sensor techniques and luminescent dissolved oxygen meters replacing conventional sensors. === "Razor and razor-blade" business model === The water test market is approximately two-thirds equipment and one-third consumables. Reagents are used with each test and generate recurring revenue for companies. Aftermarket maintenance agreements, operator training, and parts replacement help to secure recurring revenue. The market leader, Danaher, with an estimated 21% market share, earns EBIT margins in the high teens to low 20% range on test equipment but can command margins above 40% on water test reagents. See Freebie marketing. === Distribution === Companies tend to employ the "direct-to-end-user" model for most products, but may also sell low-end equipment via the Internet to reduce distribution costs. === Pricing === Pricing depends on the application and type of product. Instruments range from as low as $10 to thousands of dollars. === Suppliers === Low-end test equipment is dominated by a few large suppliers, notably Germany's Lovibond and Merck, DelAgua and ITS Europe Water Testing of the UK, which operate globally, and US-based LaMotte. Major manufacturers of in-line equipment include Siemens and Danaher's Hach. Thermo Scientific and Waters are key producers of high-end test equipment. 
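The razor-and-blades economics described in this section can be made concrete with a toy revenue model. All prices, volumes, and margins below are invented; only the structure (thin margin on a one-time equipment sale, fat margin on recurring reagent purchases) follows the market description.

```python
# Toy razor-and-blades model: one instrument sale plus recurring reagent
# purchases over the instrument's life. All figures are HYPOTHETICAL;
# only the margin structure mirrors the market description above.

def lifetime_profit(years, tests_per_year,
                    instrument_price=2000.0, instrument_margin=0.18,
                    reagent_price=2.0, reagent_margin=0.40):
    """Return (one-time equipment profit, cumulative reagent profit)."""
    equipment = instrument_price * instrument_margin
    reagents = years * tests_per_year * reagent_price * reagent_margin
    return equipment, reagents

equip, consumables = lifetime_profit(years=10, tests_per_year=1000)
print(equip, consumables)  # 360.0 8000.0
```

Even with these invented numbers, the consumables stream dwarfs the equipment profit over a decade, which is why reagent lock-in matters more than the initial sale.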
=== End markets === The end markets include municipal water plants; industrial users, such as beverage and electronics manufacturers; and environmental agencies, such as the United States Geological Survey. == Water testing facilities == There are
two main types of laboratories: commercial and in-house. === In-house laboratories === In-house laboratories are usually present in municipal water and wastewater facilities, breweries, and pharmaceutical manufacturing plants. They account for roughly half of all tests run annually. === Commercial laboratories === Most commercial laboratories are single-site firms that serve only institutions in their geographical region. The employee head count for each laboratory is usually fewer than five people, and revenues are under $1 million. These laboratories account for one quarter of all tests. Several major laboratory groups, such as UK-based Inspicio and Australia-based ALS, account for another quarter of all tests. == Privatization == === Opinion === The conventional impression is that private water systems, which source groundwater from rural areas, produce higher water quality than public water systems. Studies have demonstrated that groundwater is vulnerable to antibiotic-resistant bacteria, which makes frequent water testing necessary. However, critics such as Charrois argue that inconvenience and time constraints impede regular testing of private wells and water systems, posing a risk of poor water quality to consumers. === Sydney water crisis === In 1998, the water supply of Sydney, Australia, 85% controlled by Suez Lyonnaise des Eaux until 2021, contained high concentrations of the parasites Giardia and Cryptosporidium. However, the public was not immediately informed when the contamination first occurred. === Ontario's Common Sense Revolution === In Ontario, Canada, the Harris government introduced the "Common Sense Revolution" to cut the large provincial deficit accumulated under the previous Rae government, implementing major cuts to the environment budget, privatizing water testing labs, deregulating water protection infrastructure, and firing trained water testing experts. See Mike Harris. 
In 1999, in spite of a Canadian federal government study that found a third of Ontario's rural wells were contaminated with E. coli,
the Ontario government dropped testing for E. coli from its Drinking Water Surveillance Program and subsequently closed the program in 2000. In June 2000, a wave of E. coli outbreaks struck several communities in rural Ontario; in Walkerton, at least seven people died from drinking contaminated water. The private testing company, A&L Laboratories, detected E. coli in the water but failed to disclose the contamination to provincial authorities because of a loophole in the "common sense" regulation. A&L Laboratories claimed that the test results were "confidential intellectual property" and therefore belonged only to the "client", in this case the Walkerton authorities, who lacked the training to assess them properly. See Escherichia coli. == Recent news == === Water poisoning cases === In 2011, Hong Kong Education Secretary Michael Suen was diagnosed with Legionnaires' disease. The bacterial contamination stemmed from Hong Kong's HK$5.5 billion government headquarters site, where traces of the bacteria were found at up to 14 times acceptable levels. === Water contamination cases === In March 2013, an investigation by the French consumer magazine 60 Millions de Consommateurs and the non-governmental organization Fondation France Libertés found traces of pesticides and prescription drugs, including a medicine used in breast cancer treatment, in almost one in five French brands of bottled water, which are commonly marketed as cleaner, healthier, and purer alternatives to French tap water. Of 47 brands of bottled water commonly available in French supermarkets, 10 contained "residues from drugs or pesticides". In March 2013, almost 200 water fountains in Jersey City public schools were found to contain lead above regulatory standards; one fountain had lead contamination at more than 800 times the EPA's standard. The contamination is of particular concern because exposure to lead in drinking water can cause intellectual disability in
children. === Legal cases === In March 2013, a defense lawyer asked a federal judge to dismiss charges against the owner of Mississippi Environmental Analytical Laboratories Inc., who was accused of falsifying records on industrial wastewater samples. According to the indictment, Borg Warner Emissions Systems Inc. hired Tennie White, the owner of the laboratory, to test wastewater discharge at its car parts plant in Water Valley. White is accused of creating three reports in 2009 indicating that tests were completed when they were not. The motion to dismiss was based on the lawyer's argument that the documents referred to in the indictment were not signed and were not submitted to a government agency. === Sequestration cuts === Water quality testing for private wells in Chemung County has been affected by budget cuts. == See also == List of chemical analysis methods Water chemistry analysis Water quality: measurement == References ==
|
{
"page_id": 13766136,
"source": null,
"title": "Water testing"
}
|
The river barrier hypothesis is a hypothesis seeking to partially explain the high species diversity in the Amazon Basin, first presented by Alfred Russel Wallace in his 1852 paper On the Monkeys of the Amazon. It argues that the formation and movement of the Amazon and some of its tributaries presented a significant enough barrier to movement for wildlife populations to precipitate allopatric speciation. Facing different selection pressures and genetic drift, the divided populations diverged into separate species. Several observable qualities should be present if speciation has resulted from a river barrier. Divergence of species on either side of the river should increase with the size of the river, expressed weakly or not at all in the headwaters and more strongly in the wider, deeper channels further downriver. Organisms endemic to terra firme forest should be more affected than those that live in alluvial forests alongside the river, as they have a longer distance to cross before reaching appropriate habitat, and lowland populations can rejoin relatively frequently when a river shifts or narrows in the early stages of oxbow lake formation. Finally, if a river barrier is the cause of speciation, sister species should exist on opposing shores more frequently than expected by chance. == Mechanisms == River barrier speciation occurs when a river is large enough to act as a vicariant barrier, preventing or interfering with genetic exchange between populations and enabling allopatric speciation. Population division is initiated either when a river shifts into or forms within the range of a species that cannot cross it, effectively splitting the population in half, or when a small founder group is transported across an existing river by chance. Usually a river's strength as a barrier is viewed as proportional to its
width; wider rivers present a longer crossing distance and thus a greater obstacle to movement. Barrier strength varies within a given river; narrow headwaters are easier to cross than wide downstream channels. Rivers that present a barrier for some species in a region may not necessarily do so for all, leading to differences between species and clades (genetically distinct groups) in the degree of isolation and differentiation on opposing shores. Large mammals and birds have little trouble crossing most streams, whereas small birds unaccustomed to long-distance flight can have particular difficulty and thus may be more subject to population division. Additionally, rivers more effectively divide species that prefer terra firme forest, as meanders and the process of oxbow formation in alluvial regions can narrow otherwise impassable streams. == Support == Many research projects in the Amazon basin have aimed to test the validity of the hypothesis. The southern chestnut-tailed antbird (Myrmeciza hemimelaena) exemplifies the hypothesis in nature. When the antbird's diversification and distribution were examined throughout the Amazon, three monophyletic, genetically distinct populations were found, two of which are currently valid subspecies. Two of the clades occurred on either side of the Madeira River, and the third had a range between the Madeira River and two small tributaries, the Jiparaná and the Aripuanã. This is evidence that these birds diversified because riverine barriers limited gene flow. Another study found that saddle-back tamarins follow the premise that gene flow varies along the course of a river: gene flow was largely restricted to the narrower headwaters and decreased toward the mouth. This is consistent with the hypothesis. Yet some speculate that using a single mechanism to explain diversification in the tropics would be an
oversimplification. For example, there is evidence that genetic variation in the blue-crowned manakin may have been influenced by river barriers, Andean uplift, and range expansions. == Critique == Not all studies have found support for the hypothesis. One study tested the riverine hypothesis by observing populations of four species of Amazonian frogs along the Juruá River. The team expected to see different amounts of gene flow when comparing sites on the same bank with sites across the river, but found that this was not the case: gene flow was roughly equal between both sets of sites. Another study took the hypothesis a step further, postulating that if rivers are barriers to gene flow for certain taxa, they should also act as barriers at the community level. Variation in species of frogs and small mammals along and across the banks of the Juruá River was evaluated. No clear gradient of decreasing similarity in frog and mammal species from the headwaters to the mouth of the river was found, and species on the same bank were no more similar to each other than species on opposite banks. These results indirectly dispute the aspects of the hypothesis that attribute speciation to riverine barriers. The validity of the hypothesis was tested further by examining poison dart frogs. The results of this study (Lougheed et al.) were incongruent with the prediction that species on either side of a river would be monophyletic relatives; the Lougheed study aimed to show that the ridge hypothesis has more credibility than the river hypothesis. In another experiment, eighty-one species of non-flying mammals were trapped at cross-river sites along the Juruá River. The river seemed to
be a barrier for only a few taxa, with the majority either homogeneous throughout the research area or divided into monophyletic upriver and downriver clades. Patton argues that the geographic location of these clades suggests that landform evolution is an under-appreciated factor in Amazonian diversification. This project further suggests that riverine barriers are not the only mechanism of speciation. All of these critics argue that other factors influence speciation in Amazonia. Another shortcoming of the hypothesis is that it has been researched mostly in Amazonia rather than in other river basins. Shifts in river courses may also prevent the establishment of any patterns across rivers, further complicating efforts to test the strength of the hypothesis. == References ==
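The core prediction in the Mechanisms section, that less gene flow across a wider river produces more divergence between the two banks, can be illustrated with a toy two-deme simulation. The deme size, migration rates, and generation counts below are arbitrary illustrations, not estimates for any real river or species.

```python
# Toy two-deme Wright-Fisher model: allele-frequency divergence between
# populations on opposite banks under strong vs. weak gene flow.
# All parameter values are arbitrary illustrations.
import random

def divergence(migration: float, deme_size: int = 200,
               generations: int = 200, replicates: int = 20,
               seed: int = 42) -> float:
    """Mean |p1 - p2| after drift with symmetric migration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(replicates):
        p1 = p2 = 0.5  # both banks start with the same allele frequency
        for _ in range(generations):
            # gene flow pulls the two frequencies toward each other
            m1 = (1 - migration) * p1 + migration * p2
            m2 = (1 - migration) * p2 + migration * p1
            # binomial sampling = genetic drift in a finite deme
            p1 = sum(rng.random() < m1 for _ in range(deme_size)) / deme_size
            p2 = sum(rng.random() < m2 for _ in range(deme_size)) / deme_size
        total += abs(p1 - p2)
    return total / replicates

wide_river   = divergence(migration=0.001)  # strong barrier, little gene flow
narrow_river = divergence(migration=0.2)    # weak barrier, frequent exchange
print(wide_river > narrow_river)  # expect True: less gene flow, more divergence
```

This only illustrates neutral drift; the hypothesis also invokes differing selection pressures on each bank, which would accelerate the divergence shown here.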
|
{
"page_id": 44371456,
"source": null,
"title": "River barrier hypothesis"
}
|
A chromatid (Greek khrōmat- 'color' + -id) is one half of a duplicated chromosome. Before replication, one chromosome is composed of one DNA molecule. During replication, the DNA molecule is copied, and the two resulting molecules are known as chromatids. During the later stages of cell division these chromatids separate longitudinally to become individual chromosomes. Chromatid pairs are normally genetically identical and are said to be homozygous. However, if mutations occur, they will show slight differences, in which case they are heterozygous. The pairing of chromatids should not be confused with the ploidy of an organism, which is the number of homologous versions of a chromosome. == Sister chromatids == Chromatids may be sister or non-sister chromatids. A sister chromatid is either one of the two chromatids of the same chromosome joined together by a common centromere. A pair of sister chromatids is called a dyad. Once sister chromatids have separated (during the anaphase of mitosis or anaphase II of meiosis during sexual reproduction), they are again called chromosomes, each having the same genetic mass as one of the individual chromatids that made up its parent. The DNA sequences of two sister chromatids are completely identical (apart from very rare DNA copying errors). Sister chromatid exchange (SCE) is the exchange of genetic information between two sister chromatids. SCEs can occur during mitosis or meiosis. SCEs appear primarily to reflect DNA recombinational repair processes responding to DNA damage (see Sister chromatid exchange). Non-sister chromatids, on the other hand, are chromatids of paired homologous chromosomes, that is, of a paternal chromosome and a maternal chromosome. In chromosomal crossovers, non-sister (homologous) chromatids form chiasmata to exchange genetic material during prophase I of meiosis (see Homologous chromosome pair). == See also == Kinetochore == References ==
|
{
"page_id": 200192,
"source": null,
"title": "Chromatid"
}
|
The molecular formula C23H21N7O may refer to: Entospletinib, an experimental drug, an inhibitor of spleen tyrosine kinase Tasosartan, an angiotensin II receptor antagonist
|
{
"page_id": 40963587,
"source": null,
"title": "C23H21N7O"
}
|
Microcell-mediated chromosome transfer (MMCT) is a technique used in cell biology and genetics to transfer a chromosome from a defined donor cell line into a recipient cell line. MMCT has been in use since the 1970s and has contributed to a multitude of discoveries, including tumor, metastasis, and telomerase suppressor genes, as well as information about epigenetics, X-inactivation, mitochondrial function, and aneuploidy. MMCT follows a basic procedure in which donor cells (i.e. cells providing one or more chromosomes or chromosome fragments to a recipient cell) are induced to multinucleate, partitioning their chromosomes into multiple micronuclei. These micronuclei are then forced through the cell membrane to create microcells, which can be fused to a recipient cell line. == History == The term MMCT was first used by Fournier and Ruddle in 1977. Their method was based on previous work from 1974 by Ege, Ringertz, Veomett, and colleagues, synthesizing the techniques then in use for inducing multinucleation, removing nuclei, and fusing cells. The next major step in MMCT came during the 1980s, when new transfection techniques were used to introduce selectable markers onto chromosomes, making it possible to select for the introduction of specific chromosomes and more easily create defined hybrids. == Procedure == Procedures for MMCT differ slightly, but they all require the induction of multinucleation, enucleation (nuclear removal), and fusion. Multinucleation is usually accomplished by causing prolonged mitotic arrest with colcemid treatment. Certain cells will then "slip" out of mitosis and form multiple nuclei. These nuclei can then be removed using cytochalasin B to disrupt the cytoskeleton, followed by centrifugation in a density gradient to force enucleation. The newly created microcells can then be fused to recipient (target) cells by exposure to polyethylene glycol, use of Sendai virus, or electrofusion. Variations now allow construction of "humanized" mice with large pieces
from human chromosomes as well as new methods for human and mouse artificial chromosomes. == References ==
|
{
"page_id": 30739972,
"source": null,
"title": "Microcell-mediated chromosome transfer"
}
|
1-Hydroxy-7-azabenzotriazole (HOAt) is a triazole used as a peptide coupling reagent. It suppresses the racemization that can otherwise occur during the coupling reaction. HOAt has a melting point of 213–216 °C. == References ==
|
{
"page_id": 22089221,
"source": null,
"title": "1-Hydroxy-7-azabenzotriazole"
}
|
In human genetics, haplogroup S is a human mitochondrial DNA (mtDNA) haplogroup found only among Indigenous Australians. It is a descendant of macrohaplogroup N. == Origin == Haplogroup S mtDNA evolved within Australia between 64,000 and 40,000 years ago (51 kya). == Distribution == It is found in the Indigenous Australian population. Haplogroup S2 was found in the Willandra Lakes human remains WLH4, dated to the Late Holocene (3,000–500 years ago). The following table lists relevant GenBank samples: == Subclades == === Tree === This phylogenetic tree of haplogroup S subclades is based on the paper by Mannis van Oven and Manfred Kayser, Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation, and subsequent published research. The TMRCA for haplogroup S is between 49 and 51 kya according to Nano Nagle's publication Aboriginal Australian mitochondrial genome variation – an increased understanding of population antiquity and diversity, published in 2017.

S (64–40 kya) in Australia
 S1 (53–32 kya) in Australia
  S1a (44–29 kya) found in WA, NT, QLD and NSW
  S1b (37–22 kya) found in NT, QLD and NSW
   S1b1 (30–10 kya) found in NT and QLD
    S1b1a (24–6 kya) found in QLD
   S1b2 (17–3 kya) found in QLD
   S1b3 (20–4 kya) found in QLD and NSW
 S2 (44–22 kya) in Australia
  S2a (38–18 kya) found in NT, QLD, NSW and TAS
   S2a1 (31–12 kya) found in NSW, QLD and TAS
    S2a1a (19–6 kya) found in NSW and QLD
   S2a2 (38–11 kya) found in NT, QLD and NSW
  S2b (42–18 kya) found in WA, NT, QLD and VIC
   S2b1 (27–9 kya) found in NT, QLD and VIC
   S2b2 (37–12 kya) found in WA, NT and QLD
 S-T152C!
 S3 (17–1 kya) found in NT
 S4 found in NT
 S5 found in WA
 S6 found in NSW

== See also == Genealogical DNA test Genetic
genealogy Human mitochondrial genetics Population genetics == References == == External links == Ian Logan's Mitochondrial DNA Site: Haplogroup S Mannis van Oven's Phylotree
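Because mtDNA subclade names encode their own ancestry, the hierarchy of a subclade listing like the one in this article can be reconstructed mechanically from the labels alone. A small sketch (treating the mutation-defined branch S-T152C! simply as a direct child of S, which is an assumption of this sketch rather than a claim about the published phylotree):

```python
# Recover each subclade's parent from its label: the parent is the
# longest proper prefix that is itself a known clade.
CLADES = ["S", "S1", "S1a", "S1b", "S1b1", "S1b1a", "S1b2", "S1b3",
          "S2", "S2a", "S2a1", "S2a1a", "S2a2", "S2b", "S2b1", "S2b2",
          "S-T152C!", "S3", "S4", "S5", "S6"]

def parent(clade):
    """Longest proper prefix of `clade` that is also a known clade."""
    if clade == "S":
        return None  # root of this subtree
    for cut in range(len(clade) - 1, 0, -1):
        if clade[:cut] in CLADES:
            return clade[:cut]
    return "S"

print(parent("S1b1a"))  # S1b1
print(parent("S2a1"))   # S2a
```

This prefix convention is why nested lists of haplogroup subclades can be flattened and later rebuilt without losing the tree structure.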
|
{
"page_id": 10685957,
"source": null,
"title": "Haplogroup S (mtDNA)"
}
|
TrueAllele is a software program by Cybergenetics that analyzes DNA using statistical methods, a process called probabilistic genotyping. It is used in forensic identification. The program can be used in situations unsuited to traditional methods, such as when a sample contains a mixture of multiple people's DNA. Some studies, mostly conducted by Cybergenetics' chief scientific officer Mark W. Perlin, have validated the program's accuracy. In one study, TrueAllele distinguished between the genetic code of first-degree relatives with "great accuracy". The President's Council of Advisors on Science and Technology has noted that many validation studies were performed by people affiliated with TrueAllele and are therefore not independent, and has called for more independent research. In one case, TrueAllele's results differed from those of STRmix, another probabilistic genotyping program, leading the judge to reject the DNA evidence. The proprietary nature of the code has led to concerns over its reliability. Unlike some similar programs, TrueAllele is not open source, so judges and attorneys cannot examine the program's code. == References ==
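As background, the likelihood-ratio reasoning that underlies probabilistic genotyping in general can be sketched at a single locus. This is a generic textbook-style toy, not Cybergenetics' TrueAllele algorithm; every allele frequency and genotype below is invented for illustration.

```python
# Toy likelihood ratio for a one-locus, two-person DNA mixture:
# compare P(evidence | victim + suspect) against
# P(evidence | victim + unknown random person).
# NOT TrueAllele's model; all frequencies and profiles are invented.
from itertools import combinations_with_replacement

FREQ = {"A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}  # hypothetical allele frequencies

def genotype_prob(g):
    """Hardy-Weinberg probability of an unordered genotype."""
    a, b = g
    return FREQ[a] ** 2 if a == b else 2 * FREQ[a] * FREQ[b]

def explains(evidence, g1, g2):
    """True if the two genotypes together account exactly for the
    alleles observed in the mixture."""
    return set(g1) | set(g2) == evidence

def p_random_contributor(evidence, known):
    """P(evidence | known contributor plus one random unknown),
    summing Hardy-Weinberg probabilities over unknown genotypes."""
    return sum(genotype_prob(g)
               for g in combinations_with_replacement(FREQ, 2)
               if explains(evidence, known, g))

evidence = {"A", "B", "C"}   # alleles detected in the mixed sample
victim   = ("B", "C")        # known contributor's genotype
suspect  = ("A", "B")        # suspect's genotype

numerator   = 1.0 if explains(evidence, victim, suspect) else 0.0
denominator = p_random_contributor(evidence, victim)
print(round(numerator / denominator, 2))  # likelihood ratio -> 6.67
```

Real probabilistic genotyping systems model peak heights, degradation, and dropout across many loci and infer the contributors' genotypes statistically; this sketch shows only the bare likelihood-ratio skeleton that such systems report.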
|
{
"page_id": 60165639,
"source": null,
"title": "TrueAllele"
}
|
The molecular formula C8H11NO3 (molar mass: 169.17 g/mol, exact mass: 169.073893 u) may refer to: Ammonium mandelate Norepinephrine, or noradrenaline Oxidopamine, a dopamine derivative Pyridoxine
|
{
"page_id": 23662088,
"source": null,
"title": "C8H11NO3"
}
|
Shlomo Havlin (Hebrew: שלמה הבלין) (born July 21, 1942) is a professor in the Department of Physics at Bar-Ilan University, Ramat-Gan, Israel. He served as President of the Israel Physical Society (1996–1999), Dean of the Faculty of Exact Sciences (1999–2001), and chairman of the Department of Physics (1984–1988). In 2018 he won the Israel Prize for his accomplishments in physics. == Biography == Shlomo Havlin was born in Jerusalem, Israel (then part of the British Mandate of Palestine). He is named after his grandfather Rabbi Shlomo Zalman Havlin, founder of the Torat Emet yeshiva in Hebron. He is a second cousin of Prof. Shlomo Zalman Havlin, founder of the Department of Information Studies and Librarianship at Bar-Ilan University. He graduated from Bar-Ilan and Tel-Aviv Universities with highest distinction. He obtained an academic position at Bar-Ilan University in 1972, where he became a full professor of physics in 1984. During 1978–1979 he was a Royal Society Visiting Fellow at the University of Edinburgh, where he worked with Professors William Cochran and Roger Cowley. In 1984 he became Chair of the Physics Department at Bar-Ilan University, serving until 1988. During 1983–1984 and 1989–1991, Havlin was a visiting scientist at NIH, where he collaborated extensively with Drs. George Weiss, Ralph Nossal, and other members of NIH. During 1984–1985 and 1991–1992 he was a visiting professor at Boston University, where he collaborated with Professor H. Eugene Stanley and many others. Between 2016 and 2019 Havlin was a visiting professor at the Tokyo Institute of Technology, where he collaborated with Profs. Misako and Hideki Takayasu. == Centers and research impact == Havlin established four centers at Bar-Ilan: the Gonda-Goldschmiedt Medical Diagnostic Research Center (1994), the Minerva Center for Mesoscopics, Fractals and Neural Networks (1998), Science Beyond 2000 – Science Education Unit (1996), and the Israel Science Foundation National Center for Complex Networks
(2003). He was President of the Israel Physical Society (1996–1999) and Dean of the Faculty of Exact Sciences at Bar-Ilan University (1999–2001). Havlin has supervised more than 200 graduate students and postdocs and has collaborated with more than 400 scientists around the globe. He has published more than 700 articles and 11 books, and in 2018 he was one of the two most cited Israeli scientists. He is currently on the editorial boards of several scientific journals, including Fractals, Physica A, New Journal of Physics, and Research Letters in Physics, and is co-editor of Europhysics Letters. == Prizes and awards == Havlin has received numerous prizes for his research, including the Landau Prize for Outstanding Research in Physics (1988), the Humboldt Award, Germany (1992), the prize for best scientific paper of 2000 at Bar-Ilan University (2000), and the prize for best popular scientific paper from the Minister of Science, Israel (2002). He has also received the Nicholson Medal of the American Physical Society (2006), the Chaim Weizmann Prize for Exact Sciences (2009), the Julius Edgar Lilienfeld Prize for outstanding contributions to physics (2010), the Rothschild Prize for Physical and Chemical Sciences (2014), an honorary professorship at Beihang University, Beijing, China (2016), the Distinguished Scientist Award of the Chinese Academy of Sciences (2017), the Order of the Star of Italy from the President of Italy (2017), and the Israel Prize for Physics and Chemistry (2018). Professor Havlin has made many important contributions to science; the following describe his main contributions concerning randomness and complexity. == Main contributions == Disordered systems that are self-similar over a broad range of length scales are ubiquitous and often modeled by percolation-type models. The laws that describe transport processes or chemical reactions in these systems differ significantly from those in homogeneous systems. Havlin's earlier works, in which he discovered several of these important anomalies, had an enormous impact
on the development of the whole field and are summarized in the monograph "Diffusion and Reactions in Fractals and Disordered Systems", which he wrote together with his former graduate student Daniel ben-Avraham (Cambridge University Press, 2000). The book describes the anomalous physical laws discovered during 1980–2000 in fractals and disordered systems, many of them by Havlin and his collaborators. His review article (Adv. in Phys. (1987)) was cited more than 1100 times and was chosen by the journal's editors to be published again (Adv. in Phys. (2002)). In 2000, Havlin and his student Reuven Cohen, together with Daniel ben-Avraham, developed a novel percolation-type approach and derived the first theory of the stability of realistic complex networks, such as the Internet, under random breakdown (Phys. Rev. Lett. 85, 4626 (2000)) and intentional attack (Phys. Rev. Lett. 86, 3682 (2001)). This work is particularly useful for optimizing the stability of networks against intentional attacks and viruses. They also derived a novel result about the "small world" nature of complex networks, finding that the diameter of scale-free networks is significantly smaller, and therefore called them "ultrasmall worlds" (Phys. Rev. Lett. 90, 58701 (2003)). Since 2010, Havlin and collaborators have focused heavily on interdependent networks. His paper with collaborators developed the percolation theory of networks of networks (Nature, 464, 08932 (2010)) and initiated the current active research field of interdependent networks. In 2014, he introduced with collaborators the concept of recovery in percolation theory (Nature Physics, 10, 3438 (2014); Nature Comm., 2016). == Notes == == External links == Official website
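A toy version of the random-failure versus targeted-attack comparison studied by Cohen, ben-Avraham, and Havlin can be run on a simple preferential-attachment graph. This sketches the qualitative phenomenon only, not the analytical percolation derivation in the papers cited above, and the network size and removal fraction are arbitrary choices.

```python
# Scale-free networks are resilient to random node failure but fragile
# under attacks that target the highest-degree hubs. Toy demonstration
# with a simple preferential-attachment graph (parameters arbitrary).
import random
from collections import defaultdict

def preferential_attachment(n, m, rng):
    """Grow an approximately scale-free graph: each new node attaches
    to up to m targets chosen preferentially by degree."""
    targets = list(range(m))
    repeated = []                      # node repeated once per incident edge
    adj = defaultdict(set)
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # sample next targets from the degree-weighted node list
        targets = list({rng.choice(repeated) for _ in range(3 * m)})[:m]
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after removing nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

rng = random.Random(0)
adj = preferential_attachment(n=1000, m=3, rng=rng)
k = 50  # remove 5% of the nodes
hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
randoms = rng.sample(list(adj), k)
print(giant_component(adj, hubs) < giant_component(adj, randoms))  # expect True
```

The asymmetry arises because the hubs hold a scale-free network together: random removal rarely hits them, while a targeted attack removes exactly the nodes the giant component depends on.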
|
{
"page_id": 19074563,
"source": null,
"title": "Shlomo Havlin"
}
|
This page provides supplementary chemical data on beryllium oxide. == Material Safety Data Sheet == Beryllium Oxide MSDS from American Beryllia == Structure and properties == == Thermodynamic properties == == Spectral data == == References ==
|
{
"page_id": 3018251,
"source": null,
"title": "Beryllium oxide (data page)"
}
|
Cytolysin refers to a substance secreted by microorganisms, plants or animals that is specifically toxic to individual cells, in many cases causing their dissolution through lysis. Cytolysins that have a specific action for certain cells are named accordingly. For instance, the cytolysins responsible for the destruction of red blood cells, thereby liberating hemoglobin, are named hemolysins, and so on. Cytolysins may be involved in immunity as well as in venoms. Hemolysin is also used by certain bacteria, such as Listeria monocytogenes, to disrupt the phagosome membrane of macrophages and escape into the cytoplasm of the cell. == History and background == The term "Cytolysin" or "Cytolytic toxin" was first introduced by Alan Bernheimer to describe membrane damaging toxins (MDTs) that have cytolytic effects on cells. The first kind of cytolytic toxin discovered had hemolytic effects on the erythrocytes of certain sensitive species, such as humans. For this reason "Hemolysin" was first used to describe any MDT. In the 1960s certain MDTs were shown to be destructive to cells other than erythrocytes, such as leukocytes. The term "Cytolysin" was then introduced by Bernheimer to replace "Hemolysin". Cytolysins can damage membranes without lysing cells; the term "membrane damaging toxins" (MDTs) therefore describes the essential action of cytolysins. Cytolysins comprise more than 1/3 of all bacterial protein toxins. Bacterial protein toxins can be highly poisonous to humans. For example, botulinum toxin is 3×10⁵ times more toxic to humans than snake venom, and its toxic dose is only 0.8×10⁻⁸ mg. A wide variety of gram-positive and gram-negative bacteria use cytolysins as their primary weapon for causing disease, including Enterococcus faecalis, Staphylococcus and Clostridium perfringens. A diverse range of studies has been done on cytolysins. Since the 1970s, more than 40 new cytolysins have been discovered and grouped into different families. 
At the genetic level, the genetic structures
|
{
"page_id": 7671308,
"source": null,
"title": "Cytolysin"
}
|
of about 70 cytolysin proteins have been studied and published. The detailed process of membrane damage has also been surveyed. Rossjohn et al. presented the crystal structure of perfringolysin O, a thiol-activated cytolysin, which creates membrane holes in eukaryotic cells. A detailed model of membrane channel formation that reveals the membrane insertion mechanism was constructed. Shatursky et al. studied the membrane insertion mechanism of perfringolysin O (PFO), a cholesterol-dependent pore-forming cytolysin produced by pathogenic Clostridium perfringens. Instead of using a single amphipathic β hairpin per polypeptide, the PFO monomer contains two amphipathic β hairpins, each of which spans the whole membrane. Lally et al. focused on the membrane-penetrating models of RTX toxins, a family of MDTs secreted by many gram-negative bacteria. The insertion and transport process of RTX proteins into target lipid membranes was revealed. === Classification === Membrane-damaging cytolysins can be classified into three types based on their damaging mechanism: Cytolysins which attack eukaryotic cells' bilayer membranes by dissolving their phospholipids. Representative cytolysins include C. perfringens α-toxin (phospholipase C), S. aureus β-toxin (sphingomyelinase C) and the Vibrio damsela cytolysin (phospholipase D). Macfarlane et al. recognized the molecular mechanism of C. perfringens α-toxin in 1941, which marked the pioneering work on bacterial protein toxins. Cytolysins which attack the hydrophobic regions of membranes and act like "detergents". Examples of this type include the 26-amino-acid δ-toxins from Staphylococcus aureus, S. haemolyticus and S. lugdunensis, the Bacillus subtilis toxin and the cytolysin from Pseudomonas aeruginosa. Cytolysins which form pores in target cells' membranes. These cytolysins are also known as pore-forming toxins (PFTs) and comprise the largest portion of all cytolysins. 
Examples of this type include perfringolysin O from the bacterium Clostridium perfringens, hemolysin from Escherichia coli, and listeriolysin from Listeria monocytogenes. Targets of this type of cytolysin range from general cell membranes to more specific targets,
|
{
"page_id": 7671308,
"source": null,
"title": "Cytolysin"
}
|
such as cholesterol and phagocyte membranes. == Pore forming cytolysins == Pore-forming cytolysins (PFCs) comprise nearly 65% of all membrane-damaging cytolysins. The first pore-forming cytolysin was discovered by Manfred Mayer in 1972, in the form of the C5–C9 complement insertion into erythrocytes. PFCs can be produced by a wide variety of sources, such as bacteria, fungi and even plants. The pathogenic process of PFCs normally involves forming channels or pores in the target cells' membranes. Note that the pores can have many structures. A porin-like structure allows molecules of certain sizes to pass through. Electric fields distribute unevenly across the pore and enable the selection of molecules that can get through. This type of structure is seen in staphylococcal α-hemolysin. A pore can also be formed through membrane fusion. Controlled by Ca²⁺, the membrane fusion of vesicles forms water-filled pores from proteolipids. Pore-forming cytolysins such as perforin are used by cytotoxic killer T and NK cells to destroy infected cells. === Pore forming process === A more complex pore-formation process involves the oligomerization of several PFC monomers. The pore-forming process comprises three basic steps. First, the cytolysins are produced by certain microorganisms. Sometimes the producer organism needs to create a pore in its own membrane to release such cytolysins, as in the case of the colicins produced by Escherichia coli. In this step, cytolysins are released as protein monomers in a water-soluble state. Note that cytolysins are often toxic to their producing hosts as well. For example, colicins consume the nucleic acids of cells using several enzymes. To prevent such toxicity, host cells produce immunity proteins that bind cytolysins before they cause any internal damage. In the second step, cytolysins adhere to target cell membranes by matching the "receptors" on the membranes. Most receptors are proteins, but they can be other
|
{
"page_id": 7671308,
"source": null,
"title": "Cytolysin"
}
|
molecules as well, such as lipids or sugars. With the help of receptors, cytolysin monomers combine with each other to form clusters of oligomers. During this stage, cytolysins complete the transition from the water-soluble monomer state to the oligomer state. Finally, the formed cytolysin clusters penetrate the target cells' membranes and form membrane pores. The size of these pores varies from 1–2 nm (S. aureus α-toxin, E. coli α-hemolysin, Aeromonas aerolysin) to 25–30 nm (streptolysin O, pneumolysin). Depending on how the pores are formed, pore-forming cytolysins fall into two categories. Those forming pores with α-helices are named α-PFTs (pore-forming toxins); those forming pores with β-barrel structures are named β-PFTs. Some of the common α-PFTs and β-PFTs are listed in the table below. === Consequences of cytolysins === The lethal effects of pore-forming cytolysins result from causing influx and efflux disorders in a single cell. Pores that allow ions such as Na⁺ to pass through create an imbalance in the target cell that exceeds its ion-balancing capacity. Attacked cells therefore swell to lysis. When target cell membranes are destroyed, the bacteria which produce the cytolysins can consume the intracellular contents of the cell, such as iron and cytokines. Enzymes that decompose the target cells' critical structures can also enter the cells without obstruction. == Cholesterol-dependent cytolysin == One specific type of cytolysin is the cholesterol-dependent cytolysin (CDC). CDCs exist in many Gram-positive bacteria. The pore-forming process of CDCs requires the presence of cholesterol on target-cell membranes. The pore size created by CDCs is large (25–30 nm) owing to the oligomeric process of the cytolysins. Note that cholesterol is not always necessary during the adhesion phase. For example, intermedilysin requires only the presence of protein receptors when attaching to target cells; cholesterol is required at pore formation. 
The formation of pores through CDCs involves
|
{
"page_id": 7671308,
"source": null,
"title": "Cytolysin"
}
|
an additional step beyond the steps analyzed above: the water-soluble monomers oligomerize to form an intermediate product named the "pre-pore" complex, and a β-barrel is then inserted into the membrane. == See also == Hemolysis (microbiology) Thiol-activated cytolysin Sea anemone cytotoxic protein == References ==
|
{
"page_id": 7671308,
"source": null,
"title": "Cytolysin"
}
|
Petrus Peregrinus de Maricourt (Latin), Pierre Pelerin de Maricourt (French), or Peter Peregrinus of Maricourt (fl. 1269), was a French mathematician, physicist, and writer who conducted experiments on magnetism and wrote the first extant treatise describing the properties of magnets. His work is particularly noted for containing the earliest detailed discussion of freely pivoting compass needles, a fundamental component of the dry compass soon to appear in medieval navigation. He also wrote a treatise on the construction and use of a universal astrolabe. Peregrinus's text on the magnet is entitled in many of its manuscripts Epistola Petri Peregrini de Maricourt ad Sygerum de Foucaucourt, militem, de magnete ("Letter of Peter Peregrinus of Maricourt to Sygerus of Foucaucourt, Soldier, on the Magnet"), but it is more commonly known by its short title, Epistola de magnete ("Letter on the Magnet"). The letter is addressed to an otherwise unknown Picard countryman named Sygerus (Sigerus, Ysaerus) of Foucaucourt, possibly a friend and neighbor of the author; Foucaucourt borders on the home area of Peregrinus around Maricourt, in the present-day department of the Somme, near Péronne. In only one of the 39 surviving manuscript copies does the letter also bear the closing legend Actum in castris in obsidione Luceriæ anno domini 1269º 8º die augusti ("Done in camp during the siege of Lucera, August 8, 1269"), which might indicate that Peregrinus was in the army of Charles, duke of Anjou and king of Sicily, who in 1269 laid siege to the city of Lucera. However, given that only one manuscript attests this, the evidence is weak. There is no indication of why Peter received the sobriquet Peregrinus (or "pilgrim"), but it suggests that he may have been either a pilgrim at one point or a crusader; and the attack on Lucera of 1269 had
|
{
"page_id": 2756109,
"source": null,
"title": "Petrus Peregrinus de Maricourt"
}
|
been sanctioned as a crusade by the Pope, so Petrus Peregrinus may have served in that army. "You must realize, dearest friend," Peregrinus writes, "that while the investigator in this subject must understand nature and not be ignorant of the celestial motions, he must also be very diligent in the use of his own hands, so that through the operation of this stone he may show wonderful effects." == The content of the Epistola de magnete == In his letter of 1269, Peregrinus explains how to identify the poles of a compass. He also describes the laws of magnetic attraction and repulsion. The letter also contains a description of an experiment with a repaired magnet, as well as of a number of compasses, with one of which "you will be able to direct your steps to cities and islands and to any place whatever in the world." Indeed, the increasing perfection of magnetic compasses during the thirteenth century allowed navigators such as Vandino and Ugolino Vivaldi to strike out on voyages to unknown lands. The Epistola de magnete is divided into two parts. Part One (10 chapters): This section serves as a model of inductive reasoning based on definite experiences, setting forth the fundamental laws of magnetism. He did not discover these laws, but presented them in logical order. Part One discusses the physical (but not the occult) properties of the lodestone and provides the first extant written account of the polarity of magnets. He was thus the first to use the word "pole" in this context. He provides methods for determining the north and south poles of a magnet, and he describes the effects magnets have upon one another, showing that like poles repel each other and unlike poles attract each other. He also treats the attraction
|
{
"page_id": 2756109,
"source": null,
"title": "Petrus Peregrinus de Maricourt"
}
|
of iron by lodestones, the magnetization of iron by lodestones, and the ability to reverse the polarity in such an induced magnet. Peregrinus attributed the Earth's magnetism to the action of celestial poles, rather than to the terrestrial poles of the planet itself. Part Two (three chapters): This section describes three devices that utilize the properties of magnets. He treats the practical applications of magnets, describing the "wet" floating compass as an instrument, and a "dry" pivoted compass in some detail. He also attempts to prove that with the help of magnets it is possible to realize perpetual motion (see History of perpetual motion machines). His device is a toothed wheel which passes near a lodestone so that the teeth are alternately attracted by one pole and repelled by the other. == The universal astrolabe text == The Nova Compositio Astrolabii Particularis (found in only four manuscripts) describes the construction and use of a universal astrolabe which could be used at a variety of latitudes without changing the plates. Unlike al-Zarqālī's more famous universal astrolabe, in which vertical halves of the heavens were projected onto a plane through the poles, this one had both the northern and southern hemispheres projected onto a plane through the equator (which was also the limit of projection). There are no known surviving astrolabes based on this treatise. The use of such an astrolabe is very complicated, and since it is probable that most sophisticated users were not frequent travelers, they were likely happier with the traditional (and simpler) stereographic planispheric astrolabe. == Roger Bacon == The literature often mentions that Peregrinus was praised by Roger Bacon, who called him a "perfect mathematician" and one who valued experience over argument. But the association of the praise with Peregrinus appears only in a marginal gloss to
|
{
"page_id": 2756109,
"source": null,
"title": "Petrus Peregrinus de Maricourt"
}
|
Bacon's Opus tertium, and only in one of the five manuscripts used in the critical edition, which leads us to conclude that it was a later comment added by someone else. That Bacon's praise was for Peregrinus is therefore open to serious debate. == Legacy == The influence of Peregrinus's astrolabe was virtually nil. His reputation derives mainly from his work on magnetism. The De magnete became a very popular work from the Middle Ages onwards, as witnessed by the large number of manuscript copies. The first printed edition of it was issued at Augsburg, in 1558, by Achilles Gasser. In 1562, Jean Taisnier published from the press of Johann Birkmann of Cologne a work entitled Opusculum perpetua memoria dignissimum, de natura magnetis et ejus effectibus, Item de motu continuo. This is considered a piece of plagiarism, as Taisnier presents, as though his own, the Epistola de magnete of Peregrinus and a treatise on the fall of bodies by Giambattista Benedetti. William Gilbert acknowledged his debt to Peregrinus and incorporated this thirteenth-century scientist's experiments on magnetism into his own treatise, called De magnete. The Epistola de magnete was later issued by Guillaume Libri (Histoire des sciences mathématiques en Italie, vol. 2 [Paris, 1838], pp. 487–505), but, based on only one manuscript, this edition was full of defects; corrected editions were published by Timoteo Bertelli (in Bulletino di bibliografia e di storia delle scienze matematiche e fisiche pubblicata da B. Boncompagni, 1 (1868), 70–80) and G. Hellmann (Rara magnetica 1269–1599 [Neudrucke von Schriften und Karten über Meteorologie und Erdmagnetismus, 10], [Berlin, 1898]). The modern critical edition was prepared by Loris Sturlese and appears in Petrus Peregrinus de Maricourt, Opera (Pisa, 1995), pp. 63–89. A translation into English has been made by Silvanus P. Thompson ("Epistle of Peter Peregrinus of Maricourt, to
|
{
"page_id": 2756109,
"source": null,
"title": "Petrus Peregrinus de Maricourt"
}
|
Sygerus of Foucaucourt, Soldier, concerning the Magnet", [London: Chiswick Press, 1902]); by Brother Arnold [= Joseph Charles Mertens] ("The Letter of Petrus Peregrinus on the Magnet, A.D. 1269", with introductory note by Brother Potamian [= M. F. O'Reilly], [New York, 1904]); and by H. D. Harradon ("Some Early Contributions to the History of Geomagnetism - I," in Terrestrial Magnetism and Atmospheric Electricity [now Journal of Geophysical Research] 48 [1943], 3–17 [text pp. 6–17]). The modern critical edition of the astrolabe text was prepared by Ron B. Thomson and appears in Petrus Peregrinus de Maricourt, Opera (Pisa, 1995), pp. 119–196. The philosopher and scientist Charles S. Peirce made a thorough study of the Epistle of Petrus Peregrinus on the lodestone (MS. No. 7378; see Eisele, C. (1957), The Charles S. Peirce–Simon Newcomb Correspondence, Proceedings of the American Philosophical Society, Vol. 101, No. 5, p. 411). The European Geosciences Union (EGU) established the Petrus Peregrinus Medal in recognition of outstanding scientific contributions in the field of magnetism. == See also == History of geomagnetism History of electromagnetic theory == Notes == == References == This article incorporates text from a publication now in the public domain: Herbermann, Charles, ed. (1913). "Pierre de Maricourt". Catholic Encyclopedia. New York: Robert Appleton Company. == External links == Encyclopædia Britannica: Peter Peregrinus of Maricourt Catholic Encyclopedia: Pierre de Maricourt Peter Peregrinus at IET Archives The letter of Petrus Peregrinus on the magnet, A.D. 1269 (translated 1904) Andreas Kleinert, Wie funktionierte das Perpetuum mobile des Petrus Peregrinus?, in NTM N.S. 11 (2003), 155–170, abstract
|
{
"page_id": 2756109,
"source": null,
"title": "Petrus Peregrinus de Maricourt"
}
|
SimThyr is a free continuous dynamic simulation program for the pituitary-thyroid feedback control system. The open-source program is based on a nonlinear model of thyroid homeostasis. In addition to simulations in the time domain the software supports various methods of sensitivity analysis. Its simulation engine is multi-threaded and supports multiple processor cores. SimThyr provides a GUI, which allows for visualising time series, modifying constant structure parameters of the feedback loop (e.g. for simulation of certain diseases), storing parameter sets as XML files (referred to as "scenarios" in the software) and exporting results of simulations in various formats that are suitable for statistical software. SimThyr is intended for both educational purposes and in-silico research. == Mathematical model == The underlying model of thyroid homeostasis is based on fundamental biochemical, physiological and pharmacological principles, e.g. Michaelis-Menten kinetics, non-competitive inhibition and empirically justified kinetic parameters. The model has been validated in healthy controls and in cohorts of patients with hypothyroidism and thyrotoxicosis. == Scientific uses == Multiple studies have employed SimThyr for in silico research on the control of thyroid function. The original version was developed to check hypotheses about the generation of pulsatile TSH release. Later and expanded versions of the software were used to develop the hypothesis of the TSH-T3 shunt in the hypothalamus-pituitary-thyroid axis, to assess the validity of calculated parameters of thyroid homeostasis (including SPINA-GT and SPINA-GD) and to study allostatic mechanisms leading to non-thyroidal illness syndrome. SimThyr was also used to show that the release rate of thyrotropin is controlled by multiple factors other than T4 and that the relation between free T4 and TSH may be different in euthyroidism, hypothyroidism and thyrotoxicosis. 
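The flavor of such a nonlinear feedback model can be illustrated with a deliberately simplified two-variable sketch: TSH release is non-competitively inhibited by T4, T4 secretion saturates in TSH following Michaelis-Menten kinetics, and both hormones are cleared linearly. All parameter values below are hypothetical illustrations, not the published SimThyr constants.

```python
# Toy pituitary-thyroid feedback loop integrated with the Euler method.
# Hypothetical parameters for illustration only; the actual SimThyr model
# is far more detailed (transport delays, plasma protein binding, the
# TSH-T3 shunt, etc.).
def simulate(steps=20000, dt=0.01):
    tsh, t4 = 1.0, 1.0  # arbitrary initial hormone levels
    for _ in range(steps):
        # Pituitary: TSH release non-competitively inhibited by T4,
        # with linear clearance of TSH.
        d_tsh = 10.0 / (1.0 + 5.0 * t4) - 1.0 * tsh
        # Thyroid: T4 secretion saturates in TSH (Michaelis-Menten),
        # with linear clearance of T4.
        d_t4 = 3.0 * tsh / (1.0 + tsh) - 0.5 * t4
        tsh += d_tsh * dt
        t4 += d_t4 * dt
    return tsh, t4

tsh, t4 = simulate()
print(f"steady state: TSH ~ {tsh:.2f}, T4 ~ {t4:.2f}")
```

Changing a single structure parameter, such as lowering the secretion capacity in the `d_t4` term, shifts the equilibrium and qualitatively mimics how parameter modification in such a model can reproduce hypothyroid states.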
== Public perception, reception and discussion of the software == SimThyr is free and open-source software. This ensures that the source code is available, which
|
{
"page_id": 59248142,
"source": null,
"title": "SimThyr"
}
|
facilitates scientific discussion and review of the underlying model. Additionally, the fact that it is freely available may result in economic benefits. The software provides an editor that enables users to modify most structure parameters of the information processing structure. This functionality supports the simulation of several functional diseases of the thyroid and the pituitary gland. Parameter sets may be stored as MIRIAM- and MIASE-compliant XML files. On the other hand, the complexity of the user interface and the lack of the ability to model treatment effects have been criticized. == See also == Hypothalamic–pituitary–thyroid axis Thyroid function tests == References == == External links == Official website of the SimThyr project Curated information at Zenodo Curated information at SciCrunch
|
{
"page_id": 59248142,
"source": null,
"title": "SimThyr"
}
|
The molecular formula C11H15BrN2O3 (molar mass: 303.15 g/mol, exact mass: 302.0266 u) may refer to: Butallylonal Narcobarbital
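The quoted molar mass follows directly from the standard atomic weights of the constituent elements; a quick sketch of the arithmetic (using rounded atomic-weight values):

```python
import re

# Standard atomic weights in g/mol (rounded); only the elements needed here.
ATOMIC_WEIGHTS = {"C": 12.0107, "H": 1.00794, "Br": 79.904, "N": 14.0067, "O": 15.9994}

def molar_mass(formula: str) -> float:
    """Molar mass in g/mol for a simple Hill-notation formula (no parentheses)."""
    total = 0.0
    # Each match is an element symbol followed by an optional count,
    # e.g. "C11" -> ("C", "11"), "Br" -> ("Br", "").
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHTS[element] * (int(count) if count else 1)
    return total

print(round(molar_mass("C11H15BrN2O3"), 2))  # → 303.15
```

The regex-based parser is a minimal sketch that handles flat formulas only; nested groups such as parentheses would need a more complete grammar.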
|
{
"page_id": 61214222,
"source": null,
"title": "C11H15BrN2O3"
}
|
Haplogroup pre-JT is a human mitochondrial DNA (mtDNA) haplogroup. It is also called R2'JT. == Origin == Haplogroup pre-JT is a descendant of haplogroup R. It is characterised by the mutation T4216C. The pre-JT clade has two direct descendant lineages, haplogroup JT and haplogroup R2. == Distribution == According to YFull MTree, haplogroup R2'JT has allegedly been sequenced in at least three individuals, among whom one came from ancient Egypt and one from modern Denmark. However, Ian Logan interpreted the Denmark sample, on the basis of its mutations, as a member of T1a. One carrier of haplogroup R2'JT was found in an in-depth study of "108 Scandinavian Neolithic individuals". == Subclades == Its major subclade is haplogroup JT, which further divides into haplogroup J and haplogroup T. Its other subclade is haplogroup R2, which has such branches as R2a, R2b, and R2c. === Tree === R2'JT R2 JT J T == See also == Genealogical DNA test Genetic genealogy Human mitochondrial genetics Population genetics == References == == External links == Ian Logan's Mitochondrial DNA Site
|
{
"page_id": 10685970,
"source": null,
"title": "Haplogroup pre-JT"
}
|
In psychology, decision-making (also spelled decision making and decisionmaking) is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a reasoning process based on assumptions of values, preferences and beliefs of the decision-maker. Every decision-making process produces a final choice, which may or may not prompt action. Research about decision-making is also published under the label problem solving, particularly in European psychological research. == Overview == Decision-making can be regarded as a problem-solving activity yielding a solution deemed to be optimal, or at least satisfactory. It is therefore a process which can be more or less rational or irrational and can be based on explicit or tacit knowledge and beliefs. Tacit knowledge is often used to fill the gaps in complex decision-making processes. Usually, both of these types of knowledge, tacit and explicit, are used together in the decision-making process. Human performance has been the subject of active research from several perspectives: Psychological: examining individual decisions in the context of a set of needs, preferences and values the individual has or seeks. Cognitive: the decision-making process is regarded as a continuous process integrated in the interaction with the environment. Normative: the analysis of individual decisions concerned with the logic of decision-making, or communicative rationality, and the invariant choice it leads to. A major part of decision-making involves the analysis of a finite set of alternatives described in terms of evaluative criteria. Then the task might be to rank these alternatives in terms of how attractive they are to the decision-maker(s) when all the criteria are considered simultaneously. 
Another task might be to find the best alternative or to determine the relative total priority of each alternative
|
{
"page_id": 265752,
"source": null,
"title": "Decision-making"
}
|
(for instance, if alternatives represent projects competing for funds) when all the criteria are considered simultaneously. Solving such problems is the focus of multiple-criteria decision analysis (MCDA). This area of decision-making, although long established, has attracted the interest of many researchers and practitioners and is still highly debated as there are many MCDA methods which may yield very different results when they are applied to exactly the same data. This leads to the formulation of a decision-making paradox. Logical decision-making is an important part of all science-based professions, where specialists apply their knowledge in a given area to make informed decisions. For example, medical decision-making often involves a diagnosis and the selection of appropriate treatment. But naturalistic decision-making research shows that in situations with higher time pressure, higher stakes, or increased ambiguities, experts may use intuitive decision-making rather than structured approaches. They may follow a recognition-primed decision that fits their experience, and arrive at a course of action without weighing alternatives. The decision-maker's environment can play a part in the decision-making process. For example, environmental complexity is a factor that influences cognitive function. A complex environment is an environment with a large number of different possible states which come and go over time. Studies done at the University of Colorado have shown that more complex environments correlate with higher cognitive function, which means that a decision can be influenced by the location. One experiment measured complexity in a room by the number of small objects and appliances present; a simple room had less of those things. Cognitive function was greatly affected by the higher measure of environmental complexity, making it easier to think about the situation and make a better decision. == Problem solving vs. 
decision making == It is important to differentiate between problem solving, or problem analysis, and
|
{
"page_id": 265752,
"source": null,
"title": "Decision-making"
}
|
decision-making. Problem solving is the process of investigating the given information and finding all possible solutions through invention or discovery. Traditionally, it is argued that problem solving is a step towards decision making, so that the information gathered in that process may be used towards decision-making.

Characteristics of problem-solving:
- Problems are merely deviations from performance standards.
- Problems must be precisely identified and described.
- Problems are caused by a change from a distinctive feature.
- Something can always be used to distinguish between what has and has not been affected by a cause.
- Causes of problems can be deduced from relevant changes found in analyzing the problem.
- The most likely cause of a problem is the one that exactly explains all the facts while having the fewest (or weakest) assumptions (Occam's razor).

Characteristics of decision-making:
- Objectives must first be established.
- Objectives must be classified and placed in order of importance.
- Alternative actions must be developed.
- The alternatives must be evaluated against all the objectives.
- The alternative that is able to achieve all the objectives is the tentative decision.
- The tentative decision is evaluated for more possible consequences.
- Decisive actions are taken, and additional actions are taken to prevent any adverse consequences from becoming problems and starting both systems (problem analysis and decision-making) all over again.

There are steps that are generally followed that result in a decision model that can be used to determine an optimal production plan. In a situation featuring conflict, role-playing may be helpful for predicting decisions to be made by involved parties. When participants do not agree on what the future will look like, Decision-making Under Deep Uncertainty may play a role. === Analysis paralysis === When a group or individual is unable to make it through the problem-solving step on the way to making a decision, they
|
{
"page_id": 265752,
"source": null,
"title": "Decision-making"
}
|
could be experiencing analysis paralysis. Analysis paralysis is the state that a person enters when they are unable to make a decision, in effect paralyzing the outcome. Some of the main causes of analysis paralysis are the overwhelming flood of incoming data and the tendency to overanalyze the situation at hand. There are said to be three different types of analysis paralysis. The first is analysis process paralysis. This type of paralysis is often spoken of as a cyclical process. One is unable to make a decision because they get stuck going over the information again and again for fear of making the wrong decision. The second is decision precision paralysis. This paralysis is cyclical, just like the first one, but instead of going over the same information, the decision-maker will find new questions and information from their analysis, which will lead them to explore further possibilities rather than making a decision. The third is risk uncertainty paralysis. This paralysis occurs when the decision-maker wants to eliminate any uncertainty but the examination of the provided information is unable to get rid of all uncertainty. === Extinction by instinct === On the opposite side of analysis paralysis is the phenomenon called extinction by instinct. Extinction by instinct is the state that a person is in when they make careless decisions without detailed planning or a thorough systematic process. Extinction by instinct can possibly be fixed by implementing a structural system, like checks and balances, into a group or one's life. Analysis paralysis is the exact opposite, where a group's schedule could be saturated by too much of a structural checks-and-balances system. Groupthink is another occurrence that falls under the idea of extinction by instinct. Groupthink is when members in a group become more involved in the "value of the
|
{
"page_id": 265752,
"source": null,
"title": "Decision-making"
}
|
group (and their being part of it) higher than anything else"; thus creating a habit of making decisions quickly and unanimously. In other words, a group stuck in groupthink is participating in the phenomenon of extinction by instinct. === Information overload === Information overload is "a gap between the volume of information and the tools we have to assimilate" it. Information used in decision-making is meant to reduce or eliminate uncertainty. Excessive information affects problem processing and tasking, which in turn affects decision-making. Psychologist George Armitage Miller suggests that humans' decision-making becomes inhibited because human brains can only hold a limited amount of information. Crystal C. Hall and colleagues described an "illusion of knowledge": when individuals encounter too much knowledge, it can interfere with their ability to make rational decisions. Other names for information overload are information anxiety, information explosion, infobesity, and infoxication. === Decision fatigue === Decision fatigue is when a sizable amount of decision-making leads to a decline in decision-making skills. People who make decisions over an extended period of time begin to lose the mental energy needed to analyze all possible solutions. Impulsive decision-making and decision avoidance are two possible paths that extend from decision fatigue. Impulse decisions are made more often when a person is tired of analyzing situations or solutions; the solution they choose is to act, not think. Decision avoidance is when a person evades the situation entirely by never making a decision. Decision avoidance is different from analysis paralysis because this sensation is about avoiding the situation entirely, while analysis paralysis involves continually looking at the decisions to be made while still being unable to make a choice. === Post-decision analysis === Evaluation and analysis of past decisions are complementary to decision-making. 
See also mental accounting and Postmortem documentation. == Neuroscience
== Decision-making is a region of intense study in the fields of systems neuroscience and cognitive neuroscience. Several brain structures, including the anterior cingulate cortex (ACC), orbitofrontal cortex, and the overlapping ventromedial prefrontal cortex, are believed to be involved in decision-making processes. A neuroimaging study found distinctive patterns of neural activation in these regions depending on whether decisions were made on the basis of perceived personal volition or following directions from someone else. Patients with damage to the ventromedial prefrontal cortex have difficulty making advantageous decisions. A common laboratory paradigm for studying neural decision-making is the two-alternative forced choice task (2AFC), in which a subject has to choose between two alternatives within a certain time. A study of a two-alternative forced choice task involving rhesus monkeys found that neurons in the parietal cortex not only represent the formation of a decision but also signal the degree of certainty (or "confidence") associated with the decision. A 2012 study found that rats and humans can accumulate incoming sensory evidence to make statistically optimal decisions. Another study found that lesions to the ACC in the macaque resulted in impaired decision-making in the long run of reinforcement-guided tasks, suggesting that the ACC may be involved in evaluating past reinforcement information and guiding future action. It has recently been argued that the development of formal frameworks will allow neuroscientists to study richer and more naturalistic paradigms than simple 2AFC decision tasks; in particular, such decisions may involve planning and information search across temporally extended environments. === Emotions === Emotion appears able to aid the decision-making process. Decision-making often occurs in the face of uncertainty about whether one's choices will lead to benefit or harm (see also Risk).
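The evidence-accumulation account of 2AFC decisions described above is commonly formalized as a drift-diffusion (bounded random walk) process. The following is a minimal illustrative sketch, not the specific model fitted in the cited studies; all parameter values here are arbitrary assumptions.

```python
import random

def drift_diffusion_trial(drift=0.1, noise=1.0, threshold=3.0, dt=0.01, seed=None):
    """Accumulate noisy evidence until one of two bounds is reached.

    Returns (choice, decision_time): 'A' if the upper bound is hit
    first, 'B' if the lower bound is hit first.
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # Deterministic drift toward the correct answer plus Gaussian noise.
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("A" if evidence > 0 else "B"), t

# With a positive drift rate, choice "A" should dominate over many trials.
choices = [drift_diffusion_trial(seed=i)[0] for i in range(500)]
print(choices.count("A") / len(choices))
```

Raising the threshold slows decisions but makes them more accurate, the speed-accuracy tradeoff these tasks are used to probe.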
The somatic marker hypothesis is a neurobiological theory of how decisions are made in the face
of uncertain outcomes. This theory holds that such decisions are aided by emotions, in the form of bodily states, that are elicited during the deliberation of future consequences and that mark different options for behavior as being advantageous or disadvantageous. This process involves an interplay between neural systems that elicit emotional/bodily states and neural systems that map these emotional/bodily states. A recent lesion mapping study of 152 patients with focal brain lesions conducted by Aron K. Barbey and colleagues provided evidence to help discover the neural mechanisms of emotional intelligence. == Decision-making techniques == Decision-making techniques can be separated into two broad categories: group decision-making techniques and individual decision-making techniques. Individual decision-making techniques can also often be applied by a group. === Group === Consensus decision-making tries to avoid "winners" and "losers". Consensus requires that a majority approve a given course of action, but that the minority agree to go along with the course of action. In other words, if the minority opposes the course of action, consensus requires that the course of action be modified to remove objectionable features. Voting-based methods: Majority requires support from more than 50% of the members of the group. Thus, the bar for action is lower than with consensus. See also Condorcet method. Plurality, where the largest faction in a group decides, even if it falls short of a majority. Score voting (or range voting) lets each member score one or more of the available options, specifying both preference and intensity of preference information. The option with the highest total or average is chosen. This method has experimentally been shown to produce the lowest Bayesian regret among common voting methods, even when voters are strategic. It addresses issues of voting paradox and majority rule. See also approval voting. Quadratic voting allows participants to cast
their preference and intensity of preference for each decision (as opposed to a simple for or against decision). As in score voting, it addresses issues of voting paradox and majority rule. Delphi method is a structured communication technique for groups, originally developed for collaborative forecasting and since also used for policy making. Dotmocracy is a facilitation method that relies on the use of special forms called Dotmocracy sheets, which allow large groups to collectively brainstorm and recognize agreement on an unlimited number of ideas they have each written. Participative decision-making occurs when an authority opens up the decision-making process to a group of people for a collaborative effort. Decision engineering uses a visual map of the decision-making process based on system dynamics and can be automated through a decision modeling tool, integrating big data, machine learning, and expert knowledge as appropriate. === Individual === Decisional balance sheet: listing the advantages and disadvantages (benefits and costs, pros and cons) of each option, as suggested by Plato's Protagoras and by Benjamin Franklin. Expected-value optimization: choosing the alternative with the highest probability-weighted utility, possibly with some consideration for risk aversion. This may involve considering the opportunity cost of different alternatives. See also Decision analysis and Decision theory. Satisficing: examining alternatives only until the first acceptable one is found. The opposite is maximizing or optimizing, in which many or all alternatives are examined in order to find the best option. Acquiescing to a person in authority or an "expert"; "just following orders". Anti-authoritarianism: taking the action most contrary to the advice of mistrusted authorities. Flipism e.g.
flipping a coin, cutting a deck of playing cards, and other random or coincidence methods – or prayer, tarot cards, astrology, augurs, revelation, or other forms of divination, superstition or pseudoscience. Automated decision support:
setting up criteria for automated decisions. Decision support systems: using decision-making software when faced with highly complex decisions or when considering many stakeholders, categories, or other factors that affect decisions. Decision coaching refers to support given by a health-care professional to assist a person when making a health-related or medical decision. Decision coaching is an interactive process in which both the health professional and the patient take an active part in the decision-making process. == Steps == A variety of researchers have formulated similar prescriptive steps aimed at improving decision-making. === GOFER === In the 1980s, psychologist Leon Mann and colleagues developed a decision-making process called GOFER, which they taught to adolescents, as summarized in the book Teaching Decision Making To Adolescents. The process was based on extensive earlier research conducted with psychologist Irving Janis. GOFER is an acronym for five decision-making steps: Goals clarification: Survey values and objectives. Options generation: Consider a wide range of alternative actions. Fact-finding: Search for information. Consideration of Effects: Weigh the positive and negative consequences of the options. Review and implementation: Plan how to review the options and implement them. === Other === In 2007, Pam Brown of Singleton Hospital in Swansea, Wales, divided the decision-making process into seven steps: Outline the goal and outcome. Gather data. Develop alternatives (i.e., brainstorming). List the pros and cons of each alternative. Make the decision. Immediately take action to implement it. Learn from and reflect on the decision.
In 2008, Kristina Guo published the DECIDE model of decision-making, which has six parts: Define the problem Establish or Enumerate all the criteria (constraints) Consider or Collect all the alternatives Identify the best alternative Develop and implement a plan of action Evaluate and monitor the solution and examine feedback when necessary In 2009, professor John Pijanowski described how the Arkansas Program, an ethics curriculum
at the University of Arkansas, used eight stages of moral decision-making based on the work of James Rest:: 6 Establishing community: Create and nurture the relationships, norms, and procedures that will influence how problems are understood and communicated. This stage takes place prior to and during a moral dilemma. Perception: Recognize that a problem exists. Interpretation: Identify competing explanations for the problem, and evaluate the drivers behind those interpretations. Judgment: Sift through various possible actions or responses and determine which is more justifiable. Motivation: Examine the competing commitments which may distract from a more moral course of action and then prioritize and commit to moral values over other personal, institutional or social values. Action: Follow through with action that supports the more justified decision. Reflection in action. Reflection on action. === Group stages === There are four stages or phases that should be involved in all group decision-making: Orientation. Members meet for the first time and start to get to know each other. Conflict. Once group members become familiar with each other, disputes, little fights and arguments occur. Group members eventually work it out. Emergence. The group begins to clear up vague opinions by talking about them. Reinforcement. Members finally make a decision and provide justification for it. Establishing critical norms in a group is said to improve the quality of decisions, while majority opinions (called consensus norms) do not. Conflicts in socialization are divided into functional and dysfunctional types. Functional conflict mostly involves questioning the assumptions managers make in their decision-making, while dysfunctional conflict involves personal attacks and other actions that decrease team effectiveness. Functional conflict is the more useful kind for achieving higher-quality decision-making, because it increases team knowledge and shared understanding.
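Expected-value optimization, listed earlier among the individual decision-making techniques, can be made concrete as a small calculation. The alternatives, outcome probabilities, and utilities below are invented purely for illustration.

```python
# Each alternative maps to its possible (probability, utility) outcomes.
# All numbers here are invented purely for illustration.
alternatives = {
    "launch now":    [(0.6, 100), (0.4, -50)],
    "delay a month": [(0.9, 60), (0.1, -10)],
    "cancel":        [(1.0, 0)],
}

def expected_value(outcomes):
    """Probability-weighted utility of one alternative."""
    return sum(p * u for p, u in outcomes)

# Pick the alternative with the highest expected value.
best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
print(best)  # picks "delay a month"
```

A risk-averse decision-maker might instead apply a concave utility function to the payoffs before weighting, which can change the ranking.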
== Rational and irrational == In economics, it is
thought that if humans are rational and free to make their own decisions, then they would behave according to rational choice theory.: 368–370 Rational choice theory says that a person consistently makes choices that lead to the best situation for themselves, taking into account all available considerations including costs and benefits; the rationality of these considerations is from the point of view of the person themselves, so a decision is not irrational just because someone else finds it questionable. In reality, however, there are some factors that affect decision-making abilities and cause people to make irrational decisions – for example, to make contradictory choices when faced with the same problem framed in two different ways (see also Allais paradox). Rational decision-making is a multi-step process for making choices between alternatives. The process of rational decision-making favors logic, objectivity, and analysis over subjectivity and insight. An irrational decision, by contrast, runs counter to logic: it is made in haste, and outcomes are not considered. One of the most prominent theories of decision-making is subjective expected utility (SEU) theory, which describes the rational behavior of the decision-maker. The decision-maker assesses different alternatives by their utilities and the subjective probability of occurrence. Rational decision-making is often grounded in experience and in theories that are able to put this approach on solid mathematical grounds so that subjectivity is reduced to a minimum; see e.g. scenario optimization. A rational decision is generally seen as the best or most likely decision for achieving the set goals or outcome. == Children, adolescents, and adults == === Children === It has been found that, unlike adults, children are less likely to exhibit research strategy behaviors.
One such behavior is adaptive decision-making, which is described as funneling and then analyzing the more promising information provided if the number of options
to choose from increases. Adaptive decision-making behavior is somewhat present in children aged 11–12 and older, but diminishes the younger the child is. The reason children are not as fluid in their decision-making is that they lack the ability to weigh the cost and effort needed to gather information in the decision-making process. Some possibilities that explain this inability are knowledge deficits and a lack of utilization skills. Children lack the metacognitive knowledge necessary to know when to use the strategies they do possess to change their approach to decision-making. When it comes to the idea of fairness in decision-making, children and adults differ much less. Children are able to understand the concept of fairness in decision-making from an early age. Infants and toddlers between 9 and 21 months of age understand basic principles of equality. The main difference found is that more complex principles of fairness in decision-making, such as the use of contextual and intentional information, do not emerge until children are older. === Adolescents === During their adolescent years, teens are known for their high-risk behaviors and rash decisions. Research has shown that there are differences in cognitive processes between adolescents and adults during decision-making. Researchers have concluded that differences in decision-making are not due to a lack of logic or reasoning, but more due to the immaturity of psychosocial capacities that influence decision-making. Examples of these undeveloped capacities which influence decision-making are impulse control, emotion regulation, delayed gratification and resistance to peer pressure. In the past, researchers have thought that adolescent behavior was simply due to incompetency regarding decision-making. Currently, researchers have concluded that adults and adolescents are both competent decision-makers, not just adults.
However, adolescents' competent decision-making skills decrease when psychosocial capacities become present. Research has shown that risk-taking behaviors in adolescents may be the product of
interactions between the socioemotional brain network and its cognitive-control network. The socioemotional part of the brain processes social and emotional stimuli and has been shown to be important in reward processing. The cognitive-control network assists in planning and self-regulation. Both of these sections of the brain change over the course of puberty. However, the socioemotional network changes quickly and abruptly, while the cognitive-control network changes more gradually. Because of this difference in change, the cognitive-control network, which usually regulates the socioemotional network, struggles to control the socioemotional network when psychosocial capacities are present. When adolescents are exposed to social and emotional stimuli, their socioemotional network is activated as well as areas of the brain involved in reward processing. Because teens often gain a sense of reward from risk-taking behaviors, their repetition becomes ever more probable due to the reward experienced. In this, the process mirrors addiction. Teens can become addicted to risky behavior because they are in a high state of arousal and are rewarded for it not only by their own internal functions but also by their peers around them. A recent study suggests that adolescents have difficulties adequately adjusting beliefs in response to bad news (such as reading that smoking poses a greater risk to health than they thought), but do not differ from adults in their ability to alter beliefs in response to good news. This creates biased beliefs, which may lead to greater risk-taking. === Adults === Adults are generally better able to control their risk-taking because their cognitive-control system has matured enough to the point where it can control the socioemotional network, even in the context of high arousal or when psychosocial capacities are present. Also, adults are less likely to find themselves in situations that push them to do risky things. For example, teens
are more likely to be around peers who peer pressure them into doing things, while adults are not as exposed to this sort of social setting. == Cognitive and personal biases == Biases usually affect decision-making processes. They appear more often when a decision task involves time pressure, high stress, and/or high complexity. Here is a list of commonly debated biases in judgment and decision-making: Selective search for evidence (also known as confirmation bias): People tend to be willing to gather facts that support certain conclusions but disregard other facts that support different conclusions. Individuals who are highly defensive in this manner show significantly greater left prefrontal cortex activity as measured by EEG than do less defensive individuals. Premature termination of search for evidence: People tend to accept the first alternative that looks like it might work. Cognitive inertia is the unwillingness to change existing thought patterns in the face of new circumstances. Selective perception: People actively screen out information that they do not think is important (see also Prejudice). In one demonstration of this effect, the discounting of arguments with which one disagrees (by judging them as untrue or irrelevant) was decreased by selective activation of the right prefrontal cortex. Wishful thinking is a tendency to want to see things in a certain – usually positive – light, which can distort perception and thinking. Choice-supportive bias occurs when people distort their memories of chosen and rejected options to make the chosen options seem more attractive. Recency: People tend to place more attention on more recent information and either ignore or forget more distant information (see Semantic priming). The opposite effect, in which the first information received carries more weight, is termed the primacy effect. Repetition bias is a willingness to believe what one has been told most often
and by the greatest number of different sources. Anchoring and adjustment: Decisions are unduly influenced by initial information that shapes our view of subsequent information. Groupthink is peer pressure to conform to the opinions held by the group. Source credibility bias is a tendency to reject a person's statement on the basis of a bias against the person, organization, or group to which the person belongs. People preferentially accept statements by others that they like (see also Prejudice). Incremental decision-making and escalating commitment: People look at a decision as a small step in a process, and this tends to perpetuate a series of similar decisions. This can be contrasted with zero-based decision-making (see Slippery slope). Attribution asymmetry: People tend to attribute their own success to internal factors, including abilities and talents, but explain their failures in terms of external factors such as bad luck. The reverse bias is shown when people explain others' success or failure. Role fulfillment is a tendency to conform to others' decision-making expectations. Underestimating uncertainty and the illusion of control: People tend to underestimate future uncertainty because of a tendency to believe they have more control over events than they really do. Framing bias: This is best avoided by increasing numeracy and presenting data in several formats (for example, using both absolute and relative scales). Sunk-cost fallacy is a specific type of framing effect that affects decision-making. It involves an individual making a decision about a current situation based on what they have previously invested in the situation.: 372 An example would be an individual who refrains from dropping a class they are likely to fail because they feel they have already put so much work into the course. Prospect theory involves the idea
that when faced with a decision-making event, an individual is more likely to take on a risk when evaluating potential losses, and is more likely to avoid risks when evaluating potential gains. This can influence one's decision-making depending on whether the situation entails a threat or an opportunity.: 373 Optimism bias is a tendency to overestimate the likelihood of positive events occurring in the future and underestimate the likelihood of negative life events. Such biased expectations are generated and maintained in the face of counter-evidence through a tendency to discount undesirable information. An optimism bias can alter risk perception and decision-making in many domains, ranging from finance to health. Reference class forecasting was developed to eliminate or reduce cognitive biases in decision-making. == Cognitive limitations in groups == In groups, people generate decisions through active and complex processes. One method consists of three steps: initial preferences are expressed by members; the members of the group then gather and share information concerning those preferences; finally, the members combine their views and make a single choice about how to face the problem. Although these steps are relatively ordinary, judgments are often distorted by cognitive and motivational biases, including "sins of commission", "sins of omission", and "sins of imprecision". == Cognitive styles == === Optimizing vs. satisficing === Herbert A. Simon coined the phrase "bounded rationality" to express the idea that human decision-making is limited by available information, available time and the mind's information-processing ability. Further psychological research has identified individual differences between two cognitive styles: maximizers try to make an optimal decision, whereas satisficers simply try to find a solution that is "good enough".
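The contrast just drawn between maximizers and satisficers can be sketched as two search rules over the same options. The option scores and the "good enough" threshold below are invented for illustration.

```python
# (name, score) pairs; scores are invented purely for illustration.
options = [("A", 6), ("B", 8), ("C", 7), ("D", 9)]

def satisfice(options, good_enough):
    """Return the first option meeting the threshold, examining as few as possible."""
    for name, score in options:
        if score >= good_enough:
            return name
    return None  # no acceptable option found

def maximize(options):
    """Examine every option and return the best one."""
    return max(options, key=lambda o: o[1])[0]

print(satisfice(options, good_enough=7))  # stops early at "B"
print(maximize(options))                  # examines all four, picks "D"
```

The satisficer inspects fewer options and may miss the optimum; the maximizer always finds it but pays the full search cost, which is the tradeoff bounded rationality describes.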
Maximizers tend to take longer to make decisions due to the need to maximize performance across all variables and make tradeoffs carefully; they also tend to more often regret their
decisions (perhaps because they are more able than satisficers to recognize that a decision turned out to be sub-optimal). === Intuitive vs. rational === The psychologist Daniel Kahneman, adopting terms originally proposed by the psychologists Keith Stanovich and Richard West, has theorized that a person's decision-making is the result of an interplay between two kinds of cognitive processes: an automatic intuitive system (called "System 1") and an effortful rational system (called "System 2"). System 1 is a bottom-up, fast, and implicit system of decision-making, while System 2 is a top-down, slow, and explicit system of decision-making. System 1 includes simple heuristics in judgment and decision-making such as the affect heuristic, the availability heuristic, the familiarity heuristic, and the representativeness heuristic. === Combinatorial vs. positional === Styles and methods of decision-making were elaborated by Aron Katsenelinboigen, the founder of predispositioning theory. In his analysis of styles and methods, Katsenelinboigen referred to the game of chess, saying that "chess does disclose various methods of operation, notably the creation of predisposition methods which may be applicable to other, more complex systems.": 5 Katsenelinboigen states that apart from the methods (reactive and selective) and sub-methods (randomization, predispositions, programming), there are two major styles: positional and combinational. Both styles are utilized in the game of chess. The two styles reflect two basic approaches to uncertainty: deterministic (combinational style) and indeterministic (positional style). Katsenelinboigen's definition of the two styles is the following. The combinational style is characterized by: a very narrow, clearly defined, primarily material goal; and a program that links the initial position with the outcome.
In defining the combinational style in chess, Katsenelinboigen wrote: "The combinational style features a clearly formulated limited objective, namely the capture of material (the main constituent element of a chess position). The objective is implemented via a well-defined, and
in some cases, unique sequence of moves aimed at reaching the set goal. As a rule, this sequence leaves no options for the opponent. Finding a combinational objective allows the player to focus all his energies on efficient execution, that is, the player's analysis may be limited to the pieces directly partaking in the combination. This approach is the crux of the combination and the combinational style of play.": 57 The positional style is distinguished by: a positional goal; and a formation of semi-complete linkages between the initial step and final outcome. "Unlike the combinational player, the positional player is occupied, first and foremost, with the elaboration of the position that will allow him to develop in the unknown future. In playing the positional style, the player must evaluate relational and material parameters as independent variables. ... The positional style gives the player the opportunity to develop a position until it becomes pregnant with a combination. However, the combination is not the final goal of the positional player – it helps him to achieve the desirable, keeping in mind a predisposition for future development. A Pyrrhic victory is the best example of one's inability to think positionally." The positional style serves to: create a predisposition to the future development of the position; induce the environment in a certain way; absorb an unexpected outcome in one's favor; and avoid the negative aspects of unexpected outcomes. === Influence of Myers–Briggs type === According to Isabel Briggs Myers, a person's decision-making process depends to a significant degree on their cognitive style. Myers developed a set of four bi-polar dimensions, called the Myers–Briggs Type Indicator (MBTI). The terminal points on these dimensions are: thinking and feeling; extroversion and introversion; judgment and perception; and sensing and intuition. She claimed that a person's decision-making style correlates
well with how they score on these four dimensions. For example, someone who scored near the thinking, extroversion, sensing, and judgment ends of the dimensions would tend to have a logical, analytical, objective, critical, and empirical decision-making style. However, some psychologists say that the MBTI lacks reliability and validity and is poorly constructed. Other studies suggest that national or cross-cultural differences in decision-making exist across entire societies. For example, Maris Martinsons has found that American, Japanese and Chinese business leaders each exhibit a distinctive national style of decision-making. === General decision-making style (GDMS) === In the general decision-making style (GDMS) test developed by Suzanne Scott and Reginald Bruce, there are five decision-making styles: rational, intuitive, dependent, avoidant, and spontaneous. These five different decision-making styles change depending on the context and situation, and one style is not necessarily better than any other. In the examples below, the individual is working for a company and is offered a job from a different company. The rational style is an in-depth search for, and a strong consideration of, other options and/or information prior to making a decision. In this style, the individual would research the new job being offered, review their current job, and look at the pros and cons of taking the new job versus staying with their current company. The intuitive style is confidence in one's initial feelings and gut reactions. In this style, if the individual initially prefers the new job because they have a feeling that the work environment is better suited for them, then they would decide to take the new job. The individual might not make this decision as soon as the job is offered. The dependent style is asking for other people's
input and instructions on what decision should be made. In this style, the individual could ask friends, family, coworkers, etc., but the individual might not ask all of these people. The avoidant style is averting the responsibility of making a decision. In this style, the individual would not make a decision. Therefore, the individual would stick with their current job. The spontaneous style is a need to make a decision as soon as possible rather than waiting to make a decision. In this style, the individual would either reject or accept the job as soon as it is offered. == See also == == References == == Further reading == Brockman, John (2013). Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction. Harper Perennial. ISBN 9780062258540. Kahneman, Daniel; Lovallo, Dan; Sibony, Olivier; Charan, Ram (2013). HBR's 10 Must Reads on Making Smart Decisions. Harvard Business Review Press. ISBN 978-1422189894. Partnoy, Frank (2013). Wait: The Art and Science of Delay. PublicAffairs. ISBN 978-1610392471. == External links == Quotations related to Decision-making at Wikiquote
|
{
"page_id": 265752,
"source": null,
"title": "Decision-making"
}
|
The molecular formula C33H36N4O6 (molar mass: 584.662 g/mol, exact mass: 584.2635 u) may refer to: Bilirubin Lumirubin
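The quoted molar mass can be reproduced by summing conventional atomic weights over the formula. The weight values below are the older IUPAC figures (e.g. C = 12.0107 g/mol) that the stated 584.662 g/mol appears to be based on; current standard atomic weights give a very slightly different total.

```python
import re

# Conventional atomic weights in g/mol (older IUPAC values, assumed here).
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

def molar_mass(formula):
    """Sum atomic weights for a simple Hill-style formula like 'C33H36N4O6'."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[symbol] * int(count or 1)
    return total

print(round(molar_mass("C33H36N4O6"), 3))  # → 584.662
```

This simple parser handles only flat formulas; brackets, hydrates, or isotopic labels would need a fuller grammar.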
|
{
"page_id": 24120856,
"source": null,
"title": "C33H36N4O6"
}
|
The Royal Physical Society of Edinburgh was a learned society based in Edinburgh, Scotland, "for the cultivation of the physical sciences". The society was founded in 1771 as the Physico-Chirurgical Society but soon after changed its name to the Physical Society. After being granted a Royal Charter in 1778 it became the Royal Physical Society of Edinburgh. It absorbed a number of other societies over the next fifty years, including the Edinburgh Medico-Chirurgical Society in 1782 (not to be confused with the extant Medico-Chirurgical Society of Edinburgh, founded in 1821), the American Physical Society in 1796 (not to be confused with the extant American Physical Society, founded in 1899), the Hibernian Medical Society in 1799, the Chemical Society in 1803, the Natural History Society in 1812 and the Didactic Society in 1813. The society occupied a lecture hall in Nicholson Street, Edinburgh, complete with library. From 1854 to 1965, it published the journal Proceedings of the Royal Physical Society of Edinburgh, devoted to articles on experimental biology and natural history. Members of the society were known as Fellows and permitted to use the post-nominal letters FRPSE. Presidents were elected at intervals, sometimes more than one for each year. Some of the records of the Society, for the period 1828–1884, are maintained by the Royal Scottish Geographical Society. == Presidents of the Society == == References ==
|
{
"page_id": 48500250,
"source": null,
"title": "Royal Physical Society of Edinburgh"
}
|
The Wiley Prize in Biomedical Sciences is intended to recognize breakthrough research in the pure or applied life sciences that is distinguished by its excellence, originality and impact on our understanding of biological systems and processes. The award may recognize a specific contribution or series of contributions that demonstrate the nominee's significant leadership in the development of research concepts or their clinical application. Particular emphasis will be placed on research that champions novel approaches and challenges accepted thinking in the biomedical sciences. The Wiley Foundation, established in 2001, is the endowing body that supports the Wiley Prize in Biomedical Sciences. This international award is presented annually and consists of a $35,000 prize and a luncheon in honor of the recipient. The award is presented at a ceremony at The Rockefeller University, where the recipient delivers an honorary lecture as part of the Rockefeller University Lecture Series. As of 2016, six recipients have gone on to be awarded the Nobel Prize in Physiology or Medicine. == Award recipients == Source: Wiley Foundation 2002 H. Robert Horvitz of the Massachusetts Institute of Technology and Stanley J. Korsmeyer of the Dana-Farber Cancer Institute – Horvitz was chosen for his seminal research on programmed cell death and the discovery that a genetic pathway accounts for programmed cell death within an organism, and Korsmeyer for his discovery of the relationship between human lymphomas and the fundamental biological process of apoptosis. Korsmeyer's experiments established that blocking cell death plays a primary role in cancer. 2003 Andrew Z. Fire, of both the Carnegie Institution of Washington and the Johns Hopkins University; Craig C.
Mello, of the University of Massachusetts Medical School; Thomas Tuschl, formerly of the Max-Planck Institute for Biophysical Chemistry in Goettingen, Germany, and most recently of The Rockefeller University; and David Baulcombe, of the
|
{
"page_id": 21368349,
"source": null,
"title": "Wiley Prize"
}
|
Sainsbury Laboratory at the John Innes Centre in Norwich, England – For contributions to discoveries of novel mechanisms for regulating gene expression by small interfering RNAs (siRNA). 2004 C. David Allis, Ph.D., Joy and Jack Fishman Professor, Laboratory of Chromatin Biology and Epigenetics at the Rockefeller University in New York – For the significant discovery that transcription factors can enzymatically modify histones to regulate gene activity. 2005 Peter Walter, a Howard Hughes Medical Institute investigator, and Professor and Chairman of the Department of Biochemistry & Biophysics at the University of California San Francisco, and Kazutoshi Mori, a professor of biophysics in the Graduate School of Science at Kyoto University, in Japan – For the discovery of the novel pathway by which cells regulate the capacity of their intracellular compartments to produce correctly folded proteins for export. 2006 Elizabeth H. Blackburn, Morris Herzstein Professor of Biology and Physiology in the Department of Biochemistry and Biophysics at the University of California, San Francisco, and Carol Greider, Daniel Nathans Professor and Director of Molecular Biology & Genetics at Johns Hopkins University – For the discovery of telomerase, the enzyme that maintains chromosomal integrity, and the recognition of its importance in aging, cancer and stem cell biology. 2007 F. Ulrich Hartl, Director at the Max Planck Institute of Biochemistry in Munich, Germany, and Arthur L. Horwich, Eugene Higgins Professor of Genetics and Pediatrics at the Yale University School of Medicine and Investigator, Howard Hughes Medical Institute – For elucidation of the molecular machinery that guides proteins into their proper functional shape, thereby preventing the accumulation of protein aggregates that underlie many diseases, such as Alzheimer's and Parkinson's. 2008 Richard P. Lifton of the Yale University School of Medicine.
– For the discovery of the genes that cause many forms of high and low blood
|
{
"page_id": 21368349,
"source": null,
"title": "Wiley Prize"
}
|
pressure in humans. 2009 Bonnie Bassler of the Department of Molecular Biology at Princeton University and the Howard Hughes Medical Institute – For pioneering investigations of quorum sensing, a mechanism that allows bacteria to "talk" to each other to coordinate their behavior, even between species. 2010 Peter Hegemann, Professor of Molecular Biophysics, Humboldt University, Berlin; Georg Nagel, Professor of Molecular Plant Physiology, Department of Botany, University of Würzburg; and Ernst Bamberg, Professor and Director of the Dept of Biophysical Chemistry, Max Planck Institute for Biophysics, Frankfurt, Germany for their discovery of channelrhodopsins, a family of light-activated ion channels. The discovery has greatly enlarged and strengthened the new field of optogenetics. Channelrhodopsins also provide a high potential for biomedical applications such as the recovery of vision and optical deep brain stimulation for treatment of Parkinson's and other diseases, instead of the more invasive electrode-based treatments. 2011 Lily Jan and Yuh Nung Jan of Howard Hughes Medical Institute at the University of California, San Francisco for their molecular identification of a founding member of a family of potassium ion channels that control nerve cell activity throughout the animal kingdom. 2012 Michael Sheetz, Columbia University; James Spudich, Stanford University, and Ronald Vale, University of California, San Francisco for explaining how cargo is moved by molecular motors along two different systems of tracks within cells. 2013 Michael Young, Rockefeller University; Jeffrey Hall, Brandeis University (Emeritus), and Michael Rosbash, Brandeis University for the discovery of the molecular mechanisms governing circadian rhythms. 2014 William Kaelin, Jr.; Steven McKnight; Peter J. Ratcliffe; Gregg L. Semenza for their work in oxygen sensing systems. 2015 Evelyn M. Witkin and Stephen Elledge for their studies of the DNA damage response.
2016 Yoshinori Ohsumi for the discovery of how cells recycle their components in an orderly manner. This process, autophagy (self-eating),
|
{
"page_id": 21368349,
"source": null,
"title": "Wiley Prize"
}
|
is critical for the maintenance and repair of cells and tissues. 2017 Joachim Frank, Richard Henderson, and Marin van Heel for pioneering developments in electron microscopy. 2018 Lynne E. Maquat for elucidating the mechanism of nonsense-mediated messenger RNA decay. 2019 Svante Pääbo and David Reich for sequencing the genomes of ancient humans and extinct relatives. 2020 No award due to the COVID-19 pandemic. 2021 Clifford Brangwynne, Anthony Hyman, and Michael Rosen for a new principle of subcellular compartmentalization based on formation of phase-separated biomolecular condensates. 2022 David Baker, Demis Hassabis, and John Jumper for pioneering studies in protein structure predictions. 2023 Michael J. Welsh, Paul Negulescu, Fredrick Van Goor, and Sabine Hadida for research and development leading to medicines that effectively treat cystic fibrosis by correcting the folding, trafficking, and functioning of the mutated cystic fibrosis transmembrane regulator (CFTR). 2024 Judith Kimble, Allan Spradling, and Raymond Schofield for their discovery of the stem cell niche, a localized environment that controls stem-cell identity. 2025 Spyros Artavanis-Tsakonas and Iva Greenwald for their work in the discovery and elucidation of the Notch signaling pathway, a crucial mechanism in cellular development and disease. == See also == List of biology awards List of medicine awards == References == == External links == The Wiley Foundation Laureates 2021 - Protein structure prediction
|
{
"page_id": 21368349,
"source": null,
"title": "Wiley Prize"
}
|
The basilic vein is a large superficial vein of the upper limb that helps drain parts of the hand and forearm. It originates on the medial (ulnar) side of the dorsal venous network of the hand and travels up the base of the forearm, where its course is generally visible through the skin as it travels in the subcutaneous fat and fascia lying superficial to the muscles. The basilic vein terminates by uniting with the brachial veins to form the axillary vein. == Anatomy == === Course === As it ascends the medial side of the biceps in the arm proper (between the elbow and shoulder), the basilic vein normally perforates the brachial fascia (deep fascia) in the middle of the medial bicipital groove, and runs upwards medial to the brachial artery to the lower border of teres major, continuing as the axillary vein. === Tributaries and anastomoses === Near the region anterior to the cubital fossa (in the bend of the elbow joint), the basilic vein usually communicates with the cephalic vein (the other large superficial vein of the upper extremity) via the median cubital vein. The layout of superficial veins in the forearm is highly variable from person to person, and there is a profuse network of unnamed superficial veins that the basilic vein communicates with. Around the inferior border of the teres major muscle and just proximal to the basilic vein's termination, the anterior and posterior circumflex humeral veins drain into it. == Clinical significance == === Venipuncture === Along with other superficial veins in the forearm, the basilic vein is an acceptable site for venipuncture. Nevertheless, IV nurses sometimes dub the basilic vein "the virgin vein", since with the arm typically supinated during phlebotomy the basilic vein below the elbow becomes awkward to access, and is
|
{
"page_id": 2297376,
"source": null,
"title": "Basilic vein"
}
|
therefore infrequently used. === Venous grafts === Vascular surgeons sometimes utilize the basilic vein to create an AV (arteriovenous) fistula or AV graft for hemodialysis access in patients with kidney failure. == Additional images == == See also == Cephalic vein Median cubital vein == External links == Anatomy photo:07:st-0701 at the SUNY Downstate Medical Center Radiology image: UpperLimb:18VenoFo from Radiology Atlas at SUNY Downstate Medical Center (need to enable Java) Illustration == References ==
|
{
"page_id": 2297376,
"source": null,
"title": "Basilic vein"
}
|
Vomocytosis (sometimes called non-lytic expulsion) is the cellular process by which phagocytes expel live organisms that they have engulfed without destroying them. Vomocytosis is one of many methods used by cells to expel internal materials into their external environment, yet it is distinct in that both the engulfed organism and host cell remain undamaged by expulsion. As engulfed organisms are released without being destroyed, vomocytosis has been hypothesized to be utilized by pathogens as an escape mechanism from the immune system. The exact mechanisms, as well as the repertoire of cells that utilize this mechanism, are currently unknown, yet interest in this unique cellular process is driving continued research in the hope of elucidating these unknowns. == Discovery == Vomocytosis was first reported in 2006 by two groups, working simultaneously in the UK and the US, based on time-lapse microscopy footage characterising the interaction between macrophages and the human fungal pathogen Cryptococcus neoformans. Subsequently, this process has also been seen with other fungal pathogens such as Candida albicans and Candida krusei. It has also been speculated that the process may be related to the expulsion of bacterial pathogens such as Mycobacterium marinum from host cells. Vomocytosis has been observed in phagocytic cells from mice, humans and birds, as well as being directly observed in zebrafish and indirectly detected (via flow cytometry) in mice. Amoebae exhibit a similar process to vomocytosis whereby phagosomal material that cannot be digested is exocytosed. Cryptococci are exocytosed from amoebae via this mechanism, but inhibition of the constitutive pathway demonstrated that cryptococci could also be expelled via vomocytosis. == Mechanism == A full understanding of the mechanisms involved in vomocytosis is not currently known, yet advances in research have driven initial mechanistic descriptions and identified crucial steps involved in the process.
Research has shown vomocytosis does not
|
{
"page_id": 47386145,
"source": null,
"title": "Vomocytosis"
}
|
occur when pathogens are dead or when engulfed materials are non-living, indicating that the survival of phagosomal cargo may be crucial for triggering or enhancing vomocytosis. Additionally, the phagosomal pH may play important roles in vomocytosis efficacy, as research has demonstrated that vomocytosis rates drop as phagocytes become more acidic and that vomocytosis is increased by the addition of weak bases to phagocytes. The membrane composition and cellular state are implicated in vomocytosis, as vomocytosis has been shown to decrease with membrane permeability and increase in states of autophagy. Furthermore, inflammatory signals such as Type I interferons, which are produced in response to viral infections, are known to enhance vomocytosis. The impacts of these described factors on inducing vomocytosis are still being elaborated, and it is likely that they vary based on other unknown external and internal factors. Just as in standard exocytosis, rearrangements of the actin cytoskeleton within the host cell are crucial for allowing vomocytosis to occur. In contrast to standard exocytosis, the engulfed pathogen is not lysed by internal components of the host cell, and the vesicle is brought close to the cellular membrane where it can fuse and release the pathogen cargo. Annexin A2, a membrane-bound protein, helps regulate vomocytosis and promote the fusing of vesicles to the plasma membrane. In annexin A2 deficient cell lines, rates of vomocytosis were decreased. Furthermore, screens of macrophage kinase inhibitors revealed signaling pathways linked to vomocytosis. ERK5, involved in the MAPK signaling pathway that communicates surface signals to cellular DNA, was shown to suppress vomocytosis. Additional signaling pathways involved in vomocytosis have yet to be determined. Furthermore, different morphologies of vomocytosis have been documented and it is possible that the underlying cellular mechanism may vary between them.
== Biological significance == Research has been devoted to understanding the mechanisms and importance
|
{
"page_id": 47386145,
"source": null,
"title": "Vomocytosis"
}
|
of vomocytosis as it is hypothesized to be linked to many significant biological processes. Vomocytosis plays a role in lateral transfer, a process by which cells transfer engulfed cargo to a neighboring recipient cell, as initial cells expel their cargo undamaged so they can be uptaken by recipient cells. Additionally, vomocytosis is hypothesized to be utilized as an escape mechanism by pathogens as it allows them to evade degradation by macrophages. Since there is no damage to host cells or pathogens during vomocytosis, the immune system is not triggered, which allows for further potential evasion from hosts. More research is necessary to determine whether vomocytosis is initiated by engulfed pathogens for this purpose or by host cells and this is simply an unintentional benefit to pathogens. An additional hypothesis is that vomocytosis may enhance pathogenesis or spread of a pathogen as they are engulfed by macrophages and later expelled in locations that may potentially be different from the site of acute infection. Enhancing our understanding of host-pathogen interactions will clarify our understanding of vomocytosis's role in infection progression. Lastly, vomocytosis has been implicated in tumor response as tumor-associated macrophages (TAMs) are speculated to be able to modulate the tumor microenvironment (TME) via vomocytosis. Better understanding the mechanisms of inducing and regulating vomocytosis will enhance our knowledge of host-pathogen and host-self interactions, allowing for advances in our ability to respond to infections and tumors. == References ==
|
{
"page_id": 47386145,
"source": null,
"title": "Vomocytosis"
}
|
In vitro spermatogenesis is the process of creating male gametes (spermatozoa) outside of the body in a culture system. The process could be useful for fertility preservation and infertility treatment, and may further develop the understanding of spermatogenesis at the cellular and molecular level. Spermatogenesis is a highly complex process and artificially rebuilding it in vitro is challenging. Challenges include creating a microenvironment similar to that of the testis, supporting endocrine and paracrine signalling, and ensuring survival of the somatic and germ cells, from spermatogonial stem cells (SSCs) to mature spermatozoa. Different methods of culturing can be used in the process, such as isolated cell cultures, fragment cultures and 3D cultures. == Culture techniques == === Isolated cell cultures === Cell cultures can include either monocultures, where one cell population is cultured, or co-culturing systems, where several cell lines (at least two) are cultured together. Cells are initially isolated for culture by enzymatically digesting the testis tissue to separate out the different cell types. The process of isolating cells can lead to cell damage. The main advantage of monoculture is that the effect of different influences on one specific cell population can be investigated. Co-culture allows the interactions between cell populations to be observed and experimented on, which is seen as an advantage over the monoculture model. Isolated cell culture, specifically co-culture of testis tissue, has been a useful technique for examining the influences of specific factors, such as hormones or different feeder cells, on the progression of spermatogenesis in vitro. For example, factors such as temperature, feeder cell influence and the roles of testosterone and follicle-stimulating hormone (FSH) have all been investigated using isolated cell culture techniques.
Studies have concluded that different factors can influence the culture of germ
|
{
"page_id": 58527266,
"source": null,
"title": "In vitro spermatogenesis"
}
|
cells, e.g. media, growth factors, hormones and temperature. For example, when immortalized mouse germ cells were cultured at temperatures of 35, 37 and 29°C, the cells proliferated most rapidly at the highest temperature and least rapidly at the lowest, but showed varying levels of differentiation: at the highest temperature no differentiation was detected, some was seen at 37°C, and early spermatids appeared at 32°C. The isolated cell culture technique has been successfully used for in vitro production of sperm using the mouse as an animal model. Investigations of appropriate feeder cells concluded that a variety of cell types could encourage development of germ cells, such as Sertoli cells, Leydig cells and peritubular myoid cells. The most essential are Sertoli cells, but Leydig and peritubular myoid cells both contribute to the microenvironment that encourages stem cells in the testis to remain pluripotent and self-renew. === Testes fragment cultures === In fragment cultures, the testis is removed and fragments of tissue are cultured in supplemental media containing different growth factors to induce spermatogenesis and form functional gametes. The development of this culture technique has taken place mainly with the use of animal models, e.g. mouse or rat testis tissue. The advantage of this method is that it maintains the natural spatial arrangement of the seminiferous tubules. However, hypoxia is a recurring problem in these cultures: the low oxygen supply hinders the development and maturation of spermatids (significantly more in adult than in immature testis tissue). Other challenges with this type of culture include maintaining the structure of the seminiferous tubules, which makes longer-term cell culture more difficult as the tissue structures can flatten out, making them hard to work with. To resolve some of these issues, 3D cultures can be used. In 2012, mature spermatozoa capable of fertilization were
|
{
"page_id": 58527266,
"source": null,
"title": "In vitro spermatogenesis"
}
|
isolated from in vitro culture of immature mouse testis tissue. === 3D cultures === 3D cultures use sponge models or scaffolds that resemble elements of the extracellular matrix, to achieve a more natural spatial structure of the seminiferous tubules and to better represent the tissues and the interactions between different cell types in an ex vivo experiment. Different components of the extracellular matrix, such as collagen, agar and calcium alginate, are commonly used to form the gel or scaffold, which can provide oxygen and nutrients. To propagate 3D cultures, testicular cell cultures are embedded into the porous sponge/scaffold and allowed to colonise the structure, which can then survive for several weeks, allowing spermatogonia to differentiate and mature into spermatozoa. In addition, shaking 3D cultures during the seeding process increases the oxygen supply, which helps overcome the issue of hypoxia and so improves the lifespan of the cells. In contrast to monocultures, fragment and 3D cultures are able to establish in vitro conditions that somewhat resemble the testicular microenvironment, allowing a more accurate study of testicular physiology and its associations with the in vitro development of sperm cells. == Future implications == === Scientific === The ability to recapitulate spermatogenesis in vitro provides a unique opportunity to study this biological process through an oftentimes cheaper and faster method of research than in vivo work. Observation is often easier in vitro, as the targeted cells are mostly isolated and immobile. Another significant advantage of in vitro research is the ease with which environmental factors can be changed and monitored. There are also techniques which are not practical or feasible in vivo which can now be explored. In vitro work is not without its own challenges. For example, one loses the natural structure provided by the in vivo tissue, and
|
{
"page_id": 58527266,
"source": null,
"title": "In vitro spermatogenesis"
}
|
thus cell connections which could be important to the function of the tissue. === Clinical === While rodent spermatogenesis is not identical to its human counterpart, especially due to the high evolution rate of the male reproductive tract, these techniques are a solid starting point for future human applications. Various categories of infertile men may benefit from advances in these techniques, especially those with a lack of viable gamete production. These men cannot benefit, for example, from sperm extraction techniques, and currently have little to no options for producing genetic descendants. Notably, males who have undergone chemo/radiotherapy prepubertally may benefit from in vitro spermatogenesis. These people did not have the option to cryopreserve viable sperm before their procedure, and thus the ability to generate genetically descended sperm later in life is invaluable. Possible methods that could be applied (to this and other groups) are induction of spermatogenesis in testis samples taken prepubertally, or, if these samples are not available/viable, new methods that manipulate stem cell differentiation could produce SSCs 'from scratch', using adult stem cell samples. An alternative method is to graft preserved tissue back onto adult cancer survivors, however this comes with operational risks, as well as a risk of reintroducing malignant cells. Even if using this method however, in vitro spermatogenesis advances would allow for sample expansion and observation to better ensure quality and quantity of graft tissue. In those with healthy or preserved SSCs but without a cellular environment to support them, in vitro spermatogenesis could be used following transplant of the SSCs into healthy donor tissue. Another group that could be helped by in vitro spermatogenesis are those with any form of genetic impediment to sperm production. Those with no viable SSC development are an obvious target, but also those with varying levels of spermatogenic arrest;
|
{
"page_id": 58527266,
"source": null,
"title": "In vitro spermatogenesis"
}
|
previously their underdeveloped germ cells have been injected into oocytes, however this has a success rate of only 3% in humans. Finally, in vitro spermatogenesis using animal or human cells can be used to assess the effects and toxicity of drugs before in vivo testing. == References ==
|
{
"page_id": 58527266,
"source": null,
"title": "In vitro spermatogenesis"
}
|
The term predation rate refers to the frequency with which an organism captures and consumes its prey in an ecosystem. Coupled with the kill rate, the predation rate drives the population dynamics of predation. This statistic is related to predator–prey dynamics and may be influenced by several factors. In order for predation to occur, a predator and its prey must encounter one another. A low concentration of prey decreases the likelihood of such encounters. The prey encounter rate is determined by the abundance of organisms and a predator's ability to locate its prey. Covering more territory increases the likelihood that a predator will meet its prey. In areas of low prey density, predators are adapted to be more motile, engage in filter feeding, or use attractants such as chemical lures. If predation increased simply with prey concentration, the relationship would be linear until a limit is reached. This scenario is represented by Holling's type I functional response, which is rarely observed in nature. Several factors affect this relationship, including handling time (the time required for a predator to consume its prey), selective feeding behaviors, and learning. In contrast, Holling's type II and type III functional responses account for the time predators spend handling prey and the reduced efficiency in locating prey at low densities. Predation rate is also influenced by spatial and temporal mismatch. An extreme example occurred in the Arctic in May of 2021 and 2022, when large blooms of phytoplankton were observed alongside low concentrations of grazers. As the phytoplankton bloomed and died, the energy was not transferred into the food web. Although primary production was high, the food web experienced an energy deficit. Spatial mismatch is particularly concerning under climate change, as changing environmental parameters—such as rising sea surface temperature and alterations in terrestrial habitats (e.g., loss
|
{
"page_id": 79236644,
"source": null,
"title": "Predation rates"
}
|
of tundra and melting sea ice)—can create conditions that are no longer conducive to the populations they once supported. == References ==
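The type II functional response mentioned above has a standard closed form, f(N) = aN / (1 + ahN), where N is prey density, a is the attack rate, and h is the handling time. A minimal sketch (parameter values are illustrative, not taken from any study):

```python
def holling_type_ii(prey_density, attack_rate, handling_time):
    """Prey captured per predator per unit time (Holling type II).

    Nearly linear at low prey density, then saturates at 1/handling_time
    as handling occupies all of the predator's available time.
    """
    return (attack_rate * prey_density) / (
        1.0 + attack_rate * handling_time * prey_density
    )

# At low density the response is close to linear; at high density it
# saturates near 1/handling_time (here 1/0.5 = 2 prey per time unit).
low = holling_type_ii(1, attack_rate=0.8, handling_time=0.5)
high = holling_type_ii(10_000, attack_rate=0.8, handling_time=0.5)
```

Setting handling_time to zero recovers the (rarely observed) type I linear response; a type III response additionally makes the attack rate itself rise with prey density, producing a sigmoid curve.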
|
{
"page_id": 79236644,
"source": null,
"title": "Predation rates"
}
|
Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by biological neural circuitry. While some of the computational ideas behind ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling this period an "AI winter". Later, advances in hardware and the development of the backpropagation algorithm, as well as recurrent neural networks and convolutional neural networks, renewed interest in ANNs. The 2010s saw the development of a deep neural network (i.e., one with many layers) called AlexNet. It greatly outperformed other image recognition models, and is thought to have launched the ongoing AI spring, further increasing interest in deep learning. The transformer architecture was first described in 2017 as a method to teach ANNs grammatical dependencies in language, and is the predominant architecture used by large language models such as GPT-4. Diffusion models were first described in 2015, and became the basis of image generation models such as DALL-E in the 2020s. == Perceptrons and other early neural networks == The simplest feedforward network consists of a single weight layer without activation functions. It would be just a linear map, and training it would be linear regression. Linear regression by the least squares method was used by Adrien-Marie Legendre (1805) and Carl Friedrich Gauss (1795) for the prediction of planetary movement. A Logical Calculus of the Ideas Immanent in Nervous Activity (Warren McCulloch and Walter Pitts, 1943) studied several abstract models for neural networks using the symbolic logic of Rudolf Carnap and Principia Mathematica. The paper argued that several abstract models of neural networks (some learning, some not learning) have the same computational power as
|
{
"page_id": 61541925,
"source": null,
"title": "History of artificial neural networks"
}
|
Turing machines. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. This work led to work on nerve networks and their link to finite automata. In the early 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Hebbian learning is unsupervised learning. This evolved into models for long-term potentiation. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines. B. Farley and Wesley A. Clark (1954) first used computational machines, then called "calculators", to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956). Frank Rosenblatt (1958) created the perceptron, an algorithm for pattern recognition. His multilayer perceptron (MLP) comprised three layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. Using mathematical notation, Rosenblatt also described circuitry not in the basic perceptron, such as the exclusive-or circuit, which could not be processed by neural networks at the time. In 1959, a biological model proposed by Nobel laureates Hubel and Wiesel was based on their discovery of two types of cells in the primary visual cortex: simple cells and complex cells. Rosenblatt's 1962 book also introduced variants and computer experiments, including a version with four-layer perceptrons where the last two layers have learned weights (and thus a proper multilayer perceptron). Some consider that the 1962 book developed and explored all of the basic ingredients of the deep learning systems of today. Some say that research stagnated following Marvin Minsky and Seymour Papert's Perceptrons (1969). Group method of data handling, a method to train
|
{
"page_id": 61541925,
"source": null,
"title": "History of artificial neural networks"
}
|
arbitrarily deep neural networks, was published by Alexey Ivakhnenko and Lapa in 1967, which they regarded as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method. The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique. == Backpropagation == Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673, to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement it, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was developed multiple times in the early 1970s. The earliest published instance was Seppo Linnainmaa's master's thesis (1970). Paul Werbos developed it independently in 1971, but had difficulty publishing it until 1982. In 1986, David E. Rumelhart et al. popularized backpropagation. == Recurrent network architectures == One origin of the recurrent neural network (RNN) was statistical mechanics. The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.
In 1972, Shun'ichi Amari proposed modifying the weights of an Ising model by the Hebbian learning rule, as a model of associative memory, adding in the component
of learning. This was popularized as the Hopfield network (1982). Another origin of RNNs was neuroscience, where the word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. Hebb considered "reverberating circuits" as an explanation for short-term memory. McCulloch and Pitts (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past. Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to the study of cognitive psychology. === LSTM === Sepp Hochreiter's diploma thesis (1991) proposed the neural history compressor, and identified and analyzed the vanishing gradient problem. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. Hochreiter also proposed recurrent residual connections to solve the vanishing gradient problem, which led to the long short-term memory (LSTM), published by Hochreiter and Schmidhuber in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps earlier, and it set accuracy records in multiple application domains. That LSTM was not yet the modern architecture: the "forget gate", introduced in 1999, completed what became the standard RNN architecture, and LSTM became the default choice for RNNs.
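The gating structure described above can be made concrete with a short sketch. This is a minimal, illustrative NumPy version of a single LSTM step with a forget gate; the parameter layout, dimensions, and names here are invented for the example and do not correspond to any particular published implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of an LSTM cell with a forget gate (the post-1999
    "standard" form). W, U, b hold the stacked parameters for the
    input, forget, output, and candidate paths."""
    z = W @ x + U @ h_prev + b          # stacked pre-activations
    n = h_prev.shape[0]
    i = sigmoid(z[0:n])                 # input gate
    f = sigmoid(z[n:2 * n])             # forget gate
    o = sigmoid(z[2 * n:3 * n])         # output gate
    g = np.tanh(z[3 * n:4 * n])         # candidate cell update
    c = f * c_prev + i * g              # additive cell-state update
    h = o * np.tanh(c)                  # hidden state
    return h, c

# Toy usage: 3-dim inputs, 2-dim hidden state, a 5-step sequence.
rng = np.random.default_rng(0)
n, d = 2, 3
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):
    h, c = lstm_step(x, h, c, W, U, b)
```

The additive update `c = f * c_prev + i * g` is the key to the long credit assignment paths mentioned above: when the forget gate stays near 1, gradients can flow through the cell state across many time steps without vanishing.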
Around 2006, LSTM started to revolutionize speech recognition,
outperforming traditional models in certain speech applications. LSTM also improved large-vocabulary speech recognition and text-to-speech synthesis, and was used in Google voice search and dictation on Android devices. LSTM broke records in machine translation, language modeling, and multilingual language processing. LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning. == Convolutional neural networks (CNNs) == The origin of the CNN architecture is the "neocognitron", introduced by Kunihiko Fukushima in 1980. It was inspired by the work of Hubel and Wiesel in the 1950s and 1960s, which showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. The neocognitron introduced the two basic types of layers in CNNs: convolutional layers and downsampling layers. A convolutional layer contains units whose receptive fields cover a patch of the previous layer. The weight vector (the set of adaptive parameters) of such a unit is often called a filter, and units can share filters. Downsampling layers contain units whose receptive fields cover patches of previous convolutional layers. Such a unit typically computes the average of the activations of the units in its patch. This downsampling helps to correctly classify objects in visual scenes even when the objects are shifted. Fukushima had also introduced the ReLU (rectified linear unit) activation function in 1969; the rectifier has since become the most popular activation function for CNNs and deep neural networks in general. The time delay neural network (TDNN), introduced in 1987 by Alex Waibel, was one of the first CNNs, as it achieved shift invariance. It did so by utilizing weight sharing in combination with backpropagation training. Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one. In 1988, Wei Zhang et al. applied
backpropagation to a CNN (a simplified neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. They also proposed an implementation of the CNN with an optical computing system. Max pooling appears in a 1982 publication on the neocognitron. In 1989, Yann LeCun et al. trained a CNN to recognize handwritten ZIP codes on mail. While the algorithm worked, training required 3 days. It used max pooling. Learning was fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types. Subsequently, Wei Zhang et al. modified their model by removing the last fully connected layer and applied it to medical image object segmentation in 1991 and breast cancer detection in mammograms in 1994. In a variant of the neocognitron called the cresceptron, J. Weng et al. replaced Fukushima's spatial averaging with max-pooling, in which a downsampling unit computes the maximum of the activations of the units in its patch. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize handwritten numbers on checks (British English: cheques) digitized in 32x32 pixel images. The ability to process higher-resolution images requires larger CNNs with more layers, so this technique is constrained by the availability of computing resources. In 2010, backpropagation training through max-pooling was accelerated by GPUs and shown to perform better than other pooling variants. Behnke (2003) relied only on the sign of the gradient (Rprop) on problems such as image reconstruction and face localization; Rprop is a first-order optimization algorithm created by Martin Riedmiller and Heinrich Braun in 1992. == Deep learning == The deep learning revolution started
around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of neural networks, including CNNs, for years, faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for it. A key advance for the deep learning revolution was hardware, especially GPUs, with some early work dating back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a 100M deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning, with up to 70 times faster training. In 2011, a CNN named DanNet, by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber, achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPUs improved performance significantly. Many discoveries were empirical and focused on engineering. For example, in 2011, Xavier Glorot, Antoine Bordes and Yoshua Bengio found that the ReLU worked better than the activation functions widely used before 2011. In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inception v3. The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known
as the "degradation" problem. In 2015, two techniques were developed concurrently to train very deep networks: the highway network and the residual neural network (ResNet). The ResNet research team attempted to train deeper networks by empirically testing various tricks until they discovered the deep residual architecture. == Generative adversarial networks == In 1991, Juergen Schmidhuber published "artificial curiosity": two neural networks in a zero-sum game. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. The idea was extended to "predictability minimization" to create disentangled representations of input patterns. Other people had similar ideas but did not develop them as far. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. It was never implemented and did not involve stochasticity in the generator, and thus was not a generative model; it is now known as a conditional GAN or cGAN. An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013. Another inspiration for GANs was noise-contrastive estimation, which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014. The generative adversarial network (GAN) of Ian Goodfellow et al. (2014) became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success, and
provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). == Attention mechanism and Transformer == Human selective attention had been studied in neuroscience and cognitive psychology. Selective attention in audition was studied in the cocktail party effect (Colin Cherry, 1953), and Donald Broadbent (1958) proposed the filter model of attention. Selective attention in vision was studied in the 1960s with George Sperling's partial report paradigm. It was also noticed that saccade control is modulated by cognitive processes, in that the eye moves preferentially towards areas of high salience. As the fovea of the eye is small, the eye cannot sharply resolve all of the visual field at once; saccade control allows the eye to quickly scan the important features of a scene. This research inspired algorithms, such as a variant of the neocognitron, and conversely, developments in neural networks inspired circuit models of biological visual attention. A key aspect of the attention mechanism is the use of multiplicative operations, which had been studied under the names of higher-order neural networks, multiplication units, sigma-pi units, fast weight controllers, and hyper-networks. === Recurrent attention === During the deep learning era, the attention mechanism was developed to solve similar problems in encoding-decoding. The idea of encoder-decoder sequence transduction had been developed in the early 2010s; the papers most commonly cited as the originators of seq2seq are two papers from 2014. A seq2seq architecture employs two RNNs, typically LSTMs: an "encoder" and a "decoder", for sequence transduction such as machine translation. Seq2seq models became state of the art in machine translation and were instrumental in the development of the attention mechanism and the Transformer.
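The attention mechanism grafted onto such encoder-decoder RNNs can be sketched concretely. Below is a schematic NumPy version of additive ("Bahdanau-style") attention: each encoder state is scored against the current decoder state, and a context vector is formed as the attention-weighted average of encoder states. All dimensions and parameter names are illustrative assumptions, not any specific published model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(s, H, W_s, W_h, v):
    """Additive attention: score each encoder state h_t against the
    decoder state s with a small feedforward net, then build a context
    vector as the attention-weighted average of the encoder states."""
    # H: (T, d_h) encoder states; s: (d_s,) current decoder state.
    scores = np.array([v @ np.tanh(W_s @ s + W_h @ h) for h in H])
    alpha = softmax(scores)     # soft alignment over input positions
    context = alpha @ H         # (d_h,) context vector for the decoder
    return context, alpha

# Toy usage with made-up dimensions: 4 encoder states of width 3.
rng = np.random.default_rng(1)
T, d_h, d_s, d_a = 4, 3, 3, 5
H = rng.normal(size=(T, d_h))
s = rng.normal(size=d_s)
W_s = rng.normal(size=(d_a, d_s))
W_h = rng.normal(size=(d_a, d_h))
v = rng.normal(size=d_a)
context, alpha = additive_attention(s, H, W_s, W_h, v)
```

The weights `alpha` are the "soft alignment" between output and input positions: the decoder no longer depends on a single fixed-length summary of the input, which was the bottleneck of the plain seq2seq model.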
An image captioning model, citing inspiration from the seq2seq model, was proposed in 2015; it would encode
an input image into a fixed-length vector. Xu et al. (2015), citing Bahdanau et al. (2014), applied the attention mechanism as used in the seq2seq model to image captioning. === Transformer === One problem with seq2seq models was their use of recurrent neural networks, which are hard to parallelize, since both the encoder and the decoder process the sequence token by token; this prevented them from being accelerated on GPUs. In 2016, decomposable attention attempted to solve this problem by applying the attention mechanism to a feedforward network, which is easy to parallelize: the input sequence is processed in parallel before a "soft alignment matrix" is computed ("alignment" is the terminology used by Bahdanau et al. (2014)). One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, hence the later title "Attention is all you need". The idea of using the attention mechanism for self-attention, instead of in an encoder-decoder (cross-attention), was also proposed during this period, such as in differentiable neural computers and neural Turing machines; it was termed intra-attention when an LSTM is augmented with a memory network as it encodes an input sequence. These strands of development were combined in 2017 in the original (100M-parameter) encoder-decoder Transformer model, published in the paper "Attention is all you need", and attention mechanisms were subsequently extended within the Transformer framework.
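The dot-product attention the Transformer retained is simple enough to state in a few lines. This is a minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)V, as described in "Attention is all you need"; the toy sequence and width are invented for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
    computed for all query positions at once. Unlike an RNN, every
    position is handled in one matrix product, which is why this
    parallelizes well on GPUs."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (T_q, T_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Self-attention: queries, keys and values all come from one sequence.
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 8))   # 4 tokens, model width 8
out, w = scaled_dot_product_attention(X, X, X)
```

When Q, K and V are all projections of the same sequence, this is self-attention (the "intra-attention" mentioned above); when Q comes from the decoder and K, V from the encoder, it is cross-attention.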
At the time, the focus of the research was on improving seq2seq for machine translation by removing its recurrence to process all tokens in parallel, while preserving its dot-product attention mechanism to keep its text-processing performance. Its parallelizability was an
important factor in its widespread use in large neural networks. == Unsupervised and self-supervised learning == === Self-organizing maps === Self-organizing maps (SOMs) were described by Teuvo Kohonen in 1982. SOMs are neurophysiologically inspired artificial neural networks that learn low-dimensional representations of high-dimensional data while preserving the topological structure of the data. They are trained using competitive learning. SOMs create internal representations reminiscent of the cortical homunculus: a distorted representation of the human body based on a neurological "map" of the areas and proportions of the brain dedicated to processing sensory functions for different parts of the body. === Boltzmann machines === During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine, the restricted Boltzmann machine (RBM), the Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models, but were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. Geoffrey Hinton et al. (2006) proposed learning a high-level internal representation using successive layers of binary or real-valued latent variables, with a restricted Boltzmann machine modeling each layer. An RBM is a generative stochastic neural network that can learn a probability distribution over its set of inputs. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top-level feature activations.
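One common way an individual RBM layer is trained is single-step contrastive divergence (CD-1), the procedure popularized around Hinton's 2006 work. The NumPy sketch below is illustrative only: the layer sizes, learning rate, and training vector are arbitrary toy values, and a real implementation would use mini-batches and many training vectors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, rng, lr=0.1):
    """One CD-1 update for a binary RBM: sample hidden units given the
    data, reconstruct the visibles, and nudge the parameters toward
    the data statistics and away from the reconstruction's."""
    ph0 = sigmoid(v0 @ W + c)                    # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0     # sample hidden layer
    pv1 = sigmoid(h0 @ W.T + b)                  # reconstruct visibles
    v1 = (rng.random(pv1.shape) < pv1) * 1.0
    ph1 = sigmoid(v1 @ W + c)                    # hidden given reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)                          # visible biases
    c += lr * (ph0 - ph1)                        # hidden biases
    return W, b, c

# Toy usage: 6 visible and 3 hidden units, one binary training vector.
rng = np.random.default_rng(3)
W = 0.01 * rng.normal(size=(6, 3))
b, c = np.zeros(6), np.zeros(3)
v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
for _ in range(20):
    W, b, c = cd1_step(v, W, b, c, rng)
```

Stacking such layers, each trained on the hidden activations of the one below, yields the deep belief networks described above.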
=== Deep learning === In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos. == Other aspects == === Knowledge distillation