Chemistry (from Egyptian kēme (chem), meaning "earth"[1]) is the physical science concerned with the composition, structure, and properties of matter, as well as the changes it undergoes during chemical reactions.[2] Below is a list of chemistry-related articles in alphabetical order. Chemical compounds are listed separately at List of inorganic compounds, List of biomolecules, or List of organic compounds. The Outline of chemistry delineates different aspects of chemistry.
https://en.wikipedia.org/wiki/Index_of_chemistry_articles
This is an alphabetical list of articles pertaining specifically to civil engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of civil engineers. Accuracy and precision – American Society of Civil Engineers – Applied mechanics – Arch – Beam (structure) – Bending – Brittle – Buckling – Carbon fiber – Check dam – Classical mechanics – Composite material – Compressive strength – Computational fluid dynamics – Computer-aided design – Conservation of mass – Concrete – Corrosion – Dam – Damping ratio – Deformation – Delamination – Design – Dimensionless number – Drafting – Dynamics – Elasticity – Engineering drawing – Exploratory engineering – Factor of safety – Fatigue – Fillet – Finite element analysis – Finite element method – Fluid mechanics – Force – Friction – Fundamentals of Engineering exam – Gauge – Gauge (engineering) – Granular material – Heating and cooling systems – Hydraulics – Hydrostatics – Inclined plane – Inertia – Instrumentation – Invention – Joint – Lever – Liability – Life cycle cost analysis – Limit state design – Load transfer – Margin of safety – Mass transfer – Materials – Materials engineering – Material selection – Mechanics – Moment – Moment of inertia – Normal stress – Nozzle – Physics – Plasticity – Plastic moment – Poisson's ratio – Position vector – Pressure – Product lifecycle management – Professional engineer – Project management – Pulley – Pump – Pile foundation – Quality – Quality control – Quantity surveying – Reliability engineering – Resistive force – Reverse engineering – Rigid body – Reinforced concrete – Safety engineering – Shear force diagrams – Shear modulus – Shear strength – Shear stress – Simple machine – Simulation – Slide rule – Solid mechanics – Solid modeling – Spoolbase – Statics – Stress–strain curve – Structural failure – Student design competition – Surveying – Technical drawing – Technology – Tensile strength – Tensile stress – Theodolite – Theory of elasticity –
Toughness – Turbine – Vector – Viscosity – Vibration – Wedge – Weight transfer – Weir – Yield strength – Young's modulus
https://en.wikipedia.org/wiki/Index_of_civil_engineering_articles
Originally, the word computing was synonymous with counting and calculating, and with the science and technology of mathematical calculations. Today, "computing" means using computers and other computing machines. It includes their operation and usage, the electrical processes carried out within the computing hardware itself, and the theoretical concepts governing them (computer science). See also: List of programmers, List of computing people, List of computer scientists, List of basic computer science topics, List of terms relating to algorithms and data structures. Topics on computing include: 1.TR.6 – 100BaseVG – 100VG-AnyLAN – 10BASE-2 – 10BASE-5 – 10BASE-T – 120 reset – 1-bit computing – 16-bit computing – 16550 UART – 1NF – 1TBS – 20-GATE – 2B1D – 2B1Q – 2D – 2NF – 3-tier (computing) – 32-bit application – 32-bit computing – 320xx microprocessor – 386BSD – 3Com Corporation – 3DO – 3D computer graphics – 3GL – 3NF – 3Station – 4.2BSD – 4-bit computing – 404 error – 431A – 473L system – 486SX – 4GL – 4NF – 51-FORTH – 56 kbit/s line – 5ESS switch – 5NF – 5th Glove – 6.001 – 64-bit computing – 680x0 – 6x86 – 8-bit clean – 8-bit computing – 8.3 filename – 80x86 – 82430FX – 82430HX – 82430MX – 82430VX – 8514 (display standard) – 8514-A – 88open – 8N1 – 8x86 – 90–90 rule – 9PAC – ABC ALGOL – ABLE – ABSET – ABSYS – Accent – Acceptance, Test Or Launch Language – Accessible Computing – Ada – Addressing mode – AIM alliance – AirPort – AIX – Algocracy – ALGOL – Algorithm – AltiVec – Amdahl's law – America Online – Amiga – AmigaE – Analysis of algorithms – AOL – APL – Apple Computer, Inc.
– Apple II – AppleScript – Array programming – Arithmetic and logical unit – ASCII – Active Server Pages – ASP.NET – Assembly language – Atari – Atlas Autocode – AutoLISP – Automaton – AWK – B (programming language) – Backus–Naur form – Basic Rate Interface (2B+D) – BASIC – Batch job – BCPL – Befunge – BeOS – Berkeley Software Distribution – BETA – Big O notation – Binary symmetric channel – Binary Synchronous Transmission – Binary numeral system – Bit – BLISS – Blu-ray – Blue screen of death – Bourne shell (sh) – Bourne-Again shell (bash) – Better Portable Graphics (BPG) – Brainfuck – Btrieve – Burrows–Abadi–Needham logic – Business computing – C++ – C# – C – Cache – Canonical LR parser – Cat (Unix) – CD-ROM – Central processing unit – Chimera – Chomsky normal form – CIH virus – Classic Mac OS – COBOL – Cocoa (software) – Code and fix – Code Red worm – ColdFusion – Colouring algorithm – COMAL – Comm (Unix) – Command line interface – Command line interpreter – COMMAND.COM – Commercial at (computing) – Commodore 1541 – Commodore 1581 – Commodore 64 – Common logarithm – Common Unix Printing System – Compact disc – Compiler – Computability theory – Computational complexity theory – Computation – Computer-aided design – Computer-aided manufacturing – Computer architecture – Computer cluster – Computer hardware – Computer monitor – Computer network – Computer numbering format – Computer programming – Computer science – Computer security – Computer software – Computer system – Computer – Computing – Context-free grammar – Context-sensitive grammar – Context-sensitive language – Control flow – Control store – Control unit – CORAL66 – CP/M – CPL – Cracking (software) – Cracking (passwords) – Cryptanalysis – Cryptography – Cybersquatting – CYK algorithm – Cyrix 6x86 – D – Data compression – Database normalization – Decidable set – Deep Blue – Desktop environment – Desktop publishing – Deterministic finite automaton – Dialer – DIBOL – Diff – Digital camera – DEC (Digital Equipment
Corporation) – Digital signal processing – Digital visual interface – Direct manipulation interface – Disk storage – Distance transform – Distance map – Distance field – Docblock – DVD – DVI (TeX) – Dvorak keyboard layout – Dylan – Earth Simulator – EBCDIC – ECMAScript (a.k.a. JavaScript) – Electronic data processing (EDP) – Enhanced Versatile Disc (EVD) – ENIAC – Enterprise Java Beans (EJB) – Entscheidungsproblem – Equality (relational operator) – Erlang – Enterprise resource planning (ERP) – ES EVM – Ethernet – Euclidean algorithm – Euphoria – Exploit (computer security) – Fast Ethernet – Federated Naming Service – Field specification – Final Cut Pro – Finite-state automaton – FireWire – First-generation language – Floating-point unit – Floppy disk – Formal language – Forth – Fortran – Fourth-generation language – Fragmentation – Free On-line Dictionary of Computing – Free Software Foundation – Free software movement – Free software – Freescale 68HC11 – Freeware – Function-level programming – Functional programming – G5 – GEM – General Algebraic Modeling System – Genie – GNU – GNU Bison – Gnutella – Graphical user interface – Graphics Device Interface – Greibach normal form – G.hn – hack (technology slang) – Hacker (computer security) – Hacker (hobbyist) – Hacker (programmer subculture) – Hacker (term) – Halting problem – Hard Drive – Haskell – HD DVD – History of computing – History of computing hardware – History of Microsoft Windows – History of operating systems – History of the graphical user interface – Hitachi 6309 – Home computer – Human–computer interaction – IA-32 – IA-64 – IBM PC – Interactive computation – IBM – iBook – iCab – iCal – Icon – iDVD – IEEE 802.2 – IEEE 802.3 – IEEE floating-point standard – iMac – Image processing – iMovie – Indentation style – Inform – Instruction register – Intel 8008 – Intel 80186 – Intel 80188 – Intel 80386 – Intel 80486SX – Intel 80486 – Intel 8048 – Intel 8051 – Intel 8080 – Intel 8086 – Intel 80x86 – Intel – INTERCAL –
International Electrotechnical Commission – Internet Explorer – Internet – iPhoto – iPod – iResQ – Irreversible circuit – iSync – iTunes – J (programming language) – Java Platform, Enterprise Edition – Java Platform, Micro Edition – Java Platform, Standard Edition – Java API – Java – Java virtual machine (JVM) – JavaScript (standardized as ECMAScript) – JPEG – K&R – KDE – Kilobyte – Kleene star – Klez – KRYPTON – LALR parser – Lambda calculus – Lasso – LaTeX – Leet – Legal aspects of computing – Lex – LibreOffice – Limbo – Linked list – Linux – Lisp – List of IBM products – List of Intel microprocessors – List of programming languages – List of operating systems – List of Soviet computer systems – LL parser – Logic programming – Logo – Lotus 1-2-3 – LR parser – Lua – Lynx language – Lynx browser – m4 – macOS Server – macOS – Mac – MAD – Mainframe computer – Malware – Mary – Mealy machine – Megabyte – Melissa worm – Mercury – Mesa – Microcode – Microprocessor – Microprogram – Microsequencer – Microsoft Windows – Microsoft – MIPS architecture – Miranda – ML – MMC – MMU – MMX – Mobile Trin – Modula – MOO – Moore's Law – Moore machine – Morris worm – MOS Technology 6502 – MOS Technology 650x – MOS Technology 6510 – Motorola 68000 – Motorola 6800 – Motorola 68020 – Motorola 68030 – Motorola 68040 – Motorola 68060 – Motorola 6809 – Motorola 680x0 – Motorola 68LC040 – Motorola 88000 – Mozilla – MPEG – MS-DOS – Multics – Multiprocessing – MUMPS – .NET – NetBSD – Netlib – Netscape Navigator – NeXT, Inc.
– Nial – Nybble – Ninety–ninety rule – Non-uniform memory access – Nondeterministic finite automaton – Oberon – Objective-C – object – OCaml – occam – OmniWeb – One True Brace Style – OpenBSD – Open source – Open Source Initiative – OpenVMS – Opera (web browser) – Operating system advocacy – Operating system – PA-RISC – Page description language – Pancake sorting – Parallax Propeller – Parallel computing – Parser (language) – Parsing (technique) – Partial function – Pascal – PDP – Peer-to-peer network – Perl – Personal computer – PHP – PILOT – PL/I – Pointer – Poplog – Portable Document Format (PDF) – Poser – PostScript – PowerBook – PowerPC – PowerPC G4 – Prefix grammar – Preprocessor – Primitive recursive function – Programming language – Prolog – PSPACE-complete – Pulse-code modulation (PCM) – Pushdown automaton – Python – QuarkXPress – QuickTime – QWERTY – R (programming language) – RAM (random-access memory) – RAM drive – Random access – RascalMPL – Ratfor – RCA 1802 – Read-only memory (ROM) – REBOL – Recovery-oriented computing – Recursive descent parser – Recursion (computer science) – Recursive set – Recursively enumerable language – Recursively enumerable set – Reference (computer science) – Referential transparency – Register – Regular expression – Regular grammar – Regular language – RPG – Retrocomputing – REXX – RFC – RISC – RS/6000 – Ruby – Safari (web browser) – SAIL – Script kiddie – Scripting language – SCSI – Second-generation programming language – Secure Sockets Layer – sed – Self (or SELF) – Semaphore (programming) – Sequential access – SETL – Shareware – Shell script – Shellcode – SIMD – Simula – Sircam – Slide rule – SLIP – SLR parser – Smalltalk – Server Message Block – SMBus – SMIL (computer) – Smiley – SNOBOL – Software engineering – SONET – Space-cadet keyboard – SPARC International – Specialist (computer) – SPITBOL – SQL – SQL slammer worm – SR – SSL – Service-oriented architecture – S/SL – Stale pointer bug – Standard ML (or SML) – Stateless
server – Stepping level – Structured programming – Subject-oriented programming – Subnetwork – Supercomputer – Swap space – Symbolic mathematics – Symlink – Symmetric multiprocessing – Syntactic sugar – SyQuest Technology – SYSKEY – System board – System programming language – System R (IBM) – System X (supercomputer) – TADS – Tcl – TECO (text editor) – Text editor – TeX – Third-generation language – Timeline of computing – Timeline of computing 1950–1979 – Timeline of computing 1980–1989 – Timeline of computing 1990–1999 – Timeline of computing hardware before 1950 (2400 BC–1949) – Tk – TPU – Trac – Transparency (computing) – Trin II – Trin VX – Turing machine – Turing – UAT – Unicode – Unicon – Unix – Unix shell – UNIX System V – Unlambda – USB – Unreachable memory – Var'aq – VAX – VBScript – Vector processor – Ventura Publisher – Very-large-scale integration – Video editing – Virtual memory – Visual Basic (classic) – Visual Basic .NET – Visual FoxPro – Von Neumann architecture – WD16 – Web 2.0 – Web browser – Western Design Center – The WELL – Western Design Center 65C02 – Western Design Center 65816 – Whitespace – Wiki – Window manager – Windows 1.0 – Windows 2000 – Windows 95 – Windows Me – Windows NT – Windows XP – Windows 7 – Word processor – World Wide Web – WYSIWYG – X Window System – X86 – Xmouse – Yacc – YaST – Yet another – Yorick – Z notation – Z shell – Zilog Z80 – Zooming User Interface – ZX80 – ZX81 – ZX Spectrum
https://en.wikipedia.org/wiki/Index_of_computing_articles
This page is a list of construction topics. Abated - Abrasive blasting - AC power plugs and sockets - Access mat - Accrington brick - Accropode - Acid brick - Acoustic plaster - Active daylighting - Adaptive reuse - Aerial crane - Aerosol paint - Aggregate base - Agile construction - Akmon - Alternative natural materials - Anchorage in reinforced concrete - Angle grinder - Arc welding - Artificial stone - Asbestos cement - Asbestos insulating board - Asbestos shingle - Asphalt concrete - Asphalt roll roofing - Autoclaved aerated concrete - Autonomous building - Azulejo - Australian Construction Contracts - Axe - Backhoe - Balloon framing - Bamboo construction - Bamboo-mud wall - Bandsaw - Banksman - Barrel roof - Baseboard - Basement waterproofing - Batten - Batter board - Belt sander - Bill of quantities - Bioasphalt - Biocidal natural building material - Bituminous waterproofing - Block paving - Blowtorch - Board roof - Bochka roof - Bond beam - Boulder wall - Bowen Construction - Box crib - Breaker - Brettstapel - Brick - Brick clamp - Brick hod - Bricklayer - Brickwork - Bughole - Builder's risk insurance - Builders hardware - Builders' rites - Building - Building automation - Building code - Building construction - Building control body - Building cooperative - Building design - Building diagnostics - Building engineer - Building envelope - Building estimator - Building implosion - Building information modeling - Building information modeling in green building - Building insulation - Building insulation materials - Building-integrated photovoltaics - Building life cycle - Building maintenance unit - Building material - Building officials - Building performance - Building performance simulation - Building regulations approval - Building regulations in the United Kingdom - Building science - Building services engineering - Building typology - Bull's eye level - Bulldozer - Bundwerk - Bush hammer - Butterfly roof - Calcium aluminate cements - Camber beam -
Carpenter's axe - Carpentry - Cast in place concrete - Cast stone - Caulk - Cavity wall insulation - Cellulose insulation - Cement - Cement board - Cement-bonded wood fiber - Cement clinker - Cement kiln - Cement mill - Cement render - Cement tile - Cementing equipment - Cementitious foam insulation - Cenocell - Central heating - Centring - Ceramic building material - Ceramic tile cutter - Chaska brick - Chief Construction Adviser to UK Government - Chimney - Circular saw - Civil engineer - Civil engineering - Civil estimator - Cladding (construction) - Clerk of the Works - Climate-adaptive building shell - Climbing formwork - Clinker brick - Close studding - Coastal engineering - Coating - Cold-formed steel - Collar beam - Collyweston stone slate - Compactor - Complex Projects Contract - Composite material - Composting toilet - Compressed earth block - Computer-aided design - Concrete - Concrete degradation - Concrete densifier - Concrete finisher - Concrete float - Concrete fracture analysis - Concrete grinder - Concrete hinge - Concrete leveling - Concrete mixer - Concrete masonry unit - Concrete moisture meter - Concrete plant - Concrete pump - Concrete recycling - Concrete saw - Concrete sealer - Concrete ship - Concrete slab - Concrete slump test - Conical roof - Constructability - Constructed wetland - Constructing Excellence - Construction - Construction Alliance - Construction and renovation fires - Construction bidding - Construction buyer - Construction collaboration technology - Construction communication - Construction contract - Construction delay - Construction engineering - Construction equipment theft - Construction estimating software - Construction foreman - Construction industry of India - Construction industry of Iran - Construction industry of Japan - Construction industry of Romania - Construction industry of the United Kingdom - Construction law - Constructionline - Construction loan - Construction management - Construction paper - 
Construction partnering - Construction Photography - Construction Research and Innovation Strategy Panel - Construction site safety - Construction trailer - Construction waste - Construction worker - Cool pavement - Copper cladding - Cordwood construction - Core-and-veneer - Corn construction - Cornerstone - Corrosion fatigue - Corrugated galvanised iron - Cost engineering - Cost overrun - Cover meter - Crane - Crane vessel - Crawl space - Crawler excavator - Cream City brick - Creep and shrinkage of concrete - Cross bracing - Cross-laminated timber - Custom home - Cutting tool - Damp proofing - Deck - Deconstruction - Decorative concrete - Decorative laminate - Decorative stones - Deep foundation - Deep plan - Demolition - Demolition waste - Design–bid–build - Design–build - Detailed engineering - Diagrid - Diamond grinding - Diamond grinding of pavement - Die grinder - Dimensional lumber - Directional boring - Displacement ventilation - Distribution board - Dolos - Domestic roof construction - Double envelope house - Double tee - Dragon beam - Drain (plumbing) - Drainage - Drifter drill - Drill - Drilling and blasting - Driven to refusal - Dropped ceiling - Dry mortar production line - Drywall - Drywall mechanic - Ducrete - Dump truck - Dumper - Duplex - Dutch brick - Dutch gable roof - Dutch roof tiles - Dwang - Early skyscrapers - Earthbag construction - Earthquake engineering - Earthquake-resistant structures - Earthquake simulation - Eco-cement - Egyptian pyramid construction techniques - Electrical engineer - Electrical wiring - Electrician - Electric resistance welding - Elemental cost planning - Elevator mechanic - Encasement - Encaustic tile - Endurance time method - Energetically modified cement - Engineering - Engineered cementitious composite - Engineering brick - Engineering, procurement, and construction - Enviroboard - Environmental impact of concrete - Equivalent Concrete Performance Concept - Erosion control - Eternit - Excavator - Expanded clay
aggregate - Expanded polystyrene concrete - External render - Exterior insulation finishing system - External wall insulation - Falsework - Facade - Facade engineering - Facadism - Facility condition assessment - Fareham red brick - Fast-track construction - Fastener - Faux painting - Fédération Française du Bâtiment - Ferrocement - Fiberboard - Fiber cement siding - Fiberglass - Fiberglass sheet laminating - Fiber-reinforced composite - Fiber-reinforced concrete - Fiber roll - Fibre cement - Fibre-reinforced plastic - Filigree concrete - Fill trestle - Fire brick - Fire door - Fire protection (Active fire protection / Passive fire protection) - Fire Protection Engineering - Firestopping - Fireproofing - Fire safety - Fire sprinkler system - First fix and second fix - Flashing - Flat roof - Floating raft system - Floor plan - Flux-cored arc welding - Fly ash brick - Foam concrete - Foam glass - Forge welding - Formstone - Formwork - Foundation - Framer - Framing - Frost damage - Furring - Gable roof - Gambrel - Gas metal arc welding - Geofoam - Geologic preliminary investigation - GigaCrete - Girt - Glass brick - Glass fiber reinforced concrete - Glazier - Glazing - Glued laminated timber - Grade beam - Grader - Grating - Green building - Green building and wood - Green building in Germany - Green (certification) - Green roof - Green wall - Groundbreaking - Ground reinforcement - Grout - Grouted roof - Guastavino tile - Gypsum block - Gypsum concrete - Hammer - Hammerbeam roof - Hammer drill - Hard hat - Harling - Harvard brick - Heat pump - Heavy equipment - Heavy equipment operator - Hempcrete - Herodotus Machine - Herringbone pattern - High-performance fiber-reinforced cementitious composites - High-rise building - High-visibility clothing - History of construction - History of structural engineering - History of the world's tallest buildings - Heating, ventilation, and air-conditioning - Hoisting - Home construction - Home improvement - Home wiring - Hot-melt
adhesive - House - House painter and decorator - House raising - Housewrap - Hurricane-proof building - Hybrid masonry - Hydrodemolition - Hydrophobic concrete - Hypertufa - I-beam - I-joist - Iberian paleochristian decorated tile - Illegal construction - Imbrex and tegula - Impact wrench - Imperial roof decoration - Industrialization of construction - Insulated glazing - Insulated siding - Insulating concrete form - Insulation materials - Integrated framing assembly - Integrated project delivery - Interior protection - International Building Code - Ironworker - Jackhammer - Jack post - Japanese carpentry - Jettying - Jigsaw - Joinery - Joint - Joint compound - Johnson bar - Knee wall - Knockdown texture - Laborer - Ladder - Lakhori bricks - Laminate panel - Lath and plaster - Laser level - Launching gantry - Lean construction - Level luffing crane - Lewis (lifting appliance) - Lift slab construction - Lifting equipment - Lighting - Light tower - Lightening holes - Lime mortar - Line of thrust - Live bottom trailer - Living building material - Load-bearing wall - Loader - Log building - London stock brick - Low-energy building techniques - Low-energy house - Low-rise building - Lump sum contract - Lunarcrete - Lustron house - Mansard roof - Marbleizing - Masonry - Masonry trowel - Masonry veneer - Mass concrete - Master builder - Material efficiency - Material passport - Mathematical tile - Mechanical connections - Mechanical, electrical, and plumbing - Mechanics lien - Medieval letter tile - Medium-density fibreboard - Megaproject - Megastructure - Metal profiles - Microtunneling - Middle-third rule - Miller Act - Millwork - Millwright - Mobile crane - Modular addition - Modular building - Moiré tell-tale - Moling - Moment-resisting frame - Monocrete construction - Mono-pitched roof - Mortar - Mudbrick - Mudcrete - Multi-tool - Nail gun - Nanak Shahi bricks - Nanoconcrete - NEC Engineering and Construction Contract - New Austrian tunnelling method - New-construction building
commissioning - Nibbler - Non-shrink grout - Occupancy - Offshore construction - Off-site construction - Operational bill - Opus africanum - Opus albarium - Opus craticum - Opus gallicum - Opus incertum - Opus isodomum - Opus latericium - Opus mixtum - Opus quadratum - Opus reticulatum - Opus spicatum - Opus vittatum - Oriented strand board - Oxy-fuel welding and cutting - Painter and decorator - Painterwork - Panelling - Pantile - Papercrete - Parge coat - Particle board - Passive daylighting - Passive house - Passive survivability - Pavement - Pavement engineering - Pavement milling - Paver base - Penetrant (mechanical, electrical, or structural) - Performance bond - Permeable paving - Pierrotage - Pile cap - Pile driver - Pile splice - Pipefitter - Pipelayer - Planetary surface construction - Plank (wood) - Planning permission - Planning permission in the United Kingdom - Plasma arc welding - Plasterer - Plasterwork - Plastic lumber - Plot plan - Plug and feather - Plumb bob - Plumber - Plumbing - Plumbing drawing - Pneumatic tool - Pole building framing - Polished concrete - Polychrome brickwork - Polymer concrete - Porch collapse - Portable building - Portland cement - Portland stone - Portuguese pavement - Post in ground - Poteaux-sur-sol - Powder coating - Power concrete screed - Power shovel - Power tool - Power trowel - Precast concrete - Pre-construction services - Pre-engineered building - Prefabricated building - Prefabrication - Prestressed concrete - Prestressed structure - Primer (paint) - Project agreement (Canada) - Project delivery method - Project management - Properties of concrete - Punch list - Purlin - Quadruple glazing - Quantity surveyor - Quantity take-off - Quarry tile - Quarter minus - R-value (insulation) - Radial arm saw - Radiant barrier - Radiator reflector - Rafter - Rainscreen - Raised floor - RAL colour standard - RAL colors - Rammed earth - Random orbital sander - Rapid construction - Ready-mix concrete - Real estate - Rebar - Rebar
detailing - Rebar spacer - Reciprocal frame - Reciprocating saw - Red List building materials - Red rosin paper - Redevelopment - Reed mat (plastering) - Reema construction - Reglet - Reinforced concrete - Reinforced concrete structures durability - Relocatable buildings - Repointing - Resilience (engineering and construction) - Retentions in the British construction industry - Rice-hull bagwall construction - Rigger - Rigid panel - Ring crane - Rivet gun - Road - Road surface - Roller-compacted concrete - Roman cement - Roof - Roof coating - Roof edge protection - Roofer - Roof shingle - Roof tiles - Room air distribution - Rosendale cement - Rotary hammer - Roughcast - Rubberized asphalt - Rubble - Rubble trench foundation - Rubblization - Ruin value - Saddle roof - Salt-concrete - Saltillo tile - Sander - Sandhog - Sandjacking - Sandwich panel - Sarking - Saw-tooth roof - Sawyer - Scabbling - Scaffolding - Schmidt hammer - Screed - Screw piles - Scrim and sarking - Sediment control - Segregation in concrete - Self-build - Self-cleaning floor - Self-consolidating concrete - Self-framing metal buildings - Self-leveling concrete - Septic tank - Serviceability - Sett - Settlement (structural) - Sewage treatment - Shallow foundation - Shear - Shear wall - Sheet metal - Shelf angle - Shielded metal arc welding - Shiplap - Shop drawing - Shoring - Shotcrete - Shovel - Shovel ready - Sick building syndrome - Siding - Sill plate - Site survey - Skyscraper - Skyscraper design and construction - Slate industry in Wales - Slater - Sledgehammer - Slipform stonemasonry - Slip forming - Smalley - Snecked masonry - Soft story building - Soil cement - Solid ground floor - Sorel cement - Spackling paste - Spirit level - Split-level home - Spray painting - Stack effect - Staff - Staffordshire blue brick - Staggered truss system - Staircase jig - Stair tread - Stairs - Stamped asphalt - Stamped concrete - Steam shovel - Steeplejack - Sticky rice mortar - Stonemason's hammer - Storey
pole - Storm drain - Storm window - Steel building - Steel fixer - Steel frame - Steel plate construction - Stone carving - Stone sealer - Stone veneer - Storey - Strand jack - Strap footing - Straw-bale construction - Strength of materials - Strongback - Structural building components - Structural channel - Structural clay tile - Structural drawing - Structural engineering - Structural insulated panel - Structural integrity and failure - Structural material - Structural robustness - Structural steel - Structure relocation - Strut channel - Stucco - Submerged arc welding - Submittals - Subsidence - Substructure - Suction excavator - Suicide bidding - Sulfur concrete - Superadobe - Superinsulation - Superintendent - Surfaced block - Survey stakes - Sustainability in construction - Sustainable flooring - Sustainable refurbishment - T-beam - Tabby concrete - Table saw - Tar paper - Teardown - Telescopic handler - Temperley transporter - Temporary fencing - Tented roof - Terraced house - Tetrapod - Textile-reinforced concrete - Thatching - Thermal bridge - Thermal insulation - Thinset - Thin-shell structure - Three-decker - Tie - Tie down hardware - Tile - Tilt slab - Tilt up - Timber - Timber framing - Timber framing tools - Timber pilings - Timber recycling - Timber roof truss - Tin ceiling - Tiocem - Toe board - Topping out - Townhouse - Tracked loader - Traditional Korean roof construction - Transite - Treadwheel crane - Trench shield - Trencher - Trenchless technology - Truss - Tube and clamp scaffold - Tuckpointing - Tunnel boring machine - Tunnel construction - Tunnel hole-through - Tunnel rock recycling - Twig work - Types of concrete - Umarell - Uncertainties in building design and building energy assessment - Underfloor air distribution - Underground construction - Underpinning - Unfinished building - Uniclass - Uniformat - Verify in field - Vertical damp proof barrier - Vibro stone column - Vinyl composition tile - Vinyl siding - Virtual design and construction -
Vitrified tile - Voided biaxial slab - Volumetric concrete mixer - Waffle slab - Walking excavator - Wall - Wall chaser - Wall footing - Wall plan - Wall stud - Water–cement ratio - Water heating - Water level - Waterproofing - Wattle and daub - Wearing course - Weathering steel - Weatherization - Weld access hole - Welded wire mesh - Welder - Welding - Welding power supply - Wheel tractor-scraper - White Card - Window capping - Window insulation film - Window well cover - Wiring closet - Wood-plastic composite - Wood shingle - Wool insulation - Wrecking ball - Wrought iron - Xbloc - Zellij - Zero-energy building - Zome
https://en.wikipedia.org/wiki/Index_of_construction_articles
This is a list of topics in evolutionary biology. abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – anagenesis – anti-predator adaptation – applications of evolution – apomorphy – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism – Henry Walter Bates – biological organisation – Black Queen hypothesis – Brassica oleracea – breed – Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference – Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – disruptive selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse – E.
coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance – Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf – evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evolution – exaptation – extinction – Joe Felsenstein – R.A. Fisher – Fisher's reproductive value – fitness – fitness landscape – fixation index (FST) – fluctuating selection – E.B. Ford – fossil – frequency-dependent selection – Galápagos Islands – gene – gene-centric view of evolution – gene duplication – gene flow – gene pool – genetic drift – genetic hitchhiking – genetic recombination – genetic variation – genotype – gene–environment correlation – gene–environment interaction – genotype–phenotype distinction – Stephen Jay Gould – gradualism – Peter and Rosemary Grant – group selection – J. B. S. Haldane – W. D. Hamilton – Hardy–Weinberg principle – heredity – hierarchy of life – history of evolutionary thought – history of speciation – homologous chromosomes – homology (biology) – horizontal gene transfer – human evolution – human evolutionary genetics – human vestigiality – Julian Huxley – Thomas Henry Huxley – inclusive fitness – insect evolution – Invertebrate paleontology (a.k.a.
invertebrate paleobiology or paleozoology) karyotype – kin selection – Motoo Kimura – koinophilia Jean-Baptiste Lamarck – Lamarckism – landrace – language – last universal common ancestor – level of support for evolution – Richard Lewontin – list of gene families – list of human evolution fossils – life-history theory – Wen-Hsiung Li – living fossils – Charles Lyell macroevolution – macromutation – The Major Transitions in Evolution – maladaptation – The Malay Archipelago – mass extinctions – mating systems – John Maynard Smith – Ernst Mayr – Gregor Mendel – memetics – Mendelian inheritance – Mesozoic–Cenozoic radiation – microevolution – micropaleontology (a.k.a. micropaleobiology) – Miller–Urey experiment – mimicry – Mitochondrial Eve – modern evolutionary synthesis – molecular clock – molecular evolution – molecular phylogeny – molecular systematics – mosaic evolution – most recent common ancestor – Hermann Joseph Muller – Muller's ratchet – mutation – mutational meltdown natural selection – natural genetic engineering – nature versus nurture – negative selection – Neo-Darwinism – neutral theory of molecular evolution – Baron Franz Nopcsa – "Nothing in Biology Makes Sense Except in the Light of Evolution" Susumu Ohno – Aleksandr Oparin – On the Origin of Species – Ordovician radiation – origin of birds – origin of language – orthologous genes (orthologs) paleoanthropology – paleobiology – paleobotany – paleontology – paleozoology (of vertebrates – of invertebrates) – parallel evolution – paralogous genes (paralogs) – parapatric speciation – paraphyletic – particulate inheritance – peppered moth – peppered moth evolution – peripatric speciation – phenotype – phylogenetics – phylogeny – phylogenetic tree – Pikaia – Plant evolution – polymorphism (biology) – population – population bottleneck – population dynamics – population genetics – preadaptation – prehistoric archaeology – Principles of Geology – George R.
Price – Price equation – punctuated equilibrium quantum evolution – quasispecies model race (biology) – Red Queen hypothesis – recapitulation theory – recent African origin of modern humans – recombination – Bernhard Rensch – reinforcement (speciation) – reproductive coevolution in Ficus – reproductive isolation – r/K selection theory selection – selective breeding – selfish DNA – The Selfish Gene – sexual selection – signalling theory – sociobiology – social effects of evolutionary theory – species – speciation – species flock – sperm competition – stabilizing selection – strain (biology) – subspecies – survival of the fittest – symbiogenesis – sympatric speciation – synapomorphy – systematics – George Gaylord Simpson – G. Ledyard Stebbins Tiktaalik – timeline of evolution – trait (biological) – transgressive phenotype – transitional fossil – transposon – tree of life – triangle of U unit of selection variety (botany) – vertebrate paleontology (a.k.a. vertebrate paleobiology or paleozoology) – viral evolution – The Voyage of the Beagle – vestigiality Alfred Russel Wallace – Wallace effect – Wallace Line – Wallacea – George C. Williams (biologist) – Edward O. Wilson – Sewall Wright Y-chromosomal Adam – Y-DNA haplogroups by ethnic groups
https://en.wikipedia.org/wiki/Index_of_evolutionary_biology_articles
Genetics (from Ancient Greek γενετικός genetikos, "genitive", in turn from γένεσις genesis, "origin"[1][2][3]), a discipline of biology, is the science of heredity and variation in living organisms.[4] Articles related to genetics, arranged alphabetically, include:
https://en.wikipedia.org/wiki/Index_of_genetics_articles
This is a list of home automation topics on Wikipedia. Home automation is the residential extension of building automation: the automation of the home, housework, or household activity. It may include centralized control of lighting, HVAC (heating, ventilation, and air conditioning), appliances, and security locks on gates and doors, among other systems, to provide improved convenience, comfort, energy efficiency, and security.
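The "centralized control" idea described above can be sketched as a toy controller that registers independent subsystems (lighting, HVAC, locks) behind one interface. This is a minimal illustration only; the class names, device names, and methods are invented for this sketch and do not correspond to any real home-automation API.

```python
class Device:
    """A hypothetical on/off household device (light, lock, HVAC unit)."""

    def __init__(self, name: str):
        self.name = name
        self.on = False

    def set_state(self, on: bool) -> None:
        self.on = on


class HomeController:
    """One central point of control for many registered subsystems."""

    def __init__(self):
        self._devices: dict[str, Device] = {}

    def register(self, device: Device) -> None:
        self._devices[device.name] = device

    def set_state(self, name: str, on: bool) -> None:
        # Centralized control: every device is addressed through the hub.
        self._devices[name].set_state(on)

    def status(self) -> dict[str, bool]:
        return {name: d.on for name, d in self._devices.items()}


hub = HomeController()
for name in ("porch_light", "hvac", "front_door_lock"):
    hub.register(Device(name))

hub.set_state("porch_light", True)
print(hub.status())
# prints {'porch_light': True, 'hvac': False, 'front_door_lock': False}
```

A real system would add scheduling, sensors, and a network protocol on top of this single point of control; the sketch only shows why centralization simplifies coordinating otherwise separate household systems.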
https://en.wikipedia.org/wiki/Index_of_home_automation_articles
This is a list of information theory topics .
https://en.wikipedia.org/wiki/Index_of_information_theory_articles
This is an alphabetical list of articles pertaining specifically to mechanical engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers. Acceleration – Accuracy and precision – Actual mechanical advantage – Aerodynamics – Agitator (device) – Air handler – Air conditioner – Air preheater – Allowance – American Machinists' Handbook – American Society of Mechanical Engineers – Ampere – Applied mechanics – Antifriction – Archimedes' screw – Artificial intelligence – Automaton clock – Automobile – Automotive engineering – Axle – Air compressor Backlash – Balancing – Beale Number – Bearing – Belt (mechanical) – Bending – Biomechatronics – Bogie – Brittle – Buckling – Bus – Bushing – Boilers & boiler systems – BIW CAD – CAM – CAID – Calculator – Calculus – Car handling – Carbon fiber – Classical mechanics – Clean room design – Clock – Clutch – CNC – Coefficient of thermal expansion – Coil spring – Combustion – Composite material – Compression ratio – Compressive strength – Computational fluid dynamics – Computer – Computer-aided design – Computer-aided industrial design – Computer-numerically controlled – Conservation of mass – Constant-velocity joint – Constraint – Continuum mechanics – Control theory – Corrosion – Cotter pin – Crankshaft – Cybernetics – Damping ratio – Deformation (engineering) – Delamination – Design – Diesel engine – Differential – Dimensionless number – Diode – Diode laser – Drafting – Drifting – Driveshaft – Dynamics – Design for manufacturability for CNC machining – Elasticity – Elasticity tensor – Electric motor – Electrical engineering – Electrical circuit – Electrical network – Electromagnetism – Electronic circuit – Electronics – Energy – Engine – Engineering – Engineering cybernetics – Engineering drawing – Engineering economics – Engineering ethics – Engineering management – Engineering society – Exploratory engineering – Fits and tolerances – Factor of safety – False precision – Fast fracture – Fatigue – Fillet – Finite element analysis – Fluid mechanics – Flywheel – Force – Force density – Four-stroke cycle – Four wheel drive – Friction – Front wheel drive – Fundamentals of Engineering exam – Fusible plug – Fused deposition modelling – Forging Gas compressor – Gauge – Gauge (engineering) – Gauge, rail – Gear – Gear coupling – Gear ratio – Granular material – Heat engine – Heat transfer – Heating and cooling systems – Hinge – Hooke's law – Hotchkiss drive – HVAC – Hydraulics – Hydrostatics – Ideal machine – Ideal mechanical advantage – Imperial College London – Inclined plane – Independent suspension – Inductor – Industrial engineering – Inertia – Institution of Mechanical Engineers – Instrumentation – Integrated circuit – Invention – Joule – Junction Kelvin – Kinematic determinacy – Kinematics – Laser – Leaf spring – Lever – Liability – Life cycle cost analysis – Limit state design – Live axle – Load transfer – Locomotive – Lubrication – Machine – Machine learning – Magnetic circuit – Margin of safety – Mass transfer – Materials – Materials engineering – Material selection – Mechanical advantage – Mechanical Biological Treatment – Mechanical efficiency – Mechanical engineering – Mechanical equilibrium – Mechanical work – Mechanics – Mechanochemistry – Mechanosynthesis – Mechatronics – Micromachinery – Microprocessor – Microtechnology – Modulus of rigidity – Molecular assembler – Molecular nanotechnology – Moment – Moment of inertia – Motorcycle – Multi-link suspension Nanotechnology – Normal stress – Nozzle – Orientation – Overdrive – Oversteer – Pascal (unit) – Physics – Pinion – Piston – Pitch drop experiment – Plasma processing – Plasticity – Pneumatics – Poisson's ratio – Position vector – Potential difference – Power – Power stroke – Pressure – Prime mover – Process control – Product lifecycle management – Professional engineer – Project management – Pulley – Pump – Quality – Quality control – Quality assurance Rack and pinion – Rack railway – Railcar – Rail gauge – Railroad car – Railroad switch – Rail tracks – Reaction kinetics – Rear wheel drive – Refrigeration – Reliability engineering – Relief valve – RepRap Project – Resistive force – Resistor – Reverse engineering – Rheology – Rigid body – Robotics – Roller chain – Rolling – Rotordynamics – Safety engineering – Screw theory – Seal – Semiconductor – Series and parallel circuits – Shear force diagrams – Shear pin – Shear strength – Shear stress – Simple machine – Simulation – Six-stroke engine – Slide rule – Society of Automotive Engineers – Solid mechanics – Solid modeling – Sprung mass – Statics – Steering – Steam systems – Stress–strain curve – Structural failure – Student design competition – Surveying – Suspension – Switch – Technical drawing – Technology – Tensile strength – Tensile stress – Testing Adjusting Balancing – Theory of elasticity – Thermodynamics – Toe – Torque – Torsion beam suspension – Torsion spring – Toughness – Tramway track – Transmission – Truck – Truck (railway) – Turbine – Tribology – Touch screen – Tear – Tire manufacturing Understeer – Unibody – Unsprung weight – Verification and validation – Valve – Vector – Vertical strength – Viscosity – Volt – Vibration – Velocity diagrams – Weapon – Wear – Wedge – Weight transfer – Wheel – Wheel and axle – Wheelset – X-bar charts Yield (strength) – Young's modulus – Zeroth law of thermodynamics
https://en.wikipedia.org/wiki/Index_of_mechanical_engineering_articles
This is a list of topics in molecular biology . See also index of biochemistry articles . 2-amino-4-deoxychorismate dehydrogenase - 2-dehydropantolactone reductase (B-specific) - 2-methylacyl-CoA dehydrogenase - 2-nitropropane dioxygenase - 2-oxobutyrate synthase - (2,3-dihydroxybenzoyl)adenylate synthase - 2,4-Dihydroxy-1,4-benzoxazin-3-one-glucoside dioxygenase - 2010107G12Rik - 27-hydroxycholesterol 7alpha-monooxygenase - 3' end - 3' flanking region - 3-hydroxy-2-methylpyridinecarboxylate dioxygenase - 3-Ketosteroid 9alpha-monooxygenase - 3-oxoacyl-(acyl-carrier-protein) reductase (NADH) - (3,5-dihydroxyphenyl)acetyl-CoA 1,2-dioxygenase - 3(or 17)a-hydroxysteroid dehydrogenase - 3110001I22Rik - 3alpha-hydroxyglycyrrhetinate dehydrogenase - 4932414N04Rik - 3alpha-hydroxysteroid dehydrogenase (A-specific) - 3alpha,7alpha,12alpha-trihydroxy-5beta-cholestanoyl-CoA 24-hydroxylase - 3alpha,7alpha,12alpha-trihydroxycholestan-26-al 26-oxidoreductase - 4-Cresol dehydrogenase (hydroxylating) - 4-Hydroxycyclohexanecarboxylate dehydrogenase - 4-hydroxyphenylacetaldehyde oxime monooxygenase - 4-hydroxyphenylpyruvate oxidase - 4-Nitrophenol 4-monooxygenase - 4933425L06Rik - 5' end - 5' flanking region - 5-pyridoxate dioxygenase - 6-endo-hydroxycineole dehydrogenase - 7-deoxyloganin 7-hydroxylase - 7beta-hydroxysteroid dehydrogenase (NADP+) - 8-oxocoformycin reductase - 12beta-hydroxysteroid dehydrogenase - 25-hydroxycholesterol 7α-hydroxylase - abietadiene hydroxylase - acido-1 RNA motif - acrylamide gels - act 1 adaptor protein - actino-ugpB RNA motif - actinomyces-1 RNA motif - adenine - adenosine deaminase deficiency - adenovirus - adenylyl-(glutamate—ammonia ligase) hydrolase - agarose gel electrophoresis - agarose gel - akaryocyte - Alagille syndrome - alkaline lysis - allele - amino acids - amino terminus - amp resistance - amplification - amplicon - anchor sequence - animal model - anneal - anthranilate adenylyltransferase - anti-sense strand - antibiotic resistance - 
antibody - antisense - antisense strand - AP-1 site - apo-beta-carotenoid-14',13'-dioxygenase - apoptosis - apovitellenin-1 - archease - arenicin - ArgJ protein family - ascorbate 2,3-dioxygenase - assembled epitope - ataxia-telangiectasia - ATG or AUG - ATP cone - Atrial septal defect 1 - autoimmune lymphoproliferative syndrome - autoradiography - autosomal dominant - autosome - avidin - B3/B4 tRNA-binding domain - B5 protein domain - BAC - back mutation - bacteria - bacterial artificial chromosome - bacteriophage - bacteriophage lambda - bacteriophage scaffolding proteins - band shift assay - base - base pair - benzoyl-CoA 2,3-dioxygenase - benzyl benzoate/disulfiram - benzyl-2-methyl-hydroxybutyrate dehydrogenase - beta-carotene 3-hydroxylase - beta-cyclopiazonate dehydrogenase - beta-glucan-transporting ATPase - beta2-adaptin C-terminal domain - binding site - biological organisation - biological process - Biomolecular gradient - Biomolecule Stretching Database - biotin - birth defect - blotting - blunt end - bone marrow transplantation - box - BP - BRCA1 - BRCA2 - Brix (database) - BSD domain - BURP domain - C terminus - Can f 1 - cancer - candidate gene - Canonical sequence - cap - cap site - carbon-monoxide dehydrogenase (cytochrome b-561) - carboxyl terminus - carcinoma - carnitine dehydratase - carrier - carveol dehydrogenase - Catalog of MCA Control Patterns - CAT assay - CAT RNA-binding domain - catalase-related immune-responsive domain - CCAAT box - Cd2+-exporting ATPase - cDNA - cDNA clone - cDNA library - CDP-acylglycerol O-arachidonoyltransferase - cell - centimorgan - centromere - chain terminator - channel-conductance-controlling ATPase - chaperone protein - chlordecone reductase - chloroplast protein-transporting ATPase - cholestanetriol 26-monooxygenase - cholesterol 7alpha-monooxygenase - chromosome - chromosomal translocation - chromosome walking - CIROP gene - CIS - cistron - clone (genetics) - clone (noun) - clone (verb) - cloning - CmERG1 - 
coding sequence - coding strand - codon - codon usage bias - competent - complementary - conformational epitope - congenital - consensus sequence - conservative substitution - conserved - contig - coproporphyrinogen dehydrogenase - cortisone alpha-reductase - cosmid - costunolide synthase - CpG - craniosynostosis - crp domain - Cu2+-exporting ATPase - cyclodeaminase domain - cyclohexanol dehydrogenase - cystic fibrosis - cytogenetic map - cytosine - D-arabinitol 2-dehydrogenase - D-arabinose 1-dehydrogenase (NAD(P)+) - database search - degeneracy (biology) - deletion - denaturation - denaturing gel - deoxyribonuclease (DNase) - deoxyribonucleic acid - deoxyribonucleotide - deoxyuridine phosphorylase - diabetes mellitus - dideoxy sequencing - dideoxyribonucleotide - diethyl 2-methyl-3-oxosuccinate reductase - dihydrochelirubine 12-monooxygenase - dimethyl sulfide:cytochrome c2 reductase - diploid - direct repeat - directionality - DLG2-AS1 - DNA ligase - DNA Bank - DNA polymerase - DNA replication - DNA sequencing - DNase - dominant - dot blot - double helix - downstream (DNA) - downstream (transduction) - drimenol cyclase - ds - duplex - E. 
coli - Ecotin - EIF-W2 protein domain - electrophoresis - electroporation - ELFV dehydrogenase - Ellis–van Creveld syndrome - end labeling - endonuclease - enhancer - enterobacter ribonuclease - enzyme - epitope - ethidium bromide - evolutionary clock - evolutionary footprinting - exon - exonuclease - exosome complex - expression - expression clone - expression vector - extended ELM2 domain - familial Mediterranean fever - farnesol dehydrogenase - Fat storage-inducing transmembrane protein 2 - FDC-SP - FHIPEP protein family - fibroblasts - fluorescence in situ hybridization - fluorophore-assisted carbohydrate electrophoresis - footprinting - formylmethanofuran dehydrogenase - Fragile site, folic acid type, rare, fra(2)(q13) - Fragile X syndrome - frameshift mutation - fructose 5-dehydrogenase - fucoidanase - fungal fruit body lectin family - fusion protein - galactosyl-N-acetylglucosaminylgalactosylglucosyl-ceramide b-1,6-N-acetylglucosaminyltransferase - galactosylgalactosylglucosylceramidase - GalP (protein) - GATA zinc finger - gel electrophoresis - gel shift - gel shift assay - gene - gene amplification - gene conversion - gene expression - gene mapping - gene pool - gene therapy - gene transfer - genetic code - genetic counseling - genetic map - genetic marker - genetic screening - genetically modified mouse - genome - genomic blot - genomic clone - genomic library - genotype - geranylgeraniol 18-hydroxylase - germ line - germacrene A alcohol dehydrogenase - gluconate 2-dehydrogenase - glutamate permease - glycerol-3-phosphate-transporting ATPase - glycoprotein - glycosylation - Golgi apparatus - GRE - guanine - guanine-transporting ATPase - haemagglutination activity domain - haemolysin expression modulating protein family - hairpin - haploid - haploinsufficiency - HdeA family - helix-loop-helix - helminth protein - hematopoietic stem cell - hemophilia - heteroduplex DNA - heterozygous - highly conserved sequence - Hirschsprung's disease - histone - HLA-Y - 
hnRNA - holoprosencephaly - homologous recombination - homology - homozygous - host strain (bacterial) - HspQ protein domain - human artificial chromosome - Human Genome Project - human immunodeficiency virus - HumHot - Huntington's disease - hybridization - hybridoma - hydrophilicity plot - hydroxydechloroatrazine ethylaminohydrolase - immunoblot - immunoprecipitation - immunotherapy - IMPDH/GMPR family - in situ hybridization - in vitro translation - indoleacetaldoxime dehydratase - inducer - infologs - inherited - initiation codon - insert - insertion - insertion sequence - intellectual property rights - intergenic - interleukin 40 - intron - inverted repeat - IscR stability element - isopiperitenol dehydrogenase - juglone 3-monooxygenase - junk DNA - k+-transporting ATPase - karyotype - KduI/IolB isomerase family - kilobase - kinase - Klenow fragment - Knock-down - knock-out - knock-out experiment - knockout - Kozak sequence L-amino-acid alpha-ligase - L-ornithine N5 monooxygenase - lambda - Lamprin - Laser capture microdissection - latarcin - leucine zipper - leukemia - leukotriene-B4 20-monooxygenase - library - licodione synthase - ligase - linear epitope - linkage - linker protein - linoleate diol synthase - lipofectin - lipopolysaccharide kinase (Kdo/WaaP) family - lipopolysaccharide-transporting ATPase - lithocholate 6beta-hydroxylase - locus - LOC100507195 - LOD score - Long intergenic non-protein coding rna 1157 - lymphocyte - lysine—tRNA(Pyl) ligase - M13 phage - m7G(5')pppN diphosphatase - malformation - maltose-transporting ATPase - manganese-transporting ATPase - mannose-6-phosphate 6-reductase - mapping - marker - melanoma - melting - menaquinol oxidase (H+-transporting) - Johann Mendel - Mendelian inheritance - message - messenger RNA - metaphase - methylphenyltetrahydropyridine N-monooxygenase - methylsterol monooxygenase - methyltetrahydroprotoberberine 14-monooxygenase - microarray technology - microsatellite - MIMT1 - minusheet perfusion 
culture system - Mir-188 microRNA precursor family - Mir-615 microRNA precursor family - Mir-675 microRNA precursor family - missense mutation - mitochondrial DNA - mobility shift - molecular weight size marker - monoclonal antibody - monosaccharide-transporting ATPase - monosomy - morphine 6-dehydrogenase - mouse model - mRNA - multicistronic message - multicopy plasmid - multiple cloning site - multiple endocrine neoplasia, type 1 - mutation - myristoyl-CoA 11-(E) desaturase - myristoyl-CoA 11-(Z) desaturase - N terminus - N-acetylhexosamine 1-dehydrogenase - N-acylmannosamine 1-dehydrogenase - N-formylmethionylaminoacyl-tRNA deformylase - N-isopropylammelide isopropylaminohydrolase - Na+-transporting two-sector ATPase - NADH:ubiquinone reductase (Na+-transporting) - native gel - nematode Her-1 - neolactotetraosylceramide alpha-2,3-sialyltransferase - nested PCR - neurofibromatosis - NH41 - nick (DNA) - nick translation - NIDDM1 - Niemann-Pick disease, type C - nitrate-transporting ATPase - NMNH (Dihydronicotinamide Mononucleotide) - non-coding DNA - non-coding strand - non-directiveness - nonconservative substitution - nonpolar-amino-acid-transporting ATPase - nonsense codon - nonsense mutation - nontranslated RNA - Northern blot - NT - nuclear run-on - nuclease - nuclease protection assay - nucleoplasmin ATPase - nucleoside - nucleoside-triphosphate diphosphatase - nucleotide - Nucleotide universal IDentifier - nucleus - oligo - oligodeoxyribonucleotide - oligonucleotide - oligosaccharide-transporting ATPase - oncogene - oncovirus - open reading frame - operator - operon - origin of replication - ornithine(lysine) transaminase - osteomimicry p53 - package - palindromic sequence - palmitoyl acyltransferase - Parkinson's disease - Partial cleavage stimulation factor domain - pBR322 - PCR - pedigree - peptide - peptide-transporting ATPase - peptide bond - phage - phagemid - phenotype - phenylacetaldoxime dehydratase - PhIP-Seq - phosphatase, alkaline - 
phosphatidylcholine 12-monooxygenase - phosphatidylcholine desaturase - phosphatidylinositol a-mannosyltransferase - phosphodiester bond - phospholipid acyltransferase - phosphonate-transporting ATPase - phosphorylation - physical map - plant calmodulin-binding domain - plasmid - plastoquinol/plastocyanin reductase - point mutation - poly-A track - polyA tail - polyacrylamide gel - polyclonal antibodies - polydactyly - polymerase - polymerase chain reaction - polymorphism - polynucleotide kinase - polypeptide - polyvinyl-alcohol dehydrogenase (acceptor) - positional cloning - positional sequencing - post-transcriptional regulation - post-translational modification - post-translational processing - post-translational regulation - PRE - precursor mRNA - primary immunodeficiency - primary transcript - primer - primer extension - probe - processivity - progesterone 5alpha-reductase - promoter - pronucleus - prostate cancer - protease - proteasome - proteasome ATPase - protein - Protein translocation - proto-oncogene - pseudobaptigenin synthase - pseudogene - pseudoknot - pseudorevertant - pulse sequence database - pulsed field gel electrophoresis - purine - PyrC leader - PyrD leader - pyrimidine random primed synthesis - reading frame - recessive - recognition sequence - recombinant DNA - recombination - recombination-repair - relaxed DNA - repetitive DNA - replica plating - reporter gene - repression - repressor - residue - response element - restriction - restriction endonuclease - restriction enzyme - restriction fragment - restriction fragment length polymorphism (RFLP) - restriction fragments - restriction map - restriction site - reticulocyte lysate - retrovirus - reverse transcriptase - reverse transcription - revertant - ribonuclease - ribonucleic acid - riboprobe - ribose-seq - ribosomal-protein-alanine N-acetyltransferase - ribosomal binding sequence - ribosome - ribosyldihydronicotinamide dehydrogenase (quinone) - ribozyme - risk communication 
- RNA polymerase - RNA splicing - RNAi - RNase - RNase protection assay - rRNA - rRNA (guanine-N2-)-methyltransferase - RT-PCR - Run-on - runoff transcript S1 end mapping - S1 nuclease - satellite DNA - screening - SDS-PAGE - secondary structure - selection - selenium responsive proteins - sense strand - sequence - sequence motif - sequence polymorphism - sequence-tagged site - sequential epitope - severe combined immunodeficiency - sex chromosome - sex-linked - Shine-Dalgarno sequence - shotgun cloning - shotgun cloning or sequencing - shotgun sequencing - shuttle vector - Siah interacting protein N-terminal domain - sickle-cell disease - side chain - sigma factor - signal peptidase - signal sequence - silent mutation - single nucleotide polymorphism - siRNA - site-directed mutagenesis - site-specific recombination - Slc22a21 - slot blot - SNP - SMCR2 - snRNA - snRNP - solution hybridization - somatic cells - Southern blot - southwestern blot - SP6 RNA polymerase - SpAB protein domain - spectral karyotype - splicing - Simple Sequence Repeats (SSR) - SPR domain - SQ2397 - SRG1 RNA - ST7-AS2 - ST7-OT3 - stable transfection - start codon - stem-loop - sticky end - stomoxyn - stop codon - streptavidin - stringency - structural motif - sub-cloning - substitution - succinate—citramalate CoA-transferase - suicide gene - sulfate-transporting ATPase - suPARnostic - supercoil - SurE, survival protein E - Syb-prII-1 - syndrome - T7 RNA polymerase - taq polymerase - TATA box - taurochenodeoxycholate 6α-hydroxylase - taxadiene 5alpha-hydroxylase - taxane 10beta-hydroxylase - TAZ zinc finger - Tbf5 protein domain - technology transfer - template - termination codon - terminator - tertiary structure - tet resistance - TGF beta Activation - thymine - tissue-specific expression - tm - trans - trans-feruloyl-CoA hydratase - transcript - transcription - transcription factor - transcription/translation reaction - transcriptional start site - transfection - transformation 
(genetics) - transformation (with respect to bacteria) - transfection (with respect to cultured cells) - transgene - transgenic - transient transfection - transition - translation - transposition - transposon - transversion - triplet - trisomy - tRNA - tRNA (adenine-N1-)-methyltransferase - tRNA (guanine-N1-)-methyltransferase - tRNA-dihydrouridine synthase - TUG-UBL1 protein domain - tumor suppressor - tumor suppressor gene - UbiD protein domain - ubiquitin—calmodulin ligase - UDP-3-O-N-acetylglucosamine deacetylase - UDP-4-amino-4,6-dideoxy-N-acetyl-alpha-D-glucosamine transaminase - undecaprenyl-phosphate 4-deoxy-4-formamido-L-arabinose transferase - untranslated RNA - upstream - upstream activator sequence - upstream DNA - upstream (transduction) - uracil - uracil/thymine dehydrogenase - ureidoglycolate hydrolase - VAMAS6 - vanillin synthase - VanY protein domain - Var1 protein domain - vax2os1 - vector - VEK-30 protein domain - vinorine hydroxylase - vitamin B12-transporting ATPase - vitamin D binding protein domain III - vitelline membrane outer layer protein I (VMO-I) - WAC protein domain - Western blot - Wfdc15a - WHEP-TRS protein domain - WIF domain - wildtype - wobble position - Wolfram syndrome - WWE protein domain - XPC-binding - XPG I protein domain - Xyloglucan endo-transglycosylase - YAC (yeast artificial chromosome) - Ycf9 protein domain - YchF-GTPase C terminal protein domain - Ydc2 protein domain - YDG SRA protein domain - YecM bacterial protein domain - YjeF N terminal protein domain - YopH, N-terminal - YopR bacterial protein domain - Zeaxanthin 7,8-dioxygenase - Zfp14 zinc finger protein - Zfp28 zinc finger protein - zinc finger - Zinc finger and scan domain containing 30 - Zinc finger containing ubiquitin peptidase 1 - Zinc finger nfx1-type containing 1 - Zinc finger protein 93 - Zinc finger protein 101 - Zinc finger protein 175 - Zinc finger protein 222 - Zinc finger protein 230 - Zinc finger protein 280b - Zinc finger protein 296 - 
Zinc finger protein 414 - Zinc finger protein 433 - Zinc finger protein 490 - Zinc finger protein 530 - Zinc finger protein 556 - Zinc finger protein 562 - Zinc finger protein 574 - Zinc finger protein 577 - Zinc finger protein 585b - Zinc finger protein 586 - Zinc finger protein 730 - Zinc finger protein 770 - Zinc finger protein 773 - Zinc finger protein 780a - Zinc finger protein 780b - Zinc finger protein 791 - Zinc finger protein 836 - Zinc finger protein 846
https://en.wikipedia.org/wiki/Index_of_molecular_biology_articles
This is an index of articles relating to pesticides .
https://en.wikipedia.org/wiki/Index_of_pesticide_articles
This is an alphabetical list of articles pertaining specifically to quality engineering . For a broad overview of engineering, please see List of engineering topics . For biographies please see List of engineers .
https://en.wikipedia.org/wiki/Index_of_quality_engineering_articles
This is an alphabetical list of articles pertaining specifically to structural engineering . For a broad overview of engineering, please see List of engineering topics . For biographies please see List of engineers . A-frame – Aerodynamics – Aeroelasticity – Air-supported structure – Airframe – Aluminium – Analytical method – Angular frequency – Angular speed – Architecture – Architectural engineering – Arch – Arch bridge Base isolation – Beam – Beam axle – Bending – Bifurcation theory – Biomechanics – Boat Building – Body-on-frame – Box girder bridge – Box truss – Bridge engineering – Buckling – Building – Building construction – Building engineering Cable – Cable-stayed bridge – Cantilever – Cantilever bridge – Carbon-fiber-reinforced polymer – Casing – Casting – Catastrophic failure – Center of mass – Chaos theory – Chassis – Chimneys – Coachwork – Coefficient of thermal expansion – Coil spring – Columns – Composite material – Composite structure – Compression – Compressive stress – Concrete – Concrete cover – Construction – Construction engineering – Construction management – Continuum mechanics – Corrosion – Crane – Creep – Crumple zone – Curvature Dam – Damper – Damping ratio – Dead and live loads – Deflection – Deformation – Direct stiffness method – Dome – Double wishbone suspension – Duhamel's integral – Dynamical system – Dynamics Earthquake – Earthquake engineering – Earthquake engineering research – Earthquake engineering structures – Earthquake loss – Earthquake performance evaluation – Earthquake simulation – Elasticity theory – Elasticity – Energy principles in structural mechanics – Engineering mechanics – Euler method – Euler–Bernoulli beam equation Falsework – Fatigue – Fibre reinforced plastic – Finite element analysis – Finite element method – Finite element method in structural mechanics – Fire safety – Fire protection – Fire protection engineering – First moment of area – Flexibility method – Floating raft system – Floor – Fluid mechanics – 
Footbridges – Force – Formwork – Foundation engineering – Fracture – Fracture mechanics – Frame – Frequency – Fuselage Girder – Grout Hoist – Hollow structural section – Hooke's law – Hull – Hurricane-proof building – Hyperboloid structure Institution of Structural Engineers Joint Lattice tower – Lever – Leaf spring – Limit state design – Linear elasticity – Linear system – Linkage – Live axle – Load – Load factor MacPherson strut – Masonry – Mast – Material science – Modulus of elasticity – Mohr–Coulomb theory – Monocoque – Moment – Moment distribution – Moment of inertia – Mortar – Moulding Newton method – Newtonian mechanics – Non-linear system – Numerical analysis – Non-persistent joint Offshore engineering – Oscillation Permissible stress design – Pile – Plastic analysis – Plastic bending – Plasticity – Poisson's ratio – Portland cement – Portal frame – Precast concrete – Prestressed concrete – Pressure vessel Radius of gyration – Ready-mix concrete – Rebar – Reinforced concrete – Response spectrum – Retaining wall – Rigid frame – Rotation Second moment of area – Seismic analysis – Seismic loading – Seismic performance – Seismic retrofit – Seismic risk – Shear – Shear flow – Shear modulus – Shear strain – Shear strength – Shear stress – Shear wall – Shipbuilding – Ship Construction – Shock absorbers – Shotcrete – Shrinkage – Simple machine – Skyscraper – Slab – Solid mechanics – Space frame – Statics – Statically determinate – Statically indeterminate – Statistical method – Steel – Stiffness – Strand jack – Strength of materials – Stress analysis – Stress–strain curve – Strut – Strut bar – Structural analysis – Structural design – Structural dynamics – Structural failure – Structural health monitoring – Structural load – Structural mechanics – Structural steel – Structural system – Subframe – Superleggera – Suspension – Suspension bridge Tall building – Tensile architecture – Tensile strength – Tensile stress – Tensile structure – Tension 
– Timber – Timber framing – Thermal conductivity – Thermal shock – Thermodynamics – Thermoplastic – Truss – Truss bridge – Torsion – Torsion beam suspension – Torsion box – Tower – Tubular bridge – Tuned mass damper Unit dummy force method – Unsprung weight Vehicle dynamics – Vessel – Very large floating structures – Vibration – Vibration control – Virtual work Wall – Wear – Wedge – Welding – Wheel and axle Yield strength – Young's modulus
https://en.wikipedia.org/wiki/Index_of_structural_engineering_articles
The Index to Organism Names (ION) is an extensive compendium of scientific names of taxa at all ranks in the field of zoology , compiled from the Zoological Record (later supplemented with content from Sherborn's Index Animalium ) by its operators as a publicly accessible internet resource. Initially developed by BIOSIS, it passed into the ownership of Thomson Reuters and is currently with Clarivate Analytics . ION was initially developed as a freely available, web-accessible component of a larger project, "TRITON" (the Taxonomy Resource and Index To Organism Names system), by BIOSIS, the then publishers of the Zoological Record ("ZR") and Biological Abstracts , in approximately 2000. As originally released it covered all animal names ( sensu lato ) reported in the Zoological Record since 1978, along with names from some other groups not covered by the Zoological Record contributed by several partner organizations (the latter were subsequently deprecated in the system). Its initially stated aim was to provide basic nomenclatural and hierarchy information, plus ZR volume occurrence counts (reflecting use in the literature) for animal names, to identify the taxonomic group to which an organism belongs, and to link to further information from ZR (or, initially, other collaborating organizations). [ 1 ] By 2006, the BIOSIS products had been purchased by Thomson Scientific, subsequently Thomson Reuters , who continued and extended the ION database (example archived search interface here [ 2 ] ) using the URL www.organismnames.com, where it continues to reside. The Intellectual Property and Science division of Thomson Reuters was subsequently acquired by Clarivate Analytics , who continue to make ION available (as of mid-2019). In its initial release, the Index contained content from the Zoological Record dating back to 1978, subsequently extended to the full span of the Zoological Record commencing in 1864.
In 2011, Nigel Robinson of Thomson Reuters described an in-progress upgrade of the database to include an additional >200,000 names from a digitised version of Sherborn 's Index Animalium , extending the content of ION back to the commencement of official zoological nomenclature in 1758. [ 3 ] [ 4 ] As of 2019, the Index contained over 2 million newly published names from 1758 onwards (with a small gap around the period 1850–1864, corresponding to the difference between the end of coverage of Index Animalium and the commencement of the Zoological Record), out of a total complement of over 5 million name instances, each with an associated unique numeric identifier (ION LSID ). [ 5 ]
https://en.wikipedia.org/wiki/Index_to_Organism_Names
Indexing in reference to motion is moving (or being moved) into a new position or location quickly and easily but also precisely. When indexing a machine part, its new location is known to within a few hundredths of a millimeter (thousandths of an inch), or often even to within a few thousandths of a millimeter (ten-thousandths of an inch), despite the fact that no elaborate measuring or layout was needed to establish that location. In reference to multi-edge cutting inserts , indexing is the process of exposing a new cutting edge for use. Indexing is a necessary kind of motion in many areas of mechanical engineering and machining . An object that indexes , or can be indexed , is said to be indexable . Usually when the word indexing is used, it refers specifically to rotation . That is, indexing is most often the quick and easy but precise rotation of a machine part through a certain known number of degrees . For example, Machinery's Handbook , 25th edition, in its section on milling machine indexing, [ 1 ] says, "Positioning a workpiece at a precise angle or interval of rotation for a machining operation is called indexing." [ 2 ] In addition to that most classic sense of the word, the swapping of one part for another, or other controlled movements, are also sometimes referred to as indexing , even if rotation is not the focus. There are various examples of indexing that laypersons (non-engineers and non-machinists) can find in everyday life. These motions are not always called by the name indexing , but the idea is essentially similar: Indexing is vital in manufacturing , especially mass production , where a well-defined cycle of motions must be repeated quickly and easily—but precisely—for each interchangeable part that is made. Without indexing capability, all manufacturing would have to be done on a craft basis, and interchangeable parts would have very high unit cost because of the time and skill needed to produce each unit. 
In fact, the evolution of modern technologies depended on the shift in methods from crafts (in which toolpath is controlled via operator skill) to indexing-capable toolpath control. A prime example of this theme was the development of the turret lathe , whose turret indexes tool positions, one after another, to allow successive tools to move into place, take precisely placed cuts, then make way for the next tool. Indexing capability is provided in two fundamental ways: with or without information technology (IT). Non-IT-assisted physical guidance was the first means of providing indexing capability, via purely mechanical means. It allowed the Industrial Revolution to progress into the Machine Age . It is achieved by jigs , fixtures , and machine tool parts and accessories, which control toolpath by the very nature of their shape, physically limiting the path for motion. Some archetypal examples, developed to perfection before the advent of the IT era, are drill jigs , the turrets on manual turret lathes , indexing heads for manual milling machines , rotary tables , and various indexing fixtures and blocks that are simpler and less expensive than indexing heads, and serve quite well for most indexing needs in small shops. [ 3 ] Although indexing heads of the pre-CNC era are now mostly obsolete in commercial manufacturing, the principle of purely mechanical indexing is still a vital part of current technology, in concert with IT, even as it has been extended to newer uses, such as the indexing of CNC milling machine toolholders or of indexable cutter inserts, whose precisely controlled size and shape allow them to be rotated or replaced quickly and easily without changing overall tool geometry. IT-assisted physical guidance (for example, via NC , CNC , or robotics ) has been developed since the World War II era and uses electromechanical and electrohydraulic servomechanisms to translate digital information into position control.
These systems also ultimately physically limit the path for motion, as jigs and other purely mechanical means do; but they do it not simply through their own shape, but rather using changeable information.
https://en.wikipedia.org/wiki/Indexing_(motion)
Indexing software consists of computer applications that help to build an index (like this one: Index of branches of science ). [ 1 ] There are several methodologies for indexing. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Indexing_software
India's quantum computer is a proposed quantum computer planned to be developed by 2026. A quantum computer is a computer based on quantum phenomena and governed by the principles of quantum mechanics in physics . At present, India has a small-scale quantum computer of 7 qubits developed at Tata Institute of Fundamental Research , Mumbai . [ 1 ] Over the next five years, India is expected to invest around one billion dollars in programs related to the development of the quantum computer. [ 2 ] The Government of India has launched an initiative called the National Quantum Mission to achieve this goal. [ 3 ] [ 4 ] India is one of seven countries with a national mission dedicated to the development of quantum technologies. [ 5 ] The union defence minister Rajnath Singh emphasized the development of quantum computing during the 16th foundation day ceremony of the Indian Institute of Technology Mandi , saying, "The time to come is of quantum computing ." [ 6 ] India started its journey towards the development of a quantum computer in 2018 by launching the Quantum Enabled Science and Technology (QuEST) program. The QuEST program funded 51 national quantum labs with a budget of 250 crore Indian rupees to develop the infrastructure required for quantum technologies in India. [ 7 ] In 2020, the Government of India announced a budget of 8,000 crore Indian rupees for the development of quantum technologies and their applications. In the same year, the government launched a National Mission on Quantum Technologies & Applications (NM-QTA) for a period of five years, to be implemented by the Department of Science & Technology (DST) of the government. [ 8 ] After the announcement, however, the mission was delayed for four years with no further progress.
On 19 April 2023, the government revised the budget to 6,003.65 crore Indian rupees and launched the National Quantum Mission for the period 2023–24 to 2030–31. Ajai Chowdhry , the co-founder of HCL , was appointed chairman of the Mission Governing Board for the National Quantum Mission. [ 3 ] With the 2023 announcement, India became the seventh country, after the US , Austria , Finland , France , Canada and China , to have a dedicated national mission for the development of quantum technologies. The National Quantum Mission is one of nine missions of national importance under the Prime Minister's Science and Technology Innovation Advisory Council (PM-STIAC). [ 5 ] According to Ajai Chowdhry, chairman of the Mission Governing Board, India's first quantum computer will have a capacity of 6 qubits and is expected to be built within a year or less. [ 3 ] The mission plans to establish a 20–50 qubit quantum computer within three years, a 50–100 qubit machine within five years, and a machine of 50–1,000 qubits within ten years. [ 3 ] The mission further plans to establish satellite-based secure quantum communications over distances of up to 2,000 kilometres between ground stations within the country, to enable long-distance secure quantum communications with other countries via both satellite and fibre links, and to establish a multi-node quantum network implementing inter-city quantum key distribution (QKD) over distances of more than 2,000 kilometres. [ 5 ] Development of atomic clocks and magnetometers for precision navigation is also planned.
[ 9 ] The National Quantum Mission has established four Thematic Hubs (T-Hubs) to propel research and innovation in quantum technologies and position India in the global quantum technology race. The Thematic Hubs cover four verticals: quantum computing , quantum communication , quantum sensing & metrology, and quantum materials & devices. The Indian Institute of Science in Bangalore is the Thematic Hub for quantum computing, and the Indian Institute of Technology Madras for quantum communication; the Indian Institute of Technology Bombay and the Indian Institute of Technology Delhi are the Thematic Hubs for quantum sensing & metrology and quantum materials & devices, respectively. [ 10 ] In January 2021, the Ministry of Electronics and Information Technology (MeitY), in collaboration with Amazon Web Services (AWS), established a Quantum Computing Applications Lab to facilitate research and development related to quantum computing. In March of the same year, the Department of Science and Technology (Government of India) and 13 research groups from the Indian Institute of Science Education and Research (IISER) launched the I-HUB Quantum Technology Foundation (I-HUB QTF) at Pune for the development of quantum technologies. On 22 March, the Indian Space Research Organisation (ISRO) successfully demonstrated free-space quantum communication over a distance of 300 metres. A number of indigenous key technologies were developed to achieve it, including the indigenous NAVIC receiver for time synchronization between the transmitter and receiver modules, and gimbal mechanism systems. Live video conferencing using a Quantum Key Distribution (QKD) link was demonstrated at the campus of the Space Applications Centre (SAC) in Ahmedabad , between two line-of-sight buildings within the campus.
The demonstration was conducted at night to prevent interference from direct sunlight. The experiment is considered a major achievement in ISRO's goal of demonstrating Satellite Based Quantum Communication (SBQC). [ 11 ] [ 12 ] In July 2021, the Defense Institute of Advanced Technology (DIAT) and the Centre for Development of Advanced Computing (C-DAC) collaborated to develop quantum computers in India. In August, the Quantum Computer Simulator (QSim) Toolkit was launched for academicians, industry professionals, students and the scientific community in India, providing an environment for the development of quantum technologies and allowing researchers to write and debug the quantum code needed for quantum algorithms. In October, the Centre for Development of Telematics (C-DOT) unveiled a Quantum Key Distribution (QKD) solution supporting more than 100 kilometres on standard optical fibre and launched a quantum communication lab. In December, a quantum computing laboratory and an AI centre were established by the Indian Army at its engineering college in the state of Madhya Pradesh , backed by the National Security Council Secretariat (NSCS). [ 13 ] In April 2022, Indian scientists of the DRDO and the Indian Institute of Technology Delhi successfully demonstrated a Quantum Key Distribution (QKD) link over more than 100 kilometres, using existing commercial-grade fibre-optic networks between Prayagraj and Vindhyachal in Uttar Pradesh . [ 13 ] On 27 March 2023, the Union Telecom Minister Ashwini Vaishnaw announced at India's first international quantum enclave that the country's first quantum-computing-based telecom network link was operational between Sanchar Bhawan and the National Informatics Centre office at the CGO Complex in the national capital, New Delhi .
[ 14 ] According to Professor R Vijayaraghavan of the Tata Institute of Fundamental Research , Mumbai, the institute has demonstrated a 3-qubit quantum computer based on superconducting qubits . [ 15 ] On 28 August 2024, Indian scientists of the DRDO Young Scientists Laboratory for Quantum Technologies (DYSL-QT) at Pune and the Tata Institute of Fundamental Research , Mumbai, completed end-to-end testing of a 6- qubit quantum processor based on superconducting circuit technology. [ 16 ] A 6-qubit quantum processor is a quantum computing device that uses six quantum bits (qubits) to process information. The project was completed through the collaborative efforts of three organisations: DYSL-QT, TIFR, and Tata Consultancy Services (TCS). [ 17 ] The control and measurement apparatus for the quantum processor, developed by the DYSL-QT team at Pune, uses a combination of off-the-shelf electronics and custom-programmed development boards. A novel ring-resonator architecture, invented by the Tata Institute of Fundamental Research team at its campus, was employed in the design and fabrication of the qubits. Tata Consultancy Services contributed the cloud-based interface for the quantum hardware. The successful testing of the 6-qubit quantum processor is considered a significant milestone in India's quantum computing journey, positioning the country as a significant player in global quantum technologies. [ 17 ] C-DAC is building a quantum computing centre, called the Quantum Reference Facility , at its campus in Bangalore with the help of the National Quantum Mission . The project has three components: importing components, assembly, and developing software and applications. The facility is expected to be completed and fully operational within the next three years.
[ 18 ] The Indian Institute of Technology Mandi is developing an indigenous room-temperature quantum computer at its Center for Quantum Science and Technologies (CQST) with the assistance of the National Quantum Mission. The quantum computer will use photons for faster calculations. According to an official of the institute, the room-temperature optical quantum computer is expected to have a "unique ability to analyse data and suggest solutions with 86 per cent accuracy without using traditional algorithms", and "instead of CPU, the quantum computer will operate as a graphics processor (GPU) with a sophisticated user interface, quantum simulator and quantum processing capabilities in place". [ 19 ] In the race to develop quantum technologies, startup companies are emerging in India to boost research and development in quantum computing. The Bangalore-based startup QpiAI was founded in 2019 by Nagendra Nagaraja, presently its CEO, for advancements in quantum computing and generative AI technologies. The company plans to establish a 25-qubit quantum computer at its Bangalore headquarters within the year. [ 20 ] [ 21 ] Another quantum computing startup, BosonQ Psi, was also established in Bangalore. It is a simulation software company utilizing quantum computing, named after the famous Indian quantum physicist Satyendra Nath Bose and the fundamental quantity Psi, and is on board with the US-based IT company IBM 's quantum networks. [ 22 ] [ 23 ] Under its two flagship initiatives, the National Quantum Mission and the National Mission on Interdisciplinary Cyber-Physical Systems, the Government of India has selected eight major startup companies to innovate advanced technologies in the areas of quantum computing, communication, sensing, and advanced materials.
These eight startup companies are QNu Labs (Bengaluru), QPiAI India Private Limited (Bengaluru), Dimira Technologies Private Limited (IIT Mumbai), Prenishq Private Limited (IIT Delhi), QuPrayog Private Limited (Pune), Quanastra Private Limited (Delhi), Pristine Diamonds Private Limited (Ahmedabad) and Quan2D Technologies Private Limited (Bengaluru). [ 9 ] The eight startups have been assigned responsibility for developing a range of technologies. QNu Labs works on quantum communication, specializing in quantum-safe heterogeneous networks that offer secure communication solutions against cyber threats. QPiAI India Private Limited works on superconducting quantum computing , building a superconducting quantum computer that will contribute towards scalable, high-performance quantum systems. [ 9 ] Dimira Technologies Private Limited and Prenishq Private Limited are working on essential hardware for the quantum computer: Dimira Technologies is developing indigenous cryogenic cables, a critical component for maintaining the low-temperature environments required by quantum hardware, while Prenishq is developing precision diode-laser systems, essential components for quantum computing and sensing technologies. QuPrayog Private Limited and Quanastra Private Limited are working on quantum sensing technologies: QuPrayog on optical atomic clocks and related quantum metrology technologies, which have potential applications in healthcare and precise timekeeping, and Quanastra on advanced cryogenic systems and superconducting detectors to support quantum sensing and communication efforts.
[ 9 ] The startup companies Pristine Diamonds Private Limited in Ahmedabad and Quan2D Technologies Private Limited in Bangalore are developing quantum materials and photon detection technologies. Pristine Diamonds is designing diamond-based materials for quantum sensing, a promising avenue in quantum materials science, while Quan2D Technologies is developing superconducting nanowire single-photon detectors to enhance quantum communication capabilities. [ 9 ]
https://en.wikipedia.org/wiki/India's_quantum_computer
India Science Award is one of the highest and most prestigious national recognitions by the Government of India for outstanding contribution to science . The primary and essential criterion for the award is demonstrated and widely accepted excellence in science. The award covers all areas of research in science, including engineering , medicine and agriculture . The prize money is ₹ 25 lakh, and the award also carries a citation and a gold medal. The award is announced and presented every year at the Indian Science Congress (ISC) . [ 1 ] The award was instituted by the 10th Prime Minister of India, Shri Atal Bihari Vajpayee , in 2003. [ 2 ] [ 3 ] The first award, for the year 2004, was given to the renowned chemist Prof CNR Rao , for his work in solid state and materials chemistry , by Prime Minister Manmohan Singh at the inauguration of the 93rd Indian Science Congress on 3 January 2006. [ 4 ] [ 5 ] The India Science Award was launched at the 90th Indian Science Congress on 3 January 2003, held at Bangalore University , by the Prime Minister of India. On 30 June 2003 the Ministry of Science and Technology (India) approved the framework and guidelines of the award, at a meeting attended by 20 eminent scientists and government officials under the chairmanship of the Minister of Science and Technology. [ 6 ] The India Science Award is given annually in recognition of distinguished achievements in science, including medicine, engineering and agriculture. The recipient is a scientist, with no age limit, who has made groundbreaking scientific research that is widely demonstrated and accepted, with the work done primarily in India. Originality and innovative output are more important than mere quantity, and contribution to the scientific development of the country carries significant weight. Groups or institutions are not eligible to receive the award. A maximum of two winners may share the prize in a given year if more than one nominee is eligible.
[ 1 ] After 2010, the India Science Award was discontinued following its merger with the Shanti Swarup Bhatnagar Prize for Science and Technology . The budget of the Shanti Swarup Bhatnagar Prize for Science and Technology was accordingly increased. [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/India_Science_Award
India Stack refers to the project of creating a unified software platform to bring India 's population into the digital age. Its website describes its mission as follows: "India Stack is a set of open APIs that allows governments, businesses, startups and developers to utilize a unique digital Infrastructure to solve India's hard problems towards presence-less, paperless, and cashless service delivery" [ 1 ] Of the four "distinct technology layers" mentioned on the same page, the first, the "Presenceless Layer", is the most controversial, as it involves storing biometric data such as fingerprints for every citizen. As such markers are widely adopted to enable cashless payment, the issue of fraudulent use of biometrics arises. [ 2 ] The other layers are the Paperless Layer, which enables personal records to be associated with one's online identity; the Cashless Layer, a single interface to all national banks and online wallets; and the Consent Layer, which aims to maintain security and control of personal data. India Stack is the largest open API in the world. Since its deployment, India has been organizing hackathons to develop applications for the APIs. [ 3 ] India Stack is being implemented in stages, starting with the introduction in 2009 of the Aadhaar "Universal ID" numbers. These are linked to biometrics (fingerprints), and over time authentication by Aadhaar has come to be required for access to more and more services and subsidies. This raises issues of privacy and surveillance, especially as much of the users' interaction is via their mobile phones.
[ 4 ] [ 5 ] The next stages were the introduction of eKYC (electronic Know Your Customer), which enables paperless and rapid verification of address, identity etc.; eSign , whereby users attach a legally valid electronic signature to a document; the Unified Payments Interface (UPI), enabling cashless payments; and most recently, DigiLocker , a platform for issuance and verification of documents and certificates. What raised the profile of Aadhaar and India Stack worldwide was the 2016 Indian banknote demonetisation , whereby ₹ 500 and ₹1,000 notes were phased out, officially to eliminate forgeries and money-laundering, but with the secondary objective of hastening the transition to a cashless economy. Observers have argued that India Stack could fast-track the move to digital payment systems across the developed world and mark the end of cash. [ 6 ] However, various challenges related to user rights have been mounted: in August 2017, the Supreme Court of India unanimously ruled in favour of a petition applying for privacy to be declared a fundamental right, [ 7 ] and other court matters followed. [ 8 ] The consultancy firm Ernst & Young has said that India Stack has become a global benchmark for most countries. [ 9 ] To test all four layers of India Stack (presenceless, paperless, cashless and consent), the India Stack team partnered in 2016 with Capital Float (now axio), the largest fintech alternative credit company in India. [ 10 ] The objective of the pilot was to provide loans to customers within a few minutes, from the comfort of their own home. Additionally, the target customer was someone with no collateral and a limited data trail . When a user opened the application, they had to give consent for Capital Float to access their data through the digital platform (consent). Once consent was given, Capital Float used the Aadhaar infrastructure to authenticate the user (presenceless).
Aadhaar also helped with e-KYC, a mandatory requirement for all loan activities in India . Once authentication was complete, Capital Float pinged the Aadhaar database to check for banking activity, and also used a mobile scraping technique to gather data from the customer's phone. These two steps helped Capital Float estimate the credit-worthiness of the customer. Once this was determined, the customer saw loan offers on the application's screen, could select an offer, and could complete an e-signature from home (paperless). The whole process could be completed in 45 seconds. Once the loan was approved, it could be disbursed directly into the bank account of the customer (cashless), and the pull function of the UPI platform could be utilized by Capital Float to collect the loan repayments. After the pilot, the India Stack team approached other alternative lenders to utilize the platform to give loans to under-served populations, and Capital Float is in the process of expanding the pilot into a full-fledged service offering. The pilot demonstrated a clear value proposition of the India Stack platform for the fintech industry. The first India Stack conference was held on 25 January 2023. NASSCOM president Debjani Ghosh highlighted that India Stack has enabled India to achieve financial inclusion for 80% of the population in six years, compared with a projected figure of 46 years. [ 11 ] An Indian minister said that the platform will be offered to other countries and private entrepreneurs free of cost. [ 12 ] Earlier, the India Stack Knowledge Exchange 2022 took place in July 2022, where Indiastack.global was launched by Indian PM Narendra Modi as a single repository of all major projects on India Stack. [ 13 ] The International Monetary Fund endorsed this initiative by noting that: "Other emerging market and developing economies could learn from the experience."
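The pilot's four-layer flow described above (consent, presenceless authentication, paperless e-signature, cashless disbursement) can be sketched in code. This is a purely hypothetical, self-contained illustration: every function name, data field and threshold here is invented for the sketch and does not correspond to any real India Stack, Aadhaar or Capital Float API.

```python
# Hypothetical sketch of the lending pilot's four-layer flow.
# All names and data are illustrative stand-ins, not real APIs.

def aadhaar_authenticate(user: dict) -> bool:
    # Presenceless layer: stand-in for Aadhaar-based authentication.
    return user.get("aadhaar_verified", False)

def estimate_credit_worthiness(user: dict) -> int:
    # Stand-in for the e-KYC + banking-activity + phone-data credit check.
    return 700 if user.get("banking_activity") else 400

def run_loan_pilot(user: dict) -> str:
    if not user.get("consent"):                 # Consent layer
        return "aborted: no consent"
    if not aadhaar_authenticate(user):          # Presenceless layer
        return "aborted: authentication failed"
    score = estimate_credit_worthiness(user)
    if score < 600:
        return "declined"
    # Paperless layer (e-signature) and cashless layer (UPI disbursement)
    # are collapsed into the final step of this sketch.
    return f"loan disbursed to {user['bank_account']}"

applicant = {"consent": True, "aadhaar_verified": True,
             "banking_activity": True, "bank_account": "XXXX-1234"}
print(run_loan_pilot(applicant))  # loan disbursed to XXXX-1234
```

The point of the sketch is only the ordering of the layers: consent gates everything, authentication precedes the credit check, and disbursement happens last.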
[ 14 ] In a paper titled India's Approach to Open Banking: Some Implications for Financial Inclusion , authors Yan Carriere-Swallow, V. Haksar and Manasa Patnam observed that the digital infrastructure of India Stack has enabled a rapid increase in digital payments. [ 15 ] Sri Lanka , Morocco , the Philippines , Guinea , Ethiopia , and the Togolese Republic have already started using technologies of India Stack, and Tunisia , Samoa , Uganda , and Nigeria have shown willingness to do so. [ 16 ] [ 17 ] The Financial Times in an article raised concerns around privacy and data protection. [ 18 ]
https://en.wikipedia.org/wiki/India_Stack
Indian Basket ( IB ), also known as the Indian Crude Basket , is a weighted average of Dubai and Oman ( sour ) and Brent ( sweet ) crude oil prices. It is used as an indicator of the price of crude imports in India, and the Government of India watches the index when examining domestic price issues. [ 1 ] The Indian basket of crude oil is a derived basket comprising a sour grade (Oman and Dubai average) and a sweet grade (Brent Dated) of crude oil processed in Indian refineries. During the year 2018–19, the ratio was 75.50 : 24.50 (Dubai : Brent), [ 2 ] and during 2017–18 it was 74.77 : 25.23 (Dubai : Brent). The Indian Basket is a weighted average of daily prices and is updated daily on the website of the Petroleum Planning and Analysis Cell of the Ministry of Petroleum and Natural Gas . [ 3 ] [ 5 ] [ 6 ]
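The weighted-average calculation described above can be illustrated with a short sketch. The weights are the published 2018–19 sour : sweet ratio of 75.50 : 24.50; the prices fed in are illustrative placeholders, not actual market quotes.

```python
# Weighted average of sour (Oman/Dubai) and sweet (Brent Dated) prices,
# using the 2018-19 ratio of 75.50 : 24.50 described above.
SOUR_WEIGHT = 0.7550   # Oman/Dubai average share, 2018-19
SWEET_WEIGHT = 0.2450  # Brent (Dated) share, 2018-19

def indian_basket_price(sour_price_usd: float, sweet_price_usd: float) -> float:
    """Weighted average of sour and sweet crude prices (USD per barrel)."""
    return SOUR_WEIGHT * sour_price_usd + SWEET_WEIGHT * sweet_price_usd

# Example with made-up prices of $70 (sour) and $74 (sweet) per barrel:
print(round(indian_basket_price(70.0, 74.0), 2))  # 70.98
```

Since the weights sum to 1, the basket price always lies between the two component prices, closer to the sour grade that dominates Indian refinery intake.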
https://en.wikipedia.org/wiki/Indian_Basket
The Indian Chemical Society is a scientific society dedicated to the field of chemistry , based in India . [ 1 ] [ 2 ] It was established in 1924 with Prafulla Chandra Ray as its founding president. [ 3 ] The same year the society started to publish its "Quarterly Journal of Indian Chemical Society" (1924–1927), currently known as the Journal of Indian Chemical Society . [ 4 ] [ 3 ] [ 5 ] To serve the scientific objectives of the society more efficiently, on 3 March 2020 the ICS launched the Indian Chemical Society – North Branch . Prof. Jatinder K Ratan is the President and Dr. Shivendu Ranjan the Vice President of the ICS-North Branch, with Prof. Vickram Jeet Singh as Secretary and Dr. Nandita Dasgupta as Joint Secretary.
https://en.wikipedia.org/wiki/Indian_Chemical_Society
The Indian Financial System Code ( IFS Code or IFSC ) is an alphanumeric code that facilitates electronic funds transfer in India. The code uniquely identifies each bank branch participating in the three main Payment and settlement systems in India : the National Electronic Funds Transfer (NEFT), Real Time Gross Settlement (RTGS) and Immediate Payment Service (IMPS) systems. [ 1 ] The IFSC is an 11-character code: the first four alphabetic characters represent the bank, the fifth character is 0 (zero) and reserved for future use, and the last six characters (usually numeric, but possibly alphabetic) represent the branch. The IFS Code is used by the NEFT and RTGS systems to route messages to the destination banks and branches. [ 2 ] Bank-wise lists of IFS Codes are available with all the bank branches participating in interbank electronic funds transfer, and a list of bank branches participating in NEFT/RTGS with their IFS Codes is available on the website of the Reserve Bank of India . [ 3 ] All banks have also been advised to print the IFS code of the branch on cheques issued to their customers.
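The 11-character layout described above can be checked mechanically. The sketch below validates only the format (four letters, a reserved zero, six alphanumerics); it does not verify a code against the RBI's published branch lists, and the sample codes are illustrative, not necessarily real branches.

```python
import re

# Format described above: 4 alphabetic chars (bank), a literal '0'
# (reserved fifth character), then 6 alphanumeric chars (branch).
IFSC_PATTERN = re.compile(r"[A-Z]{4}0[A-Z0-9]{6}")

def is_valid_ifsc_format(code: str) -> bool:
    """Return True if `code` matches the 11-character IFSC layout."""
    return bool(IFSC_PATTERN.fullmatch(code.upper()))

print(is_valid_ifsc_format("SBIN0005943"))  # True: bank 'SBIN', '0', branch '005943'
print(is_valid_ifsc_format("SBIN1005943"))  # False: fifth character must be '0'
```

A format check like this is a useful first-pass validation before a lookup against the authoritative RBI branch list.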
https://en.wikipedia.org/wiki/Indian_Financial_System_Code
The Indian Institute of Aeronautical Engineering is a college of engineering at Dehradun , Uttarakhand , northern India . It was founded in 1992 at 15/1, Kalidas Road, Dehradun. In 1995 it moved to 179 Kalidas Road, and in 2006 relocated near Jolly Grant Airport . The site has classrooms and hostels along with full aircraft training facilities. The institute was founded by Mr. Mahendra Kumar, a Chartered Engineer and life member of bodies including the Institution of Engineers (India) , the Aeronautical Society of India and the Indian Institution of Industrial Engineering . His personal awards include the Glory of India Award . He is the Institute's Managing Director . [ 1 ] The Indian Institute of Aeronautical Engineering trains its students through theoretical classes and practical training. The qualifications it offers include three-year and four-year Bachelor of Engineering degree courses; [ 2 ] [ 3 ] the first two years are taken on site, and the remainder at Perth College UHI in the United Kingdom.
https://en.wikipedia.org/wiki/Indian_Institute_of_Aeronautical_Engineering
Indian Institute for Aeronautical Engineering & Information Technology ( IIAEIT ) is an aerospace engineering college in Pune , India . [ 2 ] [ 3 ] [ 4 ] It also offers mechanical engineering and other technology-related courses. It is known for its full-time, face-to-face B.Tech Aerospace Engineering course (BTAE), launched jointly by D.Y Patil University and the Aeronautical Engineering and Research Organisation (AERO). [ 5 ] It is a frontline training institute in the engineering disciplines, especially in the field of aeronautics . With a first-rate faculty backed by fully furnished laboratories, the institute is considered one of the best in Pune , India. [ 6 ] Over the last twelve years, IIAEIT has established itself as a leading institution with excellent infrastructure. Apart from training students for the professional industries, the institution also prepares aerospace students to master their engineering. [ 7 ] The Aeronautical Engineering and Research Organisation (AERO) is a constituent of the Shastri Group of Institutes. It promotes education, research and development by collaborating with several industries and educational institutions in all fields of aviation, in India as well as abroad. AERO offers a B.Tech aerospace engineering course delivered face-to-face. [ 8 ] [ 6 ] The Shastri Group of Institutes was founded in Pune by Anshul Sharma in 2001. The group originated by preparing students for the examinations conducted by professional institutes . [ 8 ] With the Indian Institute for Aeronautical Engineering and Information Technology as its flagship, SGI is part of the PSD Shastri Educational Foundation, which is engaged in several social services and objectives. It is mentored by an Advisory Board whose members come from the aviation, services, industrial, business and educational fields. Aerospace engineering deals with the design, development, and manufacturing of flying machines and launch vehicles.
Aerospace engineering is a combination of aeronautical engineering and astronautical engineering. It is the primary field of engineering concerned with the development of aircraft and spacecraft. In the Indian context, previously only the aeronautical engineering discipline was available; however, with increased demand in the aviation sector and growing research and development, the demand for aerospace engineers has increased across the globe. [ 9 ] The curriculum includes a total of 19 laboratory courses. A one-year project assessment in the seventh semester, eight elective courses in the final semester, industry visits in two semesters and professional training are mandatory for all admitted students. The course offers 120 seats per centre. [ 5 ] Fresh admissions are presently on hold pending further orders of the court. [ 10 ] [ 11 ] [ 7 ] The total percentage of marks for each semester is calculated by summing the products of the marks obtained in each subject and its respective credits, and then dividing by the total credits. [ 12 ]
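The credit-weighted percentage calculation described above can be sketched in a few lines; the subjects, marks and credit values below are invented purely for illustration.

```python
# Credit-weighted percentage as described above: the sum of
# (marks x credits) over all subjects, divided by the total credits.
# Subject data is invented for illustration.
subjects = [
    # (marks out of 100, credits)
    (78, 4),
    (85, 3),
    (62, 2),
]

weighted = sum(marks * credits for marks, credits in subjects)
total_credits = sum(credits for _, credits in subjects)
percentage = weighted / total_credits

print(f"{percentage:.2f}%")
```

With the invented data above, this prints 76.78%: the 4-credit subject pulls the average toward its mark more strongly than the 2-credit one, which is the point of weighting by credits.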
https://en.wikipedia.org/wiki/Indian_Institute_of_Aeronautical_Engineering_&_Information_Technology
Indian Institute of Aeronautics (IIA) is one of the oldest aircraft maintenance training institutes in New Delhi, India. [ 1 ] It was founded by avid aviator Captain Ram Niwas Sinha in 1982, and in 1983 earned approval from the Director General of Civil Aviation (DGCA), Govt. of India. [ 1 ] IIA was transferred to Mundka , New Delhi from its initial approved base at Patna Airport in 2001. IIA has long been active in aircraft maintenance training, producing generations of trained technical manpower for the airline and aviation industry. [ 1 ] The organization forfeited its DGCA approval in 2013 and applied for European Union Aviation Safety Agency ( EASA ) Basic Aircraft Maintenance Training Approval in 2015. [ 2 ] It earned permission to run the EASA Part-66 aircraft maintenance training program in Category B1.1 Aeroplane Turbine . The curriculum and AME training programme, framed with the current technology in force in aircraft maintenance engineering in view, are approved by the European Union Aviation Safety Agency, and the infrastructure and training facilities of the Indian Institute of Aeronautics comply with EASA requirements. [ 2 ] [ 1 ] IIA, together with the JRN Institute of Aviation Technology in New Delhi and the Bharat Institute of Aeronautics at Patna, forms the IIA Group of Institutions, which operates under the same management and offers DGCA- and EASA-approved aircraft maintenance training courses. [ 1 ] The training standards followed and practised by the IIA Group are equivalent to International Civil Aviation Organization (ICAO) type-II. The IIA Group also holds training with examination privileges for the EASA Part-66 aircraft maintenance licence. [ 2 ]
https://en.wikipedia.org/wiki/Indian_Institute_of_Aeronautics
The Indian Institute of Chemical Engineers ( IIChE ) [ 1 ] is the professional body for chemical engineers in India. It is headquartered on the campus of Jadavpur University , Kolkata, and has 42 regional centres along with 172 student chapters spread throughout India. [ 2 ] The institution's membership comprises academics, professionals from the chemical industry, researchers, and students. IIChE also publishes the scientific journal "Indian Chemical Engineer", which appears in two sections, A and B. Section A provides an international platform for presenting original research work, interpretative reviews and discussions on new developments in the expansive areas of chemical engineering and its allied fields; it invites papers describing novel theories and practical applications, including reports of experimental work that is carefully executed and soundly interpreted. Section B features technical articles and technology overviews to guide practising chemical engineers, news snippets on research developments, industry updates, issues of environment and health hazards, and in-house news for those associated with the institute.
https://en.wikipedia.org/wiki/Indian_Institute_of_Chemical_Engineers
The CSIR-Indian Institute of Chemical Technology is a national-level research center located in Hyderabad, Telangana, India under the Council of Scientific and Industrial Research (CSIR). IICT conducts research in basic and applied chemistry, biochemistry, bioinformatics , and chemical engineering, and provides science and technology inputs to the industrial and economic development of the country. [ 1 ] IICT has filed among the highest numbers of CSIR patents. [ 2 ] [ 3 ] [ 4 ] The research and development programmes of IICT relate to the development of technologies for pesticides , drugs , organic intermediates, fine chemicals , catalysts , polymers , organic coatings, use of low-grade coals, and value-added products from vegetable oils. Process design and mechanical engineering design form an integral part of technology development and transfer. IICT is also actively engaged in basic research in organic synthesis and catalysis. [ 5 ] An example of the institute's work is the development of technology for accurate identification of the principal mosquito vectors in rural endemic areas, for designing suitable control measures against vector-borne diseases such as malaria , filaria , Japanese encephalitis and dengue fever. [ 6 ] In developing countries like India , classification and identification of mosquito species from rural endemic areas are of paramount importance. The World Health Organization monograph, which presents the taxonomic data as a pictorial key, is generally difficult for a non-taxonomist to understand. Keeping this difficulty in view, novel user-friendly, menu-driven software has been developed. The package can be used successfully in mosquito control programs in rural areas. Rapid identification and greater accuracy are the salient features of the technology. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Indian_Institute_of_Chemical_Technology
The Indian Institution of Industrial Engineering (IIIE) is a non-profit organization and registered society for propagating the profession of industrial engineering in India . It was founded in 1957 and is a Registered Public Trust under the Bombay Public Trust Act, 1950. Its headquarters is at Navi Mumbai . IIIE is a member organization of the Engineering Council of India . [ 1 ] The IIIE has instituted many honors and awards for various achievements and contributions to the industrial engineering profession by individuals, and Performance Excellence Awards for organisations. A National Council consisting of twelve elected representatives from among Corporate Members and six representatives from the Chapters is the executive body of the Institution, located in Navi Mumbai. The office bearers for each year, namely a Chairman, two Vice-Chairmen, an Hon. Secretary, two Hon. Jt. Secretaries and an Hon. Treasurer, are elected by the National Council from among its members. The President of the Institution is nominated by the National Council each year. There are three classes of Corporate Membership: Fellow, Member and Associate Member. Other classes of membership are: (i) Honorary Membership, conferred by the National Council in recognition of outstanding services in the fields of Industrial Engineering and Management Sciences, (ii) Affiliate, (iii) Graduate, (iv) Student and (v) Institutional Membership. Members can join seminars, workshops, training programs, special lectures, industry visits and other professional activities of the Institution. Fellows, Members, Associate Members and Graduate Members of the Institution are permitted to affix the appropriate designations to their names: Fellow (F.I.I.E), Member (M.I.I.E), Associate Member (A.M.I.I.E) and Graduate Member (Grad. I.I.E). After Graduateship, a student is entitled to use the title Associate Member of the Institution of Industrial Engineering (A.M.I.I.E), which is treated as equivalent to a B.E. (Bachelor of Engineering), B.Tech. (Bachelor of Technology) or A.M.I.E. (Associate Member of the Institution of Engineers). The Institution has thirty-two chapters distributed all over India, at Aurangabad , Allahabad , Bangalore , Baroda , Bhilai , Bhopal , Kolkata , Kozhikode , Chennai , Cochin , Coimbatore , Orissa , Durgapur , Hyderabad , Jamshedpur , Kanpur , Lucknow , Mumbai , Belapur , Nagpur , New Delhi , Pune , Ranchi , Rourkela , Tiruchirappalli , Trivandrum , Visakhapatnam , Udaipur and Goa . The Institution has two recognized active student chapters in India, at ITER and PSG College of Technology , both of which function under their respective home chapters. The Institution brings out a monthly journal entitled "Industrial Engineering Journal", which publishes papers and articles on the application of industrial engineering and management techniques, including research work. The Institution conducts a Graduateship Examination as an external examination enabling candidates to qualify for Graduate Membership of the Institution. The Graduateship examination conducted by the Institution (AMIIE) is recognized by the Government of India as equivalent to a bachelor's degree in industrial engineering from a recognized university . [ 2 ] IIIE graduates are eligible to appear for the Graduate Aptitude Test in Engineering (GATE) conducted by the IITs and IISc. [ 3 ] The examinations are open only to student members of the Institution whose membership remains valid.
The requirements for eligibility for student membership of the Institution are that the person should not be less than 18 years of age and should possess one of the following qualifications: 1) a pass in the Higher Secondary (XII Standard) Examination under the 10 + 2 + 3 Scheme of Education of a Statutory Board of Higher Secondary Education, or the Intermediate Examination recognized as equivalent thereto, with at least two years' working experience in an organisation; 2) completion and passing of the first two years of a regular degree course in engineering/technology; 3) a degree in the science stream only, with at least one year's working experience in an organisation; 4) a diploma (3-year course) in engineering/technology recognized by the concerned Director of Technical Education, with at least one year's working experience in an organisation (where a candidate has undergone one year of service training as part of a course recognised by the Board of Apprenticeship Training, Govt. of India , this is counted as working experience); or 5) a degree in engineering/technology of a university in India, or a qualification recognised as equivalent by the All India Council for Technical Education (AICTE). The examinations consist of three parts of written papers: the Preliminary, Section A and Section B. In addition, a project has to be undertaken and a report submitted for acceptance. A student can appear for the Section A papers only after fully passing the Preliminary (or being exempted from it), and for appearance in a section, a student must have fully passed the previous section. The student has to carry out an approved project in an organisation under a project guide, and submit a project report to the Board of Examinations on completion of the project, for its acceptance. The student is considered to have completed the Graduateship Examination only after his/her project report has been accepted by the Board of Examinations.
The Graduateship examination of the Indian Institution of Industrial Engineering is recognised by the Association of Indian Universities and the Ministry of Education & Social Welfare, India, as on par with a bachelor's degree in Industrial Engineering of an Indian university. The Indian Institution of Industrial Engineering is also a member of the Engineering Council of India. The IIIE has instituted many honors and awards for various achievements and contributions to the industrial engineering profession, for members, non-members, chapters and other bodies. All honors and awards are conferred by the IIIE National Council at its National Convention.
https://en.wikipedia.org/wiki/Indian_Institution_of_Industrial_Engineering
The Indian National Academy of Engineering (INAE) was founded in 1987. It consists of India's engineers, engineer-scientists and technologists covering the entire spectrum of engineering disciplines. The academy is registered under the Societies Registration Act 1860 and is an autonomous institution supported partly through grant-in-aid by the Department of Science & Technology, [ 1 ] Government of India. As the only engineering academy of the country, INAE represents India at the International Council of Academies of Engineering and Technological Sciences (CAETS) . INAE functions as an apex body that promotes the practice of engineering and technology and the related sciences for their application to solving problems of national importance. The academy also provides a forum for futuristic planning for the country's development requiring engineering and technological inputs, and brings together specialists from such fields as may be necessary for comprehensive solutions to the needs of the country. The Indian National Academy of Engineering was registered on 20 April 1987 on the recommendation of the Ministry of Civil Supplies. Jai Krishna was appointed its first President. The academy was formally inaugurated by Prime Minister Rajiv Gandhi on 11 April 1988 at a foundation function in New Delhi.
https://en.wikipedia.org/wiki/Indian_National_Academy_of_Engineering
The Indian Ocean Tsunami Warning System , abbreviated as IOTWS , was set up to warn inhabitants of nations bordering the Indian Ocean of approaching tsunamis . The tsunami warning system has been in use since the mid-2000s. A warning system for the Indian Ocean was prompted by the 2004 Indian Ocean earthquake and resulting tsunami, which left approximately 250,000 people dead or missing. Many analysts claimed that the disaster would have been mitigated if an effective warning system had been in place, citing the well-established Hawaii -based Pacific Tsunami Warning Center , which operates in the Pacific Ocean . People in some areas would have had more than adequate time to seek safety had they been aware of the impending catastrophe. The only way to effectively mitigate the impact of a tsunami is through an early warning system : other methods such as sea walls only work for a fraction of waves, whereas a warning system is effective for all waves originating beyond a minimum distance from the coastline. The Indian Ocean Tsunami Warning System was agreed to at a United Nations conference held in January 2005 in Kobe , Japan, as an initial step towards an International Early Warning Programme . Nanometrics (Ottawa, Canada) and RESULTS Marine Private Limited, India, delivered and successfully installed 17 seismic VSAT stations with two central recording stations, providing seismic event alerts to scientists automatically through SMS and e-mail within two minutes. The system became active in late June 2006 under the leadership of UNESCO . It consists of 25 seismographic stations relaying information to 26 national tsunami information centers, as well as six Deep-ocean Assessment and Reporting of Tsunami (DART) buoys. [ 1 ] However, UNESCO warned that further coordination between governments, and methods of relaying information from the centers to the civilians at risk, are required to make the system effective.
[ 2 ] Sensor data is processed by the U.S. Pacific Tsunami Warning Center in Hawaii and the Japan Meteorological Agency , and alerts are forwarded to threatened countries and also made available to the general public. National governments warn citizens through a variety of means, including Cell Broadcast messages, SMS messages, radio and television broadcasts, sirens from dedicated platforms and mosque loudspeakers, and police vehicles with loudspeakers. [ 3 ] The system was not yet operational during the 2006 Pangandaran earthquake and tsunami . The Indonesian government did receive tsunami warnings from the warning centers but did not have a system to relay the alert to its citizens. At least 23,000 people evacuated the coast after the quake, either fearing a tsunami or because their homes had been destroyed. Waves as high as 7.39 m (24.2 ft) still resulted in about 700 fatalities and 9,000 injuries. In the 2012 Indian Ocean earthquake sequence , the system alerted the Indian islands of Andaman and Nicobar within eight minutes. [ 4 ] Some tsunami warning sirens in Aceh were delayed by about 20 minutes owing to failure of the electrical grid caused by the proximity of the earthquake, and evacuation routes in Banda Aceh were jammed with traffic. [ 3 ] Of the 28 countries that ring the Indian Ocean, Australia, Indonesia and India are now responsible for spearheading tsunami warnings in the area. [ 5 ] The system had no means to predict tsunamis from volcanic eruptions; after the 2018 Sunda Strait tsunami , the Indonesian government installed sea level sensors to fill this gap. [ 6 ]
https://en.wikipedia.org/wiki/Indian_Ocean_Tsunami_Warning_System
Indian Salt Service is a Central Engineering Service of the Government of India . Under the administrative control of the Ministry of Commerce and Industry , it is one of the smallest Central services under the Government of India. The organized and uniform collection of tax revenue on salt in British India began under the British Raj . Both before and after that, various native rulers of the Indian Princely states (outside British India proper) collected such revenue in accordance with their own revenue and administrative requirements and resources. In 1856, the government appointed the young William Chichele Plowden , Secretary of the Board of Revenue of the North West Provinces , to report on the establishment of a uniform system of revenue realisation from salt within the British Provinces, and he recommended the extension of the excise system, the reduction of duty, and the introduction of a system of licensing as the measures to achieve this goal. [ 1 ] In 1876, separate departments under a Salt Commissioner were set up, and these operated at the level of each British Province and Presidency . It was with the passing of the Government of India Act 1935 , that within British India (which then included much of present-day Pakistan ) salt came under the exclusive control of the central government, with the Government of India taking over the task of collecting salt revenue and transferring it from the provincial salt agencies to the Central Excise and Revenue Department. [ 1 ] In 1944, the Government of India passed the Central Excises and Salt Act which unified and amended all laws dealing with duties on excise and salt. The Salt Department was originally a part of the Central Board of Revenue under the Ministry of Finance , but since a reorganisation of the ministries of India in 1957 it has come under the authority of the Ministry of Commerce and Industry . 
[ 1 ] According to the Union List of subjects under the Seventh Schedule of the Indian Constitution , the "manufacture, supply and distribution of salt by Union agencies; regulation and control of manufacture, supply and distribution of salt by other agencies" is the responsibility of the Government of India. [ 2 ] The posts of Salt Controller, Deputy Salt Controller and Assistant Salt Controller were re-categorized as Salt Commissioner, Deputy Salt Commissioner and Assistant Salt Commissioner in 1952, and the Indian Salt Services were created in 1954 for the realisation of the entry under the Union List. [ 1 ] The Salt Service has both Group A and Group B wings. [ 3 ] [ 4 ] The Salt Service is one of the smallest services under the Government of India, with a sanctioned strength of only 11 posts. [ 5 ] As a central engineering service, recruitment to the Indian Salt Service is conducted by the Union Public Service Commission . The Indian Salt Service is part of India's Salt Organization, which is headquartered in Jaipur . The service is headed by the Salt Commissioner, below whom are five Deputy Salt Commissioners and nine Assistant Salt Commissioners who staff the agency with the help of other supporting personnel. The Deputy Salt Commissioners head regional offices and the Assistant Salt Commissioners are in charge of divisional offices of the organisation. [ 6 ] The Service has four regional offices, at Chennai , Mumbai , Ahmedabad and Kolkata , and field offices in the salt-producing states. [ 7 ] The Salt Service is tasked with several functions, including monitoring and improving the quality of salt, setting production targets, providing technical guidance to salt manufacturers, leasing and managing department lands for the same, collecting cess, fees and rents, and implementing various schemes aimed at combating iodine deficiency and programs promoting the growth of the salt industry in India. [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Indian_Salt_Service
The Indian Society for Ecological Economics ( INSEE ) was founded in 1998 and registered as a Society under the Societies Registration Act in January 1999. Headquartered in New Delhi , this is a regional society affiliated to the International Society for Ecological Economics (ISEE). The society publishes a bi-annual, open access peer-reviewed journal Ecology, Economy and Society–The INSEE Journal , books and other materials, and holds periodic meetings and conferences to facilitate a voice for ecological economists. The INSEE was initially presided over by Kanchan Chopra of the Institute of Economic Growth , and subsequently by C.H. Hanumantha Rao , Gopal K. Kadekodi , Narpat Singh Jodha, Jayanta Bandyopadhyay , Sudarshan Iyengar, Kanchan Chopra, Amita Shah, Sharachchandra Lele , Pranab Mukhopadhyay, K.N. Ninan , and Shreekant Gupta. The present President of INSEE is Nilanjan Ghosh . The work of INSEE has broadly addressed issues of sustainable development , urbanization , climate change and disasters, global commons and environment . The INSEE journal and the biennial conferences have contributed significantly to the development and environment discourse in India . [ 1 ] [ 2 ] The conceptual approach of INSEE spans the disciplinary divide of the sub-fields of ecological economics and environmental economics and has "...remained both conceptually and methodologically open, and relatively free of this divide", [ 3 ] from a position that does not "...typecast any specific definition of ecological economics". [ 4 ] As noted by Ghosh and others (2016): [ 4 ] "Ecological economics has been acknowledged by the Society to subsume the neoclassical framework of environmental economics, apart from considering the broader body of the literature emerging at the interface of economics, ecological sciences, hydrology, geology, geography, sociology, political science, anthropology etc." 
Ecology, Economy and Society–The INSEE Journal is a bi-annual journal published by the Indian Society for Ecological Economics. The first two issues appeared in 2018 and two issues have been published each year since then. The journal today publishes papers on ecological economics , sustainable development and multi-disciplinary subjects related to ecology, economy, and society. [ 5 ] Besides regular academic and review papers, the journal also carries book reviews and commentaries on related topics. INSEE holds biennial conferences in different locations:
https://en.wikipedia.org/wiki/Indian_Society_for_Ecological_Economics
Astronomy has a long history in the Indian subcontinent , stretching from pre-historic to modern times . Some of the earliest roots of Indian astronomy can be dated to the period of Indus Valley civilisation or earlier. [ 1 ] [ 2 ] Astronomy later developed as a discipline of Vedanga , or one of the "auxiliary disciplines" associated with the study of the Vedas [ 3 ] dating 1500 BCE or older. [ 4 ] The oldest known text is the Vedanga Jyotisha , dated to 1400–1200 BCE (with the extant form possibly from 700 to 600 BCE). [ 5 ] Indian astronomy was influenced by Greek astronomy beginning in the 4th century BCE [ 6 ] [ 7 ] [ 8 ] and through the early centuries of the Common Era, for example by the Yavanajataka [ 6 ] and the Romaka Siddhanta , a Sanskrit translation of a Greek text disseminated from the 2nd century. [ 9 ] Indian astronomy flowered in the 5th–6th century, with Aryabhata , whose work, Aryabhatiya , represented the pinnacle of astronomical knowledge at the time. The Aryabhatiya is composed of four sections, covering topics such as units of time, methods for determining the positions of planets, the cause of day and night, and several other cosmological concepts. [ 10 ] Later, Indian astronomy significantly influenced Muslim astronomy , Chinese astronomy , European astronomy and others. [ 11 ] Other astronomers of the classical era who further elaborated on Aryabhata's work include Brahmagupta , Varahamihira and Lalla . An identifiable native Indian astronomical tradition remained active throughout the medieval period and into the 16th or 17th century, especially within the Kerala school of astronomy and mathematics . Some of the earliest forms of astronomy can be dated to the Indus Valley Civilisation or earlier. [ 1 ] [ 2 ] Some cosmological concepts are present in the Vedas , as are notions of the movement of heavenly bodies and the course of the year. [ 3 ] The Rig Veda is one of the oldest pieces of Indian literature. 
Rig Veda 1-64-11 & 48 describes time as a wheel with 12 parts and 360 spokes (days), with a remainder of 5, making reference to the solar calendar. [ 12 ] As in other traditions, there is a close association of astronomy and religion during the early history of the science, astronomical observation being necessitated by the spatial and temporal requirements of the correct performance of religious ritual. Thus, the Shulba Sutras , texts dedicated to altar construction, discuss advanced mathematics and basic astronomy. [ 13 ] The Vedanga Jyotisha is another of the earliest known Indian texts on astronomy; [ 14 ] it includes details about the Sun, the Moon, the nakshatras and the lunisolar calendar . [ 15 ] [ 16 ] The Vedanga Jyotisha describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. According to the Vedanga Jyotisha, in a yuga or "era", there are 5 solar years, 67 lunar sidereal cycles, 1,830 days, 1,835 sidereal days and 62 synodic months. [ 17 ] Greek astronomical ideas began to enter India in the 4th century BCE following the conquests of Alexander the Great . [ 6 ] [ 7 ] [ 8 ] [ 9 ] By the early centuries of the Common Era, Indo-Greek influence on the astronomical tradition is visible, with texts such as the Yavanajataka [ 6 ] and Romaka Siddhanta . [ 9 ] Later astronomers mention the existence of various siddhantas during this period, among them a text known as the Surya Siddhanta . These were not fixed texts but rather an oral tradition of knowledge, and their content is not extant. The text today known as Surya Siddhanta dates to the Gupta period and was received by Aryabhata . The classical era of Indian astronomy begins in the late Gupta era, in the 5th to 6th centuries. The Pañcasiddhāntikā by Varāhamihira (505 CE) approximates the method for determination of the meridian direction from any three positions of the shadow using a gnomon .
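The yuga figures quoted above are internally consistent, which a few lines of arithmetic can illustrate. This is an illustrative check only; the "modern" values in the comments are the currently accepted mean month lengths, not part of the ancient text.

```python
# Sanity-check of the Vedanga Jyotisha yuga figures quoted above.
DAYS = 1830           # civil days in one 5-year yuga
SOLAR_YEARS = 5
SYNODIC_MONTHS = 62   # new-moon-to-new-moon cycles
SIDEREAL_MONTHS = 67  # returns of the Moon to the same nakshatra
SIDEREAL_DAYS = 1835  # rotations of the sky relative to the stars

synodic_length = DAYS / SYNODIC_MONTHS    # ~29.52 days (modern: 29.5306)
sidereal_length = DAYS / SIDEREAL_MONTHS  # ~27.31 days (modern: 27.3217)

# Over any period, sidereal days exceed solar days by the number of
# solar orbits completed -- here 5 years, hence 1830 + 5 = 1835.
assert SIDEREAL_DAYS == DAYS + SOLAR_YEARS

print(f"synodic month  ~ {synodic_length:.2f} days")
print(f"sidereal month ~ {sidereal_length:.2f} days")
```

Both implied month lengths come out within a few hundredths of a day of the modern mean values, which is why the 5-year yuga worked as a practical luni-solar intercalation cycle.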
[ 13 ] By the time of Aryabhata the motion of planets was treated to be elliptical rather than circular. [ 18 ] Other topics included definitions of different units of time, eccentric models of planetary motion, epicyclic models of planetary motion, and planetary longitude corrections for various terrestrial locations. [ 18 ] The divisions of the year were on the basis of religious rites and seasons ( Ṛtú ). [ 19 ] The duration from mid March–mid May was taken to be spring ( vasanta ), mid May–mid July: summer ( grishma ), mid July–mid September: rains ( varsha ), mid September–mid November: autumn ( sharada ), mid November–mid January: winter ( hemanta ), mid January–mid March: the dews ( shishira ). [ 19 ] In the Vedānga Jyotiṣa , the year begins with the winter solstice. [ 20 ] Hindu calendars have several eras : J. A. B. van Buitenen (2008) reports on the calendars in India: The oldest system, in many respects the basis of the classical one, is known from texts of about 1000 BCE. It divides an approximate solar year of 360 days into 12 lunar months of 27 (according to the early Vedic text Taittirīya Saṃhitā 4.4.10.1–3) or 28 (according to the Atharvaveda , the fourth of the Vedas, 19.7.1.) days. The resulting discrepancy was resolved by the intercalation of a leap month every 60 months. Time was reckoned by the position marked off in constellations on the ecliptic in which the Moon rises daily in the course of one lunation (the period from New Moon to New Moon) and the Sun rises monthly in the course of one year. These constellations ( nakṣatra ) each measure an arc of 13° 20 ′ of the ecliptic circle. The positions of the Moon were directly observable, and those of the Sun inferred from the Moon's position at Full Moon, when the Sun is on the opposite side of the Moon. The position of the Sun at midnight was calculated from the nakṣatra that culminated on the meridian at that time, the Sun then being in opposition to that nakṣatra . 
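The 13°20′ arc mentioned in the passage above is simply the 360° ecliptic divided evenly among the 27 nakṣatras, as a quick check shows (illustrative arithmetic only):

```python
# Each of the 27 nakshatras spans 360/27 degrees of the ecliptic,
# which converts exactly to the 13 degrees 20 arcminutes quoted above.
arc = 360 / 27                          # degrees per nakshatra
degrees = int(arc)                      # whole degrees: 13
minutes = round((arc - degrees) * 60)   # remaining arcminutes: 20

print(f"{degrees} deg {minutes} arcmin per nakshatra")
```
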
[ 19 ] Among the devices used for astronomy was the gnomon , known as Sanku , in which the shadow of a vertical rod cast on a horizontal plane is used to ascertain the cardinal directions, the latitude of the point of observation, and the time of observation. [ 39 ] This device finds mention in the works of Varāhamihira, Āryabhata, Bhāskara and Brahmagupta, among others. [ 13 ] The cross-staff , known as Yasti-yantra , was in use by the time of Bhaskara II (1114–1185 CE). [ 39 ] This device could vary from a simple stick to V-shaped staffs designed specifically for determining angles with the help of a calibrated scale. [ 39 ] The clepsydra ( Ghatī-yantra ) was used in India for astronomical purposes until recent times. [ 39 ] Ōhashi (2008) notes that: "Several astronomers also described water-driven instruments such as the model of fighting sheep." [ 39 ] The armillary sphere was used for observation in India from early times, and finds mention in the works of Āryabhata (476 CE). [ 40 ] The Goladīpikā – a detailed treatise dealing with globes and the armillary sphere – was composed between 1380 and 1460 CE by Parameśvara . [ 40 ] On the subject of the usage of the armillary sphere in India, Ōhashi (2008) writes: "The Indian armillary sphere ( gola-yantra ) was based on equatorial coordinates, unlike the Greek armillary sphere, which was based on ecliptical coordinates, although the Indian armillary sphere also had an ecliptical hoop. Probably, the celestial coordinates of the junction stars of the lunar mansions were determined by the armillary sphere since the seventh century or so. There was also a celestial globe rotated by flowing water." [ 39 ] An instrument invented by the mathematician and astronomer Bhaskara II (1114–1185 CE) consisted of a rectangular board with a pin and an index arm. [ 39 ] This device – called the Phalaka-yantra – was used to determine time from the Sun's altitude.
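The gnomon geometry described above can be sketched in a few lines. This is an illustrative sketch, not from the source: the function name and the 12:5 example ratio are assumptions for illustration. At local noon on an equinox the Sun lies on the celestial equator, so the angle of the shadow from the gnomon's foot equals the observer's latitude.

```python
import math

# Illustrative sketch (not from the source): the geometry behind using a
# gnomon (Sanku) to find latitude. At local noon on an equinox the Sun
# stands on the celestial equator, so the shadow angle equals the latitude.
def latitude_from_equinox_noon_shadow(gnomon_height, shadow_length):
    """Latitude in degrees inferred from an equinoctial noon shadow."""
    return math.degrees(math.atan2(shadow_length, gnomon_height))

# A 12-unit gnomon casting a 5-unit noon shadow implies a latitude of
# about 22.6 degrees (roughly central India).
print(round(latitude_from_equinox_noon_shadow(12, 5), 1))
```

The same right-triangle relation, read in the other direction, gives the time-from-altitude calculations attributed to instruments like the Phalaka-yantra.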
[ 39 ] The Kapālayantra was an equatorial sundial instrument used to determine the Sun's azimuth . [ 39 ] Kartarī-yantra combined two semicircular board instruments to give rise to a 'scissors instrument'. [ 39 ] Introduced from the Islamic world and first finding mention in the works of Mahendra Suri – the court astronomer of Firuz Shah Tughluq (1309–1388 CE) – the astrolabe was further mentioned by Padmanābha (1423 CE) and Rāmacandra (1428 CE) as its use grew in India. [ 39 ] Invented by Padmanābha , a nocturnal polar rotation instrument consisted of a rectangular board with a slit and a set of pointers with concentric graduated circles. [ 39 ] Time and other astronomical quantities could be calculated by adjusting the slit to the directions of α and β Ursa Minor . [ 39 ] Ōhashi (2008) further explains that: "Its backside was made as a quadrant with a plumb and an index arm. Thirty parallel lines were drawn inside the quadrant, and trigonometrical calculations were done graphically. After determining the sun's altitude with the help of the plumb, time was calculated graphically with the help of the index arm." [ 39 ] Ōhashi (2008) reports on the observatories constructed by Jai Singh II of Amber : The Mahārāja of Jaipur, Sawai Jai Singh (1688–1743 CE), constructed five astronomical observatories at the beginning of the eighteenth century. The observatory in Mathura is not extant, but those in Delhi, Jaipur , Ujjain , and Banaras are. There are several huge instruments based on Hindu and Islamic astronomy. For example, the samrāṭ-yantra (emperor instrument) is a huge sundial which consists of a triangular gnomon wall and a pair of quadrants toward the east and west of the gnomon wall. Time has been graduated on the quadrants. [ 39 ] The seamless celestial globe invented in Mughal India , specifically Lahore and Kashmir , is considered to be one of the most impressive astronomical instruments and remarkable feats in metallurgy and engineering.
All globes before and after this were seamed, and in the 20th century it was believed by metallurgists to be technically impossible to create a metal globe without any seams , even with modern technology. In the 1980s, however, Emilie Savage-Smith discovered several celestial globes without any seams in Lahore and Kashmir. The earliest was invented in Kashmir by Ali Kashmiri ibn Luqman in 1589–90 CE during Akbar the Great 's reign; another was produced in 1659–60 CE by Muhammad Salih Tahtawi with Arabic and Sanskrit inscriptions; and the last was produced in Lahore by the Hindu metallurgist Lala Balhumal Lahuri in 1842 during Jagatjit Singh Bahadur 's reign. Twenty-one such globes were produced, and these remain the only examples of seamless metal globes. These Mughal metallurgists developed the method of lost-wax casting in order to produce these globes. [ 41 ] According to David Pingree , there are a number of Indian astronomical texts dated to the sixth century CE or later with a high degree of certainty. There is substantial similarity between these and pre-Ptolemaic Greek astronomy. [ 42 ] Pingree believes that these similarities suggest a Greek origin for certain aspects of Indian astronomy. One direct piece of evidence cited for this approach is the fact that many Sanskrit words related to astronomy, astrology and the calendar are either direct phonetic borrowings from the Greek language or translations that presuppose complex ideas, like the names of the days of the week, which assume a relation between those days, the planets (including the Sun and Moon) and gods. [ citation needed ] With the rise of Greek culture in the east , Hellenistic astronomy filtered eastwards to India, where it profoundly influenced the local astronomical tradition. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 43 ] For example, Hellenistic astronomy is known to have been practised near India in the Greco-Bactrian city of Ai-Khanoum from the 3rd century BCE.
Various sundials, including an equatorial sundial adjusted to the latitude of Ujjain , have been found in archaeological excavations there. [ 44 ] Numerous interactions with the Mauryan Empire , and the later expansion of the Indo-Greeks into India, suggest that transmission of Greek astronomical ideas to India occurred during this period. [ 45 ] The Greek concept of a spherical Earth surrounded by the spheres of planets further influenced astronomers like Varahamihira and Brahmagupta . [ 43 ] [ 46 ] Several Greco-Roman astrological treatises are also known to have been exported to India during the first few centuries of the present era. The Yavanajataka is a Sanskrit text of the 3rd century CE on Greek horoscopy and mathematical astronomy. [ 6 ] Rudradaman 's capital at Ujjain "became the Greenwich of Indian astronomers and the Arin of the Arabic and Latin astronomical treatises; for it was he and his successors who encouraged the introduction of Greek horoscopy and astronomy into India." [ 47 ] Later in the 6th century, the Romaka Siddhanta ("Doctrine of the Romans") and the Paulisa Siddhanta ("Doctrine of Paul ") were considered two of the five main astrological treatises compiled by Varāhamihira in his Pañca-siddhāntikā ("Five Treatises"), a compendium of Greek, Egyptian, Roman and Indian astronomy. [ 48 ] Varāhamihira goes on to state that "The Greeks, indeed, are foreigners, but with them this science (astronomy) is in a flourishing state." [ 9 ] Another Indian text, the Gargi-Samhita , similarly compliments the Yavanas (Greeks), noting that, though barbarians, they must be respected as seers for their introduction of astronomy into India. [ 9 ] Indian astronomy reached China with the expansion of Buddhism during the Later Han (25–220 CE). [ 49 ] Further translation of Indian works on astronomy was completed in China by the Three Kingdoms era (220–265 CE).
[ 49 ] However, the most detailed incorporation of Indian astronomy occurred only during the Tang dynasty (618–907 CE), when a number of Chinese scholars – such as Yi Xing – were versed in both Indian and Chinese astronomy . [ 49 ] A system of Indian astronomy was recorded in China as Jiuzhi-li (718 CE), the author of which was an Indian by the name of Qutan Xida – a translation of Devanagari Gotama Siddha – the director of the Tang dynasty's national astronomical observatory. [ 49 ] Fragments of texts from this period indicate that Arabs adopted the sine function (inherited from Indian mathematics ) instead of the chords of arc used in Hellenistic mathematics . [ 50 ] Another Indian influence was an approximate formula used for timekeeping by Muslim astronomers . [ 51 ] Through Islamic astronomy, Indian astronomy had an influence on European astronomy via Arabic translations. During the Latin translations of the 12th century , Muhammad al-Fazari 's Great Sindhind (based on the Surya Siddhanta and the works of Brahmagupta ) was translated into Latin in 1126 and was influential at the time. [ 52 ] Many Indian works on astronomy and astrology were translated into Middle Persian at Gundeshapur in the Sasanian Empire and later from Middle Persian into Arabic. [ citation needed ] In the 17th century, the Mughal Empire saw a synthesis between Islamic and Hindu astronomy, in which Islamic observational instruments were combined with Hindu computational techniques. While there appears to have been little concern for planetary theory, Muslim and Hindu astronomers in India continued to make advances in observational astronomy and produced nearly a hundred Zij treatises. Humayun built a personal observatory near Delhi , while Jahangir and Shah Jahan also intended to build observatories but were unable to do so.
After the decline of the Mughal Empire, it was a Hindu king, Jai Singh II of Amber , who attempted to revive both the Islamic and Hindu traditions of astronomy, which were stagnating in his time. In the early 18th century, he built several large observatories called Yantra Mandirs in order to rival Ulugh Beg 's Samarkand observatory and to improve on the earlier Hindu computations in the Siddhantas and Islamic observations in Zij-i-Sultani . The instruments he used were influenced by Islamic astronomy, while the computational techniques were derived from Hindu astronomy. [ 53 ] [ 54 ] Some scholars have suggested that knowledge of the results of the Kerala school of astronomy and mathematics may have been transmitted to Europe through the trade route from Kerala by traders and Jesuit missionaries. [ 55 ] Kerala was in continuous contact with China, Arabia and Europe. The existence of circumstantial evidence [ 56 ] such as communication routes and a suitable chronology certainly makes such a transmission a possibility. However, there is no direct evidence by way of relevant manuscripts that such a transmission took place. [ 55 ] In the early 18th century, Jai Singh II of Amber invited European Jesuit astronomers to one of his Yantra Mandir observatories; they had brought back the astronomical tables compiled by Philippe de La Hire in 1702. After examining La Hire's work, Jai Singh concluded that the observational techniques and instruments used in European astronomy were inferior to those used in India at the time – it is uncertain whether he was aware of the Copernican Revolution via the Jesuits. [ 57 ] He did, however, employ the use of telescopes . In his Zij-i Muhammad Shahi , he states: "telescopes were constructed in my kingdom and using them a number of observations were carried out".
[ 58 ] Following the arrival of the British East India Company in the 18th century, the Hindu and Islamic traditions were slowly displaced by European astronomy, though there were attempts at harmonising these traditions. The Indian scholar Mir Muhammad Hussain travelled to England in 1774 to study Western science and, on his return to India in 1777, wrote a Persian treatise on astronomy. He wrote about the heliocentric model, and argued that there exist an infinite number of universes ( awalim ), each with their own planets and stars, and that this demonstrates the omnipotence of God, who is not confined to a single universe. [ 59 ] The last known Zij treatise was the Zij-i Bahadurkhani , written in 1838 by the Indian astronomer Ghulam Hussain Jaunpuri (1760–1862) and printed in 1855, dedicated to Bahadur Khan . The treatise incorporated the heliocentric system into the Zij tradition. [ 60 ] ( Jantar means yantra , "instrument, machine"; mantar means "calculate".) Jai Singh II in the 18th century took great interest in science and astronomy. He built Jantar Mantars in Jaipur , Delhi , Ujjain , Varanasi and Mathura . The Jaipur instance has 19 different astronomical calculators. These comprise astronomical clocks for tracking and predicting days, eclipses, and the visibility of key constellations – principally, though not exclusively, those of the zodiac rather than the year-round northern polar constellations. Astronomers from abroad were invited and admired the complexity of certain devices. Because brass time-calculators are imperfect and need precise re-setting to match true local time, there remains also his Samrat Yantra, the largest sundial in the world, which divides each daylit hour into solar 15-minute, 1-minute and 6-second subunits.
[ 61 ] Other notable contributions include the models of the Kerala school (active 1380 to 1632), which involved higher-order polynomials and other cutting-edge algebra; many were put to use, principally for predicting motions and alignments within the Solar System. [ 67 ] [ 68 ] [ 69 ] During the 1920s, astronomers like Sisir Kumar Mitra , C.V. Raman and Meghnad Saha worked on various projects such as sounding of the ionosphere through ground-based radio and the Saha ionisation equation . Homi J. Bhabha and Vikram Sarabhai made significant contributions. [ 70 ] A. P. J. Abdul Kalam , also known as the Missile Man of India, assisted in development and research for the Defence Research and Development Organisation and the Indian Space Research Organisation's (ISRO) civilian space programme and launch vehicle technology. [ 71 ] [ 72 ] [ 73 ] Bhabha established the Tata Institute of Fundamental Research and Vikram Sarabhai established the Physical Research Laboratory . These organisations researched cosmic radiation and conducted studies of the upper atmosphere . [ 70 ] In 1950, the Department of Atomic Energy was founded with Bhabha as secretary and provided funding for space research in the country. [ 70 ] The Indian National Committee for Space Research (INCOSPAR) was founded in 1962 at the urging of Sarabhai. [ 74 ] [ 75 ] ISRO succeeded INCOSPAR and the Department of Space (under Indira Gandhi ) was established, thereby institutionalising astronomical research in India. [ 75 ] [ 76 ] Organisations like SPARRSO in Bangladesh, [ 77 ] SUPARCO in Pakistan [ 78 ] and others were founded shortly after.
https://en.wikipedia.org/wiki/Indian_astronomy
The Indian rivers interlinking project is a proposed large-scale civil engineering project that aims to effectively manage water resources in India by linking rivers via a network of reservoirs and canals to enhance irrigation and groundwater recharge and to reduce persistent floods in some parts of the country and water shortages in others. [ 1 ] [ 2 ] India accounts for 18% of the global population but only about 4% of the world's water resources . One proposed solution to the country's water woes is to link its rivers and lakes. [ 3 ] The interlinking project has been split into three parts: a northern Himalayan rivers interlink component, a southern peninsular component, and, starting in 2005, an intrastate river-linking component. [ 4 ] The project is managed by India's National Water Development Agency (NWDA), part of the Ministry of Jal Shakti . NWDA has studied and prepared reports on 14 interlink projects for the Himalayan component, 16 for the peninsular component, and 37 intrastate river-linking projects. [ 4 ] Average rainfall in India is about 4,000 billion cubic metres, but most of it falls over a 4-month period – June through September. Furthermore, rain across the large nation is not uniform, with the east and north getting most of the rainfall and the west and south getting less. [ 5 ] [ 6 ] India also sees years of excess monsoons and floods, followed by below-average or late monsoons accompanied by droughts. This geographical and temporal variance in availability of natural water versus year-round demand for irrigation, drinking, and industrial water creates a demand–supply gap that has been worsening with India's rising population. [ 6 ] Proponents of the river interlinking projects claim the answer to India's water problem is to conserve the abundant monsoon water bounty, store it in reservoirs, and deliver this water – using the planned project – to areas and at times when water becomes scarce.
[ 5 ] Beyond water security , the project is also seen to offer potential benefits to transport infrastructure through navigation and hydro power, as well as broadening income sources in rural areas through fish farming . Opponents are concerned about well-known environmental, ecological, and social displacement impacts, as well as unknown risks associated with tinkering with nature. [ 2 ] Others are concerned that some projects may have international impacts. [ 7 ] The proposal to interlink the rivers of India has a long history. During British colonial rule , for example, the 19th-century engineer Arthur Cotton proposed a plan to interlink major Indian rivers in order to hasten the import and export of goods from the colony in the Indian subcontinent , South Asia , as well as to address water shortages and droughts in southeastern India, now Andhra Pradesh and Odisha . [ 8 ] In the 1970s, Dr. K.L. Rao , a dam designer and former irrigation minister, proposed a "National Water Grid". [ 9 ] He was concerned about the severe shortages of water in the South and the repetitive flooding in the North every year. He regarded the Brahmaputra and Ganga basins as water-surplus areas, and central and south India as water-deficit areas, and proposed that surplus water be diverted to the areas of deficit. When Rao made the proposal, several inter-basin transfer projects had already been successfully implemented in India, and Rao suggested that this success be scaled up. [ 9 ] In 1980, India's then Ministry of Water Resources came out with a report entitled "National Perspectives for Water Resources Development". This report split the water development project into two parts – the Himalayan and Peninsular components. The Congress Party then came to power and abandoned the plan.
In 1982, India financed and set up a committee of nominated experts, through the National Water Development Agency (NWDA), [ 1 ] to complete detailed studies, surveys and investigations in respect of reservoirs, canals and all aspects of the feasibility of interlinking peninsular rivers, and related water resource management. NWDA produced many reports over 30 years, from 1982 through 2013. [ 1 ] However, the projects were not pursued. The river inter-linking idea was revived in 1999, after the National Democratic Alliance formed the Government of India , but this time with a major strategic shift: the proposal was modified to intra-basin development as opposed to inter-basin water transfer. [ 10 ] By 2004, the United Progressive Alliance (UPA) led by the Congress Party was in power, and it resurrected its opposition to the project concept and plans. Social activists campaigned that the project could be disastrous in terms of cost, potential environmental and ecological damage, and effects on the water table, and pointed to the dangers inherent in tinkering with nature. The central government of India, from 2005 through 2013, instituted a number of committees, rejected a number of reports, and financed a series of feasibility and impact studies, each under changing environmental law and standards. [ 10 ] [ 11 ] In February 2012, while disposing of a Public Interest Litigation (PIL) lodged in 2002, the Supreme Court (SC) refused to give any direction for implementation of the Rivers Interlinking Project, stating that it involves policy decisions which fall within the legislative competence of the state and central governments. However, the SC directed the Ministry of Water Resources to constitute an expert committee, the 'Special Committee on ILR' (SC ILR), to pursue the matter with the governments, as no party had pleaded against the implementation of the Rivers Interlinking Project.
[ 12 ] India receives about 4,000 cubic kilometers of rain annually, or about 1 million gallons of fresh water per person every year. [ 2 ] However, the precipitation pattern in India varies dramatically across distance and over calendar months. Much of the precipitation in India, about 85%, is received during the summer months through monsoons in the Himalayan catchments of the Ganges-Brahmaputra-Meghna (GBM) basin. [ 13 ] The northeastern region of the country receives heavy precipitation in comparison with the northwestern, western and southern parts. The uncertainty of the start date of the monsoons, sometimes marked by prolonged dry spells, and fluctuations in seasonal and annual rainfall are a serious problem for the country. [ 1 ] The nation sees cycles of drought years and flood years, with large parts of the west and south experiencing more deficits and larger variations, resulting in immense hardship, particularly for the poorest farmers and rural populations. Lack of irrigation water regionally leads to crop failures and farmer suicides . Despite abundant rains during July–September, some regions in other seasons see shortages of drinking water. In some years, the problem temporarily becomes too much rainfall and weeks of havoc from floods. [ 14 ] This excess-scarcity, regional disparity and the flood-drought cycles have created a need for water resources management. [ 15 ] River inter-linking is one proposal to address that need. [ 1 ] [ 2 ] Due to global warming , the use of fossil fuels is discouraged, while carbon-neutral , clean, and renewable energy sources like solar and wind power – intermittent and variable types of electricity generation – are encouraged. Pumped-storage hydroelectric power plants are needed to store the surplus electricity generated during daylight hours by solar power plants and to supply the required electricity during the night.
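A quick sanity check of the per-capita figure quoted above. This is a sketch, not from the source: the population value used is an assumed round number, and the result only confirms the order of magnitude.

```python
# Sanity check of the figures quoted above: ~4,000 km^3 of rain per year
# works out to roughly a million US gallons per person. The population
# figure (~1.2 billion) is an assumption, not stated in the source.
rain_m3 = 4000 * 1e9          # 4,000 cubic kilometres in cubic metres
population = 1.2e9            # assumed population
GALLONS_PER_M3 = 264.172      # US gallons per cubic metre

per_capita_gallons = rain_m3 / population * GALLONS_PER_M3
print(round(per_capita_gallons))  # ~880,000 gallons: order of a million
```

The quoted "1 million gallons per person" is therefore a round-number approximation rather than an exact figure.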
Water security , energy security , and food security can be achieved by interlinking rivers and envisaging multipurpose freshwater coastal reservoirs . [ 15 ] Population increase in India is the other driver of the need for river inter-linking. India's population growth rate has been falling, but the population still increases by about 10 to 15 million people every year. The resulting demand for food must be satisfied with higher yields and better crop security, both of which require adequate irrigation of about 140 million hectares of land. [ 16 ] Currently, just a fraction of that land is irrigated, and most irrigation relies on the monsoon. River interlinking is claimed to be a possible means of assured and better irrigation for more farmers, and thus better food security for a growing population. [ 1 ] In a tropical country like India with high evapotranspiration , food security can be achieved through water security, which in turn is achieved through energy security, since energy is needed to pump water to uplands from water-surplus river points at lower elevations down to sea level. [ 17 ] [ 18 ] When sufficient salt export from a river basin to the sea does not take place, in an attempt to harness the river water fully, it leads to river basin closure, and the available water in the downstream area of the river basin, closer to the sea, becomes saline and/or alkaline water . Land irrigated with saline or alkaline water gradually turns into saline or alkali soils . [ 19 ] [ 20 ] [ 21 ] Water percolation in alkali soils is very poor, leading to waterlogging problems. Proliferation of alkali soils would compel farmers to cultivate only rice or grasses, as soil productivity is poor with other crops and tree plantations . [ 22 ] Cotton is the preferred crop in saline soils compared to many other crops.
[ 23 ] Interlinking water-surplus rivers with water-deficit rivers is needed for the long-term sustainable productivity of the river basins and for mitigating anthropogenic influences on the rivers by allowing adequate salt export to the sea in the form of environmental flows . India needs infrastructure for logistics and the movement of freight. Using connected rivers for navigation is a cleaner, low-carbon-footprint form of transport infrastructure , particularly for ores and food grains . [ 1 ] India currently stores only 30 days of rainfall, while developed nations strategically store 900 days' worth of water demand in arid-area river basins and reservoirs. India's dam reservoirs store only 200 cubic meters per person. India also relies excessively on groundwater, which accounts for over 50 percent of the irrigated area, with 20 million tube wells installed. About 15 percent of India's food is being produced using rapidly depleting groundwater . The end of the era of massive expansion in groundwater use is going to demand greater reliance on surface water supply systems. Proponents of the project suggest India's water situation is already critical, and that it needs sustainable development and management of surface water and groundwater usage. [ 24 ] Some proponents feel that India is not running out of water, but water is running out of India. The river inter-linking feasibility reports completed by 2013 suggest substantial investment needs and potential economic impact (cost estimates were made in Indian rupees, with US$ conversions at the then-current exchange rates). Some activists and scholars have, between 2002 and 2008, questioned the merits of the Indian rivers inter-link projects, and questioned whether appropriate study of the benefits and the risks to environment and ecology has been completed so far. Bandyopadhyay et al. claim there are knowledge gaps between the claimed benefits and the potential threats from environmental and ecological impact.
[ 2 ] They also question whether the inter-linking project will deliver the claimed benefits of flood control . Vaidyanathan claimed, in 2003, that there are uncertainties and unknowns about operations – how much water will be shifted and when – and whether this may cause waterlogging, salinity/alkalinity and resulting desertification in the command areas of these projects. [ 40 ] Other scholars have asked whether there are other technologies to address the cycle of droughts and flood havoc, with fewer uncertainties about potential environmental and ecological impact. [ 41 ] Rivers may change their courses approximately every 100 years, so the interlinking may not be useful after 100 years. Interlinking may also lead to deforestation and cause ecological imbalances, and is widely expected to alter fish communities. [ 42 ] [ 43 ] [ 44 ] A study concluded that the project could reduce rainfall and change rainfall patterns in the region. [ 45 ] Water storage and distributed reservoirs are likely to displace people – a rehabilitation process that has attracted the concern of sociologists and political groups. Further, the inter-link would create a path for aquatic ecosystems to be affected by the movement of species from one river to another, which in turn may affect the livelihoods of people who rely on specific aquatic species for their income. Lakra et al., in their 2011 study, claim [ 46 ] that large dams, inter-basin transfers and water withdrawal from rivers are likely to have negative as well as positive impacts on freshwater aquatic ecosystems. As regards the impact on fish and aquatic biodiversity , there could be positive as well as negative effects. India has a growing population, and a large impoverished rural population that relies on monsoon-irrigated agriculture. Weather uncertainties, and potential climate-change-induced weather volatility, raise concerns about social stability and the impact of floods and droughts on rural poverty.
The population of India is expected to grow further, at a decelerating pace, and stabilise around 1.5 billion by 2050 – another 300 million people, the size of the United States, compared to the 2011 census. This will increase demand for reliable sources of food and improved agricultural yields, both of which, claims India's National Council of Applied Economic Research, [ 5 ] require a significantly improved irrigation network compared to the current state. The average rainfall in India is about 4,000 billion cubic meters, of which the annual surface water flow is estimated at 1,869 billion cubic meters. Of this, for topological and other reasons, only about 690 billion cubic meters of the available surface water can be utilised for irrigation, industrial, drinking and groundwater replenishment purposes. Together with usable groundwater, about 1,100 billion cubic meters of water is available, on average, every year in India. [ 5 ] This amount of water is adequate for irrigating 140 million hectares. As of 2007, about 60% of this potential was realised through the irrigation network, the natural flow of Indian rivers and lakes, and the adoption of pumps to draw groundwater for irrigation. 80% of the water India receives through its annual rains and surface water flow arrives over a 4-month period – June through September. [ 5 ] [ 6 ] This spatial and temporal variance in availability of natural water versus year-round demand for irrigation, drinking and industrial water creates a demand-supply gap that only worsens with India's rising population. Proponents claim the answer to India's water problem is to conserve the abundant monsoon water bounty, store it in reservoirs, and use this water in areas which have occasional inadequate rainfall, or are known to be drought-prone, or at those times of the year when water supplies become scarce.
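The budget arithmetic in this paragraph can be summarised in a short sketch. The groundwater share below is inferred as the difference between the quoted totals, not stated directly in the source.

```python
# Rough sketch of the water-budget arithmetic quoted above (all figures
# in billion cubic metres, BCM, per year). The groundwater share is
# inferred as the gap between the quoted totals, not stated in the source.
rainfall = 4000          # average annual rainfall
surface_flow = 1869      # annual surface water flow
usable_surface = 690     # utilisable surface water
total_usable = 1100      # usable surface water plus groundwater

inferred_groundwater = total_usable - usable_surface  # ~410 BCM

# About 80% of this water arrives in the June-September monsoon window.
monsoon_share = 0.80 * rainfall  # 3,200 BCM over ~4 months
print(inferred_groundwater, monsoon_share)
```

The gap between what falls (4,000 BCM) and what is usable (~1,100 BCM) is the core argument for storage and inter-basin transfer made by the project's proponents.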
[ 5 ] [ 47 ] In a 2007 article, [ 7 ] the authors claim that the inter-linking of rivers initially appears to be a costly proposition in ecological, geological, hydrological and economic terms, but that in the long run the net benefits will far outweigh these costs or losses. However, they suggest that there is a lack of an international legal framework for the projects India is proposing. In at least some inter-link projects, neighbouring countries such as Bangladesh may be affected, and international concerns about the project must be negotiated. The cost of power generation by solar power projects is expected to fall below Rs. 1.0 per kWh within a few years. [ 48 ] [ 49 ] The availability of cheaper, clean and perennial/renewable power would favour more water lifting/pumping and tunnels in the river link projects, rather than purely gravity links, to economise on cost, reduce construction time and reduce land submergence through optimum use of existing reservoirs and less new storage. Tunnelling technology has also undergone drastic improvements, making tunnels a shorter-distance, cost-effective alternative to gravity-fed open canal links. [ 50 ] The BJP -led NDA government of Atal Bihari Vajpayee had propagated the idea of interlinking of rivers to deal with the problems of drought and floods in different parts of the country at the same time. [ 11 ] The Congress general secretary Rahul Gandhi said in 2009 that the entire idea of interlinking of rivers was dangerous and that he was opposed to it, as it would have "severe" environmental implications. Jairam Ramesh , a cabinet minister in the former UPA government, said the idea of interlinking India's rivers was a "disaster", putting a question mark on the future of the ambitious project. [ 51 ] Karunanidhi , whose DMK has been a key ally of the Congress-led UPA at the centre, wrote that linking rivers at the national level is perhaps the only permanent solution to the water scarcity problem in the country.
Karunanidhi said the government should make an assessment of the project's feasibility, starting with the south-bound rivers. For the 2014 general elections, the DMK added nationalisation and inter-linking of rivers to its manifesto. [ 52 ] The Kalpasar Project is an irrigation project which envisages storing Narmada River water in an offshore freshwater reservoir located in the Gulf of Khambhat for further pumping to the arid Saurashtra region for irrigation use. The National Perspective Plan envisions about 150 million acre-feet (MAF) (185 billion cubic meters) of water storage along with building inter-links. [ 53 ] These storages and the interlinks will add nearly 170 million acre-feet of water for beneficial uses in India, enabling irrigation over an additional area of 35 million hectares, generation of 40,000 MW of hydro power capacity, flood control and other benefits. The total surface water available to India is nearly 1,440 million acre-feet (1,776 billion cubic meters), of which only 220 million acre-feet was being used in the year 1979. The rest is neither utilised nor managed, and it causes disastrous floods year after year. Up to 1979, India had built over 600 storage dams with an aggregate capacity of 171 billion cubic meters. These small storages enable barely a seventh of the water available in the country to be utilised beneficially to its fullest potential. [ 53 ] From an India-wide perspective, at least 946 billion cubic meters of water flow annually could be utilised in India, power generation capacity added, and perennial inland navigation provided, along with some benefits of flood control. The project claims that through the development of the rivers of the sub-continent, each state of India, as well as its international neighbours, stands to gain by way of additional irrigation, hydro power generation, navigation and flood control. [ 53 ] The project may also contribute to food security for the anticipated population peak of India.
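The plan's storage figures are quoted both in million acre-feet (MAF) and billion cubic meters (BCM). A small conversion sketch, using the standard acre-foot volume (an assumption for illustration, not stated in the source), shows the paired numbers quoted above are consistent.

```python
# One acre-foot is about 1,233.48 m^3, so 1 MAF is roughly 1.233 BCM.
# Quick check that the paired MAF/BCM figures quoted above agree.
BCM_PER_MAF = 1.23348  # standard acre-foot conversion, assumed here

def maf_to_bcm(maf):
    """Convert million acre-feet to billion cubic metres."""
    return maf * BCM_PER_MAF

print(round(maf_to_bcm(150)))   # planned storage: ~185 BCM
print(round(maf_to_bcm(1440)))  # total surface water: ~1,776 BCM
```

Both results match the parenthetical BCM figures given in the plan, so the two unit systems in the text describe the same quantities.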
[ 53 ] The Ganga-Brahmaputra-Meghna is a major international drainage basin which carries more than 1,000 million acre-feet out of the total 1,440 million acre-feet in India. Water is a scarce commodity, and several basins such as the Cauvery, Yamuna, Sutlej, Ravi and other smaller inter-State/intra-State rivers are short of water. 99 districts of the country are classified as drought prone, while an area of about 40 million hectares is prone to recurring floods. [ 53 ] The inter-link project is expected to help reduce the scale of this suffering and the associated losses. The National Perspective Plan has comprised, since the 1980s, two main components; an intrastate component was added in 2005. Himalayan Rivers Development envisages construction of storage reservoirs on the main Ganga and the Brahmaputra and their principal tributaries in India and Nepal, along with an inter-linking canal system to transfer surplus flows of the eastern tributaries of the Ganga to the west, apart from linking the main Brahmaputra with the Ganga. [ 53 ] Apart from providing irrigation to an additional area of about 22 million hectares and generating about 30 million kilowatts of hydro-power, it will provide substantial flood control in the Ganga-Brahmaputra basin. The scheme will benefit not only the states in the Ganga-Brahmaputra basin, but also Nepal and Bangladesh, assuming river flow management treaties are successfully negotiated. [ 53 ] The Himalayan component would consist of a series of storage dams built along the Ganga and Brahmaputra rivers in India, Nepal and Bhutan. Canals would be built to transfer surplus water from the eastern tributaries of the Ganga to the west. This is expected to contribute to flood control measures in the Ganga and Brahmaputra river basins. It could also provide excess water for the Farakka Barrage to flush out the silt at the port of Kolkata .
By 2015, the fourteen inter-links under consideration for the Himalayan component were as follows, with feasibility study status identified: [ 54 ] [ 55 ] This scheme is divided into four major parts. This component will irrigate an additional 25 million hectares by surface waters and 10 million hectares by increased use of ground waters, and generate hydro power, apart from the benefits of improved flood control and regional navigation. [ 53 ] The main part of the project would send water from the eastern part of India to the south and west. [ 53 ] The southern development project (Phase I) would consist of four main parts. First, the Mahanadi , Godavari , Krishna and Kaveri rivers would all be inter-linked by canals. Reservoirs and dams would be built along the course of these rivers. These would be used to transfer surplus water from the Mahanadi and Godavari rivers to the south of India. Under Phase II, some rivers that flow west to the north of Mumbai and the south of the Tapi would be inter-linked. The water would supply the additional drinking water needs of Mumbai and provide irrigation in the coastal areas of Maharashtra . In Phase III, the Ken and Chambal rivers would be inter-linked to serve the regional water needs of Madhya Pradesh and Uttar Pradesh . Under Phase IV, a number of west-flowing rivers in the Western Ghats would be inter-linked for irrigation purposes to east-flowing rivers such as the Kaveri and Krishna. The 800-km-long Mahanadi - Godavari interlinking project would link the River Sankosh , originating in Bhutan, to the Godavari in Andhra Pradesh via rivers such as the Teesta , Mahananda , Subarnarekha and Mahanadi. [ 56 ] The inter-links under consideration for the Peninsular component are as follows, with the respective status of feasibility studies: [ 57 ] [ 58 ] India approved and commissioned the NWDA in June 2005 to identify and complete feasibility studies of intra-state projects that would inter-link rivers within a single state.
[ 60 ] The governments of Nagaland, Meghalaya, Kerala, Punjab, Delhi, Sikkim and Haryana, and the Union Territories of Puducherry, Andaman & Nicobar Islands, Daman & Diu and Lakshadweep, responded that they have no intrastate river connecting proposals. The Government of Puducherry proposed the Pennaiyar – Sankarabarani link (even though it is not an intrastate project). Among the states, Bihar proposed 6 inter-linking projects, Maharashtra 20, Gujarat 1, Orissa 3, Rajasthan 2, Jharkhand 3 and Tamil Nadu 1, each between rivers inside their respective territories. [ 60 ] Since 2005, the NWDA has completed feasibility studies on these projects, finding 1 project infeasible and 20 projects feasible; 1 project was withdrawn by the Government of Maharashtra, and the others are still under study. [ 61 ] On 16 September 2015, the first linking, of the rivers Krishna and Godavari , was completed. [ 62 ] It is still under review, but it is not considered a true river interlinking [ by whom? ] as it is merely a small lift irrigation scheme with a few pipelines. [ citation needed ] The NWDA drafted a Detailed Project Report (DPR) for the Godavari-Cauvery link project, consisting of three links: Godavari (Inchampalli/Janampet) – Krishna (Nagarjunasagar), Krishna (Nagarjunasagar) – Pennar (Somasila), and Pennar (Somasila) – Cauvery (Grand Anicut). The DPR was circulated to the states involved in March 2019, and their concerns were addressed in September 2020. [ 63 ] The Indian Rivers Inter-link project is similar in scope and technical challenge to other major global river inter-link projects. Completed river inter-linking projects elsewhere include the Marne-Rhine Canal in France, [ 79 ] [ 80 ] the All-American Canal and California State Water Project in the United States, and the South–North Water Transfer Project in China . [ 81 ]
https://en.wikipedia.org/wiki/Indian_rivers_interlinking_project
The Indiana pi bill was bill 246 of the 1897 sitting of the Indiana General Assembly , one of the most notorious attempts to establish mathematical truth by legislative fiat . Despite its name, the main result claimed by the bill is a method to square the circle . The bill implies incorrect values of the mathematical constant π , the ratio of the circumference of a circle to its diameter . [ 1 ] The bill, written by a physician and an amateur mathematician, never became law due to the intervention of C. A. Waldo , a professor at Purdue University , who happened to be present in the legislature on the day it went up for a vote. The mathematical impossibility of squaring the circle using only straightedge and compass constructions , suspected since ancient times, had been proven 15 years previously, in 1882, by Ferdinand von Lindemann . Better approximations of π than those implied by the bill have been known since ancient times. In 1894, Indiana physician Edward J. Goodwin ( c. 1825 – 1902 [ 2 ] ), also called "Edwin Goodwin" by some sources, [ 3 ] believed that he had discovered a way of squaring the circle. [ 4 ] He proposed a bill to state representative Taylor I. Record, who introduced it in the House under the title "A Bill for an act introducing a new mathematical truth and offered as a contribution to education to be used only by the State of Indiana free of cost by paying any royalties whatever on the same, provided it is accepted and adopted by the official action of the Legislature of 1897". The text of the bill consists of a series of mathematical claims, followed by a recitation of Goodwin's previous accomplishments: ... his solutions of the trisection of the angle , doubling the cube and quadrature of the circle having been already accepted as contributions to science by the American Mathematical Monthly ... 
And be it remembered that these noted problems had been long since given up by scientific bodies as unsolvable mysteries and above man's ability to comprehend. (Goodwin's "solutions" were indeed published in the American Mathematical Monthly , with a disclaimer of "published by request of the author".) [ 5 ] Upon its introduction in the Indiana House of Representatives , the bill's language and topic caused confusion; a member proposed that it be referred to the Finance Committee, but the Speaker accepted another member's recommendation to refer the bill to the Committee on Swamplands, where the bill could "find a deserved grave". It was transferred to the Committee on Education, which reported favorably. [ 6 ] Following a motion to suspend the rules , the bill passed on February 6, 1897, [ 7 ] without a dissenting vote. [ 6 ] The news of the bill drew an alarmed response from Der Tägliche Telegraph , a German-language newspaper in Indianapolis, which viewed the event with less favor than its English-speaking competitors. [ 8 ] As this debate concluded, Purdue University professor C. A. Waldo arrived in Indianapolis to secure the annual appropriation for the Indiana Academy of Science . An assemblyman handed him the bill, offering to introduce him to the genius who wrote it. Waldo declined, saying that he already knew as many crazy people as he cared to. [ 6 ] [ 9 ] When it reached the Indiana Senate , the bill was not treated as kindly, for Waldo had talked to the senators beforehand. The Committee on Temperance, to which it had been assigned, reported it favorably, but on February 12, 1897, the Senate postponed the bill indefinitely . The bill had nearly passed, but opinion changed when one senator observed that the General Assembly lacked the power to define mathematical truth. [ 10 ] Influencing some of the senators was a report that major newspapers, such as the Chicago Tribune , were ridiculing the situation.
[ 7 ] According to the Indianapolis News article of February 13, 1897: [ 11 ] ... the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried. [ 6 ] Although the bill has become known as the "pi bill", its text does not mention the name "pi" at all. Goodwin appears to have thought of the ratio between the circumference and diameter of a circle as distinctly secondary to his main aim of squaring the circle. Towards the end of Section 2, the following passage appears: Furthermore, it has revealed the ratio of the chord and arc of ninety degrees, which is as seven to eight, and also the ratio of the diagonal and one side of a square which is as ten to seven, disclosing the fourth important fact, that the ratio of the diameter and circumference is as five-fourths to four[.] [ 12 ] In other words, π = 4/1.25 = 3.2, and √2 = 10/7 ≈ 1.429 (the true value is about 1.414). Goodwin's main goal was not to measure lengths in the circle but to find a square with the same area as the circle . He knew that Archimedes ' formula for the area of a circle, which calls for multiplying the diameter by one-fourth of the circumference, is not considered a solution to the ancient problem of squaring the circle, because the problem is to construct the area using a compass and straightedge only.
Archimedes did not give a method for constructing a straight line with the same length as the circumference. Goodwin was unaware of this central requirement; he believed that the problem with the Archimedean formula was that it gave wrong numerical results, and that a solution to the ancient problem should replace it with a "correct" formula. So he proposed, without argument, his method: It has been found that a circular area is to the square on a line equal to the quadrant of the circumference, as the area of an equilateral rectangle is to the square on one side. [ 12 ] An "equilateral rectangle" is, by definition, a square . The passage asserts that the area of a circle is the same as that of a square with the same perimeter. This claim results in mathematical contradictions, to which Goodwin attempts to respond. For example, right after the above quotation: The diameter employed as the linear unit according to the present rule in computing the circle's area is entirely wrong, as it represents the circle's area one and one-fifth times the area of a square whose perimeter is equal to the circumference of the circle. In Goodwin's model circle, with diameter 10 and circumference 32 (his five-fourths-to-four ratio), the Archimedean area (accepting Goodwin's values for the circumference and diameter) would be 80, while Goodwin's proposed rule leads to an area of 64. The area found by Goodwin's rule is π/4 times the true area of the circle, which, in many accounts of the pi bill, is interpreted as a claim that π = 4, but there is no evidence in the bill that Goodwin intended to make such a claim. He repeatedly denied that the area of the circle has anything to do with its diameter. [ citation needed ] Reprinted in: Lennart Berggren, Jonathan Borwein, and Peter Borwein, Pi: A Source Book , 3rd ed. (New York: Springer-Verlag, 2004), p. 230. Edward J. Goodwin (1895), "(A) The trisection of an angle; (B) Duplication of the cube," American Mathematical Monthly , 2 : 337.
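The arithmetic in these accounts can be reproduced directly; a sketch using the model circle with diameter 10 and circumference 32 implied by Goodwin's ratios:

```python
import math

# Goodwin's stated ratios
goodwin_pi = 4 / 1.25        # 3.2
goodwin_sqrt2 = 10 / 7       # ~1.4286, versus math.sqrt(2) ~1.4142

# Model circle from the bill: diameter 10, circumference 32 (5/4 : 4)
d, c = 10, 32
archimedean_area = d * c / 4     # 80 (accepting Goodwin's circumference)
goodwin_area = (c / 4) ** 2      # 64 (square on the quadrant)

# For a circle of true circumference C, Goodwin's rule gives (C/4)^2,
# while the true area is C^2 / (4*pi); the ratio is therefore pi/4.
ratio = (1 / 16) / (1 / (4 * math.pi))
print(archimedean_area, goodwin_area, ratio)  # 80.0 64.0 ~0.7854
```

The final ratio equals π/4 ≈ 0.785, matching the statement that Goodwin's rule yields π/4 times the true area.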
https://en.wikipedia.org/wiki/Indiana_pi_bill
Indic computing means computing in Indic, i.e., Indian scripts and languages. It involves developing software in Indic scripts and languages , input methods , localization of computer applications, web development , database management , spell checkers , speech-to-text and text-to-speech applications, and OCR in Indian languages . The Unicode standard version 15.0 specifies codes for 9 Indic scripts in Chapter 12, titled "South and Central Asia-I, Official Scripts of India". The 9 scripts are Bengali , Devanagari , Gujarati , Gurmukhi , Kannada , Malayalam , Oriya , Tamil and Telugu . Many Indic computing projects are under way, involving government sector companies, volunteer groups and individuals. The Indian Union Government made it mandatory for mobile phone handsets manufactured, stored, sold or distributed in India to support displaying and typing text using fonts for all 22 scheduled languages . [ 1 ] This move has seen a rise in the use of Indian languages by millions of users. [ 2 ] The Department of Electronics and Information Technology , India initiated TDIL [ 3 ] (Technology Development for Indian Languages) with the objective of developing information processing tools and techniques to facilitate human-machine interaction without language barriers, creating and accessing multilingual knowledge resources, and integrating them to develop innovative user products and services. In 2005, it started distributing language software tools developed by government, academic and private companies on CD for non-commercial use. Some of the outcomes of the TDIL programme are deployed on the Indian Language Technology Proliferation & Deployment Centre, which disseminates the linguistic resources, tools and applications developed under TDIL funding. The programme expanded rapidly under the leadership of Dr. Swaran Lata, who also created its international footprint; she has since retired.
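The nine script blocks mentioned above can be inspected programmatically; a minimal sketch using Python's standard unicodedata module (the block start points below are taken from the Unicode code charts):

```python
import unicodedata

# Each of the nine official Indic scripts occupies a 128-code-point block.
blocks = {
    "Devanagari": 0x0900, "Bengali": 0x0980, "Gurmukhi": 0x0A00,
    "Gujarati": 0x0A80, "Oriya": 0x0B00, "Tamil": 0x0B80,
    "Telugu": 0x0C00, "Kannada": 0x0C80, "Malayalam": 0x0D00,
}

# Character names encode the script they belong to:
print(unicodedata.name("क"))  # DEVANAGARI LETTER KA
print(hex(ord("த")))          # 0xba4, inside the Tamil block
```

Checking which block a code point falls in is then a simple range test against these start points.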
C-DAC is an Indian government software company involved in developing language-related software. It is best known for developing the InScript keyboard , the standard keyboard layout for Indian languages. It has also developed many Indic language solutions, including word processors, typing tools, text-to-speech software, and OCR in Indian languages. The work developed out of CDAC, Bangalore (earlier known as NCST, Bangalore) became BharateeyaOO. [ 4 ] OpenOffice 2.1 had support for over 10 Indian languages. BOSS Linux was developed by the Centre for Development of Advanced Computing ( CDAC ) to promote the use of open-source software in India. The IndLinux organisation helped organise the individual volunteers working on different Indic language versions of Linux and its applications. Sarovar.org is India 's first portal to host projects under free/open source licenses. It is located in Trivandrum , India and hosted at the Asianet data center. Sarovar.org is customised, installed and maintained by Linuxense as part of their community services and sponsored by River Valley Technologies. It is built on Debian Etch and GForge and runs off METTLE. Pinaak is a non-governmental charitable society devoted to Indic language computing. It works on software localization, developing language software, localizing open source software, and enriching online encyclopedias. In addition, Pinaak works to educate people about computing, the ethical use of the Internet, and the use of Indian languages on the Internet. The Ankur Group is working toward supporting the Bengali language on the Linux operating system, including a localized Bengali GUI, a Live CD , an English-to-Bengali translator, Bengali OCR and a Bengali dictionary. [ 5 ] SMC is a free software group working to bridge the language divide in Kerala on the technology front, and is today the biggest language computing community in India. [ 6 ] With the advent of Unicode , inputting Indic text on a computer has become much easier.
A number of methods exist for this purpose; the main ones are as follows. InScript is the standard keyboard layout for Indian languages, developed by C-DAC and standardized by the Government of India. Nowadays it comes built into all major operating systems, including Microsoft Windows (2000, XP, Vista, 7), Linux and Macintosh . Phonetic transliteration is a typing method in which the user types text in an Indian language using Roman characters, and it is phonetically converted to equivalent text in Indian script in real time. This type of conversion is done by phonetic text editors, word processors and software plugins. Building on this idea, one can use phonetic IME tools that allow Indic text to be input in any application. Some examples of phonetic transliterators are Xlit, Google Indic Transliteration , BarahaIME , Indic IME , Rupantar , SMC's Indic Keyboard and Microsoft Indic Language Input Tool. SMC 's Indic Keyboard has support for as many as 23 languages, whereas Google Indic Keyboard supports only 11 Indian languages. [ 6 ] Keyboard layouts can be broadly classified by origin. The Remington layout was developed when computers had not yet been invented or deployed for Indic languages, and typewriters were the only means to type text in Indic scripts. Since typewriters were mechanical and could not include a script-processor engine, each character had to be placed on the keyboard separately, which resulted in a very complex and difficult-to-learn keyboard layout. With the advent of Unicode , the Remington layout was added to various typing tools for the sake of backward compatibility, so that old typists did not have to learn a new keyboard layout. Nowadays this layout is used mainly by typists accustomed to it after years of use. One tool that includes the Remington layout is Indic IME . A font based on the Remington keyboard layout is Kruti Dev . Another online tool that closely supports the old Remington keyboard layout using Kruti Dev is the Remington Typing tool.
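The phonetic-conversion idea described above can be sketched as a greedy longest-match lookup. The mapping table here is a tiny hypothetical subset for illustration, not the table any real IME uses; real Devanagari input also needs vowel signs, viramas and conjunct handling:

```python
# Hypothetical roman-to-Devanagari mapping (illustrative subset only).
MAP = {
    "kha": "ख", "ka": "क", "ga": "ग", "na": "न", "ma": "म",
    "a": "अ", "i": "इ",
}

def transliterate(roman: str) -> str:
    """Greedy longest-match conversion, the core of phonetic typing."""
    out, i = [], 0
    while i < len(roman):
        for length in (3, 2, 1):          # try longest keys first
            chunk = roman[i:i + length]
            if chunk in MAP:
                out.append(MAP[chunk])
                i += length
                break
        else:
            out.append(roman[i])          # pass through unmapped characters
            i += 1
    return "".join(out)

print(transliterate("kaga"))  # कग
```

Longest-match ordering matters: "kha" must be tried before "ka" so that aspirated consonants are not split.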
IBus Sharada Braille, which supports seven Indian languages, was developed by SMC . [ 6 ] Basic mobile phone models have 12 keys, like the plain old telephone keypad. Each key is mapped to 3 or 4 English letters to facilitate data entry in English. There are two ways to input Indian languages with this kind of keypad: the multi-tap method, and methods that use visual help from the screen, like the Panini Keypad . The primary usage is SMS . The 140-octet SMS payload, which holds 160 characters in the default English/Roman encoding, can accommodate only about 70 characters when Unicode encoding is used. [ 7 ] Proprietary compression is sometimes used to increase the size of a single message for complex-script languages like Hindi. A research study [ 8 ] of the available methods, with recommendations for a proposed standard, was released by the Broadband Wireless Consortium of India (BWCI). In transliteration-based methods, English is used to type in Indian languages; examples include QuillPad [ 9 ] and IndiSMS. [ 10 ] In native methods, the letters of the language are displayed on the screen corresponding to the numeral keys based on the probabilities of those letters for that language. Additional letters can be accessed by using a special key. When a word is partially typed, options are presented from which the user can make a selection. [ 11 ] Most smartphones have about 35 keys, catering primarily to the English language. Numerals and some symbols are accessed with a special key called Alt. Indic input methods are yet to evolve for these types of phones, as support of Unicode for rendering is not widely available. InScript is being adopted for smartphone usage. For Android phones which can render Indic languages, the Swalekh Multilingual Keypad [ 12 ] and the Multiling Keyboard app [ 13 ] [ 14 ] are available. Gboard offers support for several Indian languages. [ 15 ] Localization means translating software, operating systems, websites and other applications into Indian languages. Various volunteer groups are working in this direction.
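The character-count arithmetic above follows from the fixed 140-octet SMS payload: the GSM 7-bit default alphabet packs 160 characters into it, while Unicode (UCS-2) needs 2 octets per character. A sketch:

```python
# Why an SMS drops from 160 to 70 characters for Indic text.
PAYLOAD_OCTETS = 140

gsm7_capacity = PAYLOAD_OCTETS * 8 // 7   # 160 chars, 7 bits each
ucs2_capacity = PAYLOAD_OCTETS // 2       # 70 chars, 16 bits each
print(gsm7_capacity, ucs2_capacity)       # 160 70

# A Devanagari string encoded as UCS-2/UTF-16 takes 2 octets per code point:
msg = "नमस्ते"                             # 6 code points
print(len(msg.encode("utf-16-be")))       # 12 octets
```

This is also why messages in complex scripts split into multiple SMS parts much sooner than English ones.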
A notable example is the Tamil version of Mandrake Linux (defunct since 2011), released by Tamil speakers in Toronto, Canada. [ 16 ] All of its features could be accessed in Tamil, eliminating the prerequisite of English knowledge for using computers, for those who know Tamil. IndLinux is a volunteer group aiming to translate the Linux operating system into Indian languages. Through the efforts of this group, Linux has been localized almost completely into Hindi and other Indian languages. Nipun is an online translation system aimed at translating various applications into Hindi . It is part of the Akshargram Network . GoDaddy has localised its website into Hindi , Marathi and Tamil , and has noted that 40% of its IVR call volume is in Indian languages. [ 17 ] Indic blogging refers to blogging in Indic languages. Various efforts have been made to promote blogging in Indian languages, and some social networks have been started in Indian languages. [ 18 ] Gherkin , a popular domain-specific language , has support for Gujarati, Hindi, Kannada, Punjabi, Tamil, Telugu and Urdu. [ 19 ] Natural language processing in Indian languages is on the rise; several libraries, such as iNLTK and StanfordNLP, are available. [ 20 ] Google offers an improved translation feature for Hindi, Bengali, Marathi, Tamil, Telugu, Gujarati, Punjabi, Malayalam and Kannada, [ 15 ] with offline support as well. [ 21 ] Microsoft also offers translation for some of these languages. In a symposium jointly organized by FICCI and TDIL , Mr. Ajay Prakash Sawhney, Secretary, Ministry of Electronics and IT, Government of India, said that an India Language Stack can help overcome the barriers of communication. [ 22 ] Transliteration tools allow users to read a text in a different script. At present, Aksharamukha is the tool that supports the most Indian scripts. Google also offers Indic Transliteration .
Text in any of these scripts can be converted to any other script and vice versa, whereas Google and Microsoft allow transliteration from Latin letters to Indic scripts. Apple Inc. added support for major Indian languages in Siri . [ 23 ] Amazon's Alexa has support for Hindi and partially recognises other major Indian languages. [ 24 ] Google Assistant also supports major Indian languages. [ 25 ] According to GoDaddy , the Hindi , Marathi and Tamil languages accounted for 61% of India's internet traffic, [ 17 ] yet less than 1% of online content is in Indian languages. Newly created top apps support multiple Indian languages and/or promote Indian-language content. 61% of Indian users of WhatsApp primarily use their native languages to communicate on it. [ 26 ] A recent study revealed that Internet adoption is highest among speakers of local languages such as Tamil, Hindi, Kannada, Bengali, Marathi, Telugu, Gujarati and Malayalam. It estimates that Marathi, Bengali, Tamil and Telugu speakers will form 30% of the total local-language user base in the country. Currently, Tamil, at 42%, has the highest Internet adoption level, followed by Hindi at 39% and Kannada at 37%. [ 27 ] Intex also reported that 87% of its regional-language usage came from Hindi, Bengali, Tamil, Gujarati and Marathi speakers. [ 2 ] Lava mobiles reported that Tamil and Malayalam are the most popular languages on their phones, more than even Hindi. [ 2 ]
https://en.wikipedia.org/wiki/Indic_computing
In the law of the European Union , indicative limit values , more exactly indicative occupational exposure limit values (IOELVs), are human exposure limits for hazardous substances specified by the Council of the European Union based on expert research and advice. They are not binding on member states but must be taken into consideration in setting national occupational exposure limits . Some member states have pre-existing national limits lower than the IOELV and are not required to revise these upwards. In practice, most member states adopt the IOELV, but there are some variances upwards and downwards. [ 1 ] [ 2 ] A system of IOELVs was first introduced in 1980 by directive 80/1107/EEC, but the first list of 27 substances was not created until directive 91/322/EC in 1991. Member states had until 31 December 1993 to implement national limits. This first list was amended by directive 2006/15/EC in 2006, which transferred 10 of the 27 substances to a different regulatory regime. A second list was defined in directive 96/94/EEC, but this was repealed by directive 2000/39/EC. [ 3 ] In 1998, directive 80/1107/EEC was repealed and replaced by a new regime under the chemical agents directive (directive 98/24/EC). The directive defines an occupational exposure limit value as "the limit of the time-weighted average of the concentration of a chemical agent in the air within the breathing zone of a worker over a specified reference period." [ 4 ] Article 3 of the directive led to the creation of the Scientific Committee on Occupational Exposure Limit Values to advise the European Commission . [ 5 ] [ 6 ] There have subsequently been two directives establishing further lists of IOELVs: 2000/39/EC and 2006/15/EC. As of 2008, the IOELVs under directive 91/322/EC remain in force but under review. [ 7 ] A third list of IOELVs, now under directive 98/24/EC, is expected in the first half of 2008. [ 8 ]
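The time-weighted average in this definition can be sketched as a duration-weighted sum of concentrations over the reference period. The concentrations and durations below are made-up illustrative values, with an 8-hour reference period:

```python
# Sketch of a time-weighted average (TWA) exposure calculation.
def twa(samples, reference_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours).
    Time not covered by a sample counts as zero exposure."""
    return sum(conc * hours for conc, hours in samples) / reference_hours

# Hypothetical shift: 2 h at 1.2 mg/m3, 4 h at 0.4 mg/m3, 2 h unexposed.
exposure = [(1.2, 2.0), (0.4, 4.0), (0.0, 2.0)]
print(twa(exposure))  # (2.4 + 1.6 + 0.0) / 8 = 0.5 mg/m3
```

The resulting 0.5 mg/m3 would then be compared against the applicable limit value for the substance.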
https://en.wikipedia.org/wiki/Indicative_limit_value
Indicator bacteria are types of bacteria used to detect and estimate the level of fecal contamination of water. They are not themselves dangerous to human health but are used to indicate the presence of a health risk. Each gram of human feces contains approximately 100 billion (1 × 10¹¹) bacteria. [ 1 ] These bacteria may include species of pathogenic bacteria, such as Salmonella or Campylobacter , associated with gastroenteritis . In addition, feces may contain pathogenic viruses , protozoa and parasites . Fecal material can enter the environment from many sources, including waste water treatment plants , livestock or poultry manure, sanitary landfills, septic systems , sewage sludge , pets and wildlife. If sufficient quantities are ingested, fecal pathogens can cause disease. The variety and often low concentrations of pathogens in environmental waters make them difficult to test for individually. Public agencies therefore use the presence of other more abundant and more easily detected fecal bacteria as indicators of the presence of fecal contamination. Aside from fecal matter, indicator bacteria can also be found in oral and gut contents. [ 2 ] The US Environmental Protection Agency (EPA) lists several criteria for an organism to be an ideal indicator of fecal contamination. [ citation needed ] None of the types of indicator organisms currently in use fits all of these criteria perfectly; however, when cost is considered, the use of indicators becomes necessary. Commonly used indicator bacteria include total coliforms, or a subset of this group, fecal coliforms , which are found in the intestinal tracts of warm-blooded animals. Total coliforms were used as fecal indicators by public agencies in the US as early as the 1920s. These organisms can be identified based on the fact that they all metabolize the sugar lactose, producing both acid and gas as byproducts.
Fecal coliforms are more useful as indicators in recreational waters than total coliforms, which include species naturally found in plants and soil; however, even some species of fecal coliforms do not have a fecal origin, such as Klebsiella pneumoniae . Perhaps the biggest drawback to using coliforms as indicators is that they can grow in water under certain conditions. Escherichia coli ( E. coli ) and enterococci are also used as indicators. Indicator bacteria can be cultured on media which are specifically formulated to allow the growth of the species of interest and inhibit the growth of other organisms. Typically, environmental water samples are filtered through membranes with small pore sizes, and the membrane is then placed onto a selective agar. It is often necessary to vary the volume of the water sample filtered in order to prevent too few or too many colonies from forming on a plate. Bacterial colonies can be counted after 24 to 48 hours, depending on the type of bacteria. Counts are reported as colony-forming units per 100 mL (cfu/100 mL). One technique for detecting indicator organisms is the use of chromogenic compounds, which are added to conventional or newly devised media used for isolation of the indicator bacteria. These chromogenic compounds are modified to change color or fluorescence by either enzymes or specific bacterial metabolites. This enables easy detection and avoids the need for isolation of pure cultures and confirmatory tests. [ 3 ] Immunological methods using monoclonal antibodies can be used to detect indicator bacteria in water samples. Precultivation in a selective medium must precede detection to avoid detecting dead cells. ELISA antibody technology has been developed to allow detection readable by the naked eye for rapid identification of coliform microcolonies .
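The cfu/100 mL figure is a simple scaling of the plate count by the volume filtered; a sketch with illustrative numbers:

```python
# Converting a membrane-filtration plate count to cfu/100 mL.
def cfu_per_100ml(colonies: int, volume_filtered_ml: float) -> float:
    """Scale the colony count to the standard 100 mL reporting volume."""
    return colonies / volume_filtered_ml * 100

# Hypothetical plate: 35 colonies grown from a 25 mL filtered sample.
print(cfu_per_100ml(35, 25))  # 140.0 cfu/100 mL
```

This scaling is why the filtered volume is varied: the raw colony count must land in a countable range (roughly tens to a few hundred per plate) for the scaled result to be reliable.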
Other uses of antibodies in detection include magnetic beads coated with antibodies for the concentration and separation of oocysts and cysts, as described below for immunomagnetic separation (IMS) methods. [ 3 ] Immunomagnetic separation involves purified antibodies, biotinylated and bound to streptavidin-coated paramagnetic particles. The raw sample is mixed with the beads, then a magnet is used to hold the target organisms against the vial wall while the non-bound material is poured off. This method can be used to recover specific indicator bacteria. [ 3 ] Gene sequence-based methods depend on the recognition of gene sequences exclusive to specific strains of organisms. Polymerase chain reaction (PCR) and fluorescence in situ hybridization (FISH) are gene sequence-based methods currently being used to detect specific strains of indicator bacteria. [ 3 ] The World Health Organization Guidelines for Drinking Water Quality state that, as an indicator organism, Escherichia coli provides conclusive evidence of recent fecal pollution and should not be present in water meant for human consumption. [ 4 ] In the U.S., the EPA Total Coliform Rule states that a public water system is out of compliance if more than 5 percent of its monthly water samples contain coliforms. [ 5 ] Early studies showed that individuals who swam for three days in waters with geometric mean coliform densities above 2300/100 mL had higher illness rates. [ 6 ] In the 1960s, these numbers were converted to fecal coliform concentrations assuming that 18 percent of total coliforms were fecal. Consequently, the National Technical Advisory Committee in the US recommended the following standard for recreational waters in 1968: no more than 10 percent of total samples during any 30-day period should exceed 400 fecal coliforms/100 mL, and the log mean should not exceed 200/100 mL (based on a minimum of 5 samples taken over not more than a 30-day period).
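The "log mean" in these criteria is a geometric mean: the antilog of the mean of the log-transformed counts. A sketch with hypothetical sample values:

```python
import math

def geometric_mean(counts):
    """Geometric (log) mean of bacterial counts: exp(mean of ln(count))."""
    return math.exp(sum(math.log(c) for c in counts) / len(counts))

# Hypothetical cfu/100 mL results over a 30-day period (minimum 5 samples).
samples = [100, 150, 200, 300, 500]
print(round(geometric_mean(samples)))  # 214, vs arithmetic mean 250
```

The geometric mean is used because bacterial counts span orders of magnitude and are roughly log-normally distributed, so a single very high sample does not dominate the statistic the way it would an arithmetic mean.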
[ 7 ] Despite criticism, EPA recommended this criterion again in 1976; however, the Agency initiated numerous studies in the 1970s and 1980s to overcome the weaknesses of the earlier studies. In 1986, EPA revised its bacteriological ambient water quality criteria recommendations to include E. coli and enterococci. [ 7 ] The approach of Canada's National Agri-Environmental Standards Initiative to characterizing risks associated with fecal pollution of water at agricultural sites is to compare bacterial water quality at these sites with that at reference sites away from human or livestock sources. This approach generally results in lower levels of E. coli being used as a standard or “benchmark”, based on a study that detected pathogens in 80% of water samples with less than 100 cfu E. coli per 100 mL. [ 8 ] Most cases of bacterial gastroenteritis are caused by food-borne enteric microorganisms, such as Salmonella and Campylobacter ; however, it is also important to understand the risk of exposure to pathogens via recreational waters. This is especially the case in watersheds where human or animal wastes are discharged to streams and downstream waters are used for swimming or other recreational activities. Important pathogens other than bacteria include viruses such as rotavirus , hepatitis A and hepatitis E , and protozoa such as giardia , cryptosporidium and Naegleria fowleri . [ 9 ] Because of the difficulties associated with monitoring pathogens in the environment, risk assessments often rely on indicator bacteria. In the 1950s, a series of epidemiological studies were conducted in the US to determine the relationship between the water quality of natural waters and the health of bathers.
The results indicated that swimmers were more likely than non-swimmers to have gastrointestinal symptoms, eye infections, skin complaints, ear, nose, and throat infections, and respiratory illness, and in some cases higher coliform levels correlated with higher incidence of gastrointestinal illness, although the sample sizes in these studies were small. Since then, studies have been done to confirm causal relationships between swimming and certain health outcomes. A 1998 review of 22 studies [ 10 ] confirmed that the health risks for swimmers increased as the number of indicator bacteria in recreational waters increased, and that E. coli and enterococci concentrations correlated best with health outcomes among all the indicators studied. The relative risk (RR) of illness for swimmers in polluted freshwater versus swimmers in unpolluted water was between 1 and 2 for the majority of the data sets reviewed. The same study concluded that bacterial indicators were not well correlated with virus concentrations. [ 10 ] Survival of pathogens in waste materials, soil, or water depends on many environmental factors, including temperature, pH, organic matter content, moisture, exposure to light, and the presence of other organisms. [ 11 ] Fecal material can be directly deposited, washed into waters by overland runoff, transported through the ground, or discharged to surface waters via sewer lines, pipes, or drainage tiles. Risk of exposure to humans depends on several conditions being met. Die-off rates of bacteria in the environment are often exponential; therefore, direct deposition of fecal material into waters generally contributes higher concentrations of pathogens than material that must be transported overland or through the subsurface. In general, children, the elderly, and immunocompromised individuals require a lower dose of a pathogenic organism to contract an infection.
Presently there are very few studies that quantify the amount of time people are likely to spend in recreational waters and how much water they are likely to ingest. In general, children swim more often, stay in the water longer, submerge their heads more often, and swallow more water. Quantitative microbiological risk assessments (QMRAs) combine pathogen concentrations in water with dose-response relationships and data reflecting potential exposure to estimate the risk of infection. Data on water exposure are generally collected using questionnaires, but may also be determined from actual measurements of water ingested, or estimated from previously published data. Respondents are asked to report the frequency, timing, and location of exposures, detailed information about the amount of water swallowed and head submersion, and basic demographic characteristics such as age, gender, socioeconomic status and family composition. Once sufficient data are collected and determined to be representative of the general population, they are usually fit with distributions, and these distribution parameters are then used in the risk assessment equations. Monitoring data representing the occurrence of pathogens, direct measurements of pathogen concentrations, or estimates deriving pathogen concentrations from indicator bacteria concentrations are also fit with distributions. Dose is calculated by multiplying the pathogen concentration per unit volume by the volume of water ingested. Dose-response relationships can also be fit with a distribution. [ 12 ] The more assumptions that are made, the more uncertain the resulting estimates of pathogen-related risk will be. However, even with considerable uncertainty, QMRAs are a good way to compare different risk scenarios.
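The QMRA procedure described above can be sketched as a small Monte Carlo simulation. All distributions and parameter values below are illustrative assumptions, not values from any published assessment; an exponential dose-response model, P(infection) = 1 − exp(−r·dose), is one common choice:

```python
import math
import random

def qmra_infection_risk(mean_conc, vol_low, vol_high, r, n=20_000, seed=1):
    """Monte Carlo sketch of a QMRA.

    Each iteration samples a pathogen concentration (organisms/L,
    exponentially distributed here for simplicity) and an ingested
    volume (L, uniform), computes dose = concentration * volume, and
    applies the exponential dose-response P = 1 - exp(-r * dose).
    Returns the mean infection probability over all iterations.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        conc = rng.expovariate(1.0 / mean_conc)   # organisms per litre
        vol = rng.uniform(vol_low, vol_high)      # litres swallowed
        total += 1.0 - math.exp(-r * conc * vol)
    return total / n

# Hypothetical scenario: mean 1 organism/L, 10-50 mL swallowed,
# dose-response parameter r = 0.5 (all values for illustration only).
risk = qmra_infection_risk(1.0, 0.01, 0.05, 0.5)
print(f"estimated infection risk per exposure: {risk:.4f}")
```

Even with crude input distributions, this structure makes it easy to compare scenarios, e.g. rerunning with a larger ingested volume for children versus adults.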
In a study comparing estimated health risks from exposure to recreational waters impacted by human and non-human sources of fecal contamination, QMRA determined that the risk of gastrointestinal illness from exposure to waters impacted by cattle was similar to that for waters impacted by human waste, and both were higher than for waters impacted by gull, chicken, or pig faeces. [ 13 ] Such studies could help risk managers determine how best to focus their limited resources; however, risk managers must be aware of the limitations of the data used in these calculations. For example, this study used data describing concentrations of Salmonella in chicken feces published in 1969. [ 14 ] Changes in methods for quantifying bacteria, in animal housing practices and sanitation, and in many other factors may have changed the prevalence of Salmonella since that time. Also, such an approach often ignores the complicated fate and transport processes that determine bacteria concentrations from the source to the point of exposure. In the US, individual states are allowed to develop their own water quality standards based on EPA's recommendations under the Clean Water Act of 1977. Once water quality standards are approved, states are tasked with monitoring their surface waters to determine where impairments occur, and watershed plans called Total Maximum Daily Loads (TMDLs) are developed to direct water quality improvement efforts, including changes to allowable bacteria loading by point sources and recommendations for changes to practices that reduce nonpoint-source contributions to bacteria loads. Also, many states have beach monitoring programs to warn swimmers when high levels of indicator bacteria are detected. [ 15 ]
https://en.wikipedia.org/wiki/Indicator_bacteria
An indicator diagram is a chart used to measure the thermal, or cylinder, performance of reciprocating steam and internal combustion engines and compressors. [ 1 ] An indicator chart records the pressure in the cylinder versus the volume swept by the piston, throughout the two or four strokes of the piston which constitute the engine, or compressor, cycle. The indicator diagram is used to calculate the work done and the power produced in an engine cylinder [ 2 ] or used in a compressor cylinder. The indicator diagram was developed by James Watt and his employee John Southern to help understand how to improve the efficiency of steam engines . [ 3 ] In 1796, Southern developed the simple, but critical, technique to generate the diagram by fixing a board so as to move with the piston, thereby tracing the "volume" axis, while a pencil , attached to a pressure gauge , moved at right angles to the piston, tracing "pressure". [ 4 ] The indicator diagram constitutes one of the earliest examples of statistical graphics . It may be significant that Watt and Southern developed the indicator diagram at roughly the same time that William Playfair (a former Boulton & Watt employee who continued an amicable correspondence with Watt) published The Commercial and Political Atlas, a book often cited as the first to employ statistical graphics. [ 5 ] The gauge enabled Watt to calculate the work done by the steam while ensuring that its pressure had dropped to zero by the end of the stroke, thereby ensuring that all useful energy had been extracted. The total work could be calculated from the area between the "volume" axis and the traced line. The latter fact had been realised by Davies Gilbert as early as 1792 and used by Jonathan Hornblower in litigation against Watt over patents on various designs. Daniel Bernoulli had also had the insight about how to calculate work. 
[ 6 ] Watt used the diagram to make radical improvements to steam engine performance and long kept it a trade secret. Though it was made public in a letter to the Quarterly Journal of Science in 1822, [ 7 ] it remained somewhat obscure; John Farey, Jr. learned of it only on seeing it used, probably by Watt's men, when he visited Russia in 1826. In 1834, Émile Clapeyron used a diagram of pressure against volume to illustrate and elucidate the Carnot cycle , elevating it to a central position in the study of thermodynamics . [ 8 ] Later instruments for steam engines ( illus. ) used paper wrapped around a cylindrical barrel with a pressure piston inside it, the rotation of the barrel coupled to the piston crosshead by a weight- or spring-tensioned wire. [ 9 ] In 1869 the British marine engineer Nicholas Procter Burgh wrote a full book on the indicator diagram, explaining the device step by step. He had noticed that "a very large proportion of the young members of the engineering profession look at an indicator diagram as a mysterious production." [ 10 ] Indicators developed for steam engines were improved for internal combustion engines, with their rapid pressure changes resulting from combustion and their higher speeds. In addition to their use in calculating power, indicator diagrams are used to understand the ignition, injection timing, and combustion events which occur near dead-center, when the engine piston and indicator drum are hardly moving. Much better information during this part of the cycle is obtained by offsetting the indicator motion by 90 degrees to the engine crank, giving an offset indicator diagram. The events are then recorded when the velocity of the drum is near its maximum and are shown against crank angle instead of stroke. [ 11 ]
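The work represented by the area enclosed in an indicator diagram, W = ∮ P dV, can be computed numerically from sampled pressure–volume points. The sketch below uses a synthetic elliptical trace rather than real indicator data:

```python
import math

def diagram_work(pressures, volumes):
    """Net work (J) enclosed by an indicator diagram, from pressure (Pa)
    and volume (m^3) samples taken around one complete cycle, using the
    trapezoid rule on the closed integral of P dV.
    """
    n = len(pressures)
    work = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the cycle
        work += 0.5 * (pressures[i] + pressures[j]) * (volumes[j] - volumes[i])
    return work

# Synthetic elliptical P-V cycle; the enclosed area is pi * a * b.
a, b = 1.0e5, 1.0e-3  # pressure semi-axis (Pa), volume semi-axis (m^3)
ps = [2.0e5 + a * math.sin(2 * math.pi * t / 200) for t in range(200)]
vs = [5.0e-3 + b * math.cos(2 * math.pi * t / 200) for t in range(200)]
print(abs(diagram_work(ps, vs)))  # close to pi * a * b, about 314 J
```

The sign of the result indicates the direction of traversal (work done by or on the gas), which is why the magnitude is taken in the example.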
https://en.wikipedia.org/wiki/Indicator_diagram
Indicator organisms are used as a proxy to monitor conditions in a particular environment, ecosystem, area, habitat, or consumer product. Certain bacteria , fungi and helminth eggs are used for this purpose. Certain bacteria can be used as indicator organisms in particular situations, such as when present in bodies of water. Indicator bacteria may not themselves be pathogenic, but their presence in waste may indicate the presence of other pathogens. [ 1 ] Just as there are various types of indicator organisms, there are also various types of indicator bacteria. The most common indicators are total coliforms, fecal coliforms, E. coli , and enterococci. [ 2 ] The presence of bacteria commonly found in human feces, termed coliform bacteria (e.g. E. coli ), in surface water is a common indicator of faecal contamination . The means by which pathogens found in fecal matter can enter recreational bodies of water include, but are not limited to, sewage, septic systems, urban runoff , coastal recreational waste, and livestock waste. [ 2 ] For this reason, sanitation programs often test water for the presence of these organisms to ensure that drinking water systems are not contaminated with feces. This testing can be done using several methods, which generally involve taking samples of water, or passing large amounts of water through a filter to collect bacteria, then testing to see whether bacteria from that water grow on selective media such as MacConkey agar . MacConkey agar allows only the growth of gram-negative bacteria, and the bacteria grow differently according to whether and how they metabolize lactose. [ 3 ] Alternatively, the sample can be tested to see if it utilizes various nutrients in ways characteristic of coliform bacteria.
[ 4 ] Coliform bacteria selected as indicators of faecal contamination must not persist in the environment for long periods of time following efflux from the intestine, and their presence must be closely correlated with contamination by other faecal organisms. Indicator organisms need not be pathogenic. [ 5 ] Non-coliform bacteria, such as Streptococcus bovis and certain clostridia , may also be used as an index of faecal contamination. [ 6 ] The presence of indicator bacteria is measured in a variety of ecosystems, sometimes alongside other measurements. In the Great Lakes, a study was conducted testing for both fecal indicator bacteria (FIB) concentrations and pathogen gene markers. [ 7 ] The FIB measured in this study included fecal coliform bacteria, E. coli , and enterococci. [ 7 ] FIB were collected via membrane filtration and serial dilution methods, producing samples which could be cultured and used to run PCR and amplify the pathogen genes in question. [ 7 ] Across the 22 sampling locations, 165 samples were analyzed: E. coli concentrations were found to range from less than 2 to 26,000 CFU/100 mL, enterococci from less than 2 to 31,000 CFU/100 mL, and fecal coliform bacteria from less than 2 to 950 CFU/100 mL. [ 7 ] Another example of indicator bacteria being measured for safety purposes is in Malibu, CA. The state of California requires that beaches with more than 50,000 visitors a year be monitored for FIB. [ 8 ] High FIB concentrations, exceeding what is considered acceptable by the EPA, were observed in Malibu Lagoon and at other Malibu beaches. [ 8 ] Measurement of high levels of FIB prompts a search for the source or sources. Potential sources of FIB in the Malibu area include waste from sewage treatment systems, runoff from local developments, and wildlife waste.
[ 8 ] Common FIB were measured, including enterococci, which were present at levels as high as 242,000 MPN/100 mL within onsite wastewater treatment systems. [ 8 ] The measurement of FIB is widespread and is used to help ensure safe waters. In Texas, the occurrence and distribution of FIB, in particular fecal coliforms and E. coli , were measured in streams that receive discharge from the Dallas Fort Worth International Airport and the surrounding area. [ 9 ] These streams are home to aquatic life and are used for recreation and fishing. [ 9 ] Various standards exist to ensure the safety of all organisms present in the ecosystem, including humans. E. coli is used as an indicator of unsafe or below-standard water quality for recreational use in Texas. [ 10 ] The standards declare contact recreation unsafe when the geometric mean of E. coli exceeds 126 cfu/100 mL or when more than a quarter of samples exceed 394 cfu/100 mL. [ 10 ] Various sites were tested; some were found to exceed acceptable levels of E. coli and therefore did not support recreational use. [ 9 ] This is yet another example of how testing for indicator bacteria is used to determine whether bodies of water are safe for various uses, particularly recreation. Penicillium species, Aspergillus niger and Candida albicans are used in the pharmaceutical industry for microbial limit testing, bioburden assessment, method validation, antimicrobial challenge tests, and quality control testing. [ 11 ] When used in this capacity, Penicillium and A. niger are compendial mold indicator organisms. [ 11 ] Molds such as Trichoderma , Exophiala , Stachybotrys , Aspergillus fumigatus , Aspergillus versicolor , Phialophora , Fusarium , Ulocladium and certain yeasts are used as indicators of indoor air quality . [ 12 ] [ 13 ] [ 14 ] Metagenomic techniques allow for the sequencing of whole populations of microorganisms in a single operation.
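The Texas E. coli criteria combine a geometric-mean limit with a fraction-of-samples limit; a minimal sketch of the check (the function name is illustrative):

```python
import math

def supports_contact_recreation(ecoli_counts):
    """Check E. coli sample counts (cfu/100 mL) against the Texas
    recreational criteria described above: a geometric mean of no more
    than 126 cfu/100 mL, and no more than a quarter of samples above
    394 cfu/100 mL.
    """
    geo_mean = math.exp(sum(math.log(c) for c in ecoli_counts)
                        / len(ecoli_counts))
    frac_high = sum(c > 394 for c in ecoli_counts) / len(ecoli_counts)
    return geo_mean <= 126 and frac_high <= 0.25

print(supports_contact_recreation([80, 120, 60, 150]))    # True
print(supports_contact_recreation([400, 500, 450, 600]))  # False
```

A site fails if either condition is violated, so a single unusually high sample can render a water body non-supporting even when the geometric mean is acceptable.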
With metagenomic sequencing, it is possible to use the entire community of fungal organisms, or mycobiome, in the soil or water of a given area as a biological indicator [ 15 ] of anthropogenic activity, such as sewage overflow from an urban area or fertilizer and pesticide runoff from an agricultural one. The composition of fungal communities has been found to be a good indicator of environmental properties such as pH, altitude and water temperature. Chauvet [ 16 ] used this approach to take ecosystem-wide measurements of these variables using a network of monitoring stations at 27 streams in southwestern France. Cudowski et al . [ 17 ] sampled fungi in the water of the Augustow Canal in eastern Poland. They took many standard measures of water quality: temperature, oxygen saturation, pH, and dissolved nitrogen, organic carbon and sulfur levels. They identified species with microscopic methods and RFLP analysis. They found 38 fungal species, including 12 hyphomycetes and 13 potential pathogens belonging either to the dermatophytes or to relatives of C. albicans . Cudowski et al. found that they could determine whether a sample of water had been taken from the natural (lake-like) or artificial part of the canal. They also found that the three major groups of fungi they identified, hyphomycetes, dermatophytes and Candida relatives, could predict many of their water quality measurements, which formed two clusters in a redundancy analysis. Bouffand et al . [ 18 ] used arbuscular mycorrhizal fungi (AMF), an asexual clade of fungi that form symbiotic relationships with plant root systems, as indicators to assess soil function and biodiversity at many sites across Europe. They took soil samples in various climatic zones (Atlantic, continental, Mediterranean, alpine) and three land-use regimes (arable, grassland, forestry), and sequenced the DNA of the fungi the soil contained.
They found eight indicator species for soil pH: four that were only present when pH was less than 5, three for pH > 5 and one for pH > 7. They found eight indicators of land use: two for forests, five for farm- and grassland, and one for both. They also found one indicator fungus that was present when soil organic carbon was high, and another present when it was low. The eggs of helminths (parasitic worms) are a commonly used indicator organism for assessing the safety of sanitation and wastewater reuse systems (such schemes are also called reuse of human excreta ). [ 19 ] : 55 This is because they are the most resistant of all types of pathogens (pathogens can be viruses , bacteria , protozoa and helminths). [ 20 ] This means they are relatively hard to destroy through conventional treatment methods. They can survive for 10–12 months in tropical climates. [ 20 ] These eggs are also called ova in the literature. [ 21 ] Helminth eggs found in wastewater and sludge stem from soil-transmitted helminths (STHs), which include Ascaris lumbricoides (Ascaris), Ancylostoma duodenale , Necator americanus (hookworm), and Trichuris trichiura (whipworm). [ 22 ] Ascaris and whipworm identified in reusable wastewater systems can cause certain diseases and complications if ingested by humans and pigs. [ 23 ] Hookworm eggs hatch in the soil, where the larvae grow until maturity. Once fully developed, the larvae infect organisms by penetrating the skin. [ 24 ] The presence or absence of viable helminth eggs ("viable" meaning that a larva would be able to hatch from the egg) in a sample of dried fecal matter, compost or fecal sludge is often used to assess the efficiency of diverse wastewater and sludge treatment processes in terms of pathogen removal.
[ 19 ] : 55 In particular, the number of viable Ascaris eggs is often taken as an indicator for all helminth eggs in treatment processes, as they are very common in many parts of the world and relatively easy to identify under the microscope. However, the exact inactivation characteristics may vary for different types of helminth eggs. [ 25 ] The technique used for testing depends on the type of sample. [ 21 ] When helminth ova are present in sludge in large amounts, processes such as alkaline post-stabilization, acid treatment, and anaerobic digestion are used to reduce their numbers. These methods make it possible to bring helminth ova within the healthy requirement of ≤1 helminth ovum per liter. Dehydration is used to inactivate helminth ova in fecal sludge. This type of inactivation occurs when feces is stored for 1–2 years, when a high total solids content (>50–60%) is present, when materials such as leaves, lime, or earth are added, and when the temperature is 30°C or higher. [ 24 ]
https://en.wikipedia.org/wiki/Indicator_organism
The indie design movement is made up of independent designers, artists, and craftspeople who design and make a wide array of products without being part of large, industrialised businesses. The indie design movement can be seen as an aspect of the general indie movement and DIY culture . The designs created are generally works and art pieces that reflect the individual maker. Such products may include jewellery and other fashion accessories , ceramics , clothing, glass art , metalwork , furniture, cosmetics, handicrafts , and diverse artworks . Self-employed indie designers are supported by shoppers who are seeking niche and often handmade products as opposed to those mass-produced by manufacturing and retail corporations. [ citation needed ] Indie designers often sell their items directly to buyers by way of their own online shops, craft fairs , street markets and a variety of online marketplaces, such as Etsy . [ 1 ] However, they may also engage in consignment and/or wholesale relationships with retail outlets, both online and offline. [ citation needed ] In recent years some large manufacturing and/or retail fashion and other lifestyle corporations have sold products which appear to closely resemble or directly copy innovative original works of indie designers and artists. [ 2 ] This has caused some controversy.
https://en.wikipedia.org/wiki/Indie_design
In general usage the word indigen is treated as a variant of the word indigene, meaning a native. However, it was used in a strictly botanical sense for the first time in 1918 by Liberty Hyde Bailey (1858–1954), an American horticulturist, botanist and cofounder of the American Society for Horticultural Science, who described it as a plant " of known habitat ". [ 1 ] Later, in 1923, Bailey formally defined the indigen as: " ... a species of which we know the nativity, - one that is somewhere recorded as indigenous . " The term was coined to contrast with cultigen , which he defined in the 1923 paper as: " ... the species, or its equivalent, that has appeared under domestication, – the plant is cultigenous. " [ 2 ]
https://en.wikipedia.org/wiki/Indigen
The expression Indigenous Caucus (or Caucus of Indigenous Peoples ) refers to informal groups of Indigenous peoples' representatives with ad hoc rules of engagement in certain United Nations and other intergovernmental fora. An Indigenous Caucus participates as an observer in the World Intellectual Property Organization (WIPO), particularly its Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC). [ 1 ] Indigenous Peoples participate in the Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore as observers for their respective organizations, but also collectively through the informal Indigenous Caucus, "averaging around 25 to 30 people per session." [ 2 ] According to WIPO IGC's Practical Guide for Participants: Upon invitation of the Chairperson of the WIPO Indigenous Caucus, accredited observers who represent indigenous and local communities meet at the Indigenous Consultative Forum in order to prepare for the IGC on the day preceding its session. A meeting room is made available by WIPO for that purpose. Any accredited observer who represents indigenous and local communities and who wishes to attend the Consultative Forum is requested to advise the WIPO Secretariat in advance in order to be granted a badge at his or her arrival at the WIPO premises. [ 3 ] The Indigenous Caucus also participated [ 4 ] in the Diplomatic Conference on Intellectual Property, Genetic Resources and Associated Traditional Knowledge, which concluded the GRATK Treaty in May 2024 in Geneva, Switzerland. [ 5 ] During the Conference, the Caucus served as a coordinating forum, giving its participants "pivotal roles in liaising with their respective country delegations and others they could connect with."
[ 6 ] In addition, in the middle of the Diplomatic Conference, "a rotating member of the Indigenous Caucus" was allowed to participate in the reduced negotiating group, whose other participants were representatives of WIPO's regional country groups. [ 7 ] As a participant in the Caucus related during the Diplomatic Conference: The process gets truncated from now on with a much smaller group of representatives, including a rotating member of the Indigenous Caucus, participating in informal negotiations before wording is brought back to the two committees and then to a legal drafting group and into the final plenary session by Friday [24 May]. [ 8 ] The United Nations Forum on Business and Human Rights (UNFBHR) also has a Caucus of Indigenous Peoples. [ 9 ]
https://en.wikipedia.org/wiki/Indigenous_Caucus
Indigenous architecture refers to the study and practice of architecture of, for, and by Indigenous peoples . This field of study and practice exists in Australia , Canada , the circumpolar regions, New Zealand , the United States , and many other regions where Indigenous people have a built tradition or aspire to translate, or to have translated, their cultures into the built environment. It has been extended to landscape architecture , planning , placemaking , public art , urban design , and other ways of contributing to the design of built environments. The term usually designates culture-specific architecture: it covers both vernacular architecture and contemporary architecture inspired by that culture, even when the latter includes features brought from outside. The traditional or vernacular architecture of Indigenous Australians , including Aboriginal Australians and Torres Strait Islanders , varied to meet the lifestyle , social organisation, family size, cultural and climatic needs and resources available to each community. [ 1 ] The forms varied from dome frameworks made of cane, through spinifex -clad arc-shaped structures, tripod and triangular shelters, and elongated, egg-shaped, stone-based structures with a timber frame, to pole and platform constructions. Annual base camp structures, whether dome houses in the rainforests of Queensland and Tasmania or stone-based houses in south-eastern Australia, were often designed for use over many years by the same family groups. Different language groups had differing names for structures, including humpy , gunyah (or gunya), goondie, wiltja and wurley (or wurlie). Until the 20th century, many non-Indigenous people assumed that Indigenous Australian peoples lacked permanent buildings, likely because Europeans misinterpreted Indigenous lifeways during early contact.
[ citation needed ] Labelling Indigenous Australian communities as ' nomadic ' allowed early settlers to justify the takeover of Traditional Lands by claiming that they were not inhabited by permanent residents. [ citation needed ] Stone engineering was utilised by a number of Indigenous language groups. [ 2 ] Examples of Indigenous Australian stone structures come from Western Victoria's Gunditjmara peoples. [ 3 ] [ 4 ] [ 5 ] These builders utilised basalt rocks around Lake Condah to erect housing and complicated systems of stone weirs, fish and eel traps, and gates in water-course creeks. The lava-stone homes had circular stone walls over a metre high, topped with a dome roof clad in earth or sod. Evidence of sophisticated stone engineering has been found in other parts of Australia. As late as 1894, a group of around 500 people still lived in houses near Bessibelle that were constructed of stone with sod cladding on a timber-framed dome. Nineteenth-century observers also reported flat-slab slate-type stone housing in South Australia's northeast corner. These dome-shaped homes were built on heavy limbs and used clay to fill in the gaps. In New South Wales' Warringah area, stone shelters were constructed in an elongated egg shape and packed with clay to keep the interior dry. Housing for Indigenous people living in many parts of Australia has been characterised by an acute shortage of dwellings , poor quality construction , and housing stock ill-suited to Indigenous lifestyles and preferences. Rapid population growth , shorter lifetimes for housing stock, and rising construction costs have meant that efforts to limit overcrowding and provide healthy living environments for Indigenous people have been difficult for governments to achieve. Indigenous housing design and research is a specialised field within housing studies. There have been two main approaches to the design of Indigenous housing in Australia: health and culture.
[ 6 ] [ 7 ] The cultural design model attempts to incorporate understandings of differences in Indigenous Australian cultural norms into housing design. There is a large body of knowledge on Indigenous housing in Australia that promotes the provision and design of housing that supports Indigenous residents' socio-spatial needs, domiciliary behaviours, cultural values and aspirations. The culturally specific needs for Indigenous housing have been identified as major factors in the success of housing, and failure to recognise the varying and diverse cultural housing needs of Indigenous peoples has been cited by Western academics for decades as the reason for Indigenous Australian housing failures. Western-style housing imposes conditions on Indigenous residents that may hinder the practice of cultural norms. If adjusting to living in a particular house strains relationships, severe stress on the occupants may result. Ross noted, "Inappropriate housing and town planning have the capacity to disrupt the social organisation, the mechanisms for maintaining smooth social relations, and support networks." [ 8 ] A range of cultural factors are discussed in the literature. These include designing housing to accommodate aspects of customary behaviour such as avoidance behaviours, household group structures, sleeping and eating behaviours, cultural constructs of crowding and privacy, and responses to death. The literature indicates that each housing design should be approached independently, to recognise the many Indigenous cultures with varying customs and practices that exist across Australia. The health approach to housing design developed because housing is an important factor affecting the health of Indigenous Australians. Substandard and poorly maintained housing, along with non-functioning infrastructure, can create serious health risks.
[ 9 ] [ 10 ] The 'Housing for Health' approach developed from observations of the housing factors affecting Indigenous Australian peoples' health into a methodology for measuring, rating, and fixing 'household hardware' deemed essential for health. The approach is based on nine 'healthy housing principles'. Defining what is 'Indigenous architecture' in the contemporary context is a debate in some spheres. [ original research? ] Many researchers and practitioners generally agree that Indigenous architectural projects are those which are designed with Indigenous clients, or projects that imbue indigeneity through consultation and advance Indigenous Australian agency. This latter category may include projects which are designed primarily for non-Indigenous users. Notwithstanding the definition, a range of projects have been designed for, by or with Indigenous users. The application of evidence-based research and consultation has led to museums, courts, cultural centres, keeping houses, prisons, schools, and a range of other institutional and residential buildings being designed to meet the varying and differing needs and aspirations of Indigenous users. [ peacock prose ] Notable projects include: Indigenous architecture of the 21st century has been enhanced by university-trained Indigenous architects, landscape architects, and other design professionals who have incorporated different aspects of traditional Indigenous cultural references and symbolism, fused architecture with ethnoarchitectural styles, and pursued various approaches to the questions of identity and architecture. [ 39 ] Drawing on their heritage, Indigenous designers, architects and built environment professionals from Australia often use a Country-centred design methodology, also referred to as "Country-centric design", "Country-led design", "privileging Country in design", [ 56 ] and "designing with Country".
[ 57 ] This methodology centres on the Indigenous experience of Country (capital C) and has been developed and used by generations of Indigenous peoples in Australia. The original Indigenous people of Canada developed complex building traditions thousands of years before the arrival of the first Europeans. Canada contained five broad cultural regions, defined by common climatic, geographical and ecological characteristics. Each region gave rise to distinctive building forms which reflected these conditions, as well as the available building materials, means of livelihood, and social and spiritual values of the resident peoples. A striking feature of traditional Canadian architecture was the consistent integrity between structural forms and cultural values. The wigwam (otherwise known as wickiup or wetu ), tipi , and snow house were building forms perfectly suited to their environments and to the requirements of mobile hunting and gathering cultures. The longhouse , pit house and plank house were diverse responses to the need for more permanent building forms. The semi-nomadic peoples of the Maritimes, Quebec, and Northern Ontario, such as the Mi'kmaq , Cree , and Algonquin , generally lived in wigwams. These were wood-framed structures covered with an outer layer of bark, reeds, or woven mats, usually in a cone shape, although sometimes a dome. The groups changed locations every few weeks or months. They would take the outer layer of the structure with them, and leave the heavy wood frame in place. The frame could be reused if the group returned to the location at a later date. Further south, in what is today Southern Ontario and Quebec, the Iroquois society lived in permanent agricultural settlements holding several hundred to several thousand people. The standard form of housing was the longhouse . These were large structures, several times longer than they were wide, holding a large number of people.
They were built with a frame of saplings or branches, covered with a layer of bark or woven mats. On the Canadian Prairies the standard form of life was a nomadic one, with the people often moving to a new location each day to follow the bison herds. Housing thus had to be portable, and the tipi was developed. The tipi consisted of a thin wooden frame and an outer covering of animal hides. The structures could be quickly erected, and were light enough to transport long distances. In the Interior of British Columbia the standard form of home was the semi-permanent pit house , thousands of relics of which, known as quiggly holes , are scattered across the Interior landscape. These were structures shaped like an upturned bowl, placed on top of a 3-or-4-foot-deep (0.91 or 1.22 m) pit. The bowl, made of wood, would be covered with an insulating layer of earth. The house would be entered by climbing down a ladder at the centre of the roof. Some of the best architectural designs were made by settled peoples along the North American west coast. Peoples like the Haida used advanced carpentry and joinery skills to construct large houses of red cedar planks. These were large, square, solidly built houses. One advanced design was the six beam house , named for the number of beams that supported the roof. The front of each house would be decorated with a heraldic pole that would sometimes be brightly painted with artistic designs. In the far north, where wood was scarce and solid shelter essential for survival, several unique and innovative architectural styles were developed. One of the most famous is the igloo , a domed structure made of snow, which was quite warm. In the summer months, when the igloos melted, tents made of seal skin, or other hides, were used. The Thule adopted a design similar to the pit houses of the BC interior, but because of the lack of wood they instead used whale bones for the frame.
In addition to meeting the primary need for shelter, structures functioned as integral expressions of their occupants' spiritual beliefs and cultural values. In all five regions, dwellings performed dual roles – providing both shelter and a tangible means of linking mankind with the universe. Building forms were often seen as metaphorical models of the cosmos, and as such they frequently assumed powerful spiritual qualities which helped define the cultural identity of the group. The sweat lodge is a hut, typically dome-shaped and made with natural materials, used by Indigenous peoples of the Americas for ceremonial steam baths and prayer. There are several styles of structures used in different cultures; these include a domed or oblong hut similar to a wickiup , a permanent structure made of wood and earth, or even a simple hole dug into the ground and covered with planks or tree trunks. Stones are typically heated and then water poured over them to create steam. In ceremonial usage, these ritual actions are accompanied by traditional prayers and songs. As many more settlers arrived in Canada, Indigenous peoples were strongly motivated to relocate to newly created reserves, where the Canadian government encouraged Indigenous people to build permanent houses and adopt farming in place of their traditional hunting and trapping. Unaccustomed to this sedentary lifestyle, many of these people continued to use their traditional hunting grounds, but when much of southern Canada was settled in the late 1800s and early 1900s, this practice ceased, ending their nomadic way of life. [ citation needed ] After World War II , Indigenous people were relative non-participants in the housing and economic boom in Canada. [ citation needed ] Most remained on remote rural reserves, often in crowded dwellings that mostly lacked basic amenities.
[ citation needed ] As health services on Indigenous reserves increased during the 1950s and 1960s, life expectancy greatly improved, including a dramatic drop in infant mortality, though this may have exacerbated the existing overcrowding problem. [ dubious – discuss ] [ citation needed ] Since the 1960s the living conditions in on-reserve housing in Canada have not improved significantly. Overcrowding remains a serious problem in many communities. Many houses are in serious need of repair and others still lack basic amenities. [ dubious – discuss ] The poor condition of housing on reserves has contributed to many Indigenous people leaving reserves and migrating to urban areas of Canada, causing issues with homelessness, child poverty, tenancy, and transience. [ dubious – discuss ] [ citation needed ] Notable projects include: In Old Fiji, the architecture of villages was simple and practical, meeting physical and social needs and providing communal safety. The houses were square in shape with pyramid-like roofs, [ 69 ] and the walls and roof were thatched; various plants of practical use were planted nearby, each village having a meeting house and a spirit house. The spirit house was elevated on a pyramid-like base built with large stones and earth – again a square building, with an elongated pyramid-like [ 69 ] roof and various scented flora planted nearby. The houses of chiefs were of similar design and would be set higher than those of their subjects, but instead of an elongated roof would have a roof similar to those of their subjects' homes, though on a larger scale. With the introduction of communities from Asia, aspects of their cultural architecture are now evident in urban and rural areas of Fiji's two main islands, Viti Levu and Vanua Levu . A village structure shares similarities today but is built with modern materials, and spirit houses (Bure Kalou) have been replaced by churches of varying design.
The urban landscape of early colonial Fiji was reminiscent of most British colonies of the 19th and 20th centuries in tropical regions of the world. While some of this architecture remains, the urban landscape is evolving in leaps and bounds, with modern aspects of architecture and design becoming more and more evident in the business , industrial and domestic sectors; the rural areas are evolving at a much slower rate. Kanak cultures developed in the New Caledonia archipelago over a period of three thousand years. Today, France governs New Caledonia but has not developed a national culture. The Kanak claim for independence is upheld by a culture thought of as national by the Indigenous population. Kanaks have settled over all the islands officially indicated by France as New Caledonia and Dependencies. The archipelago includes the principal island, Grande Terre , the Belep Islands to the north and the Isle of Pines to the south. It is bordered on the east by the Loyalty Islands , consisting of three coral atolls ( Mare , Lifou , and Ouvea ). Kanak society is organised around clans, which are both social and spatial units. The clan could initially be made up of people related through a common ancestor, comprising several families. There can be between fifty and several hundred people in a clan. This basic definition of the clan has been modified over the years by historical situations and places involving wars, disagreements, new arrivals and so on. The clan structure, therefore, evolved as new people arrived and were given a place and a role in the social organisation of the clan, or through clan members leaving to join other clans. Traditionally a village is set up in the following manner. The Chief's hut (called La Grande Case) lies at the end of a long and wide central walkway which is used for gathering and performing ceremonies. The Chief's younger brother lives in a hut at the other end.
The rest of the village lives in huts along the central walkway, which is lined with araucarias or palms. The tree-lined alleys were used as shady gathering places. For Kanak people, space is divided between premises reserved for important men and other residences placed closer to the women and children. Kanak people generally avoided being alone in empty spaces. The inside of a Grande Case is dominated by the central pole (made out of houp wood), which holds up the roof and the rooftop spear, the flèche faîtière . Along the walls are various posts which are carved to represent ancestors. The door is flanked by two carved door posts (called Katana), the "sentinels who reported the arrival of strangers". There is also a carved door step. The rooftop spear has three main parts: the spear facing up, which prevents bad spirits from coming down onto the ancestor; the face, which represents the ancestor; and the spear at the bottom, which keeps bad spirits from coming up to the ancestor. The flèche faîtière – a carved rooftop spear, spire or finial – is the home of ancestral spirits and is characterized by three major components. The ancestor is symbolized by a flat, crowned face in the centre of the spear. The ancestor's voice is symbolized by a long, rounded pole that is run through by conch shells . The symbolic connection of the clan, through the chief, is a base which is planted into the case's central pole. Sharply pointed wood pieces fan out from either end of the central area, symbolically preventing bad spirits from being able to reach the ancestor. [ 70 ] It evokes, beyond a particular ancestor, the community of ancestors, [ 71 ] and represents the ancestral spirits, symbolic of the transition between the world of the dead and the world of the living. [ 70 ] [ 72 ] The arrow or spear normally has a needle at the end to insert threaded shells from bottom to top; one of the shells contains arrangements to ensure the protection of the house and the country.
During wars enemies attacked this symbolic finial. After the death of a Kanak chief, the flèche faîtière is removed and his family takes it to their home. Though it is allowed to be used again, as a sign of respect it is normally kept at the burial grounds of noted citizens or at the mounds of abandoned grand houses. [ 72 ] The form of the buildings varied from island to island, but they were generally round in plan and conical in the vertical elevation. The traditional hut features represent the organization and lifestyle of the occupants. The hut is the endogenous Kanak architectural element and is built entirely of plant material taken from the surrounding forest reserve. Consequently, from one area to another, the materials used are different. Inside the hut, a hearth is built on the floor between the entrance and the centre pole, defining a collective living space covered with pandanus leaf (ixoe) woven mats and a mattress of coconut leaves (behno). The round hut is the physical and material translation of Kanak culture and of social relations within the clan. Contemporary Kanak society has several layers of customary authority, from the 4,000–5,000 family-based clans to the eight customary areas ( aires coutumières ) that make up the territory. [ 73 ] Clans are led by clan chiefs and constitute 341 tribes, each headed by a tribal chief. The tribes are further grouped into 57 customary chiefdoms ( chefferies ), each headed by a head chief, and forming the administrative subdivisions of the customary areas. [ 73 ] The Jean-Marie Tjibaou Cultural Centre ( French : Centre Culturel Tjibaou ), designed by Italian architect Renzo Piano and opened in 1998, is the icon of Kanak culture and contemporary Kanak architecture.
The Centre, constructed on the narrow Tinu Peninsula , approximately 8 kilometres (5.0 mi) northeast of the centre of Nouméa , the capital of New Caledonia , celebrates the vernacular Kanak culture, amidst much political controversy over the independent status sought by Kanaks from French rule. It was named after Jean-Marie Tjibaou , the leader of the independence movement who was assassinated in 1989 and who had a vision of establishing a cultural centre which blended the linguistic and artistic heritage of the Kanak people. [ 74 ] [ 75 ] Piano blended the Kanak building traditions with the resources of modern international architecture. The formal curved axial layout, 250 metres (820 ft) long on the top of the ridge, contains ten large conical cases or pavilions (all of different dimensions) patterned on the traditional Kanak Grand Hut design. The building is surrounded by landscaping which is also inspired by traditional Kanak design elements. [ 75 ] [ 76 ] [ 77 ] Marie-Claude Tjibaou, widow of Jean-Marie Tjibaou and current leader of the Agency for the Development of Kanak Culture (ADCK), observed: "We, the Kanaks, see it as a culmination of a long struggle for the recognition of our identity; on the French Government's part it is a powerful gesture of restitution." [ 75 ] The building plans, spread over an area of 8,550 square metres (92,000 sq ft) of the museum, were conceived to incorporate the link between the landscape and the built structures in the Kanak traditions. The people had been removed from their natural landscape and habitat of mountains and valleys, and any plan proposed for the art centre had to reflect this aspect. Thus the planning aimed at a unique building which would be, as the architect Piano stated, "a symbol and ...a cultural centre devoted to Kanak civilization, the place that would represent them to foreigners, that would pass on their memory to their grand children".
The model as finally built evolved after much debate in organized 'Building Workshops' in which Piano's associate Paul Vincent and Alban Bensa, a noted anthropologist of Kanak culture, were also involved. The Kanak village planning principle of placing the houses in groups, with the Chief's house at the end of an open public alley formed by other buildings clustered along both sides, was adopted in the Cultural Centre planned by Piano and his associates. An important concept that evolved after deliberations in the 'Building Workshops', after Piano won the competition for building the art centre, involved "landscaping ideas" to be created around each building. To this end, an interpretative landscape path was conceived and implemented around each building, with a series of avenues of vegetative cover along the path that surrounded the building but separated it from the lagoon. This landscape setting appealed to the Kanak people when the centre was inaugurated. Even the approach to the buildings from the paths catered to the local practice of walking three quarters of the path to get to the entrance to the Cases. One critic of the building observed: "It was very intelligent to use the landscape to introduce the building. This is the way the Kanak people can understand". [ 78 ] The centre comprises an interconnected series of ten stylised grandes cases (chiefs' huts), which form three villages (covering an area of 6,060 square metres). These huts have an exposed stainless-steel structure and are constructed of iroko, an African rot-resistant timber which has faded over time to reveal a silver patina evocative of the coconut palms that populate the coastline of New Caledonia. The Jean-Marie Tjibaou Cultural Centre draws materially and conceptually on its geopolitical environment, so that despite being situated on the outskirts of the capital city, it draws influence from the diverse Kanak communities residing elsewhere across Kanaky.
The circling pathway that leads from the car park to the centre's entrance is lined with plants from various regions of Kanaky . Together, these represent the myth of the creation of the first human: the founding hero, Téâ Kanaké. Signifying the collaborative design process, the path and centre are organically interconnected, so that it is difficult to discern any discrete edges between the building and the gardens. Similarly, the soaring huts appear unfinished as they open outward to the sky, projecting the architect's image of Kanak culture as flexible, diasporic, progressive and resistant to containment by traditional museological spaces. Other important architectural projects have included the construction of the Mwâ Ka , a 12-metre totem pole topped by a grande case (chief's hut) complete with flèche faîtière , standing in a landscaped square opposite the Musée de Nouvelle-Calédonie. Mwâ Ka means the house of mankind – in other words, a house where discussions are held. Its carvings are divided into eight cylindrical sections representing the eight customary regions of New Caledonia. Mounted on a concrete double-hulled pirogue, the Mwâ Ka symbolises the mast but also the central post of a case. At the back of the pirogue a wooden helmsman steers it ever forwards. The square's flowerbed arrangements depicting stars and moons are symbolic of navigation. The Mwâ Ka was conceived by the Kanak community to commemorate 24 September, the anniversary of the French annexation of New Caledonia in 1853. Initially a day of mourning, the creation of the Mwâ Ka (inaugurated in 2005) symbolised the end of the mourning period, thus giving the date a new significance. The erection of the Mwâ Ka was a way of burying past suffering related to French colonisation and turning a painful anniversary into a day for celebrating Kanak identity and the new multi-ethnic identity of Kanaky.
The first known dwellings of the ancestors of Māori were based on houses from their Polynesian homelands (Māori ancestors migrated from eastern Polynesia to the New Zealand archipelago c. 1400 CE). On arriving, the Polynesians found they needed warmth and protection from a climate markedly different from the warm and humid tropical Polynesian islands. The early colonisers soon modified their construction techniques to suit the colder climate. Many traditional island building techniques were retained, using new materials: raupo reed , toetoe grass, aka vines [ 79 ] and native timbers: totara , pukatea , and manuka . Archaeological evidence suggests that the design of Moa-hunter sleeping houses in the very early years of settlement was similar to that of houses found in Tahiti and eastern Polynesia. These were rectangular, round, oval, or "boat-shaped" semi-permanent dwellings, as people moved around looking for food sources. Houses had wooden frames covered in reeds or leaves, with mats on earth floors. To help people keep warm, houses were small, with low doors, earth insulation and a fire inside. The standard building in a Māori settlement was a simple sleeping whare puni (house/hut) about 2 metres by 3 metres with a low roof, an earth floor, no window and a single low doorway. Heating was provided by a small open fire in winter. There was no chimney. Materials used in construction varied between areas, but raupo reeds, flax and totara-bark shingles for the roof were common. [ 80 ] Similar small whare , but with interior drains, were used to store kumara on sloping racks. Around the 15th century communities became bigger and more settled. People built wharepuni – sleeping houses with room for several families, and a front porch. Other buildings included pātaka (storehouses), sometimes decorated with carvings, and kāuta (cooking houses).
[ 81 ] The classic phase of Māori culture (1350–1769) was characterized by a more developed tribal society expressing itself clearly in wood carving and architecture. The most spectacular building type was the whare whakairo , or carved meeting house. This building was the focus of social and symbolic Māori assemblies, and made visible a long tribal history. The wall slabs depicted warriors, chiefs and explorers. The painted rafter patterns and tukutuku panels demonstrated the Māori love for land, forest and river. The whare whakairo was a colourful synthesis of carved architecture, expressing reverence for ancestors and love of nature. In the classic period, a higher proportion of whare were located inside pā than was the case after contact with Europeans . The whare of a chief was similar but larger – often with full headroom in the centre, a small window and a partly enclosed front porch. In times of conflict the chief lived in a whare on the tihi (summit) of a hill pā . In colder areas, such as the North Island central plateau, it was common for whare to be partly sunk into the ground for better insulation. The Ngāti Porou ancestor Ruatepupuke is said to have established the tradition of whare whakairo (carved meeting houses) on the East Coast. Whare whakairo are often named after ancestors and considered to embody that person. The house is seen as an outstretched body, and can be addressed like a living being. A wharenui (literally 'big house'), alternatively known as a meeting house , whare rūnanga or whare whakairo (literally "carved house"), is a communal house generally situated as the focal point of a marae . The present style of wharenui originated in the early to middle nineteenth century. The houses are often carved inside and out with stylised images of the iwi 's ancestors, with the style used for the carvings varying from iwi to iwi. The houses always have names, sometimes the name of an ancestor or sometimes a figure from Māori mythology.
While a meeting house is considered sacred, it is not a church or house of worship, but religious rituals may take place in front of or inside a meeting house. On most marae, no food may be taken into the meeting house. [ 82 ] Food was not cooked in the sleeping whare but in the open or under a kauta (lean-to). Saplings with branches and foliage removed were used to store and dry items such as fishing nets or cloaks. Valuable items were stored in pole-mounted storage shelters called pataka . [ 83 ] [ 84 ] Other constructions were large racks for drying split fish. The marae was the central place of the village, where culture could be celebrated, intertribal obligations met, customs explored and debated, family occasions such as birthdays held, and important ceremonies, such as welcoming visitors or farewelling the dead ( tangihanga ), performed. The building often symbolises an ancestor of the wharenui's tribe, so different parts of the building refer to body parts of that ancestor: [ 85 ] Other important components of the wharenui include: [ 85 ] Rau Hoskins defines Māori architecture as anything that involves a Māori client with a Māori focus. "I think traditionally Māori architecture has been confined to marae architecture and sometimes churches, and now Māori architecture manifests across all environments, so we have Māori immersion schools, Māori medical centres and health clinics, Māori tourism ventures, and papa kāinga or domestic Māori villages. So the opportunities that exist now are very diverse. The kaupapa (purpose or reason) for the building and client's aspirations are the key to how the architecture manifests." [ 88 ] From the 1960s, marae complexes were built in urban areas.
In contemporary context these generally comprise a group of buildings around an open space, which frequently host events such as weddings, funerals, church services and other large gatherings, with traditional protocol and etiquette usually observed. They also serve as the base of one or sometimes several hapū. [ 89 ] The marae is still wāhi tapu , a 'sacred place' which carries cultural meaning. These complexes included buildings such as wharepaku (toilets) and whare ora (health centres). Meeting houses were still one large space with a porch and one door and window at the front. In the 1980s marae began to be built in prisons, schools and universities. Notable projects include: In Palau , there are many traditional meeting houses known as bais or abais . [ 111 ] In ancient times every village in Palau had a bai, as it was the most important building in a village. [ 112 ] At the beginning of the 20th century, more than 100 bais were still in existence in Palau. [ 113 ] In bais, governing elders are assigned seats along the walls according to rank and title. [ 112 ] A bai has no dividing walls or furnishings and is decorated with depictions of Palauan legends. [ 112 ] Palau's oldest bai is Airai Bai, which is over 100 years old. [ 114 ] Bais feature on the Seal of Palau and the flag of Koror . [ 115 ] The Bahay Kubo , Kamalig , or Nipa Hut , is a type of stilt house used by most of the lowland cultures of the Philippines . [ 116 ] [ 117 ] It often serves as an icon of broader Filipino culture, or, more specifically, Filipino rural culture. [ 118 ] Although there is no strict definition of the Bahay Kubo and styles of construction vary throughout the Philippine archipelago, [ 119 ] similar conditions in Philippine lowland areas have led to numerous characteristics "typical" of examples of Bahay Kubo. With few exceptions arising only in modern times, most Bahay Kubo are raised on stilts such that the living area must be accessed through ladders.
This naturally divides the bahay kubo into three areas: the actual living area in the middle, the area beneath it (referred to in Tagalog as the " Silong "), and the roof space (" Bubungan " in Tagalog), which may or may not be separated from the living area by a ceiling (" Kisame " in Tagalog). The traditional roof shape of the Bahay Kubo is tall and steeply pitched, ending in long eaves. [ 117 ] A tall roof created space above the living area through which warm air could rise, giving the Bahay Kubo a natural cooling effect even during the hot summer season. The steep pitch allowed water to flow down quickly at the height of the monsoon season, while the long eaves gave people a limited space to move about around the house's exterior whenever it rained. [ 117 ] The steep pitch of the roofs is often used to explain why many Bahay Kubo survived the ash fall from the Mt. Pinatubo eruption, when more 'modern' houses notoriously collapsed under the weight of the ash. [ 117 ] Raised up on hardwood stilts which serve as the main posts of the house, Bahay Kubo have a Silong (the Tagalog word also means "shadow") area under the living space for a number of reasons, the most important of which are to create a buffer area for rising waters during floods, and to prevent pests such as rats from getting up to the living area. [ 117 ] This section of the house is often used for storage, and sometimes for raising farm animals, [ 119 ] and thus may or may not be fenced off. The main living area of the Bahay Kubo is designed to let in as much fresh air and natural light as possible.
Smaller Bahay Kubo will often have bamboo slat floors which allow cool air to flow into the living space from the silong below (in which case the Silong is not usually used for items which produce strong smells), and a particular Bahay Kubo may be built without a kisame (ceiling) so that hot air can rise straight into the large area just beneath the roof, and out through strategically placed vents there. The walls are always of light material such as wood, bamboo rods, or bamboo mats called " sawali ". As such, they tend to let some coolness flow naturally through them during hot times and keep warmth in during the cold wet season. The cube shape distinctive of the Bahay Kubo arises from the fact that it is easiest to pre-build the walls and then attach them to the wooden stilt-posts that serve as the corners of the house. The construction of a Bahay Kubo is therefore usually modular, with the wooden stilts established first, a floor frame built next, then wall frames, and finally, the roof. In addition, Bahay Kubo are typically built with large windows, to let in more air and natural light. The most traditional are large awning windows, held open by a wooden rod. [ 117 ] Sliding window sashes are also common, made either with plain wood or with wooden Capiz-shell frames, which allow some light to enter the living area even with the window sashes closed. In more recent decades inexpensive jalousie windows also became commonly used. In larger examples, the large upper windows may be augmented with smaller windows called ventanillas (Spanish for "little window") underneath, which can be opened to let in additional air, especially on hot days. [ 117 ] Some (but not all) Bahay Kubo, especially ones built for long-term residence, feature a Batalan "wet area" distinct from other sections of the house – usually jutting out somewhat from one of the walls.
Sometimes at the same level as the living area and sometimes at ground level, the Batalan can contain any combination of cooking and dishwashing area, bathing area, and in some cases, a lavatory. The walls of the living area are made of light materials, with posts, walls, and floors typically made of wood or bamboo. The house is topped by a thatched roof, often made out of nipa , anahaw , or some other locally plentiful plant. The Filipino term "Bahay Kubo" literally means "cube house", describing the shape of the dwelling. The term "Nipa Hut", introduced during the Philippines' American colonial era , refers to the nipa or anahaw thatching material often used for the roofs. Nipa huts were the customary houses of the Indigenous people of the Philippines before the Spaniards arrived. They are still used today, especially in rural areas. Different architectural designs are present among the different ethnolinguistic groups in the country, although all of them conform to being stilt houses , similar to those found in neighboring countries such as Indonesia , Malaysia , and other countries of Southeast Asia . The advent of the Spanish Colonial era introduced the idea of building more permanent communities with the Church and Government Center as a focal point. This new community setup made construction using heavier, more permanent materials desirable. Finding European construction styles impractical given local conditions, both Spanish and Filipino builders quickly adapted the characteristics of the Bahay Kubo and applied them to Antillean houses locally known as Bahay na Bato (literally "stone house" in Tagalog). [ 119 ] The architecture of Samoa is characterised by openness, with the design mirroring the culture and life of the Samoan people who inhabit the Samoa Islands . [ 120 ] Architectural concepts are incorporated into Samoan proverbs , oratory and metaphors, as well as linking to other art forms in Samoa, such as boat building and tattooing . 
The spaces outside and inside of traditional Samoan architecture are part of cultural form, ceremony and ritual. Fale is the Samoan word for all types of houses, from small to large. In general, traditional Samoan architecture is characterized by an oval or circular shape, with wooden posts holding up a domed roof. There are no walls. The base of the architecture is a skeleton frame. Before European arrival and the availability of Western materials, a Samoan fale did not use any metal in its construction. The fale is lashed and tied together with a plaited sennit rope called ʻafa , handmade from dried coconut fibre. The ʻafa is woven tight in complex patterns around the wooden frame, and binds the entire construction together. ʻAfa is made from the husk of certain varieties of coconuts with long fibres, particularly the niu'afa ( afa palm). The husks are soaked in fresh water to soften the interfibrous portion. The husks from mature nuts must be soaked for four to five weeks, or perhaps even longer, and very mature fibre is best soaked in salt water, but the green husk from a special variety of coconut is ready in four or five days. Soaking is considered to improve the quality of the fibre. Old men or women then beat the husk with a mallet on a wooden anvil to separate the fibres, which, after a further washing to remove interfibrous material, are tied together in bundles and dried in the sun. When this stage is completed, the fibres are manufactured into sennit by plaiting, a task usually done by elderly men or matai , and performed at their leisure. This usually involves them sitting on the ground, rolling the dried fibre strands against their bare thigh by hand, until heavier strands are formed. These long, thin strands are then woven together into a three-ply plait, often in long lengths, that is the finished sennit. The sennit is then coiled in bundles or wound tightly in very neat cylindrical rolls. 
[ 121 ] Making enough lengths of ʻafa for an entire house can take months of work. The construction of an ordinary traditional fale is estimated to use 30,000 to 50,000 feet of ʻafa . The lashing construction of the Samoan fale is one of the great architectural achievements of Polynesia . [ 122 ] A similar lashing technique was also used in traditional boat building, where planks of wood were 'sewn' together in parts. ʻAfa has many other uses in Samoan material culture, including ceremonial items, such as the fue fly whisk, a symbol of orator status. This lashing technique was also used in other parts of Polynesia, such as the magimagi of Fiji . The form of a fale , especially the large meeting houses, creates both physical and invisible spatial areas, which are clearly understood in Samoan custom, and dictate areas of social interaction. The use and function of the fale is closely linked to the Samoan system of social organisation, especially the Fa'amatai chiefly system. Those gathered at a formal gathering or fono are always seated cross-legged on mats on the floor, around the fale , facing each other with an open space in the middle. The interior directions of a fale , east, west, north and south, as well as the positions of the posts, affect the seating positions of chiefs according to rank, the place where orators (host or visiting party) must stand to speak, and the side of the house where guests and visitors enter and are seated. The space also defines the position where the 'ava makers ( aumaga ) in the Samoa 'ava ceremony are seated and the open area for the presentation and exchanging of cultural items such as the 'ie toga fine mats. The front of a Samoan house is that part that faces the main thoroughfare or road through the village. The floor is quartered, and each section is named: Tala luma is the front side section, tala tua the back section, and tala , the two end or side sections. 
[ 123 ] The middle posts, termed matua tala , are reserved for the leading chiefs, and the side posts on the front section, termed pou o le pepe , are occupied by the orators. The posts at the back of the house, talatua , indicate the positions maintained by the 'ava makers and others serving the gathering. [ 123 ] The immediate area exterior of the fale is usually kept clear, and is either a grassy lawn or sandy area if the village is by the sea. The open area in front of the large meeting houses, facing the main thoroughfare or road in a village, is called the malae , and is an important outdoor area for larger gatherings and ceremonial interaction. The word " fale " is also constructed with other words to denote social groupings or rank, such as the faleiva (house of nine) orator group in certain districts. The term is also used to describe certain buildings and their functions. The word for hospital is falema'i , "house of the ill". The simplest types of fale are called faleo'o , which have become popular as eco-friendly and low-budget beach accommodations in local tourism. Every family complex in Samoa has a fale tele , the meeting house or "big house". The site on which the house is built is called tulaga fale (place to stand). [ 123 ] The builders in Samoan architecture were also the architects, and they belonged to an exclusive ancient guild of master builders, Tufuga fau fale . The Samoan word tufuga denotes the status of master craftsmen who have achieved the highest rank in skill and knowledge in a particular traditional art form. The words fau fale mean "house builder". There were Tufuga of navigation ( Tufuga fau va'a ) and Samoan tattooing ( Tufuga ta tatau ). Contracting the services of a Tufuga fau fale required negotiations and cultural custom. [ 124 ] The fale tele (big house), the most important house, is usually round in shape, and serves as a meeting house for chief council meetings, family gatherings, funerals or chief title investitures. 
The fale tele is always situated at the front of all other houses in an extended family complex. The houses behind it serve as living quarters, with an outdoor cooking area at the rear of the compound. [ 123 ] At the front is an open area called a malae . The malae (similar to the marae concept in Māori and other Polynesian cultures) is usually a well-kept, grassy lawn or sandy area. The malae is an important cultural space where interactions between visitors and hosts or outdoor formal gatherings take place. The open characteristics of Samoan architecture are also mirrored in the overall pattern of house sites in a village, where all fale tele are situated prominently at the fore of all other dwellings in the village, and sometimes form a semicircle, usually facing seawards. In modern times, with the decline of traditional architecture and the availability of western building materials, the shape of the fale tele has become rectangular, though the spatial areas in custom and ceremony remain the same. Traditionally, the afolau (long house), a longer fale shaped like a stretched oval, served as the dwelling house or guest house. The faleo'o (small house), traditionally long in shape, was really an addition to the main house. It is not as well constructed and is always situated at the back of the main dwelling. [ 123 ] In modern times, the term is also used for any type of small and simple fale which is not the main dwelling house. Popular as a "grass hut" or beach fale in village tourism, many are raised about a meter off the ground on stilts, sometimes with an iron roof. In a village, families build a faleo'o beside the main house or by the sea for resting during the heat of the day or as an extra sleeping space at night if there are guests. The tunoa (cook house) is a flimsy structure, small in size, and is not really considered a house. 
In modern times, the cook house, called the umukuka , is at the rear of the family compound, where all the cooking is carried out in an earth oven, umu , and pots over the fire. In most villages, the umukuka is really a simple open shed made with a few posts and an iron roof to protect the cooking area from the weather. Construction of a fale , especially the large and important fale tele , often involves the whole extended family and help from their village community. The Tufuga fau fale oversees the entire building project. Before construction, the family prepares the building site. Lava, coral, sand or stone materials are usually used for this purpose. The Tufuga , his assistants ( autufuga ) and men from the family cut the timber from the forest. The main supporting posts, erected first, vary in number, size and length depending on the shape and dimensions of the house. Usually they are between 16 and 25 feet in length and 6 to 12 inches in diameter, and are buried about four feet in the ground. The term for these posts is poutu (standing posts); they are erected in the middle of the house, forming central pillars. Attached to the poutu are cross pieces of wood of a substantial size called so'a . The so'a extend from the poutu to the outside circumference of the fale , and their ends are fastened to further supporting pieces called la'au fa'alava . The la'au fa'alava , placed horizontally, are attached at their ends to wide strips of wood continuing from the faulalo to the auau . These wide strips are called ivi'ivi . The faulalo is a tubular piece (or pieces) of wood about four inches in diameter running around the circumference of the house at the lower extremity of the roof, and is supported on the poulalo . The auau is one or more pieces of wood of substantial size resting on the top of the poutu . At a distance of about two feet from each other are circular pieces of wood running around the house and extending from the faulalo to the top of the building. 
They are similar to the faulalo . The poulalo are spaced about three to four feet apart and are sunk about two feet in the ground. They average three to four inches in diameter, and extend about five feet above the floor of the fale . The height of the poulalo above the floor determines the height of the lower extremity of the roof from the ground. On the framework are attached innumerable aso , thin strips of timber (about half an inch by a quarter of an inch, and 12 to 25 feet in length). They extend from the faulalo to the ivi'ivi , and are spaced from one to two inches apart. Attached to these strips at right angles are further strips, paeaso , the same size as the aso . As a result, the roof of the fale is divided into an enormous number of small squares. [ 123 ] Most of the timber is grown in forests on family land. The timber was cut in the forest and carried to the building site in the village. The heavy work involved the builder's assistants, members of the family and help from the village community. The main posts were from the breadfruit tree ( ulu ), or ifi lele or pou muli if this wood was not available. The long principal rafters had to be flexible, so coconut wood ( niu ) was always selected. The breadfruit tree was also used for other parts of the main framework. In general, the timbers most frequently used in the construction of Samoan houses are:
Posts ( poutu and poulalo ): ifi lele , pou muli , asi , ulu , talia , launini'u and aloalovao
Fau : ulu , fau , niu , and uagani
Aso and paeaso : niuvao , ulu , matomo and olomea
The auau and talitali use ulu , and the so'a uses both ulu and niu . The completed, domed framework is covered with thatch ( lau leaves), which is made by the women. The best quality of thatch is made with the dry leaves of the sugarcane . If sugarcane leaf was not available, the palm leaves of the coconut tree were used in the same manner. 
The long, dry leaves are twisted over a three-foot length of lafo , which are then fastened by a thin strip of coconut frond being threaded through the leaves close up to the lafo stem. These sections of thatch are fastened to the outside of the framework of the fale , beginning at the bottom and working up to the apex. They are overlapped, so each section advances the thatching about three inches. This means there is really a double layer of thatch covering the whole house. The sections are fastened to the aso at each end by ʻafa . Provided the best quality of thatch is used and it has been properly laid, it will last about seven years. On an ordinary dwelling house, about 3000 sections of thatch are laid. Protection from sun, wind or rain, as well as from prying eyes, was achieved by suspending from the fau running round the house a series of drop-down blinds, similar to Venetian blinds, called pola . The fronds of the coconut tree are plaited into a kind of mat about a foot wide and three feet long. A sufficient number of pola to reach from the ground to the top of the poulalo are fastened together with ʻafa and are tied up or let down as the occasion demands. Usually, one string of these mats covers the space between two poulalo , and so on round the house. They do not last for long, but being quickly made, are soon replaced. They afford ample protection from the elements, and since they can be let down in sections, the whole house is seldom closed up entirely. The natural foundations of a fale site are coral, sand, and lava, with sometimes a few inches of soil in some localities. Drainage is therefore good. The top layers of the flooring are smooth pebbles and stones. When occupied, the house floors are usually covered or partially covered with native mats. In Samoan mythology , the reason Samoan houses are round is explained in a story about the god Tagaloa , also known as Tagaloalagi (Tagaloa of the Heavens). 
Following is the story, as told by Samoan historian Te'o Tuvale in An Account of Samoan History up to 1918 . Sápmi is the term for Sámi (also Saami) traditional lands. The Sámi people are the Indigenous people of the northern part of the Scandinavian Peninsula and large parts of the Kola Peninsula , which encompasses parts of far northern Norway , Sweden , Finland , and Russia , as well as the border area between southern and central Sweden and Norway. The Sámi are the only Indigenous people of Scandinavia recognized and protected under the international conventions of Indigenous people, and the northernmost Indigenous people of Europe. Sámi ancestral lands span an area of approximately 388,350 km² (150,000 sq mi) across the Nordic countries . There are a number of Sámi ethnoarchitectural forms, including the lavvu , the goahti , and the Finnish laavu . The differences between the goahti and the lavvu can be seen at the top of the structures: a lavvu has its poles coming together, while the goahti has its poles separate, not coming together. The turf version of the goahti has the canvas replaced with wood resting on the structure, covered with birch bark and then peat to provide a durable construction. Lavvu (or Northern Sami : lávvu , Inari Sami : láávu , Skolt Sami : kååvas , Kildin Sami : koavas , Finnish : kota or umpilaavu , Norwegian : lavvo or sametelt , and Swedish : kåta ) is a structure built by the Sámi of northern Scandinavia. It has a design similar to a Native American tipi but is less vertical and more stable in high winds. It enables the Indigenous cultures of the treeless plains of northern Scandinavia and the high Arctic of Eurasia to follow their reindeer herds. It is still used as a temporary shelter by the Sámi, and increasingly by other people for camping. There are several historical references that describe the lavvu structure (also called a kota , or a variation on this name) used by the Sami. 
These structures share a number of common features. [ 125 ] [ 126 ] [ 127 ] [ 128 ] [ 129 ] No historical record has come to light that describes the Sami using a single-pole structure claimed to be a lavvu, or any other Scandinavian variant name for the structure. The definition and description of this structure have been fairly consistent since the 17th century and possibly many centuries earlier. A goahti (also gábma , gåhte , gåhtie and gåetie , Norwegian : gamme , Finnish : kota , Swedish : kåta ) is a Sami hut or tent with three types of covering: fabric, peat moss or timber. The fabric-covered goahti looks very similar to a Sami lavvu , but is often constructed slightly larger. In its tent version the goahti is also called a 'curved pole' lavvu, or a 'bread box' lavvu, as its shape is more elongated while the lavvu is circular in shape. The interior construction of the poles is thus: 1) four curved poles (8–12 feet (2.4–3.7 m) long), 2) one straight center pole (5–8 feet (1.5–2.4 m) long), and 3) approximately a dozen straight wall-poles (10–15 feet (3.0–4.6 m) long). All the pole sizes can vary considerably. The four curved poles curve to about a 130° angle. Two of these poles have a hole drilled into them at one end, with those ends joined together by the long center pole, which is inserted through the holes. The other two curved poles are joined in the same way at the other end of the long pole. When this structure is set up, a four-legged stand is formed with the long pole at the top and center of the structure. With the four-legged structure standing about five to eight feet in height, approximately ten or twelve straight "wall-poles" are laid up against it. The goahti covering, today usually made of canvas, is laid up against the structure and tied down. There can be more than one covering over the structure. 
The Sámi Parliament building was designed by the (non-Sámi) architects Stein Halvorsen & Christian Sundby, who won the Norwegian government's call for projects in 1995; the building was inaugurated in 2005. The government called for a building such that "the Sámi Parliament appears dignified" and "reflects Sámi architecture." Another similar example of Sami-inspired architecture is the Várjjat Sámi Musea (Varanger Sami Museum, VSM) in Unjárga Municipality in Finnmark county, Norway . The work of Sámi artist and architect Joar Nango explores the intersections of traditional Indigenous construction methods, contemporary architecture, and new media. [ 130 ] [ 131 ] Within the body of Hawaiian architecture are various subsets of styles, each considered typical of a particular historical period. The earliest form of Hawaiian architecture originates from what is called ancient Hawaiʻi : designs employed in the construction of village shelters, ranging from the simple shacks of outcasts and slaves and the huts of the fishermen and canoe builders along the beachfronts, to the shelters of the working-class makaʻainana , the elaborate and sacred heiau of kahuna , and the palatial thatched homes on raised basalt foundations of the aliʻi . The way a simple grass shack was constructed in ancient Hawaiʻi was telling of who lived in a particular home. The patterns in which dried plants and lumber were fashioned together could identify caste , skill and trade, profession and wealth. Hawaiian architecture prior to the arrival of British explorer Captain James Cook used symbolism to identify the religious value of the inhabitants of certain structures. Feather standards called kahili and koa adorned with kapa cloth and crossed at the entrance of certain homes, called puloʻuloʻu , indicated places of aliʻi ( nobility caste). Kiʻi enclosed within basalt walls indicated the homes of kahuna (priestly caste). 
Pueblo-style architecture imitates the appearance of traditional Pueblo adobe construction, though other materials such as brick or concrete are often substituted. If adobe is not used, rounded corners, irregular parapets , and thick, battered walls are used to simulate it. Walls are usually stuccoed and painted in earth tones. Multistory buildings usually employ stepped massing similar to that seen at Taos Pueblo . Roofs are always flat. Common features of the Pueblo Revival style include projecting wooden roof beams or vigas , which sometimes serve no structural purpose; [ 3 ] "corbels", curved (often stylized) beam supports; and latillas , which are peeled branches or strips of wood laid across the tops of vigas to create a foundation (usually supporting dirt or clay) for a roof. [ 132 ] [ 133 ]
https://en.wikipedia.org/wiki/Indigenous_architecture
Indigenous statistics is a quantitative research method specific to Indigenous people . [ 1 ] It can be better understood as an Indigenous quantitative methodology. Indigenous quantitative methodologies include practices, processes, and research that are done through an Indigenous lens. [ 1 ] The purpose of Indigenous statistics is to diminish the disparities and inequalities faced by Indigenous people globally. [ 1 ] Statistics are a reliable source of data, which can be used in the present and future. [ 2 ] This is a relatively new concept in the research world. [ 1 ] Statistics are the collection of quantitative data that is used to interpret and present data. [ 3 ] Indigenous refers to an ethnic group of people who are the earliest inhabitants of, or native to, a land. [ 4 ] Connecting these two terms, researchers aim to provide fair and reliable data on Indigenous communities, [ 2 ] [ 1 ] entering research through a solely Indigenous lens and focusing on three central themes: the cultural framework of data, quantitative methodologies in data, and the situated nature of academic research. [ 1 ] Statistics are a collection of quantitative data; [ 5 ] they are how data is interpreted and presented. [ 3 ] Statistics interpret our reality and influence the understanding of societies. [ 3 ] The purpose of Indigenous statistics is to have Indigenous people collect their own data in a fashion they find best suited to their community. [ 2 ] [ 1 ] This is done by Indigenous researchers, or through the perspective of Indigenous communities. [ 2 ] Statistics, in turn, provide information used to determine theoretical and practical development and produce the notion of open data. [ 3 ] Indigenous statistics aims to make statistics a source of reliable information regarding Indigenous societies. Indigenous Peoples is a term used to define people with ancestral origins in the land they inhabit. 
Indigenous peoples are the earliest known inhabitants of the land they inhabit. [ 4 ] Open data is the practice of making statistics available to the public. [ 6 ] The data should be easily accessible, and this is often done through a web portal. [ 6 ] Scholars have criticized the way open data is collected today. [ 6 ] For instance, some have said that open data is not politically or economically benign. [ 6 ] Others have critiqued elements of open data that are not as honest as they first appear, and that affect certain groups differently. [ 6 ] The key concern is whether or not these initiatives bring forth value, impact, transparency, and participation, and foster economic development. [ 6 ] Many critiques of open data do not call for abandoning the movement but for finding more sustainable approaches that are equitable and transparent for all. [ 6 ] For example, open data has not always been the fairest to Indigenous populations. Open data may lead to data being used to perform misleading and prejudicial work, or may leave Indigenous relations in the hands of non-Indigenous services that misrepresent them due to cultural assumptions. [ 6 ] Indigenous people are also often not accounted for in state datasets; they are kept from shaping how such data affects them and are not able to rely on this data for solutions. [ 6 ] Indigenous statistics push to remove these barriers and minimize the risk of misrepresentation and misinformation being published on Indigenous people . [ 6 ] Indigenous statistics is a relatively new concept, recently gaining traction in the research world. [ 1 ] It aims to decolonize data and provide fair statistics to Indigenous communities. [ 2 ] [ 1 ] Indigenous statistics critique the production of open data and the conclusions drawn from open data statistics. [ 2 ] [ 1 ] Indigenous Statistics is the first book to be published on quantitative Indigenous methodologies. [ 1 ] It was written by authors Maggie Walter and Chris Andersen. 
[ 1 ] It was published on September 15, 2013, by Routledge Taylor & Francis Group . [ 1 ] Indigenous Statistics offers a new lens for researching statistics, critiquing the ways in which quantitative data have framed Indigenous people and offering new forms of quantitative data to better represent Indigenous populations. [ 1 ] The authors focus on three main topics: the cultural framework of Indigenous statistics, how methodologies, not methods, produce Indigenous statistics, and how academic research is a situated activity. [ 1 ] The cultural framework within Indigenous statistics is one that focuses on the collection, analysis, and interpretation of statistics about Indigenous people. [ 1 ] Indigenous scholars claim that the representation of Indigenous people in statistics reflects the dominant nation-state rather than the people being analyzed. [ 1 ] The objective here is to focus on the ways in which statistics produced by the nation-state may push and drive certain narratives about Indigenous people that may not be true representations. [ 1 ] Advocates of Indigenous statistics call for Indigenous empowerment and control, so that communities can produce and collect data according to their own needs. [ 1 ] Approaching research through an Indigenous lens does not follow strict or clear-cut guidelines; [ 2 ] an Indigenous research approach will look different based on the needs of the research. [ 2 ] One initiative that took an Indigenous approach to the cultural framework of its research is the Te Atawhai o Te Ao Institute, based in Whanganui, New Zealand . [ 7 ] [ 2 ] This institute is dedicated to Indigenous-based research that will generate and rediscover knowledge focused on health and the environment for the benefit of Indigenous people . [ 2 ] [ 7 ] Indigenous quantitative methodologies are methodologies in which the practices and processes are taken from an Indigenous standpoint. 
[ 1 ] This means that all aspects of the research and methodologies are influenced by an Indigenous lens. [ 1 ] Standpoints influenced by an Indigenous lens include Indigenous social position, Indigenous epistemology, Indigenous ontology, and Indigenous axiology. [ 1 ] It is the methodologies of collection, interpretation, and analysis that produce statistics, rather than the research method itself. [ 1 ] The focus is on the motives and reasoning behind certain research, more than on what type of research is being conducted to find the information. [ 2 ] [ 1 ] Methodology is central to understanding Indigenous statistics, as it helps provide context for many steps of the research. [ 1 ] Methodologies help determine why and how certain research questions are asked instead of others, [ 1 ] how, when, and where the data is collected, and how the data is interpreted and used. [ 1 ] Methodologies are important because they provide the user with insights ranging from how the data was collected and who has governance over it, to the personal identity of the researcher and an understanding of their objectives. [ 1 ] One way censuses can further improve methodologies for Indigenous statistics is illustrated by Statistics Canada's "ethnic mobility" category. [ 1 ] Canada recognizes Indigenous people in three categories: First Nations, Métis, and Inuit. [ 1 ] These categories are meant to be inclusive, diverse representations of all Indigenous people, yet through these categories the Canadian government attempts to govern the diversity of Indigenous people rather than reflect the actual diversity among these communities. [ 1 ] For example, the term First Nations captures the image of dozens of different tribal societies sharing some similarities, but the Canadian government recognizes them as one people entirely. [ 1 ] Indigenous statistics' sole purpose is to provide equitable and transparent data on Indigenous people that is fair and honest to them. 
[ 1 ] The focus on academic research as a situated activity draws attention to how research may mislead or misrepresent the statistics being presented. [ 1 ] Academic research can either help bring out the truth or, as it has in the past, be used to push specific narratives. Throughout history, there have been institutions that recorded and published qualitative data. [ 1 ] Academic research is situated within the dominant society of its nation-state, and the development of Indigenous statistics in the research world looks to remove these systemic barriers and begin finding more equitable and fair ways of conducting and publishing research on Indigenous groups. [ 1 ]
https://en.wikipedia.org/wiki/Indigenous_statistics
Indiglo is a product feature on watches marketed by Timex , incorporating an electroluminescent panel as a backlight for even illumination of the watch dial. The brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo , as the original watches featuring the technology emitted a green-blue light. Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including men's and women's watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995. The Indiglo name was later licensed to other companies, such as Austin Innovations Inc., for use on their electroluminescent products. [ 1 ] [ 2 ] From 2006-2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, micro-processor controlled movement. [ 3 ] To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework — and used hands treated with super-luminova luminescent pigment for low-light legibility — rather than indiglo technology. When the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, [ 4 ] it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities from the TX brand, [ 4 ] at a much lower price-point -- incorporating indiglo technology rather than the super-luminova pigments. Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB , use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display. This product article is a stub . 
https://en.wikipedia.org/wiki/Indiglo
Indigo carmine , or 5,5′-indigodisulfonic acid sodium salt , is an organic salt derived from indigo by aromatic sulfonation , which renders the compound soluble in water. Like indigo, it produces a blue color , and is used in food and other consumables , cosmetics, and as a medical contrast agent and staining agent; it also acts as a pH indicator . It is approved for human consumption in the United States and European Union. [ 3 ] [ 4 ] It has the E number E132 , and is named Blue No. 2 by the US Federal Food, Drug, and Cosmetic Act . [ 5 ] Indigo carmine in a 0.2% aqueous solution is blue at pH 11.4 and yellow at 13.0. Indigo carmine is also a redox indicator , turning yellow upon reduction. Another use is as a dissolved ozone indicator [ 6 ] through the conversion to isatin-5-sulfonic acid. [ 6 ] This reaction has been shown not to be specific to ozone: it also detects superoxide , an important distinction in cell physiology. [ 7 ] It is also used as a dye in the manufacturing of pharmaceutical capsules . Indigotindisulfonate sodium , sold under the brand name Bludigo , is used as a contrast agent during surgical procedures. [ 2 ] It is indicated for use in cystoscopy in adults following urological and gynecological procedures. [ 2 ] [ 8 ] It was approved for medical use in the United States in July 2022. [ specify ] [ 2 ] [ 8 ] In obstetric surgery , it may be used to detect amniotic fluid leaks. In urologic surgery, intravenous indigo carmine can be used to highlight portions of the urinary tract . The dye is filtered rapidly by the kidneys from the blood, and colors the urine blue. However, the dye can cause a potentially dangerous acute increase in blood pressure in some cases. [ 9 ] Indigo carmine stain is not absorbed into cells, so it is applied to tissues to enhance the visibility of mucosa . This leads to its use for examination and diagnosis of benign and malignant lesions and growths on mucosal surfaces of the body. 
[ 10 ] Indigo carmine is one of the few blue food colorants. Others include the anthocyanidins and rare substances such as variagatic acid and popolohuanone. [ 11 ] Indigo carmine shows no "genotoxicity, developmental toxicity or modifications of haematological parameters in chronic toxicity studies". Only at 17 mg/kg of body weight per day were effects on the testes observed. [ 12 ]
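The pH-indicator behaviour described above (blue at pH 11.4, yellow at pH 13.0 in a 0.2% aqueous solution) can be sketched as a small helper. This is an illustrative sketch only: the function name and the "transition" label for the intermediate range are assumptions, not from the source.

```python
def indigo_carmine_color(ph):
    """Approximate color of a 0.2% aqueous indigo carmine solution,
    using the transition points given in the text: blue at pH <= 11.4,
    yellow at pH >= 13.0. The intermediate "transition" label for the
    in-between range is an assumption, not from the source."""
    if ph <= 11.4:
        return "blue"
    if ph >= 13.0:
        return "yellow"
    return "transition"
```

The same two-threshold pattern would apply to its redox-indicator behaviour (turning yellow upon reduction), with oxidation state in place of pH.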
https://en.wikipedia.org/wiki/Indigo_carmine
Indirect DNA damage occurs when a UV photon is absorbed in the human skin by a chromophore that does not have the ability to convert the energy into harmless heat very quickly. [ 2 ] Molecules that do not have this ability have a long-lived excited state. This long lifetime leads to a high probability of reactions with other molecules, so-called bimolecular reactions. [ 2 ] Melanin [ dubious – discuss ] [ citation needed ] and DNA have extremely short excited-state lifetimes, in the range of a few femtoseconds (10⁻¹⁵ s). [ 3 ] The excited-state lifetime of compounds used in sunscreens such as menthyl anthranilate, avobenzone or padimate O is 1,000 to 1,000,000 times longer than that of melanin, [ 2 ] and therefore they may cause damage to living cells that come in contact with them. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The molecule that originally absorbs the UV photon is called a "chromophore". Bimolecular reactions can occur either between the excited chromophore and DNA or between the excited chromophore and another species, producing free radicals and reactive oxygen species. These reactive chemical species can reach DNA by diffusion, and the bimolecular reaction damages the DNA (oxidative stress). Unlike direct DNA damage, which causes sunburn, indirect DNA damage does not produce any warning signal or pain in the human body. The bimolecular reactions that cause the indirect DNA damage are illustrated in the figure, in which ¹O₂ is the reactive, harmful singlet oxygen. Unlike direct DNA damage, which occurs in areas directly exposed to UV-B light, reactive chemical species can travel through the body and affect other areas, possibly even inner organs. [ dubious – discuss ] The traveling nature of the indirect DNA damage can be seen in the fact that malignant melanoma can occur in places that are not directly illuminated by the sun, in contrast to basal-cell carcinoma and squamous cell carcinoma, which appear only on directly illuminated locations on the body.
[ dubious – discuss ] [ citation needed ]
https://en.wikipedia.org/wiki/Indirect_DNA_damage
In pharmacology , an indirect agonist or indirect-acting agonist is a substance that enhances the release or action of an endogenous neurotransmitter but has no specific agonist activity at the neurotransmitter receptor itself. Indirect agonists work through varying mechanisms to achieve their effects, including transporter blockade, induction of transmitter release, and inhibition of transmitter breakdown. Cocaine is a monoamine transporter blocker and, thus, an indirect agonist of dopamine receptors . [ 1 ] Cocaine binds the dopamine transporter (DAT), blocking the protein's ability to uptake dopamine from the synaptic cleft and also blocking DAT from terminating dopamine signaling. Blockage of DAT increases the extracellular concentration of dopamine , therefore increasing the amount of dopamine receptor binding and signaling. Dipyridamole inhibits reuptake of adenosine , resulting in greater extracellular concentrations of adenosine . Dipyridamole also inhibits the enzyme adenosine deaminase , the enzyme that catalyzes the breakdown of adenosine . Fenfluramine is an indirect agonist of serotonin receptors . [ 2 ] Fenfluramine binds to the serotonin transporter , blocking serotonin reuptake. However, fenfluramine also acts to induce non- exocytotic serotonin release; in a mechanism similar to that of methamphetamine in dopamine neurons, fenfluramine binds to VMAT2 , disrupting the compartmentalization of serotonin into vesicles and increasing the concentration of cytoplasmic serotonin available for drug-induced release. [ 3 ]
https://en.wikipedia.org/wiki/Indirect_agonist
Indirect calorimetry calculates the heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones) or their consumption of oxygen. Indirect calorimetry estimates the type and rate of substrate utilization and energy metabolism in vivo starting from gas exchange measurements (oxygen consumption and carbon dioxide production during rest and steady-state exercise). This technique provides unique information, is noninvasive, and can be advantageously combined with other experimental methods to investigate numerous aspects of nutrient assimilation, thermogenesis, the energetics of physical exercise, and the pathogenesis of metabolic diseases. [ 1 ] Indirect calorimetry measures O2 and nitrogen consumption and CO2 production. On the assumption that all the oxygen is used to oxidize degradable fuels and all the CO2 thereby evolved is recovered, it is possible to estimate the total amount of energy produced from the chemical energy of nutrients and converted into the chemical energy of ATP, with some loss of energy during the oxidation process. [ 1 ] Respiratory indirect calorimetry (IC) is a noninvasive and highly accurate method of measuring metabolic rate, with an error of less than 1%. [ 2 ] It has high reproducibility and has been considered a gold-standard method. [ 3 ] It allows estimation of basal energy expenditure (BEE) and resting energy expenditure (REE), as well as identification of the energy substrates that are predominantly metabolized by the body at a specific moment. It is based on the indirect measurement of the heat produced by oxidation of macronutrients, which is estimated by monitoring O2 consumption and CO2 production for a certain period of time. [ 4 ] The calorimeter has a gas collector that adapts to the subject and, through a unidirectional valve, collects and quantifies minute by minute the volume and concentration of O2 inspired and CO2 expired by the subject.
Once the required volume has been collected, resting energy expenditure is calculated by the Weir formula and the results are displayed by software attached to the system. [ 4 ] Another formula used is: [ 5 ]

E = V_O2 × ((RQ − 0.7)/0.3 × e_c + (1 − RQ)/0.3 × e_f)

where E is the energy expenditure, V_O2 is the volume of oxygen consumed, RQ is the respiratory quotient (ratio of volume of CO2 produced to volume of O2 consumed), e_c is 21.13 kilojoules (5.05 kcal), the heat released per litre of oxygen by the oxidation of carbohydrate, and e_f is 19.62 kilojoules (4.69 kcal), the value for fat. This gives the same result as the Weir formula at RQ = 1 (burning only carbohydrates), and almost the same value at RQ = 0.7 (burning only fat). Antoine Lavoisier noted in 1780 that heat production, in some cases, can be predicted from oxygen consumption [ citation needed ], using multiple regression. Indirect calorimetry, as we know it, was developed around 1900 as an application of thermodynamics to animal life. [ 6 ] Although the development of indirect calorimetry dates back over 200 years, its greatest use has come in the last two decades with the development of total parenteral nutrition, interdisciplinary nutrition support teams, and the production of portable, reliable, relatively inexpensive calorimeters. [ 7 ] Four different gas collection and measurement techniques can be used to perform this test. Indirect calorimetry provides at least two pieces of information: a measure of energy expenditure, or 24-hour caloric requirements, as reflected by the resting energy expenditure (REE), and a measure of substrate utilization as reflected in the respiratory quotient (RQ). Knowledge of the many factors that affect these values has led to a much broader range of applications. Studies of indirect calorimetry over the past 20 years have led to the characterization of the hypermetabolic stress response to injury and the design of nutritional regimens whose substrates are most efficiently assimilated in different disease processes and organ failure states.
Indirect calorimetry has influenced everyday practices of medical and surgical care, such as the warming of burn unit and surgical suites and the weaning of patients from ventilators . [ 7 ]
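The two calculations described above can be sketched in a few lines. The Weir constants (3.941 and 1.106 kcal, with the urinary nitrogen term omitted) come from the standard abbreviated Weir equation and are not stated in the text; the RQ interpolation uses the carbohydrate and fat values given above (5.05 and 4.69 kcal per litre of O2).

```python
def weir_ree_kcal_per_day(vo2_l_min, vco2_l_min):
    """Resting energy expenditure (kcal/day) by the abbreviated Weir
    equation, from O2 consumption and CO2 production in L/min.
    The urinary nitrogen correction term is omitted."""
    return (3.941 * vo2_l_min + 1.106 * vco2_l_min) * 1440  # minutes per day

def energy_per_litre_o2(rq, e_c=5.05, e_f=4.69):
    """kcal released per litre of O2 consumed, linearly interpolated
    between pure fat oxidation (RQ = 0.7) and pure carbohydrate
    oxidation (RQ = 1.0), per the formula in the text."""
    return (rq - 0.7) / 0.3 * e_c + (1 - rq) / 0.3 * e_f
```

At RQ = 1 the interpolation returns e_c = 5.05 kcal/L, essentially the Weir value of 3.941 + 1.106 ≈ 5.05 kcal per litre of O2, which is the agreement the text describes.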
https://en.wikipedia.org/wiki/Indirect_calorimetry
Indirect immunoperoxidase assay ( IPA ) is a laboratory technique used to detect and titrate viruses that do not cause measurable cytopathic effects and cannot be measured by classical plaque assays . These viruses include human coronavirus 229E and OC43 . [ 1 ] Susceptible cells are inoculated with serial logarithmic dilutions of samples in a 96-well plate. After viral growth, viral detection by IPA yields the infectious virus titer, expressed as tissue culture infectious dose (TCID50). This represents the dilution of a virus-containing sample at which half of a series of laboratory wells contain replicating viruses . This technique is a reliable method for the titration of human coronaviruses (HCoV) in biological samples (cells, tissues, or fluids). It is also reliable in the detection of antibodies to human cytomegalovirus . [ 1 ] [ 2 ]
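The TCID50 endpoint described above (the dilution at which half of the wells contain replicating virus) is commonly estimated from the per-dilution infection counts with the Spearman–Kärber method. The source does not name a specific estimator, so the following is an illustrative sketch of that common approach, assuming evenly spaced log10 dilutions and a monotonically decreasing fraction of positive wells.

```python
def log10_tcid50(log10_dilutions, positive_fraction):
    """Kärber estimator of the log10 TCID50 titer per inoculum volume.
    log10_dilutions: evenly spaced, most concentrated first, e.g. [-1, -2, -3, -4].
    positive_fraction: fraction of infected wells at each dilution, starting at 1.0."""
    first = log10_dilutions[0]                    # log10 of the lowest dilution used
    step = log10_dilutions[0] - log10_dilutions[1]  # log10 dilution step (positive)
    total = sum(positive_fraction)                # sum of positive fractions
    endpoint = first - step * (total - 0.5)       # log10 dilution infecting 50% of wells
    return -endpoint                              # titer exponent: 10**result TCID50 per volume
```

For example, with tenfold dilutions 10⁻¹ through 10⁻⁴ scoring fractions 1.0, 1.0, 0.5 and 0.0 positive, the 50% endpoint falls at the 10⁻³ dilution, i.e. a titer of 10³ TCID50 per inoculum volume.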
https://en.wikipedia.org/wiki/Indirect_immunoperoxidase_assay
The indirect land use change impacts of biofuels, also known as ILUC or iLUC (pronounced "i-luck"), relate to the unintended consequence of releasing more carbon emissions due to land-use changes around the world induced by the expansion of croplands for ethanol or biodiesel production in response to the increased global demand for biofuels. [ 1 ] [ 2 ] As farmers worldwide respond to higher crop prices in order to maintain the global food supply-and-demand balance, pristine lands are cleared to replace the food crops that were diverted elsewhere to biofuel production. Because natural lands, such as rainforests and grasslands, store carbon in their soil and biomass as plants grow each year, clearance of wilderness for new farms translates to a net increase in greenhouse gas emissions. Due to this off-site change in the carbon stock of the soil and the biomass, indirect land use change has consequences for the greenhouse gas (GHG) balance of a biofuel. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Other authors have also argued that indirect land use changes produce other significant social and environmental impacts, affecting biodiversity, water quality, food prices and supply, land tenure, worker migration, and community and cultural stability. [ 3 ] [ 5 ] [ 6 ] [ 7 ] The estimates of carbon intensity for a given biofuel depend on the assumptions regarding several variables. As of 2008, multiple full life cycle studies had found that corn ethanol, cellulosic ethanol and Brazilian sugarcane ethanol produce lower greenhouse gas emissions than gasoline. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] None of these studies, however, considered the effects of indirect land-use changes, and though land use impacts were acknowledged, estimation was considered too complex and difficult to model.
[ 2 ] [ 9 ] A controversial paper published in February 2008 in Sciencexpress by a team led by Searchinger from Princeton University concluded that such effects offset the (positive) direct effects of both corn and cellulosic ethanol and that Brazilian sugarcane performed better, but still resulted in a small carbon debt. [ 1 ] After the Searchinger team paper, estimation of carbon emissions from ILUC, together with the food vs. fuel debate, became one of the most contentious issues relating to biofuels , debated in the popular media , [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] scientific journals , [ 1 ] [ 2 ] [ 7 ] [ 24 ] [ 25 ] [ 26 ] op-eds and public letters from the scientific community , [ 4 ] [ 27 ] [ 28 ] and the ethanol industry, both American and Brazilian. [ 29 ] [ 30 ] [ 31 ] This controversy intensified in April 2009 when the California Air Resources Board (CARB) set rules that included ILUC impacts to establish the California Low-Carbon Fuel Standard that entered into force in 2011. In May 2009 U.S. Environmental Protection Agency (EPA) released a notice of proposed rulemaking for implementation of the 2007 modification of the Renewable Fuel Standard (RFS). [ 32 ] EPA's proposed regulations also included ILUC, causing additional controversy among ethanol producers. [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] EPA's February 3, 2010 final rule incorporated ILUC based on modelling that was significantly improved over the initial estimates. [ 38 ] [ 39 ] The UK Renewable Transport Fuel Obligation program requires the Renewable Fuels Agency (RFA) to report potential indirect impacts of biofuel production, including indirect land use change or changes to food and other commodity prices. 
[ 14 ] A July 2008 RFA study, known as the Gallagher Review, found several risks and uncertainties, concluded that the "quantification of GHG emissions from indirect land-use change requires subjective assumptions and contains considerable uncertainty", and called for further examination to properly incorporate indirect effects into calculation methodologies. [ 40 ] A similarly cautious approach was followed by the European Union. In December 2008 the European Parliament adopted more stringent sustainability criteria for biofuels and directed the European Commission to develop a methodology to factor in GHG emissions from indirect land use change. [ 41 ] [ relevant? ] Before 2008, several full life cycle ("Well to Wheels" or WTW) studies had found that corn ethanol reduced transport-related greenhouse gas emissions. In 2007 a University of California, Berkeley team led by Farrell evaluated six previous studies, concluding that corn ethanol reduced GHG emissions by only 13 percent. [ 8 ] [ 9 ] [ 12 ] However, reductions of 20 to 30 percent for corn ethanol and about 85 percent for cellulosic ethanol, [ 9 ] [ 10 ] both figures estimated by Wang of Argonne National Laboratory, are more commonly cited. Wang reviewed 22 studies conducted between 1979 and 2005, and ran simulations with Argonne's GREET model. These studies accounted for direct land use changes. [ 11 ] [ 12 ] Several studies of Brazilian sugarcane ethanol showed that sugarcane as feedstock reduces GHG by 86 to 90 percent given no significant land use change. [ 9 ] [ 13 ] [ 14 ] Estimates of carbon intensity depend on crop productivity, agricultural practices, power sources for ethanol distilleries and the energy efficiency of the distillery. None of these studies considered ILUC, due to estimation difficulties.
[ 2 ] [ 9 ] Preliminary estimates by Delucchi of the University of California, Davis, suggested that carbon released by new lands converted to agricultural use was a large percentage of life-cycle emissions. [ 9 ] [ 42 ] In 2008 Timothy Searchinger, a lawyer from the Environmental Defense Fund, [ 43 ] concluded that ILUC affects the life cycle assessment and that, instead of producing savings, both corn and cellulosic ethanol increased carbon emissions compared to gasoline by 93 and 50 percent respectively. Ethanol from Brazilian sugarcane performed better, recovering initial carbon emissions in 4 years, while U.S. corn ethanol required 167 years and cellulosic ethanol a 52-year payback period. [ 1 ] The study limited the analysis to a 30-year period, assuming that land conversion emits 25 percent of the carbon stored in soils and all of the carbon in plants cleared for cultivation. Brazil, China, and India were considered among the overseas locations where land use change would occur as a result of diverting U.S. corn cropland, and it was assumed that new cropland in each of these regions corresponds to different types of forest, savanna or grassland, based on the historical proportion of each converted to cultivation in these countries during the 1990s. [ 1 ] Fargione and his team published a separate paper in the same issue of Science claiming that clearing lands to produce biofuel feedstock creates a carbon debt. This debt applies to both direct and indirect land use changes. The study examined six conversion scenarios: Brazilian Amazon to soybean biodiesel, Brazilian Cerrado to soybean biodiesel, Brazilian Cerrado to sugarcane ethanol, Indonesian or Malaysian lowland tropical rainforest to palm biodiesel, Indonesian or Malaysian peatland tropical rainforest to palm biodiesel, and U.S. Central grassland to corn ethanol. [ 45 ] The carbon debt was defined as the amount of CO2 released during the first 50 years of this process of land conversion.
For the two most common ethanol feedstocks, the study found that sugarcane ethanol produced on natural cerrado lands would take about 17 years to repay its carbon debt, while corn ethanol produced on U.S. central grasslands would result in a repayment time of about 93 years. The worst-case scenario is converting Indonesian or Malaysian tropical peatland rainforest to palm biodiesel production, which would require about 420 years to repay. [ 45 ] The Searchinger and Fargione studies created controversy in both the popular media [ 4 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 21 ] and in scientific journals. Robert Zubrin observed that Searchinger's "indirect analysis" approach is pseudo-scientific and can be used to "prove anything". [ 46 ] Wang and Haq of Argonne National Laboratory claimed that the assumptions were outdated, that they ignored the potential for increased efficiency, and that no evidence showed that "U.S. corn ethanol production has so far caused indirect land use in other countries." They concluded that Searchinger demonstrated that ILUC "is much more difficult to model than direct land use changes". [ 2 ] In his response, Searchinger rebutted each technical objection and asserted that "... any calculation that ignores these emissions, however challenging it is to predict them with certainty, is too incomplete to provide a basis for policy decisions." [ 25 ] Another criticism, by Kline and Dale of Oak Ridge National Laboratory, held that Searchinger et al. and Fargione et al. "... do not provide adequate support for their claim that biofuels cause high emissions due to land-use change", as their conclusions depend on a misleading assumption: more comprehensive field research has found that these land use changes "... are driven by interactions among cultural, technological, biophysical, economic, and demographic forces within a spatial and temporal context rather than by a single crop market". [ 26 ] Fargione et al.
responded in part that although many factors contributed to land clearing, this "observation does not diminish the fact that biofuels also contribute to land clearing if they are produced on existing cropland or on newly cleared lands". Searchinger disagreed with all of Kline and Dale's arguments. [ 26 ] The U.S. biofuel industry also reacted, claiming that the "Searchinger study is clearly a 'worst case scenario' analysis ..." and that this study "relies on a long series of highly subjective assumptions ..." [ 47 ] Searchinger rebutted each claim, concluding that NFA's criticisms were invalid. He noted that even if some of his assumptions are high estimates, the study also made many conservative assumptions. [ citation needed ] In February 2010, Lapola estimated that the planned expansion of Brazilian sugarcane and soybean biofuel plantations through 2020 would replace rangeland with a small direct land-use impact on carbon emissions. [ 48 ] [ 49 ] However, the expansion of the rangeland frontier into Amazonian forests, driven by cattle ranching, would indirectly offset the savings. [ 49 ] "Sugarcane ethanol and soybean biodiesel each contributes to nearly half of the projected indirect deforestation of 121,970 km 2 by 2020, creating a carbon debt that would take about 250 years to be repaid..." [ 48 ] The research also found that oil palm would cause the least land-use changes and associated carbon debt. The analysis also modeled livestock density increases and found that "a higher increase of 0.13 head per hectare in the average livestock density throughout the country could avoid the indirect land-use changes caused by biofuels (even with soybean as the biodiesel feedstock), while still fulfilling all food and bioenergy demands." 
[ 48 ] [ 49 ] The authors conclude that intensification of cattle ranching and concentration on oil palm are required to achieve effective carbon savings, recommending closer collaboration between the biofuel and cattle-ranching sectors. [ 48 ] [ 49 ] The main Brazilian ethanol industry organization (UNICA) commented that such studies missed the continuing intensification of cattle production already underway. [ 50 ] A study by Arima et al. published in May 2011, used spatial regression modeling to provide the first statistical assessment of ILUC for the Brazilian Amazon due to soy production. Previously, the indirect impacts of soy crops were only anecdotal or analyzed through demand models at a global scale, while the study took a regional approach. The analysis showed a strong signal linking the expansion of soybean fields in settled agricultural areas at the southern and eastern rims of the Amazon basin to pasture encroachments for cattle production on the forest frontier. The results demonstrate the need to include ILUC in measuring the carbon footprint of soy crops, whether produced for biofuels or other end-uses. [ 51 ] The Arima study is based on 761 municipalities located in the Legal Amazon of Brazil and found that between 2003 and 2008, soybean areas expanded by 39,100 km 2 in the basin's agricultural areas, mainly in Mato Grosso . The model showed that a 10% (3,910 km 2 ) reduction of soy in old pasture areas would have led to a reduction in deforestation of up to 40% (26,039 km 2 ) in heavily forested municipalities of the Brazilian Amazon. The analysis showed that the displacement of cattle production due to agricultural expansion drives land use change in municipalities located hundreds of kilometers away. The Amazonian ILUC is not only measurable, but its impact is significant. 
[ 51 ] On April 23, 2009, the California Air Resources Board (CARB) approved the specific rules and carbon intensity reference values for the California Low-Carbon Fuel Standard (LCFS) that took effect on January 1, 2011. [ 54 ] [ 55 ] CARB's rulemaking included ILUC. For some biofuels, CARB identified land use changes as a significant source of additional GHG emissions. [ 52 ] [ 56 ] It established one standard for gasoline and alternative fuels, and a second for diesel fuel and its replacements. [ 53 ] The public consultation process before the ruling, and the ruling itself, were controversial, yielding 229 comments. [ 57 ] ILUC was one of the most contentious issues. On June 24, 2008, 27 scientists and researchers submitted a letter saying, "As researchers and scientists in the field of biomass to biofuel conversion, we are convinced that there simply is not enough hard empirical data to base any sound policy regulation in regards to the indirect impacts of renewable biofuels production. The field is relatively new, especially when compared to the vast knowledge base present in fossil fuel production, and the limited analyses are driven by assumptions that sometimes lack robust empirical validation." [ 58 ] The New Fuels Alliance, representing more than two dozen biofuel companies, researchers and investors, questioned the Board's intention to take indirect land use change effects into account, writing: "While it is likely true that zero is not the right number for the indirect effects of any product in the real world, enforcing indirect effects in a piecemeal way could have very serious consequences for the LCFS.... The argument that zero is not the right number does not justify enforcing a different wrong number, or penalizing one fuel for one category of indirect effects while giving another fuel pathway a free pass."
[ 58 ] [ 59 ] On the other hand, more than 170 scientists and economists urged that CARB "include indirect land use change in the lifecycle analyses of heat-trapping emissions from biofuels and other transportation fuels. This policy will encourage development of sustainable, low-carbon fuels that avoid conflict with food and minimize harmful environmental impacts.... There are uncertainties inherent in estimating the magnitude of indirect land use emissions from biofuels, but assigning a value of zero is clearly not supported by the science." [ 60 ] [ 61 ] Industry representatives complained that the final rule overstated the environmental effects of corn ethanol and criticized the inclusion of ILUC as an unfair penalty to domestic corn ethanol because deforestation in the developing world was tied to U.S. ethanol production. [ 22 ] [ 55 ] [ 62 ] [ 63 ] [ 64 ] [ 65 ] The 2011 limit for the LCFS meant that Midwest corn ethanol would not qualify unless its carbon intensity was reduced. [ 54 ] [ 64 ] [ 66 ] [ 67 ] Oil industry representatives complained that the standard left oil refiners with few options, such as Brazilian sugarcane ethanol, with its accompanying tariff. [ 65 ] [ 67 ] CARB officials and environmentalists countered that time and economic incentives would allow producers to adapt. [ 65 ] [ 67 ] UNICA welcomed the ruling, [ 68 ] while urging CARB to better reflect Brazilian practices, lowering its estimates of Brazilian emissions. [ 31 ] [ 68 ] [ 69 ] The only Board member who voted against the ruling explained that he had a "hard time accepting the fact that we're going to ignore the comments of 125 scientists", referring to the letter submitted by a group of scientists questioning the ILUC penalty. "They said the model was not good enough ... to use at this time as a component part of such an historic new standard."
[ 64 ] [ 66 ] CARB advanced the expected date for an expert working group to report on ILUC with refined estimates from January 2012 to January 2011. [ 62 ] [ 64 ] [ 66 ] In December 2009, the Renewable Fuels Association (RFA) and Growth Energy , two U.S. ethanol lobbying groups, filed a lawsuit challenging LCFS' constitutionality . The two organizations argued that LCFS violated both the Supremacy Clause and the Commerce Clause , jeopardizing the nationwide ethanol market. [ 70 ] [ 71 ] [ 72 ] The Energy Independence and Security Act of 2007 (EISA) established new renewable fuel categories and eligibility requirements, setting mandatory lifecycle emissions limits. [ 73 ] [ 74 ] EISA explicitly mandated EPA to include "direct emissions and significant indirect emissions such as significant emissions from land use changes." [ 73 ] [ 74 ] [ 75 ] EISA required a 20% reduction in lifecycle GHG emissions for any fuel produced at facilities that commenced construction after December 19, 2007, to be classified as a "renewable fuel"; a 50% reduction for fuels to be classified as "biomass-based diesel" or "advanced biofuel", and a 60% reduction to be classified as "cellulosic biofuel". EISA provided limited flexibility to adjust these thresholds downward by up to 10 percent, and EPA proposed this adjustment for the advanced biofuels category. Existing plants were grandfathered in. [ 73 ] [ 74 ] [ 75 ] On May 5, 2009, EPA released a notice of proposed rulemaking for implementation of the 2007 modification of the Renewable Fuel Standard , known as RFS2. [ 75 ] [ 76 ] The draft of the regulations was released for public comment during a 60-day period, a public hearing was held on June 9, 2009, and also a workshop was conducted on June 10–11, 2009. [ 74 ] [ 75 ] EPA's draft analysis stated that ILUC could produce significant near-term GHG emissions due to land conversion but that biofuels can pay these back over subsequent years. 
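The EISA eligibility rules described above amount to a threshold lookup: each fuel category requires a minimum lifecycle GHG reduction relative to the petroleum baseline. A minimal sketch, using the category names and percentages given in the text (the function and data-structure names are illustrative, not from the statute):

```python
# Minimum lifecycle GHG reduction (vs. the petroleum baseline) that EISA
# requires for each renewable fuel category, per the percentages above.
EISA_THRESHOLDS = {
    "renewable fuel": 0.20,
    "biomass-based diesel": 0.50,
    "advanced biofuel": 0.50,
    "cellulosic biofuel": 0.60,
}

def qualifies(category, lifecycle_ghg_reduction):
    """True if a fuel's lifecycle GHG reduction meets the category's minimum.
    Ignores the grandfathering of existing plants and EPA's limited
    authority to adjust thresholds downward by up to 10 percent."""
    return lifecycle_ghg_reduction >= EISA_THRESHOLDS[category]
```

For example, a corn ethanol pathway scoring a 21% reduction clears the "renewable fuel" bar but none of the stricter categories.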
EPA highlighted two scenarios, varying the time horizon and the discount rate for valuing emissions. The first scenario assumed a 30-year period and a 0 percent discount rate (valuing emissions equally regardless of timing). The second scenario used a 100-year period and a 2% discount rate. [ 73 ] [ 74 ] [ 75 ] On the same day that EPA published its notice of proposed rulemaking, President Obama signed a Presidential Directive seeking to advance biofuels research and commercialization. The Directive established the Biofuels Interagency Working Group, to develop policy ideas for increasing investment in next-generation fuels and for reducing their environmental footprint. [ 33 ] [ 77 ] [ 78 ] [ 79 ] The inclusion of ILUC in the proposed ruling provoked complaints from ethanol [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] and biodiesel producers. [ 80 ] Several environmental organizations welcomed the inclusion of ILUC but criticized the consideration of a 100-year payback scenario, arguing that it underestimated land conversion effects. [ 35 ] [ 81 ] [ 82 ] [ 83 ] [ 84 ] American corn growers, biodiesel producers, ethanol producers and Brazilian sugarcane ethanol producers complained about EPA's methodology, [ 84 ] [ 85 ] [ 86 ] [ 87 ] while the oil industry requested an implementation delay. [ 84 ] [ 88 ] On June 26, 2009, the House of Representatives approved the American Clean Energy and Security Act by a vote of 219 to 212, directing EPA to exclude ILUC from RFS2 calculations for a five-year period. During this period, more research was to be conducted to develop more reliable models and methodologies for estimating ILUC, and Congress would review the issue before allowing EPA to rule on the matter. [ 89 ] [ 90 ] [ 91 ] [ 92 ] [ 93 ] The bill failed in the U.S. Senate. [ 94 ] [ 95 ] On February 3, 2010, EPA issued its final RFS2 rule for 2010 and beyond. [ 38 ] The rule incorporated direct and significant indirect emissions, including ILUC. EPA incorporated comments and data from new studies.
[ 39 ] Using a 30-year time horizon and a 0% discount rate, [ 96 ] EPA concluded that multiple biofuels would meet this standard. [ 97 ] EPA's analysis accepted both ethanol produced from corn starch and biobutanol from corn starch as "renewable fuels". Ethanol produced from sugarcane became an "advanced fuel". Diesel produced from algal oils, biodiesel from soy oil, and diesel from waste oils, fats, and greases fell into the "biomass-based diesel" category. Cellulosic ethanol and cellulosic diesel met the "cellulosic biofuel" standard. [ 39 ] The table summarizes the mean GHG emissions estimated by EPA modelling and the range of variations, considering that the main source of uncertainty in the life cycle analysis is the GHG emissions related to international land use change. [ 96 ] UNICA welcomed the ruling, in particular the more precise lifecycle emissions estimate, and hoped that the advanced biofuel designation would help eliminate the tariff. [ 98 ] [ 99 ] The U.S. Renewable Fuels Association (RFA) also welcomed the ruling, as ethanol producers "require stable federal policy that provides them the market assurances they need to commercialize new technologies", while restating their objection to ILUC. [ 100 ] RFA also complained that corn-based ethanol scored only a 21% reduction, noting that without ILUC, corn ethanol achieves a 52% GHG reduction. [ 100 ] [ 101 ] RFA also objected that Brazilian sugarcane ethanol "benefited disproportionally" because EPA's revisions lowered the initially equal ILUC estimates by half for corn and by 93% for sugarcane. [ 102 ] Several Midwestern lawmakers commented that they continued to oppose EPA's consideration of the "dicey science" of indirect land use that "punishes domestic fuels". [ 101 ] House Agriculture Chairman Collin Peterson said, "...
to think that we can credibly measure the impact of international indirect land use is completely unrealistic, and I will continue to push for legislation that prevents unreliable methods and unfair standards from burdening the biofuels industry." [ 101 ] EPA Administrator Lisa P. Jackson commented that the agency "did not back down from considering land use in its final rules, but the agency took new information into account that led to a more favorable calculation for ethanol". [ 101 ] She cited new science and better data on crop yield and productivity, more information on co-products that could be produced from advanced biofuels, and expanded land-use data for 160 countries, instead of the 40 considered in the proposed rule. [ 101 ] As of 2010, European Union and United Kingdom regulators had recognized the need to take ILUC into account, but had not determined the most appropriate methodology. The UK Renewable Transport Fuel Obligation (RTFO) program requires fuel suppliers to report direct impacts, and asked the Renewable Fuels Agency (RFA) to report potential indirect impacts, including ILUC and commodity price changes. [ 14 ] The RFA's July 2008 "Gallagher Review" mentioned several risks regarding biofuels and required feedstock production to avoid agricultural land that would otherwise be used for food production, despite concluding that "quantification of GHG emissions from indirect land-use change requires subjective assumptions and contains considerable uncertainty". [ 40 ] Some environmental groups argued that emissions from ILUC were not being taken into account and could be creating more emissions. [ 103 ] [ 104 ] [ 105 ] On December 17, 2008, the European Parliament approved the Renewable Energy Sources Directive (COM(2008)19) and amendments to the Fuel Quality Directive (Directive 2009/30), [ 106 ] which included sustainability criteria for biofuels and mandated consideration of ILUC. The Directive established a 10% biofuel target. 
A separate Fuel Quality Directive set the EU's Low Carbon Fuel Standard , requiring a 6% reduction in GHG intensity of EU transport fuels by 2020. The legislation ordered the European Commission to develop a methodology to factor in GHG emissions from ILUC by December 31, 2010, based on the best available scientific evidence. [ 41 ] [ 52 ] [ 107 ] In the meantime, the European Parliament defined lands ineligible for producing biofuel feedstocks for the Directives. This category included wetlands and continuously forested areas with canopy cover of more than 30 percent, or cover between 10 and 30 percent unless evidence showed that their existing carbon stock was low enough to justify conversion. [ 41 ] The Commission subsequently published terms of reference for three ILUC modeling exercises: one using a General Equilibrium model; [ 108 ] one using a Partial Equilibrium model; [ 109 ] and one comparing other global modeling exercises. [ 110 ] It also consulted on a limited range of high-level options for addressing ILUC, [ 111 ] to which 17 countries [ 112 ] and 59 organizations responded. [ 113 ] The United Nations Special Rapporteur on the Right to Food and several environmental organizations complained that the 2008 safeguards were inadequate. [ 114 ] [ 115 ] [ 116 ] [ 117 ] UNICA called for regulators to establish an empirical and "globally accepted methodology" to consider ILUC, with the participation of researchers and scientists from biofuel crop-producing countries. [ 118 ] In 2010 some NGOs accused the European Commission of lacking transparency, given its reluctance to release documents relating to the ILUC work. [ 119 ] In March 2010 the Partial and General Equilibrium Modelling results were made available, with the disclaimer that the EC had not adopted the views contained in the materials. [ 120 ] These indicated that a 1.25% increase in EU biofuel consumption would require around 5,000,000 hectares (12,000,000 acres) of land globally. 
[ 121 ] The scenarios modeled biofuel shares ranging from 5.6 to 8.6% of road transport fuels. The study found that ILUC effects offset part of the emission benefits and that above the 5.6% threshold, ILUC emissions increase rapidly. [ 122 ] [ 123 ] For the expected scenario of 5.6% by 2020, the study estimated that biodiesel production increases would be primarily domestic, while bioethanol production would take place mainly in Brazil, regardless of EU duties. [ 122 ] The analysis concluded that eliminating trade barriers would further reduce emissions, because the EU would import more from Brazil. [ 121 ] Under this scenario, "direct emission savings from biofuels are estimated at 18 Mt CO 2 , additional emissions from ILUC at 5.3 Mt CO 2 (mostly in Brazil), resulting in a global net balance of nearly 13 Mt CO 2 savings in a 20 years horizon". [ 122 ] The study also found that ILUC emissions were much greater for biodiesel from vegetable oil, and estimated that in 2020, even at the 5.6% level, these would amount to over half the greenhouse gas emissions of fossil diesel. [ 122 ] [ 123 ] As part of the announcement, the Commission said it would publish a report on ILUC by the end of 2010. [ 124 ] On June 10, 2010, the EC announced its decision to set up certification schemes for biofuels, including imports, as part of the Renewable Energy Directive. The Commission encouraged E.U. nations, industry, and NGOs to set up voluntary certification schemes. [ 125 ] [ 126 ] EC figures for 2007 showed that 26% of biodiesel and 31% of bioethanol used in the E.U. was imported, mainly from Brazil and the United States . [ 127 ] UNICA welcomed the EU efforts to "engage independent experts in its assessments" but requested improvements, because "... the report currently contains a certain number of inaccuracies, so once these are corrected, we anticipate even higher benefits resulting from the use of Brazilian sugarcane ethanol." 
[ 128 ] UNICA highlighted the fact that the report assumed land expansion that "does not take into consideration the agro-ecological zoning for sugarcane in Brazil, which prevents cane from expanding into any type of native vegetation." [ 128 ] Critics said the 10% figure was reduced to 5.6% of transport fuels partly by exaggerating the contribution of electric vehicles (EVs) in 2020, as the study assumed EVs would represent 20% of new car sales, between two and six times the car industry's own estimates. [ 129 ] They also claimed the study "exaggerates to around 45 percent the contribution of bioethanol—the greenest of all biofuels—and consequently downplays the worst impacts of biodiesel." [ 129 ] Environmental groups found that the measures "are too weak to halt a dramatic increase in deforestation". [ 127 ] [ 130 ] According to Greenpeace , "indirect land-use change impacts of biofuel production still are not properly addressed", which for them was the most dangerous problem of biofuels. [ 130 ] Industry representatives welcomed the certification system, but some dismissed concerns regarding the lack of land use criteria. [ 127 ] [ 131 ] [ 132 ] UNICA and other industry groups wanted the gaps in the rules filled to provide a clear operating framework. [ 131 ] [ 132 ] Negotiations between the European Parliament and the Council of European Ministers continue; a deal is not foreseen before 2014. [ 133 ]
https://en.wikipedia.org/wiki/Indirect_land_use_change_impacts_of_biofuels
In computer programming , an indirection (also called a reference ) is a way of referring to something using a name, reference, or container instead of the value itself. The most common form of indirection is the act of manipulating a value through its memory address , for example by accessing a variable through a pointer . A stored pointer that exists to provide a reference to an object by double indirection is called an indirection node . In some older computer architectures, indirect words supported a variety of more-or-less complicated addressing modes . Another important example is the domain name system , which enables names such as en.wikipedia.org to be used in place of network addresses such as 208.80.154.224 . The indirection from human-readable names to network addresses means that references to a web page become more memorable, and links do not need to change when a web site is relocated to a different server. A famous aphorism of Butler Lampson , attributed to David Wheeler , goes: "All problems in computer science can be solved by another level of indirection" (the " fundamental theorem of software engineering "). [ 1 ] This is often deliberately misquoted with " abstraction layer " substituted for "level of indirection". A corollary to this aphorism, and Wheeler's original conclusion, is "...except for the problem of too many layers of indirection." A humorous Internet memorandum , RFC 1925 , insists that: (6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture ) than it is to solve it. Object-oriented programming makes use of indirection extensively, a simple example being dynamic dispatch . Higher-level examples of indirection are the design patterns of the proxy and the proxy server . Delegation is another classic example of an indirection pattern. 
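A minimal sketch of the name-lookup idea described above (hypothetical names and addresses; the real DNS is far more elaborate): clients hold a name, and a lookup table maps the name to the current value, so the value can change without the clients changing.

```python
# One level of indirection: a registry maps a stable name to a value
# that may change over time (the address in this illustration is real
# only in the sense that it appears in the text above).

registry = {"en.wikipedia.org": "208.80.154.224"}

def resolve(name):
    """Dereference the name through the registry (name -> address)."""
    return registry[name]

# The site "moves" to a new (hypothetical) server; links keep working
# because they reference the name, not the address:
registry["en.wikipedia.org"] = "198.51.100.7"
print(resolve("en.wikipedia.org"))  # → 198.51.100.7

# Double indirection: a handle refers to a slot that holds the name;
# dereference twice to reach the value (handle -> name -> address).
handle = ["en.wikipedia.org"]
address = resolve(handle[0])
```

The same structure underlies pointers: the registry plays the role of memory, names play the role of addresses, and `resolve` plays the role of dereferencing.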
In strongly typed interpreted languages with dynamic data types , most variable references require a level of indirection: first the type of the variable is checked for safety, and then the pointer to the actual value is dereferenced and acted on. Recursive data types are usually implemented using indirection, because otherwise, if a value of a data type could contain the entirety of another value of the same type, there would be no limit to the size a value could need. When doing symbolic programming from a formal mathematical specification, the use of indirection can be quite helpful. To start with a simple example, the variables x , y and z in an equation such as z = x 2 + y 2 {\textstyle z={\sqrt {x^{2}+y^{2}}}} can refer to any number. One could imagine objects for various numbers, with x , y and z pointing to the specific numbers being used for a particular problem. This simple example has its limitations, as there are infinitely many real numbers. In various other parts of symbolic programming there are only so many symbols. To move on to a more significant example: in logic, the formula α can refer to any formula, so it could be β , γ , δ , ... or η → π , ς ∨ σ , ... When set-builder notation is employed, the statement Δ={ α } means the set of all formulae, so although the reference is to α there are two levels of indirection here: the first to the set of all α, and the second to a specific formula for each occurrence of α in the set Δ.
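The point about recursive data types can be sketched with a linked list, where each node holds a reference to the next node rather than an embedded copy of it, so every node has a fixed size while the structure as a whole can grow without bound (illustrative Python; the names are invented):

```python
# Sketch: a recursive data type built with indirection. Each Node
# stores a *reference* to the next Node (or None), not the next Node
# itself, so node size is constant regardless of list length.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # indirection: a reference to another Node

def to_list(node):
    """Walk the chain, following one level of indirection per step."""
    out = []
    while node is not None:
        out.append(node.value)
        node = node.next
    return out

chain = Node(1, Node(2, Node(3)))
print(to_list(chain))  # → [1, 2, 3]
```

In languages with value types, the same idea appears explicitly, e.g. a pointer member in C or a boxed field in Rust; in Python every object field is already a reference, so the indirection is implicit.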
https://en.wikipedia.org/wiki/Indirection
In mathematical logic , indiscernibles are objects that cannot be distinguished by any property or relation defined by a formula . Usually only first-order formulas are considered. If a , b , and c are distinct and { a , b , c } is a set of indiscernibles , then, for example, for each binary formula β {\displaystyle \beta } , we must have ( β ( a , b ) ∧ β ( b , a ) ∧ β ( a , c ) ∧ β ( c , a ) ∧ β ( b , c ) ∧ β ( c , b ) ) ∨ ( ¬ β ( a , b ) ∧ ¬ β ( b , a ) ∧ ¬ β ( a , c ) ∧ ¬ β ( c , a ) ∧ ¬ β ( b , c ) ∧ ¬ β ( c , b ) ) . Historically, the identity of indiscernibles was one of the laws of thought of Gottfried Leibniz . In some contexts one considers the more general notion of order-indiscernibles , and the term sequence of indiscernibles often refers implicitly to this weaker notion. In our example of binary formulas, to say that the triple ( a , b , c ) of distinct elements is a sequence of indiscernibles implies only ( β ( a , b ) ∧ β ( a , c ) ∧ β ( b , c ) ) ∨ ( ¬ β ( a , b ) ∧ ¬ β ( a , c ) ∧ ¬ β ( b , c ) ) . More generally, for a structure A {\displaystyle {\mathfrak {A}}} with domain A {\displaystyle A} and a linear ordering < {\displaystyle <} , a set I ⊆ A {\displaystyle I\subseteq A} is said to be a set of < {\displaystyle <} -indiscernibles for A {\displaystyle {\mathfrak {A}}} if for any finite subsets { i 0 , … , i n } ⊆ I {\displaystyle \{i_{0},\ldots ,i_{n}\}\subseteq I} and { j 0 , … , j n } ⊆ I {\displaystyle \{j_{0},\ldots ,j_{n}\}\subseteq I} with i 0 < … < i n {\displaystyle i_{0}<\ldots <i_{n}} and j 0 < … < j n {\displaystyle j_{0}<\ldots <j_{n}} and any first-order formula ϕ {\displaystyle \phi } of the language of A {\displaystyle {\mathfrak {A}}} with n {\displaystyle n} free variables, A ⊨ ϕ ( i 0 , … , i n ) ⟺ A ⊨ ϕ ( j 0 , … , j n ) {\displaystyle {\mathfrak {A}}\vDash \phi (i_{0},\ldots ,i_{n})\iff {\mathfrak {A}}\vDash \phi (j_{0},\ldots ,j_{n})} . [ 1 ] p. 2 Order-indiscernibles feature prominently in the theory of Ramsey cardinals , Erdős cardinals , and zero sharp .
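A standard concrete example (not drawn from the article itself, but a textbook illustration of the definition): in the dense linear order of the rationals, every increasing tuple satisfies the same first-order formulas, so the whole domain is a set of order-indiscernibles, although not of full indiscernibles.

```latex
% Example: \mathbb{Q} as order-indiscernibles for (\mathbb{Q}, <).
% Any order-isomorphism between finite subsets of \mathbb{Q} extends to
% an automorphism of (\mathbb{Q}, <), so for any increasing tuples
% q_0 < \cdots < q_n and r_0 < \cdots < r_n and any first-order \phi:
\[
  (\mathbb{Q},<) \vDash \phi(q_0,\ldots,q_n)
  \iff
  (\mathbb{Q},<) \vDash \phi(r_0,\ldots,r_n).
\]
% Hence \mathbb{Q} is a set of <-indiscernibles for (\mathbb{Q}, <).
% It is NOT a set of full indiscernibles: the formula x < y holds of the
% increasing pair (q_0, q_1) but fails of the reversed pair (q_1, q_0).
```

This illustrates why the order-indiscernibility notion is strictly weaker: it only compares tuples taken in increasing order.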
https://en.wikipedia.org/wiki/Indiscernibles
Indium(III) hydroxide is the chemical compound with the formula In ( O H ) 3 . Its prime use is as a precursor to indium(III) oxide , In 2 O 3 . [ 1 ] It is sometimes found as the rare mineral dzhalindite . Indium(III) hydroxide has a cubic structure, space group Im3, a distorted ReO 3 structure. [ 2 ] [ 3 ] Neutralizing a solution containing an In 3+ salt such as indium nitrate ( In(NO 3 ) 3 ) or a solution of indium trichloride ( InCl 3 ) gives a white precipitate that on aging forms indium(III) hydroxide. [ 4 ] [ 5 ] A thermal decomposition of freshly prepared In(OH) 3 shows the first step is the conversion of In(OH) 3 · x H 2 O to cubic indium(III) hydroxide. [ 4 ] The precipitation of indium hydroxide was a step in the separation of indium from zincblende ore by Reich and Richter , the discoverers of indium. [ 6 ] Indium(III) hydroxide is amphoteric , like gallium(III) hydroxide ( Ga(OH) 3 ) and aluminium hydroxide ( Al(OH) 3 ), but is much less acidic than gallium hydroxide ( Ga(OH) 3 ), [ 5 ] having a lower solubility in alkaline solutions than in acid solutions. [ 7 ] It is for all intents and purposes a basic hydroxide. [ 8 ] Dissolving indium(III) hydroxide in strong alkali gives solutions that probably contain either four coordinate [In(OH) 4 ] − or [In(OH) 4 (H 2 O)] − . [ 8 ] Reaction with acetic acid or carboxylic acids is likely to give the basic acetate or carboxylate salt, e.g. (CH 3 COO) 2 In(OH) . [ 7 ] At 10 MPa pressure and 250-400 °C, indium(III) hydroxide converts to indium oxide hydroxide (InO(OH)), which has a distorted rutile structure. [ 5 ] Rapid decompression of samples of indium(III) hydroxide compressed at 34 GPa causes decomposition, yielding some indium metal. [ 9 ] Laser ablation of indium(III) hydroxide gives indium(I) hydroxide (InOH), a bent molecule with an In-O-H angle of around 132° and an In-O bond length of 201.7 pm. [ 10 ]
https://en.wikipedia.org/wiki/Indium(III)_hydroxide
Indium(III) oxide ( In 2 O 3 ) is a chemical compound , an amphoteric oxide of indium . Amorphous indium oxide is insoluble in water but soluble in acids, whereas crystalline indium oxide is insoluble in both water and acids. The crystalline form exists in two phases, the cubic ( bixbyite type) [ 1 ] and rhombohedral ( corundum type ). Both phases have a band gap of about 3 eV. [ 3 ] [ 4 ] The parameters of the cubic phase are listed in the infobox. The rhombohedral phase is produced at high temperatures and pressures or when using non-equilibrium growth methods. [ 5 ] It has a space group R 3 c No. 167, Pearson symbol hR30, a = 0.5487 nm, b = 0.5487 nm, c = 1.4510 nm, Z = 6 and calculated density 7.31 g/cm 3 . [ 6 ] Thin films of chromium - doped indium oxide (In 2−x Cr x O 3 ) are a magnetic semiconductor displaying high-temperature ferromagnetism , single- phase crystal structure, and semiconductor behavior with high concentration of charge carriers . It has possible applications in spintronics as a material for spin injectors. [ 7 ] Thin polycrystalline films of indium oxide doped with Zn 2+ are highly conductive (conductivity ~10 5 S/m) and even superconductive at liquid helium temperatures. The superconducting transition temperature T c depends on the doping and film structure and is below 3.3 K. [ 8 ] Bulk samples can be prepared by heating indium(III) hydroxide or the nitrate, carbonate or sulfate. [ 9 ] Thin films of indium oxide can be prepared by sputtering of indium targets in an argon / oxygen atmosphere. They can be used as diffusion barriers (" barrier metals ") in semiconductors , e.g. to inhibit diffusion between aluminium and silicon . [ 10 ] Monocrystalline nanowires can be synthesized from indium oxide by laser ablation, allowing precise diameter control down to 10 nm. Field effect transistors were fabricated from those. [ 11 ] Indium oxide nanowires can serve as sensitive and specific redox protein sensors . 
[ 12 ] The sol–gel method is another way to prepare nanowires. Indium oxide can serve as a semiconductor material , forming heterojunctions with p - InP , n - GaAs , n- Si , and other materials. A layer of indium oxide on a silicon substrate can be deposited from an indium trichloride solution, a method useful for the manufacture of solar cells . [ 13 ] When heated to 700 °C, indium(III) oxide forms In 2 O (called indium(I) oxide or indium suboxide); at 2000 °C it decomposes. [ 9 ] It is soluble in acids but not in alkali. [ 9 ] With ammonia at high temperature indium nitride is formed: [ 14 ] With K 2 O and indium metal the compound K 5 InO 4 containing tetrahedral InO 4 5− ions was prepared. [ 15 ] Reaction with a range of metal trioxides produces perovskites , [ 16 ] for example: Indium oxide is used in some types of batteries, thin film infrared reflectors transparent for visible light ( hot mirrors ), some optical coatings , and some antistatic coatings . In combination with tin dioxide , indium oxide forms indium tin oxide (also called tin-doped indium oxide or ITO), a material used for transparent conductive coatings. In semiconductors, indium oxide can be used as an n-type semiconductor and as a resistive element in integrated circuits . [ 17 ] In histology , indium oxide is used as a part of some stain formulations.
https://en.wikipedia.org/wiki/Indium(III)_oxide
Indium(III) selenide is a compound of indium and selenium . It has potential for use in photovoltaic devices and has been the subject of extensive research. The two most common phases, α and β, have a layered structure, while γ has a "defect wurtzite structure ." In all, five polymorphs are known: α, β, γ, δ, κ. [ 1 ] The α-β phase transition is accompanied by a change in electrical conductivity. [ 2 ] The band gap of γ-In 2 Se 3 is approximately 1.9 eV. The method of production influences the polymorph generated. For example, thin films of pure γ-In 2 Se 3 have been produced from trimethylindium (InMe 3 ) and hydrogen selenide via MOCVD techniques. [ 3 ] A conventional route entails heating the elements in a sealed tube: [ 4 ] Greenwood, Norman N. ; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann . ISBN 978-0-08-037941-8 .
https://en.wikipedia.org/wiki/Indium(III)_selenide
Indium(III) sulfide (Indium sesquisulfide, Indium sulfide (2:3), Indium (3+) sulfide) is the inorganic compound with the formula In 2 S 3 . It has a "rotten egg" odor characteristic of sulfur compounds, and produces hydrogen sulfide gas when reacted with mineral acids. [ 2 ] Three different structures (" polymorphs ") are known: yellow, α-In 2 S 3 has a defect cubic structure, red β-In 2 S 3 has a defect spinel , tetragonal, structure, and γ-In 2 S 3 has a layered structure. The red, β, form is considered to be the most stable form at room temperature, although the yellow form may be present depending on the method of production. In 2 S 3 is attacked by acids and by sulfide. It is slightly soluble in Na 2 S. [ 3 ] Indium sulfide was the first indium compound ever described, being reported in 1863. [ 4 ] Reich and Richter determined the existence of indium as a new element from the sulfide precipitate. In 2 S 3 features tetrahedral In(III) centers linked to four sulfido ligands. α-In 2 S 3 has a defect cubic structure. The polymorph undergoes a phase transition at 420 °C and converts to the spinel structure of β-In 2 S 3 . Another phase transition at 740 °C produces the layered γ-In 2 S 3 polymorph. [ 5 ] β-In 2 S 3 has a defect spinel structure. The sulfide anions are closely packed in layers, with octahedrally-coordinated In(III) cations present within the layers, and tetrahedrally-coordinated In(III) cations between them. A portion of the tetrahedral interstices are vacant, which leads to the defects in the spinel. [ 6 ] β-In 2 S 3 has two subtypes. In the T-In 2 S 3 subtype, the tetragonally-coordinated vacancies are in an ordered arrangement, whereas the vacancies in C-In 2 S 3 are disordered. The disordered subtype of β-In 2 S 3 shows activity for photocatalytic H 2 production with a noble metal cocatalyst, but the ordered subtype does not. [ 7 ] β-In 2 S 3 is an N-type semiconductor with an optical band gap of 2.1 eV. 
It has been proposed to replace the hazardous cadmium sulfide , CdS, as a buffer layer in solar cells, [ 8 ] and as an additional semiconductor to increase the performance of TiO 2 -based photovoltaics . [ 7 ] The unstable γ-In 2 S 3 polymorph has a layered structure. Indium sulfide is usually prepared by direct combination of the elements. Production from volatile complexes of indium and sulfur, for example dithiocarbamates (e.g. Et 2 In III S 2 CNEt 2 ), has been explored for vapor deposition techniques. [ 9 ] Thin films of the beta complex can be grown by chemical spray pyrolysis . Solutions of In(III) salts and organic sulfur compounds (often thiourea ) are sprayed onto preheated glass plates, where the chemicals react to form thin films of indium sulfide. [ 10 ] Changing the temperature at which the chemicals are deposited and the In:S ratio can affect the optical band gap of the film. [ 11 ] Single-walled indium sulfide nanotubes can be formed in the laboratory, by the use of two solvents (one in which the compound dissolves poorly and one in which it dissolves well). There is partial replacement of the sulfido ligands with O 2− , and the compound forms thin nanocoils, which self-assemble into arrays of nanotubes with diameters on the order of 10 nm, and walls approximately 0.6 nm thick. The process mimics protein crystallization . [ 12 ] The β-In 2 S 3 polymorph, in powdered form, can irritate eyes, skin and respiratory organs. It is toxic if swallowed, but can be handled safely under conventional laboratory conditions. It should be handled with gloves, and care should be taken to keep from inhaling the compound, and to keep it from contact with the eyes. [ 13 ] There is considerable interest in using In 2 S 3 to replace the semiconductor CdS (cadmium sulfide) in photoelectronic devices. 
β-In 2 S 3 has a tunable band gap, which makes it attractive for photovoltaic applications, [ 11 ] and it shows promise when used in conjunction with TiO 2 in solar panels, indicating that it could replace CdS in that application as well. [ 7 ] Cadmium sulfide is toxic and must be deposited with a chemical bath , [ 14 ] but indium(III) sulfide shows few adverse biological effects and can be deposited as a thin film through less hazardous methods. [ 11 ] [ 14 ] Thin films of β-In 2 S 3 can be grown with varying band gaps, which makes them widely applicable as photovoltaic semiconductors, especially in heterojunction solar cells . [ 11 ] Plates coated with β-In 2 S 3 nanoparticles can be used efficiently for PEC (photoelectrochemical) water splitting. [ 15 ] A preparation of indium sulfide made with the radioactive 113 In can be used as a lung scanning agent for medical imaging . [ 16 ] It is taken up well by lung tissues, but does not accumulate there. In 2 S 3 nanoparticles luminesce in the visible spectrum. Preparing In 2 S 3 nanoparticles in the presence of other heavy metal ions creates highly efficient blue, green, and red phosphors , which can be used in projectors and instrument displays. [ 17 ]
https://en.wikipedia.org/wiki/Indium(III)_sulfide
Indium(III) telluride ( In 2 Te 3 ) is an inorganic compound . A black solid, it is sometimes described as an intermetallic compound , because it has properties that are both metal-like and salt-like. It is a semiconductor that has attracted occasional interest for its thermoelectric and photovoltaic applications, although none has been implemented commercially. [ 2 ] A conventional route to the compound entails heating the elements in a sealed tube: [ 3 ] Indium(III) telluride reacts with strong acids to produce hydrogen telluride .
https://en.wikipedia.org/wiki/Indium(III)_telluride
Indium (111In) altumomab pentetate ( INN ) ( USP , indium In 111 altumomab pentetate; trade name Hybri-ceaker ) is a mouse monoclonal antibody linked to pentetate which acts as a chelating agent for the radioisotope indium-111 . The drug is used for the diagnosis of colorectal cancer [ 1 ] [ 2 ] but has not been approved for use. [ 3 ]
https://en.wikipedia.org/wiki/Indium_(111In)_altumomab_pentetate
Indium ( 111 In) capromab pendetide (trade name Prostascint ) is used to image the extent of prostate cancer . [ 1 ] Capromab is a mouse monoclonal antibody which recognizes a protein found on both prostate cancer cells and normal prostate tissue. It is linked to pendetide , a derivative of DTPA . [ 2 ] Pendetide acts as a chelating agent for the radionuclide indium-111 . Following an intravenous injection of Prostascint, imaging is performed using single-photon emission computed tomography (SPECT). [ 1 ] Early trials with yttrium ( 90 Y ) capromab pendetide were also conducted. [ 3 ]
https://en.wikipedia.org/wiki/Indium_(111In)_capromab_pendetide
Indium ( 111 In) satumomab pendetide (trade name OncoScint CR103 ) is a mouse monoclonal antibody which is used for cancer diagnosis. [ 1 ] The antibody, satumomab, is linked to pendetide , a derivative of DTPA . Pendetide acts as a chelating agent for the radionuclide indium-111 .
https://en.wikipedia.org/wiki/Indium_(111In)_satumomab_pendetide
Indium perchlorate is the inorganic compound with the chemical formula In(ClO 4 ) 3 . [ 1 ] The compound is an indium salt of perchloric acid . [ 2 ] [ 3 ] It can be prepared by dissolving indium hydroxide in perchloric acid : In(OH) 3 + 3 HClO 4 → In(ClO 4 ) 3 + 3 H 2 O. Indium(III) perchlorate forms colorless crystals. It is soluble in water and ethanol . The compound forms a hydrate, In(ClO 4 ) 3 • 8H 2 O , which melts in its own water of crystallization at 80 °C. [ 4 ] The octahydrate is easily soluble in ethanol and acetic acid .
https://en.wikipedia.org/wiki/Indium_perchlorate
Individual action on climate change describes the personal choices that everyone can make to reduce the greenhouse gas emissions of their lifestyles and catalyze climate action . These actions can focus directly on how choices create emissions, such as reducing consumption of meat or flying, or can focus more on inviting political action on climate or creating greater awareness of how society can become more green. Excessive consumption is a more significant contributor to climate change and other environmental issues than population increase, although some experts contend that population remains a significant factor. [ 1 ] High consumption lifestyles have a greater environmental impact, with the richest 10% of people emitting about half the total lifestyle emissions. [ 2 ] [ 3 ] Creating changes in personal lifestyle can change social and market conditions, leading to less environmental impact. People who wish to reduce their carbon footprint (particularly those in high income countries with high consumption lifestyles) can, for example, reduce their air travel for holidays, use bicycles instead of cars on a daily basis, eat a plant-based diet , and use consumer products for longer. [ 4 ] Avoiding meat and dairy products has been called "the single biggest way" individuals can reduce their environmental impacts. [ 5 ] Some commentators say that actions taken by individual consumers, such as adopting a sustainable lifestyle , are insignificant compared to actions on the political level . [ 6 ] Others say that individual action does lead to collective action because " lifestyle change can build momentum for systemic change ." [ 7 ] [ 8 ] Other commentators have highlighted how the concept of the individual carbon footprint was advanced by fossil fuel companies, such as British Petroleum, in order to shift culpability away from the fossil fuel industry. 
[ 9 ] [ 10 ] As of 2021 [update] the remaining carbon budget for a 50-50 chance of staying below 1.5 degrees of warming is 460 bn tonnes of CO 2 or 11 + 1 ⁄ 2 years at 2020 emission rates. [ 14 ] Global average greenhouse gas per person per year in the late 2010s was about 7 tonnes [ 15 ] – including 0.7 tonnes CO 2 eq food, 1.1 tonnes from the home, and 0.8 tonnes from transport. [ 16 ] Of this about 5 tonnes was actual carbon dioxide. [ 17 ] To meet the Paris Agreement target of under 1.5 degrees warming by the end of the century, it is estimated that the annual carbon footprint per person required by 2030 is 2.3 tonnes. [ 18 ] [ needs update ] As of 2020 [update] the average Indian almost meets this target, [ 19 ] the average person in France [ 20 ] or China overshoots it, and the average person in the US and Australia vastly overshoots it. Per capita emissions also vary significantly within countries, with wealthier individuals creating more emissions. [ 21 ] [ 22 ] A 2015 Oxfam report calculated that the wealthiest 10% of the global population were responsible for half of all greenhouse gas emissions. [ 23 ] According to a 2021 report by the UN, the wealthiest 5% contributed nearly 40% of emissions growth from 1990 to 2015. [ 24 ] The IPCC Sixth Assessment Report pointed out in 2022: "To enhance well-being, people demand services and not primary energy and physical resources per se. Focusing on demand for services and the different social and political roles people play broadens the participation in climate action ." [ 25 ] : TS-98 The report explains that behavior, lifestyle , and cultural change have a high climate change mitigation potential in some sectors, particularly when complementing technological and structural change. 
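The budget and per-person figures quoted above imply some simple arithmetic, sketched here with the section's own numbers (a back-of-envelope illustration, not a climate model):

```python
# Back-of-envelope arithmetic using the figures quoted in the text:
# a 460 Gt CO2 remaining budget lasting 11.5 years at 2020 rates, a
# late-2010s average footprint of ~7 t/person/yr, and a 2.3 t/person
# target for 2030.

BUDGET_GT = 460.0    # remaining budget for 50-50 odds of 1.5 C (Gt CO2)
YEARS_LEFT = 11.5    # years at 2020 emission rates

# Implied global emission rate in 2020:
annual_gt = BUDGET_GT / YEARS_LEFT
print(f"Implied 2020 emission rate: {annual_gt:.0f} Gt CO2/yr")  # → 40

# Cut required to go from the late-2010s average to the 2030 target:
avg_footprint_t = 7.0   # tonnes CO2-eq per person per year
target_2030_t = 2.3     # tonnes per person per year by 2030
print(f"Required per-person cut: {1 - target_2030_t / avg_footprint_t:.0%}")
```

The two results (roughly 40 Gt CO2 per year globally, and a cut of about two-thirds per person) are simply restatements of the section's figures, shown to make their consistency explicit.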
[ 26 ] : 5–3 The term "carbon footprint" was originally coined and popularized by the "Beyond Petroleum" ad campaign of 2004–2006, funded by British Petroleum (BP) , which critics have accused of promoting the concept to downplay the company's own culpability. [ 9 ] [ 10 ] In 2008 the World Health Organization wrote that "Your ' carbon footprint ' is a measure of the impact your activities have on the amount of carbon dioxide (CO 2 ) produced through the burning of fossil fuels". [ 27 ] In 2019 the Institute for Global Environmental Strategies in Japan defined "lifestyle carbon footprint" as "GHG emissions directly emitted and indirectly induced from the final consumption of households, excluding those induced by government consumption and capital formation such as infrastructure." [ 28 ] : v However, an Oxfam and SEI study in 2020 estimated per capita CO 2 emissions rather than CO 2 -equivalent, and allocated all consumption emissions to individuals rather than just household consumption. [ 29 ] According to a 2020 review, many academic studies do not properly explain the scope of the "personal carbon footprint" they study. [ 30 ] A comparison of travel options shows: Walking and biking emit little to no greenhouse gases and are healthy alternatives to driving or riding public transportation. [ 31 ] There are also increasing numbers of bike-sharing services in urban environments. [ 32 ] Reliable public transportation can be one of the most viable alternatives to driving personal vehicles. [ 33 ] While there are efficiency problems associated with public transportation (waiting times, missed transfers, unreliable schedules, energy consumption), these can be improved as funding and public interest increase and technology advances. [ 34 ] A 2022 survey found that 33% of car buyers in Europe would opt for a petrol or diesel car when purchasing a new vehicle, while 67% of the respondents mentioned opting for a hybrid or electric version. 
[ 35 ] [ 36 ] In the EU, only 13% of the total population do not plan on owning a vehicle at all. [ 35 ] Chinese car buyers, at 44%, are the most likely to buy an electric car. [ 35 ] [ 37 ] There are many alternatives to personal car use, but the use of a personal vehicle may be necessary for reasons of location and accessibility. [ 38 ] The life cycle assessment of a vehicle evaluates the environmental impact of producing the vehicle and its spare parts, the vehicle's fuel consumption, and what happens to the vehicle at the end of its lifespan. [ 39 ] These environmental impacts can be measured in greenhouse gas emissions , solid waste produced, and consumption of energy resources, among other factors. [ 40 ] [ 39 ] Increasingly common alternatives to internal-combustion-engine vehicles are electric vehicles (EVs) and hybrid-electric vehicles . [ 41 ] Carpooling and ride-sharing services are also alternatives to personal transportation. Carpooling reduces the number of cars on the road, in turn reducing traffic and energy consumption. [ 42 ] Ride-sharing services like Uber and Lyft could be viable options for transportation, but according to the Union of Concerned Scientists , ride-share trips currently result in an estimated 69% increase in climate pollution on average. [ 43 ] There are more vehicles on the road as a result of passengers who would otherwise have taken public transportation, walked, or biked to their destination. [ 43 ] Ride-sharing services can reduce emissions if they implement strategies such as electrifying vehicles and increasing carpooling trips. [ 43 ] Air travel is one of the most emission-intensive modes of transportation. [ 44 ] Currently, the most effective way to reduce personal emissions from air travel is to fly less.
[ 45 ] [ 46 ] New technologies are being developed to allow more efficient fuel consumption and planes powered by electricity. [ 46 ] Avoiding air travel, and particularly frequent flyer programs , [ 47 ] has a high benefit because their convenience makes frequent, long-distance travel easy, and high-altitude emissions are more potent for the climate than the same emissions made at ground level. Aviation is much more difficult to fix technically than surface transport, [ 48 ] so it will need more individual action in future if the Carbon Offsetting and Reduction Scheme for International Aviation cannot be made to work properly. [ 49 ] Flying is responsible for 5 percent of global warming. [ 50 ] Compared to longer routes, shorter flights produce larger greenhouse gas emissions per passenger-mile, so individuals may consider train travel instead, although this can be more expensive because of aviation subsidies. [ 51 ] Aircraft damage the environment by releasing carbon dioxide along with nitrogen oxides , an atmospheric pollutant. Exhaust emissions lead to changes in the amounts of the greenhouse gases ozone and methane . [ 52 ] Avoiding night flights may help, as contrails may account for over half of aviation's climate change impact. [ 53 ] Climate change is a factor that 67% of Europeans consider when choosing where to go on holiday. 52% of Europeans, including 37% of people aged 30–64 and 25% of people aged over 65, stated that in 2022 they would choose to travel by plane. 27% of young people said they would travel to a faraway destination. People under the age of 30 are more likely to consider the climate implications of vacation spots and air travel.
[ 54 ] [ 55 ] Reducing home energy use through measures such as insulation , better energy efficiency of appliances, cool roofs , heat-reflective paints, [ 56 ] lowering the water heater temperature, and improving heating and cooling efficiency can significantly reduce an individual's carbon footprint. [ 57 ] After home insulation and ventilation have been checked, replacing a failed gas boiler with a heat pump makes a considerable difference, [ 58 ] especially in climates where both heating and cooling are required. [ 59 ] In addition, the choice of energy used to heat, cool, and power homes affects the carbon footprint of individual homes. [ 60 ] Many energy suppliers in countries worldwide offer options to purchase partly or fully " green energy " (usually electricity but occasionally also gas). [ 61 ] These methods of energy production emit almost no greenhouse gases once they are up and running. Installing rooftop solar , at both household and community scale, also drastically reduces household emissions, and at scale could be a major contributor to greenhouse gas abatement. [ 62 ] [ 63 ] Labels such as Energy Star , a U.S. program that promotes energy efficiency, can be seen on many household appliances, home electronics, office equipment, heating and cooling equipment, windows, residential light fixtures, and other products. When buying air conditioning, the choice of coolant is important. [ 65 ] Carbon emission labels describe the carbon dioxide emissions created as a by-product of manufacturing, transporting, or disposing of a consumer product . Environmental Product Declarations (EPD) "present transparent, verified and comparable information about the life-cycle environmental impact of products". [ 66 ] These labels may help consumers choose lower-energy products. Converting appliances such as stoves, water heaters and furnaces from gas to electric reduces emissions of CO2 and methane .
[ 67 ] Plants process carbon dioxide into organic molecules such as cellulose , sugars, starches, plant proteins, and oils. Perennials retain a large proportion of those organic molecules for as long as they live, not releasing them until microorganisms decompose them after death. Perennial plants like trees and shrubs thus contribute to the absorption of carbon dioxide from the air. [ 68 ] [ 69 ] Annual plants that die each year release almost all of the CO2 that they take in. Grass lawns that live over the winter but die back above ground can also soak up a share of carbon dioxide, reducing that greenhouse gas in the atmosphere. However, both organic and synthetic fertilizers are sources of nitrous oxide (N2O), and turfgrass lawns use 3 million tons of nitrogen-based fertilizer each year. That adds four to five tons of carbon to the atmosphere for every ton of nitrogen (660,000 tons of carbon dioxide per year). N2O is about 300 times more heat-absorbing than carbon dioxide. [ 70 ] [ 71 ] Soil microbes break down organic carbon into carbon dioxide. Reducing irrigation slows the microbial activity of the soil and its production of carbon dioxide. [ 69 ] However, increased irrigation is required for lawn maintenance in areas that are becoming more arid due to climate change. Gas-powered lawnmowers and other power tools used for lawn maintenance produce carbon dioxide and methane , which are greenhouse gases . [ 70 ] Emissions from lawn-management practices such as fertilizing and running fossil-fuel-powered equipment may outweigh any carbon sequestration by the perennial grass lawn. [ 71 ] Reducing irrigation, nitrogen fertilizer and chemical pesticides, and using hand tools instead of fossil-fuel-powered tools, can all reduce the climate impact of lawns. [ 72 ] Natural lawns promote pollination and diversity, require no fertilization and less frequent mowing, and use less water.
[ 73 ] There are many opportunities to plant trees and shrubs in yards, along roads, in parks, and in public gardens. In addition, some charities plant fast-growing trees to help people in places with less tree coverage restore the productivity of their lands. [ 74 ] Individuals can also plant home vegetable gardens that provide locally grown food, native plant gardens that provide a diversity of species, and trees and perennial shrubs that develop sustainable carbon sequestration . [ 71 ] Hanging laundry to dry saves the energy that would have been used for heating, reducing clothing's carbon footprint. [ 75 ] [ 76 ] [ 77 ] [ 78 ] Additionally, using a shorter, cold-water wash cycle can reduce energy use by as much as 66%. [ 79 ] Purchasing well-made, durable clothing, and avoiding " fast fashion ", is critical for reducing climate impact. [ 80 ] [ 81 ] [ 82 ] Some clothing is donated or recycled, while the rest ends up in landfills, where it releases greenhouse gases. [ 83 ] Heating domestic water with non-renewable resources such as gas contributes significantly to global carbon dioxide emissions. As of 2020, most homes use gas or electric boilers to heat their water. Powering these boilers with renewable energy would reduce these emissions, although the cost of installation means this is not a universally viable option. [ 84 ] Turning off the water heater and using unheated water for laundry, bathing (weather permitting), dishes, and cleaning eliminates those emissions. The production of many goods and services results in the emission of greenhouse gases as well as pollution . One way for individuals to decrease their environmental footprint is to consume fewer goods and services: decreased consumption lowers demand, and lower supply (production) follows. [ 85 ] Individuals can prioritize shrinking their consumption of those goods and services whose production results in relatively high pollution levels.
Individuals can also prioritize discontinuing goods and services that offer little to no real utility, "speaking with their money", since unpopular products satisfy neither consumer wants and needs nor the environment; however, government subsidies that prop up producers may render such "boycott buying" futile in some cases. [ 86 ] [ 87 ] [ 88 ] A climate survey found that in 2021, 42% of Europeans, specifically 48% of women and 34% of men, already bought second-hand clothing rather than new. People aged 15 to 29 were found more likely to do so. [ 89 ] [ 90 ] Education on sustainable consumption, specifically targeting children, is seen as a priority by 93% of Chinese citizens, 92% of EU citizens, 88% of British citizens and 81% of Americans. [ 91 ] [ 92 ] The National Geographic Society has concluded that city dwellers can help with climate change if they simply "buy less stuff". [ 93 ] Lloyd Alter suggests that one way to get a practical sense of embodied carbon is to ask, "How much does your household weigh?" [ 94 ] For-profit companies usually promote and market their products as useful or needed to potential consumers, even when they are in reality harmful or wasteful to consumers and/or the environment. To decrease consumption, individuals can assess and research whether each product they purchase is really of value to them. If a gas stove or other type of stove needs to be replaced in a new house, an electric stove is preferable. However, as cooking is usually a small part of household GHG emissions, it is generally not worth changing a stove simply for climate reasons.
[ 95 ] Using durable reusable containers such as lunchboxes and Tupperware, reusing "single-use" grocery and produce bags (for example as light-duty trash bags), and buying local produce and minimally packaged foods and general items all reduce the carbon emissions and pollution from producing single-use containers and packaging. These tactics mitigate GHG production by reducing the demand for extra packaging and shipping of products. [ 96 ] [ 97 ] The world's food production is responsible for approximately a quarter of the greenhouse gas emissions produced by humanity each year, [ 98 ] with livestock alone accounting for 14.5% of total greenhouse gas emissions. [ 99 ] The carbon dioxide emissions associated with food are estimated at 2.2 tonnes per person annually, from production to consumption. [ 100 ] If this is correct, the food aspect of daily life alone would nearly exhaust the entire Paris Agreement compliance goal of 2.3 tonnes [ 101 ] per person per year. Reducing food loss [ 102 ] is therefore essential, and in the 2020 Project Drawdown it was identified as the top-priority solution to address climate change. [ 103 ] Of the 2.2 tonnes mentioned, 1.9 tonnes are considered reducible. [ 100 ] According to a 2023 study published in Nature Food , carbon dioxide emissions resulting from food waste make up half of the total emissions in the entire food system. [ 104 ] In the United States, it is estimated that 31% of food delivered to retail stores is discarded by either retailers or consumers. [ 105 ] Furthermore, food waste that decomposes in landfills emits 2.5 kilograms of carbon dioxide per kilogram of food and also produces methane , a greenhouse gas with 25 times the warming potential of carbon dioxide. [ 106 ] Food waste also represents a loss of the energy used to transport foods from producers to consumers.
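The claim that food alone "nearly exhausts" the Paris-aligned per-person budget can be checked directly from the three figures cited above (2.2 t food footprint, 1.9 t of it reducible, 2.3 t target); a minimal sketch:

```python
# Comparing the cited per-person food footprint with the per-person
# Paris-aligned budget, using only the figures given in the text.
food_footprint_t = 2.2   # t CO2 per person per year, production to consumption
reducible_t = 1.9        # portion of the food footprint considered reducible
paris_budget_t = 2.3     # estimated per-person annual target for 2030

share_of_budget = food_footprint_t / paris_budget_t
residual_t = food_footprint_t - reducible_t
print(f"{share_of_budget:.0%} of budget")   # -> 96% of budget
print(f"{residual_t:.1f} t residual")       # -> 0.3 t residual
```

So the unreduced food footprint consumes about 96% of the whole budget, while fully realizing the cited reduction potential would leave only about 0.3 t from food.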
According to a study published in Nature Food in 2022, transportation-related emissions from producers to retail stores represent around 20% of total emissions for vegetables and fruits, [ 107 ] while refrigerated transport of items like meat, fresh fish, and dairy adds another 20–30%. [ 108 ] In addition to the waste of food itself, the disposal of packaging materials is a significant concern. Reducing food waste contributes to reducing both global warming and the environmental pollution caused by plastic packaging materials . An estimated 5% of the energy used to manufacture and distribute food products is attributable to packaging materials. [ 109 ] Plastic food packaging materials are a significant source of environmental pollution, contributing not only the carbon dioxide emissions associated with plastic production but also broader adverse environmental impacts. [ 110 ] Japan's excessive food-packaging culture has been criticized internationally in relation to Japanese plastic waste. [ 111 ] [ 112 ] [ 113 ] [ 114 ] The world's food system is responsible for about one-quarter of the planet-warming greenhouse gases that humans generate each year, [ 115 ] with the livestock sector alone contributing 14.5% of all anthropogenic GHG emissions. [ 116 ] The 2019 World Scientists' Warning of a Climate Emergency , endorsed by over 11,000 scientists from more than 150 countries, stated that "eating mostly plant-based foods while reducing the global consumption of animal products, especially ruminant livestock, can improve human health and significantly lower GHG emissions." [ 117 ] The most common ruminant livestock are cattle and sheep. Agriculture is very difficult to fix technically, so it will need more individual action [ 118 ] or carbon offsetting than all other sectors except perhaps aviation. [ 48 ] Eating less meat , especially beef and lamb , reduces emissions.
[ 119 ] A diet that is part of individual action on climate change is also good for health, averaging less than 15 g (about half an ounce) of red meat and 250 g of dairy (about one glass of milk) per day. [ 120 ] The World Health Organization recommends that trans-fats make up less than 1% of total energy intake: ruminant trans-fats are found in beef, lamb, milk and cheese. [ 121 ] The Special Report on Climate Change and Land says that a shift towards plant-based diets would help to mitigate and adapt to climate change. Ecologist Hans-Otto Pörtner, who contributed to the report, said "We don't want to tell people what to eat, but it would indeed be beneficial, for both climate and human health, if people in many rich countries consumed less meat, and if politics would create appropriate incentives to that effect." [ 122 ] Meats such as beef have a higher climate impact because cows release methane, a greenhouse gas that is more harmful in the short term than carbon dioxide. [ 123 ] Eating a plant-rich diet is listed as the #1 individual solution for climate change as modeled by Project Drawdown , based on avoided emissions from the production of animals and avoided emissions from additional deforestation for grazing land. [ 124 ] A 2018 study indicated that one-fifth of Americans are responsible for about half of the country's diet-related carbon emissions, due mostly to eating high levels of meat, especially beef. [ 125 ] [ 126 ] A 2022 study published in Nature Food found that if high-income nations switched to a plant-based diet , vast amounts of land used for animal agriculture could be allowed to return to their natural state , which in turn has the potential to sequester 100 billion tonnes of CO2 by 2100. In addition to mitigating climate change, other benefits of this transition would include improved water quality, restoration of biodiversity, and reductions in air pollution.
[ 127 ] [ 128 ] A 2022 survey found that half of Europeans (51%) support reducing the amount of meat and dairy products people may buy to combat climate change (11% more than Americans, who support it at 40%, but far lower than Chinese people, who support it at 73%). The same survey found that, to help individuals make more sustainable food decisions, 79% of Europeans support labelling all food with its carbon footprint (Americans support it at 62%, Chinese respondents at 88%). [ 129 ] A 2023 paper published in Nature Food found that vegan diets reduce emissions, water pollution and land use by 75%, while also significantly reducing the destruction of wildlife and water usage. [ 130 ] The majority of greenhouse gas emissions from food production come from land use change and farm-level processes. Farm-level emissions include both organic (manure management) and synthetic fertilizer applications, as well as methane from ruminant enteric fermentation . Together, these account for more than 80% of the carbon footprint of most foods. The largest meta-analysis of the global food system, published in Science in 2018 using data from over 38,000 commercial farms in 19 countries, estimated the total greenhouse gas emissions per kilogram of diverse foods. [ 133 ] Carbon dioxide is the most important greenhouse gas but not the only one: agriculture is a large source of methane and nitrous oxide, which are much more potent greenhouse gases than carbon dioxide. To capture all greenhouse gas emissions associated with these food production processes, the carbon footprint is expressed in kilograms of carbon dioxide equivalent, taking into account all greenhouse gases in addition to carbon dioxide. Animal foods have a much larger carbon footprint than plant foods. Beef is the worst, with equivalent carbon dioxide emissions of 99.5 kg per kilogram produced.
For lamb, it is 39.7 kg; for cheese, 23.9 kg; for pork, 12.3 kg; for chicken, 9.9 kg; and for peas, just 0.98 kg. [ 133 ] Comparing equivalent carbon dioxide emissions per 100 grams of protein, to account for nutritional value, beef is 49.9 kg, pork is 7.6 kg, chicken is 5.7 kg, and peas are 0.44 kg. [ 133 ] The reasons for beef's particular inefficiency as a food source are the vast land and water resources (i.e., virtual water ) required for cattle farming and the fact that cattle emit methane, a greenhouse gas. The impact of food production is more tangible when expressed in carbon dioxide emissions per serving rather than per weight. For example, a cheeseburger, a popular beef food, is estimated to emit about 4.79 pounds (2.17 kg) [ 134 ] or 1.9 kg of carbon dioxide per serving, [ 135 ] about 10 times the weight of the cheeseburger itself and the equivalent of driving about 5 miles (8 km) in a car. [ 136 ] Other estimates put the total carbon dioxide emissions at 3.6 to 6.1 kg per serving; converted into the greenhouse gases emitted each year by annual cheeseburger consumption in the United States, this is equivalent to the emissions of 6.5 to 19.6 million SUVs. [ 137 ] On the other hand, the carbon footprint of food transportation is relatively small compared to that of production. Because the production footprint of beef and dairy products is so large, transportation typically accounts for less than 1% of beef's greenhouse gas emissions, [ 133 ] meaning that even if beef is consumed close to where it is produced, the carbon footprint reduction is negligible. Alcoholic beverage production is a resource- and energy-intensive process with a large carbon footprint.
[ 138 ] [ 139 ] Alcoholic beverages are produced from agricultural crops, generating carbon dioxide (and nitrogen oxide and methane) emissions from agricultural production and requiring energy and large amounts of water for brewing and bottling. In addition to the resource consumption associated with production, the food miles generated by transporting beverage products and the waste generated by packaging are also significant issues. [ 140 ] While glass bottles and aluminum cans are recyclable, the packaging that holds them together, such as six-pack rings , is a significant source of plastic pollution and carbon dioxide emissions. [ 141 ] Thus, the carbon footprint of alcoholic beverage production is extensive and significant. [ 142 ] [ 143 ] [ 144 ] [ 145 ] [ 146 ] [ 147 ] Although beer generally requires less energy to produce than wine or spirits, [ 148 ] the equivalent carbon dioxide emissions of industrial production are estimated at around 640–760 grams per 500 milliliters of beer, [ 149 ] which is more than the weight of the beer itself. In addition, beer is a mass-consumption beverage, requiring large-scale production, heavy product transportation, and energy-intensive refrigeration, all of which further increase the carbon dioxide emissions per final bottle. [ 150 ] While metal-barrel ( keg ) catering products have a smaller carbon footprint for packaging and transportation than bottled or canned products (one estimate is that a 30-liter keg's footprint is 2.7 times smaller than that of 330-ml bottles [ 151 ] ), they are not readily applicable to the consumer market. In the production of low-carb light beer, brewing enzymes break down most of the carbohydrates into monosaccharides, which are then fermented by yeast into alcohol and carbon dioxide, [ 152 ] but as of 2024 it has not been estimated whether this production process reduces carbon dioxide emissions relative to regular beer.
On the other hand, non-alcoholic beer requires a shorter fermentation process than regular beer but is often produced by removing alcohol from finished beer, [ 153 ] [ 154 ] and the additional energy this requires can increase the carbon footprint. [ 155 ] To avoid this problem, a special yeast strain that can brew non-alcoholic beer without the alcohol-removal process [ 156 ] has been used, reportedly reducing equivalent carbon dioxide emissions by 1,260 tonnes per 10,000 kiloliters of beer with an alcohol content below 0.05%. [ 157 ] In other words, alcohol removal emits 63 grams of carbon dioxide per 500 milliliters, adding an estimated 10% on top of the emissions from regular beer production mentioned above. Wine production, especially by large commercial wineries, involves grape farming that generates large amounts of greenhouse gas emissions and polluted wastewater. [ 158 ] The wine-making process is energy-intensive, involving fermentation, lengthy aging, bottling and storage. Although wine is not mass-produced like beer, it is usually bottled in heavy glass bottles, [ 159 ] resulting in a heavy product that is often exported long distances internationally, adding waste from special transport packaging and food miles from the heavy packaging. Moreover, red wine is often bottled in green glass, which is more difficult to recycle than clear glass, necessitating the production of new glass bottles, which consumes large amounts of energy. [ 160 ] Wine products range widely from low-cost to high-end, and carbon footprints vary widely from product to product. One review estimated the carbon dioxide equivalent emissions per bottle (750 ml) of wine at between 0.15 and 3.51 kilograms, [ 161 ] but it is difficult for the average consumer to determine which wines have a low carbon footprint.
However, according to one estimate, the stages of the wine lifecycle with the largest carbon dioxide emissions are grape farming (43.11%) and bottling and transportation (56.71%); [ 162 ] choosing wine made from sustainably produced grapes (organic wine) or wine in a simple carton can therefore reduce the carbon footprint. One estimate found that organic red wine, conventionally produced red wine, and white wine had carbon dioxide equivalent emissions of 1.02, 1.25, and 1.62 kilograms per bottle (750 ml), respectively. [ 163 ] Spirits such as whiskey, vodka, rum, and gin have a larger carbon footprint per bottle than beer or wine. [ 164 ] This is mainly due to the large amount of carbon dioxide emitted by the distillation process, further exacerbated by temperature and humidity control in the warehouses where barrels of whiskey and other spirits are aged for long periods. In addition, many spirits are bottled in thick glass bottles made specifically for each product, whose production carries its own carbon footprint, and some cheaper spirits even use unsustainable plastic bottles. However, spirits are not consumed in large quantities like beer or wine, and so may have a lower carbon footprint per standard serving: one estimate puts the equivalent carbon dioxide emissions of a standard 350 ml glass of 3.5% light beer at about 280 grams, a 150 ml glass of wine at about 320 grams, and a 40 ml glass of spirits at about 90 grams on average. [ 165 ] Since all three contain roughly the same amount of alcohol, spirits diluted with water reduce global-warming impact per unit of alcohol consumed more effectively than beer or wine.
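The per-serving figures above can be normalized to emissions per millilitre of pure alcohol; a minimal sketch, noting that the source gives only the beer strength (3.5%), so the wine (12%) and spirits (40%) ABVs used here are typical assumed values:

```python
# CO2e per millilitre of pure alcohol for the servings cited in the text.
# Only the beer ABV (3.5%) comes from the source; wine and spirits ABVs
# are assumed typical values.
drinks = {
    "beer":    {"ml": 350, "abv": 0.035, "co2e_g": 280},
    "wine":    {"ml": 150, "abv": 0.12,  "co2e_g": 320},
    "spirits": {"ml": 40,  "abv": 0.40,  "co2e_g": 90},
}
for name, d in drinks.items():
    pure_alcohol_ml = d["ml"] * d["abv"]
    per_ml_alcohol = d["co2e_g"] / pure_alcohol_ml
    print(f"{name}: {per_ml_alcohol:.1f} g CO2e per ml of alcohol")
```

Under these assumptions spirits come out lowest per unit of alcohol, consistent with the comparison drawn in the text.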
The alcoholic beverage industry as a whole, including pubs , bars and other food and beverage businesses, contributes far more to carbon dioxide emissions than any individual product, and, as mentioned above, this impact comes not only from production but also from advertising, logistics, packaging and waste disposal. Given the scale of the industry, if individuals practice environmentally conscious alcohol consumption and the industry works to minimize its environmental impact, the effect on curbing global warming for society as a whole could be significant. Worldwide population growth is considered a challenge for climate change mitigation . [ 166 ] [ 1 ] Proposed measures include improved access to family planning and women's access to education and economic opportunities. [ 167 ] [ 168 ] [ 169 ] Targeting natalistic politics involves cultural, ethical and societal issues. Various religions discourage or prohibit some or all forms of birth control . [ 170 ] Although having fewer children is perhaps the individual action that most effectively reduces a person's climate impact, the issue is rarely raised, arguably because of its private nature. Even so, ethicists, [ 171 ] [ 172 ] some politicians such as Alexandria Ocasio-Cortez , [ 173 ] and others [ 174 ] [ 175 ] [ 176 ] [ 177 ] have started discussing the climate implications of reproduction. Researchers have found that some people (in wealthy countries) are having fewer children because they believe they can do more to slow climate change by not having children. [ 178 ] Two interrelated aspects of this action, family planning and women and girls' education , are modeled by Project Drawdown as the #6 and #7 top potential solutions for climate change, based on their ability to reduce the growth of the overall global population.
[ 179 ] [ 180 ] In 2019, a warning on climate change signed by 11,000 scientists from 153 nations said that human population growth adds 80 million humans annually, and that "the world population must be stabilized—and, ideally, gradually reduced—within a framework that ensures social integrity" to reduce the impact of "population growth on GHG emissions and biodiversity loss ". The policies they promote, which "are proven and effective policies that strengthen human rights while lowering fertility rates", include removing barriers to gender equality, especially in education, and ensuring family planning services are available to all. [ 181 ] [ 182 ] A 2021 paper said that "human population has been mostly ignored with regard to climate policy" and attributed this to the taboo nature of the issue, given its association with past population policies, including forced sterilization campaigns and China's one-child policy . [ 183 ] [ 184 ] In 2022, a group of scientists urged families around the world to have no more than one child as part of the transformative changes needed to mitigate both climate change and biodiversity loss . [ 185 ] However, because climate change needs to be limited within the next few decades, having fewer children now might not make much difference. [ 186 ] Moreover, the "per person carbon footprint" of individuals is likely to fall over time as economies decarbonize towards net zero emissions . [ 187 ] : 113 Individuals can check whether the financial companies they use are part of the Glasgow Financial Alliance for Net Zero , [ 188 ] and consider switching pensions, insurance and investments. [ 189 ] Donating to climate change charities has also been suggested.
[ 190 ] Cryptocurrencies mined by proof of work , such as Bitcoin , are high-carbon, both because they use dirty electricity, such as electricity from Kazakhstan (some electricity used for Bitcoin mining in the United States is also dirty, [ 191 ] though the gas might be burned anyway [ 192 ] ), and because cryptocurrency mining uses hardware for only a short time before it becomes e-waste . [ 193 ] [ 194 ] Individuals holding such cryptocurrency can switch to proof-of-stake cryptocurrencies such as Tezos or Ethereum , [ 195 ] or decide not to invest in cryptocurrencies at all. Impactful individual actions in the area of political advocacy include: [ 196 ] participation in groups advocating collective action in the form of political solutions, such as carbon pricing , meat pricing, [ 197 ] ending subsidies for fossil fuels [ 198 ] and animal husbandry, [ 199 ] and ending laws that encourage car use. [ 200 ] Climate change is a prevalent issue in many societies. [ 202 ] Some believe that some of the long-term negative effects of climate change can be ameliorated through individual and community actions to reduce resource consumption. Thus, many environmental advocacy organizations associated with the climate movement (such as the Earth Day Network ) focus on encouraging such individual conservation and grassroots organizing around environmental issues. [ 203 ] [ 204 ] To raise awareness of climate issues, activists organized a series of international labor and school strikes in late September 2019, [ 205 ] with estimates of total participants ranging between 6 and 7.3 million. [ 206 ] [ 207 ] A number of groups from around the world, including non-governmental organizations (NGOs) from diverse fields of work, have come together on the issue of global warming. A coalition of 50 NGOs called Stop Climate Chaos launched in Britain in 2005 to highlight the issue of climate change.
The Campaign against Climate Change was created to focus purely on the issue of climate change and to pressure governments into action by building a protest movement of sufficient magnitude to effect political change. Following environmentalist Bill McKibben's mantra that "if it's wrong to wreck the climate, it's wrong to profit from that wreckage", [ 208 ] fossil fuel divestment campaigns attempt to get public institutions, such as universities and churches, to remove investment assets from fossil fuel companies. By December 2016, a total of 688 institutions and over 58,000 individuals representing $5.5 trillion in assets worldwide had divested from fossil fuels. [ 209 ] [ 210 ] A 2023 review study published in One Earth stated that opinion polls show that most people perceive climate change as occurring now and close by. [ 211 ] The study concluded that seeing climate change as more distant does not necessarily result in less climate action, and that reducing psychological distancing does not reliably increase climate action. [ 211 ] Political advocacy can focus on removing fossil fuel and other subsidies, and on taxes that discourage individual action on climate change. However, sudden removal of a subsidy by governments not trusted to redirect it, [ 216 ] or without providing good alternatives for individuals, can lead to civil unrest. An example took place in 2019, when Ecuador removed its gasoline and diesel subsidies without providing enough electric buses to maintain service. The result was overnight fuel price hikes of 25–75 percent, and the corresponding fare hikes for Ecuador's existing gas- and diesel-powered bus fleet were met with violent protests. [ 217 ] "Discussing global warming leads to greater acceptance of climate science ". [ 218 ] The Yale Climate Communication Program recommends initiating "climate conversations" with more moderate individuals. 
[ 219 ] [ 43 ] Once personal climate impacts and core values are understood, it may become possible to open a discussion of potential climate solutions which are consistent with those core values. [ 219 ] [ 220 ] Carbon Conversations is a " psychosocial project that addresses the practicalities of carbon reduction while taking account of the complex emotions and social pressures that make this difficult". [ 221 ] It was cited in The Guardian newspaper as one of the 20 best ideas to tackle climate change. [ 222 ] A study published in Nature Human Behaviour in 2025 found that presenting people with binary climate data (for example, a lake freezing versus not freezing) significantly increases the perceived impact of climate change compared to when continuous data such as temperature change is presented. [ 223 ] The researchers said the findings confirmed the boiling frog effect for climate change communication. [ 223 ] Another opportunity for mitigation is through social contagion , where people in a network learn new behaviors, such as trying a plant-based diet or riding their bicycles to work instead of driving, and the new behaviors spread spontaneously through the group. For example, a 2020 Max Planck Institute study found that when meat-eaters are accompanied by vegetarians and have a choice of eating dishes with or without meat, they are more likely to choose a vegetarian dish, resulting in a reduction in the demand for meat. This probability increases as the number of vegetarians accompanying the meat-eaters increases. [ 224 ] Public discourse on reducing one's carbon footprint overwhelmingly focuses on low-impact behaviors; as of 2017, mention of high-impact individual climate behaviors was almost non-existent in mainstream media , government publications, K-12 school textbooks, and similar sources. [ 167 ] [ 174 ] Media focus on low-impact [ 226 ] rather than high-impact behaviors is a concern for scientists. 
The most impactful actions for individuals may differ significantly from the popular advice for "greening" one's lifestyle. For instance, popular suggestions for individual action include replacing a typical car with a hybrid, washing clothes in cold water, recycling, and upgrading light bulbs, all of which are regarded as lower-impact behaviors. A few researchers have stated that some "recommended high-impact actions are more effective than many more commonly discussed options. For example, eating a plant-based diet saves eight times more emissions than upgrading light bulbs." [ 167 ] [ 174 ] Recommended high-impact actions include having fewer children, [ 183 ] [ 227 ] living car-free, avoiding long-distance flights, and eating a plant-based diet. However, other publications state that "population is actually irrelevant to solving the climate crisis". [ 228 ] Other researchers say that decarbonization need not mean a more austere lifestyle, and that the individual actions with the most impact are to electrify households, for example with electric cars and heating. [ 229 ] Scientists argue that piecemeal behavioral changes like re-using plastic bags are not a proportionate response to climate change. Though beneficial, such debates can drive public focus away from the need for an energy system change of unprecedented scale to decarbonize rapidly. Moreover, policy measures such as targeted subsidies, eco-tariffs , effective sustainability certificates, legal product information requirements, CO 2 pricing, [ 230 ] emissions allowances rationing, [ 231 ] [ 232 ] budget allocations and labelling, [ 231 ] targeted product-range exclusions, advertising bans, and feedback mechanisms could have a more substantial positive impact on consumption behavior than changes carried out exclusively by consumers, and could address social issues such as constraints on consumers' budgets, awareness and time. 
[ 233 ] It has been argued that climate change is a collective action problem , specifically a tragedy of the commons , which is a political [ 234 ] and not an individual category of problem. [ 235 ] Some commentators have argued that individual actions as consumers and "greening personal lives" are insignificant in comparison to collective action , especially actions that hold accountable the fossil fuel corporations responsible for producing 71% of carbon emissions since 1988. [ 6 ] [ 236 ] [ 237 ] The concept of a personal carbon footprint and calculating one's footprint was popularized by oil producer BP as "effective propaganda", a way to shift responsibility and to "linguistically... remove itself as a contributor to the problem of climate change". [ 238 ] Others have shown that individual measures may sometimes effectively undermine political support for structural measures; in one example, researchers found that "a green energy default nudge diminishes support for a carbon tax ." [ 239 ] Others say that individual action leads to collective action, and emphasize that "research on social behavior suggests lifestyle change can build momentum for systemic change ." [ 7 ] Furthermore, if individuals shrink their consumption of fossil fuel products, fossil fuel corporations are incentivized to produce less, as the demand for their product decreases. [ 240 ] In other words, each individual's consumption plays a role in the total supply of fossil fuels and emission of greenhouse gases . According to a 2022 survey conducted by the European Investment Bank , climate change is the second most pressing issue confronting Europeans. Nearly three-quarters of respondents (72%) believe that their individual actions can make a difference in tackling the climate issue. 
[ 241 ] In many cases, media coverage of climate change reports only the effects of climate change, such as extreme weather , and makes no mention of either individual or government actions which can be taken. The suggestion that eating a plant-based diet requires a person to become strictly vegetarian is also misinformation. [ 242 ] A plant-based diet focuses on consuming foods primarily from plants but does not eliminate all animal products as a vegan diet does. [ 243 ] Climate change education , which became mandatory in Italy in 2019, [ 244 ] is completely absent in some countries, or fails to provide information on action that individuals can take. It has often been hypothesised that, no matter how strong the climate knowledge provided by risk analysts, experts and scientists, risk perception determines agents' ultimate mitigation response. However, recent literature reports conflicting evidence about the actual impact of risk perception on agents' climate response. Rather, studies show no direct perception–response link: the relationship is mediated and moderated by many other factors and depends strongly on the context analysed. Moderating factors identified in the specialised literature include communication and social norms. Yet conflicting evidence of the disparity between public communication about climate change and the lack of behavioural change has also been observed in the general public, and doubts are raised about whether observance of social norms is a predominant factor influencing action on climate change. [ 245 ] Disparate evidence also shows that even agents highly engaged in mitigation actions (engagement being a mediating factor) ultimately fail to respond. [ 246 ]
https://en.wikipedia.org/wiki/Individual_action_on_climate_change
Individualized cancer immunotherapy , also referred to as individualized immuno-oncology , is a novel concept for therapeutic cancer vaccines that are truly personalized to a single individual. The human immune system is generally able to recognize and fight cancer cells . However, this ability is usually insufficient and the cancer continues to spread. [ 1 ] Cancer immunotherapy is based on harnessing and potentiating the ability of the immune system to fight cancer. Each tumor has its own individual genetic fingerprint, the mutanome , which includes numerous genetic alterations. As opposed to a preformed drug, individualized cancer vaccination is a therapy that targets specific cancer mutations of the individual patient's tumor. [ 2 ] The production of vaccines tailored to match a person's individual constellation of cancer mutations has become a new field of research. [ 3 ] The concept of individualized cancer immunotherapy aims to identify individual mutations in a patient's tumor that are crucial for the proliferation, survival or metastasis of tumor cells. [ 2 ] For this purpose, the individual genetic blueprint of the tumor is decoded by sequencing and, using this blueprint as a template, a synthetic vaccine tailored to the tumor of the individual patient is prepared. This vaccine is designed to control and train the body's immune system to fight the cancer. [ 4 ] Cancer is characterized by an accumulation of genetic alterations. A tumor may acquire up to thousands of different somatic mutations during the process of initiation and progression. A smaller number of cancer mutations interfere with normal cell regulation and help to drive cancer growth. [ 5 ] Somatic mutations in the tumor genome can cause tumors to express mutant proteins ( neoantigens ) that are recognized by autologous T cells as foreign and constitute cancer vaccine targets. 
[ 2 ] [ 6 ] Tumor mutational burden (TMB, the number of mutations within a targeted genetic region in the cancerous cell's DNA) has thus been suggested to correlate with patient survival after immunotherapy, although the findings are disputed. [ 7 ] [ 8 ] [ 9 ] Such neoantigens are specifically expressed by tumor tissue and are not found on the surface of normal cells. They can upregulate tumor-specific T cells in patients without killing normal cells. [ 10 ] T cells are key effectors of anticancer immunity. They are capable of distinguishing tumor cells from normal ones by recognizing HLA -bound cancer-specific peptides. [ 10 ] A requirement for the recognition of neoantigens by the immune system is that the neoantigens and their antigenic determinants , the neoepitopes , are processed and presented by human leukocyte antigen (HLA) molecules . [ 5 ] These molecules may be recognized by CD8+ cytotoxic T lymphocytes as foreign neoepitopes and, with the help of CD4+ T lymphocytes, trigger an immune response leading to tumor-specific killing. [ 4 ] CD8+ T cells are specialized for direct tumor cell killing. CD4+ T cells can interact with antigen-presenting cells such as dendritic cells to recruit other immune cells or stimulate effector cells . [ 10 ] Most cancer neoantigens in humans arise from unique mutations. A patient's cancer is heterogeneous both within and between lesions, and changes its composition over time. [ 11 ] Each patient has an individual mutational signature (mutanome), and only a very small portion of the mutations are shared between patients. [ 10 ] [ 12 ] An immunotherapy directed at neoantigens therefore needs to be individualized. The development of sequencing technology has improved the accuracy of identification and localization of neoantigens. With the advent of next-generation sequencing (NGS), it has become possible to systematically predict cancer neoantigens for individual patients. 
[ 5 ] [ 13 ] In animal models, several independent studies have shown that vaccines consisting of computationally predicted neoepitopes mediated anti-tumor activity in mice. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] The translation of individualized neoepitope vaccines into clinical oncology is under investigation. Formats under consideration for individualized vaccines are synthetic peptides , messenger RNA , DNA plasmids , viral vectors , engineered bacteria , and antigen -loaded dendritic cells . [ 2 ] In 2015, a first step towards individualized neoantigen vaccination was achieved by treating three melanoma patients with autologous dendritic cells loaded with a personalized mixture of seven peptides (neoantigens) that were predicted to bind to human leukocyte antigens (HLA). The neoantigen-loaded dendritic cells were cultured in vitro for autologous transfusion . Results showed that the vaccine enhanced the existing immune response and elicited a neoantigen-specific T cell response that was not detected prior to injection. [ 18 ] Sahin et al. were the first to identify suitable neoantigens using next-generation sequencing (NGS) and to use them to produce customized RNA vaccines encoding these neoantigens. [ 19 ] A total of 13 patients with melanoma received the RNA vaccine, eight of whom had no tumor development during the follow-up. Immune surveillance analysis of peripheral blood mononuclear cells (PBMCs) in patients demonstrated that the RNA vaccines expanded preexisting T cells and induced de novo T cell responses against neoepitopes not recognized prior to vaccination. [ 19 ] Another study group (Ott et al.) identified neoantigens in six melanoma patients and used them to create a customized vaccine for each patient, with long peptides representing up to 20 mutations per patient. After surgical resection of the tumor, the vaccine was injected. 
The results showed that the tumor did not reappear in four patients during an observation period of 32 months after vaccination. [ 20 ] Hilf et al. administered individualized neoantigen vaccines to 15 patients with glioblastoma . The vaccine triggered T cell immune responses to the predicted neoantigens. [ 21 ] Keskin et al. investigated individualized neoantigen vaccines in eight glioblastoma patients after surgical resection and conventional radiotherapy . The study group observed that the vaccine increased the number of tumor-infiltrating T cells that migrated from the peripheral blood into the brain. [ 22 ] Individualized cancer vaccines typically consist of multiple predicted neoepitopes, and the manufacturing process involves several steps. Tumor biopsies and healthy tissue (e.g., peripheral blood cells) of a patient diagnosed with cancer are examined by NGS . Tumor-specific mutations in protein-coding genes are then identified by comparison of sequences from tumor and normal DNA. Computational tools rank these mutations by likelihood of immunogenicity , that is, by the predicted expression of the neoepitopes and their binding affinity to HLA molecules. The top rankers are then used for the production of the vaccine. [ 4 ] The intended output is an on-demand vaccine with a unique composition tailored to the patient's individual cancer mutanome. [ 10 ] The research approach to mobilize an immune response tailored to the individual tumor of a patient is also referred to as individualized neoantigen-specific immunotherapy (iNeST). iNeST is based on the specific tumor mutations (neoantigens) of a single patient, with the aim of triggering high-affinity T cell immune responses to the individual patient-specific cancer. [ 19 ] The development of iNeST is driven by biotech companies. [ 23 ] [ 24 ]
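The prioritisation step described above, in which computational tools rank candidate mutations by predicted expression and HLA binding affinity, can be sketched in simplified form. The scoring rule, the 500 nM binder cutoff, the function name, and all peptide data below are illustrative placeholders; real pipelines use dedicated variant callers and binding-affinity predictors rather than hand-entered values.

```python
# Hypothetical sketch of neoepitope prioritisation. Inputs that would come
# from NGS variant calling and an HLA binding predictor are given here as
# plain data; the combined score is an illustrative placeholder, not a
# published algorithm.

def rank_neoepitopes(candidates, top_n=10):
    """Return the top_n candidate neoepitopes.

    Each candidate is a dict with:
      'peptide'      -- mutant peptide sequence
      'expression'   -- tumor expression level (arbitrary units)
      'affinity_nM'  -- predicted HLA binding affinity (lower = stronger)
    """
    # Keep only predicted binders; ~500 nM is a conventional cutoff.
    binders = [c for c in candidates if c["affinity_nM"] < 500.0]
    # Higher expression and stronger (lower nM) binding rank first.
    binders.sort(key=lambda c: c["expression"] / c["affinity_nM"], reverse=True)
    return binders[:top_n]

# Illustrative candidates only:
candidates = [
    {"peptide": "KLNEPVLLL", "expression": 40.0, "affinity_nM": 50.0},
    {"peptide": "SLYNTVATL", "expression": 10.0, "affinity_nM": 900.0},  # weak binder
    {"peptide": "GILGFVFTL", "expression": 5.0, "affinity_nM": 25.0},
]
selected = rank_neoepitopes(candidates, top_n=2)
```

The "top rankers" returned by such a step would then feed vaccine production; in practice the ranking also weighs HLA allele coverage, clonality and other factors omitted here.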
https://en.wikipedia.org/wiki/Individualized_cancer_immunotherapy
An individually ventilated cage (IVC) is used to keep an animal separated from other animals and from possible exposures, including exposure by air. In laboratory animal husbandry , there is strong demand for animals that have been kept in disease-free conditions and housed in barrier units such as individually ventilated cages. This is important because when animals are used for scientific research, particularly drug-related research, they must provide accurate and valid results. Using an animal that is ill may cause the severity limit to be exceeded: if the animal already has a disease and is then exposed to a substance that also affects its health, the effects of the agent being tested could be worsened, causing the animal more suffering than necessary. Ill animals may also produce false results, which may prove critical at a later stage, e.g., in drug trials on humans. Moreover, the experiment would have to be repeated, and the previous animals would have been killed needlessly. Special caging systems are often used alongside many other barriers to keep unwanted materials away from the animals. The IVC systems in which the animals are kept ensure they are protected by HEPA (high-efficiency particulate air) filters that defend them from micro-organisms. All items passed into the barrier unit, including bedding material and food, must first be sterilised. The cages are usually made of special synthetic polycarbonates . Although this material tolerates various methods of sterilising and disinfecting, repeated sterilisation can cause discolouration and brittleness. The cages are constructed and designed to ensure a microparticle-free inner environment. A cage generally comprises a cage bottom, a cage top (with a food hopper and water bottle holder incorporated) and a filter lid. 
It is also designed to allow maximum comfort of the animal and to provide a secure, chew-proof environment. An external ventilation unit supplies the cages with fresh HEPA-filtered air, which passes through the filter lids. The ventilation system mostly consists of two tubes for ingoing and outgoing air. Individual cages with no environmental enrichment make it impossible for an animal to carry out species-specific behaviour and are a major drawback in terms of animal welfare. In natural conditions, many animals live in groups, but individual cages prevent that. That said, multiple sizes of IVC are available, holding either 1–5 animals or, in larger cages, 12–15 per cage.
https://en.wikipedia.org/wiki/Individually_ventilated_cage
The principle of individuation , or principium individuationis , [ 1 ] describes the manner in which a thing is identified as distinct from other things. [ 2 ] The concept appears in numerous fields and is encountered in works of Leibniz , Carl Jung , Gunther Anders , Gilbert Simondon , Bernard Stiegler , Friedrich Nietzsche , Arthur Schopenhauer , David Bohm , Henri Bergson , Gilles Deleuze , [ 3 ] and Manuel DeLanda . The word individuation occurs with different meanings and connotations in different fields. Philosophically, "individuation" expresses the general idea of how a thing is identified as an individual thing that "is not something else". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time. [ 4 ] In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. [ 5 ] Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption. [ 6 ] The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. 
This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users. Communications theorist Marshall McLuhan alluded to this trend when discussing the future of printed books in an electronically interconnected world in the 1970s and 1980s. [ 7 ] [ 8 ] From around 2016, coinciding with increased government regulation of the collection and handling of personal data, most notably the GDPR in EU Law, individuation has been used to describe the ‘singling out’ of a person from a crowd – a threat to privacy, autonomy and dignity. [ 9 ] [ 10 ] Most data protection and privacy laws turn on the identifiability of an individual as the threshold criterion for when data subjects will need legal protection. However, privacy advocates argue privacy harms can also arise from the ability to disambiguate or ‘single out’ a person. Doing so enables the person, at an individual level, to be tracked, profiled, targeted, contacted, or subjected to a decision or action which impacts them, even if their civil or legal ‘identity’ is not known (or knowable). In some jurisdictions the wording of the statute already includes the concept of individuation. [ 11 ] In other jurisdictions regulatory guidance has suggested that the concept of 'identification' includes individuation, i.e., the process by which an individual can be 'singled out' or distinguished from all other members of a group. [ 12 ] [ 13 ] [ 14 ] However, where privacy and data protection statutes use only the word ‘identification’ or ‘identifiability’, different court decisions mean that there is not necessarily a consensus about whether the legal concept of identification already encompasses individuation [ 15 ] or not. 
[ 16 ] [ 17 ] Rapid advances in technologies including artificial intelligence , and video surveillance coupled with facial recognition systems , have now altered the digital environment to such an extent that ‘not identifiable by name’ is no longer an effective proxy for ‘will suffer no privacy harm’. Many data protection laws may require redrafting to give adequate protection to privacy interests, by explicitly regulating individuation as well as identification of individual people. [ 18 ] Two quantum entangled particles cannot be understood independently. A quantum superposition of two or more states, e.g., Schrödinger's cat being simultaneously dead and alive, is mathematically not the same as assuming the cat is in an individual alive state with 50% probability. Heisenberg's uncertainty principle says that complementary variables , such as position and momentum , cannot both be precisely known – in some sense, they are not individual variables. A natural criterion of individuality has been suggested. [ 19 ] For Schopenhauer, the principium individuationis is constituted of time and space, being the ground of multiplicity. In his view, the mere difference in location suffices to make two systems different, with each of the two systems having its own real physical state, independent of the state of the other. This view influenced Albert Einstein . [ 20 ] Schrödinger put the Schopenhauerian label on a folder of papers in his files: “Collection of Thoughts on the physical Principium individuationis.” [ 21 ] According to Jungian psychology , individuation ( German : Individuation ) is a process of psychological integration. "In general, it is the process by which individual beings are formed and differentiated [from other human beings]; in particular, it is the development of the psychological individual as a being distinct from the general, collective psychology." 
[ 22 ] Individuation is a process of transformation whereby the personal and collective unconscious are brought into consciousness (e.g., by means of dreams, active imagination , or free association ) to be assimilated into the whole personality. It is a completely natural process that is necessary for the integration of the psyche. [ 23 ] Individuation has a holistic healing effect on the person, both mentally and physically. [ 23 ] In addition to Jung's theory of complexes , his theory of the individuation process forms conceptions of an unconscious filled with mythic images, a non-sexual libido , the general types of extraversion and introversion , the compensatory and prospective functions of dreams , and the synthetic and constructive approaches to fantasy formation and utilization. [ 24 ] "The symbols of the individuation process . . . mark its stages like milestones, prominent among them for Jungians being the shadow , the wise old man . . . and lastly the anima in man and the animus in woman." [ 25 ] Thus, "There is often a movement from dealing with the persona at the start . . . to the ego at the second stage, to the shadow as the third stage, to the anima or animus, to the Self as the final stage. Some would interpose the Wise Old Man and the Wise Old Woman as spiritual archetypes coming before the final step of the Self ." [ 26 ] "The most vital urge in every being, the urge to self-realize, is the motivating force behind the individuation process. With the internal compass of our very nature set toward self-realization , the thrust to become who and what we are derives its power from the instincts. On taking up the study of alchemy, Jung realized his long-held desire to find a body of work expressive of the psychological processes involved in the overarching process of individuation." 
[ 27 ] In L'individuation psychique et collective , Gilbert Simondon developed a theory of individual and collective individuation in which the individual subject is considered as an effect of individuation rather than a cause. Thus, the individual atom is replaced by a never-ending ontological process of individuation. Simondon also conceived of "pre-individual fields" which make individuation possible. Individuation is an ever-incomplete process, always leaving a "pre-individual" left over, which makes possible future individuations. Furthermore, individuation always creates both an individual subject and a collective subject, which individuate themselves concurrently. Like Maurice Merleau-Ponty , Simondon believed that the individuation of being cannot be grasped except by a correlated parallel and reciprocal individuation of knowledge. [ 28 ] The philosophy of Bernard Stiegler draws upon and modifies the work of Gilbert Simondon on individuation and also upon similar ideas in Friedrich Nietzsche and Sigmund Freud . During a talk given at the Tate Modern art gallery in 2004, [ 29 ] Stiegler summarized his understanding of individuation. The essential points are the following:
https://en.wikipedia.org/wiki/Individuation
Indo-1 is a popular dye used as a ratiometric calcium indicator , similar to Fura-2 . In contrast to Fura-2, Indo-1 has dual emission peaks and a single excitation wavelength. The main emission peak in calcium-free solution is 475 nm, while in the presence of calcium the emission is shifted to 400 nm. It is widely used in flow cytometry and laser scanning microscopy due to its single-excitation property. However, its use for confocal microscopy is limited by its photo-instability caused by photobleaching . Unlike Fura-2, Indo-1 also retains its ratiometric emission. [ 1 ] The pentapotassium salt is commercially available and preferred to the free acid because of its higher solubility in water. While Indo-1 is not cell-permeable, the pentaacetoxymethyl ester Indo-1 AM enters the cell, where it is cleaved by intracellular esterases to Indo-1. The synthesis and properties of Indo-1 were presented in 1985 by the group of Roger Y. Tsien . [ 2 ] In intact heart muscle, Indo-1, in combination with the bioluminescent protein aequorin , can be utilized as a tool to distinguish between internal and external inotropic regulation processes. [ 3 ]
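The dual-emission ratio described above (fluorescence at 400 nm over 475 nm) is commonly converted to a free calcium concentration with the Grynkiewicz ratiometric calibration equation, [Ca²⁺] = K_d · β · (R − R_min)/(R_max − R). The sketch below applies that equation; the function name and all numeric calibration constants are hypothetical placeholders, since R_min, R_max, β and the effective K_d must be measured for each instrument and preparation.

```python
# Grynkiewicz-style ratiometric calibration for a dual-emission dye such as
# Indo-1: [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R).
# All example values below are illustrative, not measured constants.

def calcium_from_ratio(r, r_min, r_max, beta, kd_nm):
    """Estimate free [Ca2+] in nM from the emission ratio R = F(400 nm)/F(475 nm).

    r_min  -- ratio measured at zero calcium
    r_max  -- ratio measured at saturating calcium
    beta   -- Sf2/Sb2, free/bound fluorescence in the 475 nm channel
    kd_nm  -- effective dissociation constant of the dye (nM)
    """
    if not r_min < r < r_max:
        raise ValueError("ratio must lie strictly between r_min and r_max")
    return kd_nm * beta * (r - r_min) / (r_max - r)

# Illustrative numbers only: R = 1.0 with Rmin = 0.3, Rmax = 3.0,
# beta = 2.0 and Kd = 250 nM gives 250 * 2 * 0.7 / 2.0 = 175 nM.
ca_nm = calcium_from_ratio(r=1.0, r_min=0.3, r_max=3.0, beta=2.0, kd_nm=250.0)
```

Because the ratio cancels out dye concentration and path length, this readout is robust to uneven loading, which is the practical advantage of ratiometric indicators noted above.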
https://en.wikipedia.org/wiki/Indo-1
The Indo-European cosmogony refers to the creation myth of the reconstructed Proto-Indo-European mythology . The comparative analysis of different Indo-European tales has led scholars to reconstruct an original Proto-Indo-European creation myth involving twin brothers, * Manu - ('Man') and * Yemo - ('Twin'), as the progenitors of the world and mankind, and a hero named * Trito ('Third') who ensured the continuity of the original sacrifice. Although some thematic parallels can be drawn with the Ancient Near East (the primordial couple Adam and Eve ), and even with Polynesian or South American legends, the linguistic correspondences found in descendant cognates of *Manu and *Yemo make it very likely that the myth discussed here has a Proto-Indo-European (PIE) origin. [ 1 ] Hermann Güntert, stressing philological parallels between the Germanic and Indo-Iranian texts, argued in 1923 for an inherited Indo-European motif of the creation of the world from the sacrifice and dismemberment of a primordial androgyne . [ 2 ] Following a first paper on the cosmogonical legend of Manu and Yemo, published simultaneously with Jaan Puhvel in 1975 (who pointed out the Roman reflex of the story), Bruce Lincoln assembled the initial part of the myth with the legend of the third man Trito in a single ancestral motif. [ 3 ] [ 4 ] [ 5 ] Since the 1970s, the reconstructed motifs of Manu and Yemo, and to a lesser extent that of Trito, have been generally accepted among scholars. [ 6 ] The basic Indo-European root for the divine creation is * dʰeh₁ - , 'to set in place, lay down, or establish', as attested in the Hittite expression nēbis dēgan dāir ("...established heaven (and) earth"), the Young Avestan formula kə huvāpå raocåscā dāt təmåscā? ("What skillful artificer made the regions of light and dark?"), the name of the Vedic creator god Dhātr , and possibly in the Greek name Thetis , presented as a demiurgical goddess in Alcman 's poetry. 
[ 7 ] The concept of the Cosmic Egg , symbolizing the primordial state from which the universe arises, is also found in many Indo-European creation myths. [ 8 ] A similar depiction of the appearance of the universe before the act of creation is given in the Vedic, Germanic and, at least partly, in the Greek tradition. [ 9 ] [ 10 ] Although the idea of a created world is untypical of early Greek thinking, similar descriptions have been highlighted in Aristophanes 's The Birds : "...there was Chasm and Night and dark Erebos at first, and broad Tartarus , but earth nor air nor heaven there was..." The analogy between the Greek Χάος ('Chaos, Chasm ') and the Norse Ginnungagap ('Gaping abyss') has also been noted by scholars. [ 11 ] The importance of heat in Germanic creation myths has likewise been compared with similar Indian beliefs emphasized in the Vedic hymn on 'cosmic heat'. [ 12 ] In the myth, the first man Manu and his giant twin Yemo cross the cosmos , accompanied by a primordial cow. To create the world, Manu sacrifices his brother and, with the help of heavenly deities (the Sky-Father , the Storm-God and the Divine Twins ), [ 4 ] [ 13 ] forges both the natural elements and human beings from his twin's remains. [ 14 ] [ 5 ] Manu thus becomes the first priest after initiating sacrifice as the primordial condition for the world order. His deceased brother Yemo turns into the first king, as social classes emerge from his anatomy (priesthood from his head, the warrior class from his breast and arms, and the commoners from his sexual organs and legs). [ 14 ] Although the European and Indo-Iranian versions differ on this matter, the primeval cow was most likely sacrificed in the original myth, giving rise to the other animals and to plants. 
[ 15 ] Yemo may have become the King of the Otherworld , the realm of the dead, as the first mortal to die in the primordial sacrifice, a role suggested by the Indo-Iranian and, to a lesser extent, the Germanic, Greek and Celtic traditions. [ 16 ] [ 17 ] [ 18 ] To the third man Trito, the celestial gods offer cattle as a divine gift, which is stolen by a three-headed serpent named *Ngʷhi ('serpent'; also the Indo-European root for negation ). [ 19 ] [ 4 ] Trito first suffers at its hands, but, fortified by an intoxicating drink and aided by a helper-god (the Storm-God or *Hₐnér , 'Man'), [ 4 ] [ 20 ] together they go to a cave or a mountain, and the hero finally manages to overcome the monster. Trito then gives the recovered cattle back to a priest for it to be properly sacrificed. [ 21 ] [ 4 ] [ 22 ] He is now the first warrior, maintaining through his heroic deeds the cycle of mutual giving between gods and mortals. [ 23 ] [ 4 ] According to Lincoln, Manu and Yemo seem to be the protagonists of "a myth of the sovereign function, establishing the model for later priests and kings", while the legend of Trito should be seen as "a myth of the warrior function, establishing the model for all later men of arms". [ 23 ] He has thus interpreted the narrative as an expression of the priests' and kings' attempt to justify their role as indispensable for the preservation of the cosmos, and therefore as essential for the organization of society. [ 24 ] The motif indeed recalls the Dumézilian tripartition of the cosmos between the priest (in both his magical and legal aspects), the warrior (the Third Man), and the herder (the cow). [ 4 ] Some scholars have proposed that the primeval being Yemo was depicted as a twofold hermaphrodite rather than a twin brother of Manu, the two forming a pair of complementary beings entwined together.
[ 25 ] [ 26 ] The Germanic names Ymir and Tuisto were understood as twin , bisexual or hermaphrodite , and some myths give a sister to the Vedic Yama, also called Yamī ('Twin'). [ 27 ] [ 28 ] [ 29 ] The primordial being may therefore have self-sacrificed, [ 26 ] or have been divided in two, a male half and a female half, embodying a prototypal separation of the sexes that continued the primordial union of the Sky Father ( Dyēus ) with the Mother Earth ( Dhéǵhōm ). [ 25 ] The story of Trito served as a model for later cattle raiding epic myths and most likely as a moral justification for the practice of raiding among Indo-European peoples. In their legends, Trito is portrayed as only taking back what rightfully belongs to his people, those who sacrifice properly to the gods. [ 23 ] [ 22 ] Although cattle raiding is a common theme found in all societies keeping cattle, it was particularly popular among Indo-European peoples, as attested by the legends of Indra and the Panis , Beowulf and Grendel , the quest of Queen Medb for the Bull, or Odysseus hunting down the cattle of Helios . [ 30 ] The myth has been variously interpreted as a cosmic conflict between a heavenly hero and an earthly serpent; as a depiction of the male fellowships' struggle to protect society against external evil; or as an Indo-European victory over non-Indo-European people, the monster symbolizing the aboriginal thief or usurper. [ 31 ] [ 22 ] [ 24 ] The Vedic serpent Vṛtrá is indeed described as a * dāsa , an aboriginal inhabitant who is inimical to the Indo-European invaders; the Iranian serpent Aži Dahāka carries in his name the pejorative suffix -ka ; and the Latin inimical giant Cācus is depicted as a non-Indo-European aborigine ( incola ), hostile to Romans and Greeks alike. [ 32 ] According to Martin L. West , the Proto-Indo-European name *Trito ('Third') may have been a "poetic or hieratic code-name, fully comprehensible only with specialized knowledge". 
[ 33 ] Cognates deriving from the Proto-Indo-European First Priest *Manu (' Man ', 'ancestor of humankind') include the Indic Mánu , legendary first man in Hinduism , and Manāvī, his sacrificed wife; the Germanic Mannus (from Germ. *Manwaz ), mythical ancestor of the West Germanic tribes ; and the Persian Manūščihr (from Av. Manūš.čiθra , 'son of Manuš'), Zoroastrian high priest of the 9th century AD. [ 34 ] [ 35 ] From the name of the sacrificed First King *Yemo ('Twin') derive the Indic Yama , god of death and the underworld; the Avestan Yima , king of the golden age and guardian of hell ; the Norse Ymir (from Germ. *Yumiyáz ), ancestor of the giants ( jötnar ); and most likely Remus (from Proto-Latin *Yemos ), killed in the Roman foundation myth by his twin brother Rōmulus . [ 36 ] [ 37 ] [ 38 ] [ 4 ] Latvian jumis ('double fruit'), Latin geminus ('twin') and Middle Irish emuin ('twin') are also linguistically related. [ 36 ] [ 39 ] Cognates stemming from the First Warrior *Trito ('Third') include the Vedic Trita , the hero who recovered the stolen cattle from the serpent Vṛtrá ; the Avestan Thraētona ('son of Thrita'), who won back the abducted women from the serpent Aži Dahāka ; and the Norse þriði ('Third'), one of the names of Óðinn . [ 41 ] [ 33 ] [ 22 ] Other cognates may appear in the Greek expressions trítos sōtḗr (τρίτος σωτήρ; 'Third Saviour'), an epithet of Zeus , and tritogḗneia (τριτογήνεια; 'Third born' or 'born of Zeus'), an epithet of Athena ; and perhaps in the Slavic mythical hero Troyan , found in Russian and Serbian legends alike. [ 33 ] [ a ] *Ngʷhi , a term meaning 'serpent', is also related to the Indo-European root for negation ( *ne- ). [ 19 ] [ 4 ] Descendant cognates can be found in the Iranian Aži , the name of the inimical serpent, and in the Indic áhi ('serpent'), a term used to designate the monstrous serpent Vṛtrá , [ 33 ] both descending from Proto-Indo-Iranian *aj'hi .
[ 43 ] Many Indo-European beliefs explain aspects of human anatomy from the results of the original dismemberment of Yemo: his flesh usually becomes the earth, his hair grass, his bones stone, his blood water, his eyes the sun, his mind the moon, his brain the clouds, his breath the wind, and his head the heavens. [ 5 ] The tradition of sacrificing an animal and dispersing its parts following socially established patterns, a custom found in Ancient Rome and India, has been interpreted as an attempt to restore the balance of the cosmos ruled by the original sacrifice. [ 5 ] In the Indo-Iranian version of the myth, his brother Manu also sacrifices the cow, and from the parts of the dead animal are born the other living species and vegetables. In the European reflexes, however, the cow (represented by a she-wolf in the Roman myth) serves only as a provider of milk and care for the twins before the creation. [ 14 ] This divergence may be explained by the cultural differences between the Indo-Iranian and European branches of the Indo-European family, with the former still strongly influenced by pastoralism , and the latter much more agricultural, perceiving the cow mainly as a source of milk. [ 45 ] According to Lincoln, the Indo-Iranian version best preserves the ancestral motif, since its tellers lived closer to the original Proto-Indo-European pastoral way of life. [ 15 ] Mánu ('Man, human') appears in the Rigveda as the first sacrificer and the founder of religious law, the Law of Mánu . [ 46 ] [ 47 ] He is the brother (or half-brother) of Yama ('Twin'), both presented as the sons of the solar deity Vivasvat . The association of Mánu with the ritual of sacrifice is so strong that those who do not sacrifice are named amanuṣāḥ , which means 'not belonging to Mánu', 'unlike Mánu', or 'inhuman'.
[ 48 ] The Song of Puruṣa (another word meaning 'man') tells how the body parts of the sacrificed primeval man led to the creation of the cosmos (the heaven from his head, the air from his navel, the earth from his legs) and the Hindu castes (the upper parts becoming the upper castes and the lower parts the commoners). [ 49 ] [ 47 ] [ 13 ] In the later Śatapatha Brāhmana , both a primordial bull and Mánu's wife Manāvī are sacrificed by the Asuras (demi-gods). According to Lincoln, this could represent an independent variant of the original myth, with the figure of Yama lying behind that of Manāvī. [ 50 ] After a religious transformation led by Zarathustra around the 7th–6th centuries BC that degraded the status of prior myths and deities, *Manuš was replaced in the Iranian tradition with three different figures: Ahriman , who took his role as first sacrificer; Manūščihr ('son' or 'seed of Manuš'), who replaced him as ancestor of the priestly line; and Zarathustra himself, who took his role as priest par excellence . Manūščihr is described in the Greater Bun-dahišnīh as the ancestor of all Mōpats ('High Priests') of Pars , and it has been proposed that *Manuš was originally regarded as the First Priest instead of Zarathustra by pre- Zoroastrian tribes. [ 51 ] The Indo-Iranian tradition portrays the first mortal man or king, *YamHa, as the son of the solar deity *Hui-(H)uas-uant . [ 13 ] [ 52 ] Invoked in funeral hymns of the Rigveda , Yama is depicted as the first man to die, the one who established the path towards death after he freely chose his own departure from life. [ 53 ] Although his realm was originally associated with feasting, beauty and happiness, Yama was gradually portrayed as a horrific being and the ruler of the Otherworld in the epic and puranic traditions. [ 53 ] Some scholars have equated this abandonment (or transcendence) of his own body with the sacrifice of Puruṣa .
[ 54 ] [ 49 ] In a motif shared with the Iranian tradition, which is touched upon in the Rigveda and told in later traditions, Yama and his twin sister Yamī are presented as the children of the sun-god Vivasvat. Discussing the advisability of incest in a primordial context, Yamī insists on having sexual intercourse with her brother Yama, who rejects it, thus forgoing his role as the creator of humankind. [ 27 ] In pre-Zoroastrian Iran, Yima was seen as the first king and first mortal. The original myth of creation was indeed condemned by Zarathustra , who makes mention of it in the Avesta when talking about the two spirits that "appeared in the beginning as two twins in a dream ... (and) who first met and instituted life and non-life". [ 55 ] Yima in particular is depicted as the first to distribute portions of the cow for consumption, [ 56 ] and is explicitly condemned for having introduced the eating of meat. [ 57 ] After a brief reign on earth, the king Yima was said in a later tradition to be deprived of his triple royal nimbus, which embodied the three social classes in Iranian myths. Mithra receives the part of the Priest, Thraētona that of the Warrior, and Kərəsāspa that of the Commoner. The saga ends with the actual dismemberment of Yima by his own brother, the daiwic figure Spityura. [ 58 ] [ 49 ] [ 56 ] In another myth of the Younger Avesta , the primal man Gayōmart ( Gaya marətan ; 'Mortal Life') and the primeval world ox Gōšūrvan are sacrificed by the destructive spirit Ahriman ( Aŋra Mainyu , 'Evil Spirit'). [ 51 ] From the ox's parts came all the plants and animals, and from Gayōmart's body the minerals and humankind. [ 55 ] [ 56 ] In the Vīdēvdāt , Yima is presented as the builder of an underworld, a sub-terrestrial paradise eventually ruled by Zarathustra and his son.
The story, giving a central position to the new religious leader, is once again probably the result of a Zoroastrian reformation of the original myth, and Yima might have been seen as the ruler of the realm of the dead in the early Iranian tradition. [ 57 ] Norbert Oettinger argues that the story of Yima and the Vara was originally a flood myth, and that the harsh winter was added because of the dry climate of Eastern Iran, where floods carried less weight than harsh winters. He argues that the mention of melted water flowing in Videvdad 2.24 is a remnant of the flood myth , and notes that the Indian flood myths originally had Yama as their protagonist, who was only later replaced by Manu. [ 59 ] Both the Rigveda and the Younger Avesta depict the slaying of a three-headed serpent by a hero named Trita Āptya or Thraēta(ona) Āthwya for the recovery of cattle or women. *Atpya may have been the name of an Indo-Iranian family of heroes. Both heroes are known as the preparers of the Indo-Iranian sacred beverage, the * sauma , which *Trita Atpya probably drank to obtain god-like powers. [ 41 ] [ 33 ] The Greek story of Herakles recovering the stolen cattle from the three-headed monster Geryon is likely related, and a Germanic reflex may be found in the depiction of a three-headed man fighting three serpents while holding a goat on the Golden Horns of Gallehus . [ 21 ] [ 22 ] In the Vedic tradition, Trita Āptya and the god Indra maintain a relationship of mutual assistance, Trita giving soma to the god so that he can, in return, provide help to the hero in his fight against the monster Vṛtrá . [ 60 ] [ 22 ] The hero confronts the three-headed dragon ( áhi- ) and kills him, releasing the cows. Finally, Indra cuts off the three heads of Vṛtrá and drives the cows home for Trita.
[ 33 ] In the Younger Avesta , the stolen cattle are replaced with two beautiful wives ( vantā ), said to have been abducted by the serpent Aži Dahāka and whom the hero Thraētona ('son of Thrita') eventually wins back after confronting the monster. [ 61 ] [ 22 ] [ 33 ] Vantā , which means 'female who is desired', has been compared with Indo-Iranian *dhainu ('one who lactates, gives milk'), a frequent word for 'cow' also used to designate female humans. [ 62 ] Although Thraētona was aided in his quest by several deities, the pre-Zoroastrian warrior-god * Vr̥traghna ('Smasher of Resistance') appears to be the most probable helper-god in the original Iranian myth, since it was the name borrowed as Vahagn in the Armenian version of the story. [ 63 ] The Roman writer Livy relates the murder of Remus by his brother Rōmulus during the legendary founding of Rome , following a disagreement about which hill to build the city on. In a version of the myth, Rōmulus himself is said to have been torn limb from limb by a group of senators for being a tyrant, [ 64 ] which may represent a reflex of the gods who sacrificed the twin giant in the original motif. [ 13 ] As in the Proto-Indo-European myth, the sacrifice of Remus (Yemos) led to a symbolical creation of humankind, represented by the birth of the three Roman 'tribes' (the Ramnes , Luceres and Tities ), and to the enthronement of his brother as the 'First King'. [ 64 ] It is likely that Remus was originally seen as the main protagonist of the Latin myth, since the formula initially went by Remo et Romulo , and his name was often used as an elliptical replacement for the whole couple, as in Remi nepotes ("descendants of Remus"), a poetic name for the Romans. [ 65 ] While the name Rōmulus is interpreted as a back-formation of the city name Rōma , Remus is derived from PIE *Yemo , via an intermediary Proto-Latin form *Yemos or *Yemonos .
[ 66 ] [ 39 ] [ 38 ] The initial 'y' sound may have shifted to 'r' as a result of long and frequent associations with the names Roma and Rōmulus in Latin myths. [ 66 ] [ 39 ] In the legend reported by Livy , Rōmulus and Remus were nurtured as infants by a she-wolf, a motif that parallels the cow nourishing Ymir in the Old Norse version. [ 66 ] Some scholars have proposed that the original motifs of Yemo, the Proto-Indo-European sacrificed twin ancestor and ruler of the dead, were transferred in Greek mythology to three different figures: Kronos , Rhadamanthys and Menelaos . [ 67 ] A possible reflex of the original legend of the Third Man *Trito may be found in a Greek myth told by Hesiod . A three-headed monster named Geryon , the grandson of Medusa (the serpent-haired Gorgon ), is said to have been killed by Herakles to recover stolen cattle. [ 68 ] [ 22 ] [ 33 ] The Greek hero is helped by the sun-god Helios , from whom he borrows the cup that helps him cross the western Ocean and reach the island of Erythea. Together with his herdsman Eurytion and his dog, Herakles finally overcomes the monster and drives the cattle back to Greece. [ 33 ] Roman versions of the myth, which relied on earlier Greek texts, were remodelled around an opposition between Hercules and a fire-breathing ogre named Cācus , who lives in a cave on the Aventine . [ 69 ] [ 70 ] They have nonetheless retained some features of the original three-headed monstrous opponent: Hercules' club, with which he kills Cācus with three strikes, is said to be three-noded; and Hercules runs around the mountain three times after finding the monster's cave, batters the door three times, and sits down to rest three times before finally breaking in. [ 69 ] As in the Iranian and Greek versions, Cācus is portrayed as the one who initially stole the cattle that rightfully belong to the hero, Hercules. [ 68 ] [ 70 ] Ymir is depicted in the Eddas as the primal being and a frost jötunn ('giant').
After Óðinn and his brothers killed him, they made the earth out of his flesh, the mountains from his bones, the trees from his hair, the sky from his skull, and the sea and lakes from his blood; and from his two armpits came a man and a woman. [ 71 ] [ 72 ] [ 11 ] The Germanic name Ymir means 'Twin', and some scholars have proposed that it was also understood as hermaphrodite or bisexual . In fact, one of his legs is said to make love to the other one, fathering a six-headed son, the ancestor of the giants. [ 28 ] [ 11 ] In another Old Norse story, the primeval cow Auðhumla is said to be formed from melting ice like Ymir, and she fed him with her milk. [ 25 ] In his book Germania (ca. 98 AD), Tacitus reports the existence of a myth involving an earth-born god named Tuisto ('Twin') who fathered Mannus ('Man'), the ancestor of West Germanic peoples. [ 73 ] [ 72 ] [ 13 ] Tuisto begot Mannus on his own, and his name is also understood to mean hermaphrodite . [ 13 ] Some scholars have proposed that the Germanic tribal name Alamanni meant ' Mannus ' own people', although 'all-men' remains the most widely accepted etymology among linguists. [ 74 ] A Germanic reflex of the myth of Trito fighting the three-headed serpent *Ngʷhi may be found on the Golden Horns of Gallehus (5th c. AD), where a three-headed man is portrayed as holding a goat and confronting three serpents. [ 21 ] One of the names of Óðinn , Þriði ('Third'), is also linguistically related to *Trito . [ 41 ] [ 75 ] [ 33 ] Another reflex may be found in the Norse legend of the giant Hymir who employed an ox head to capture the serpent Jǫrmungandr with the help of the storm-god Thor . [ 22 ] A possible Celtic reflex of the Proto-Indo-European myth of creation has been proposed in the Irish epic Táin Bó Cúailnge , where two mythical bulls, Donn Cúalnge ('the Dark [bull] of Cooley') and Findbennach Aí ('the White-horned bull of Aí'), fight each other.
The battle ends with the former tearing his opponent limb from limb, creating the Irish landscape out of his body. Donn himself dies shortly after the fight from a broken heart, and thereafter also gives his body to form the island's landscape. [ 76 ] Julius Caesar reported that the Gauls believed in a mythical ancestor whom he compared to Dīs Pater , the Roman god of the underworld . According to some scholars, this could represent a reflex of the original Proto-Indo-European twin ancestor and ruler of the dead *Yemo , a function similar to that held by the Indo-Iranian Yama. [ 77 ] The motif of Manu and Yemo has been influential throughout Eurasia following the Indo-European migrations . The Greek, Old Russian ( Poem on the Dove King ) and Jewish versions depend on the Iranian, and a Chinese version of the myth was introduced from Ancient India. [ 78 ] The Armenian version of the myth of the First Warrior Trito depends on the Iranian, and the Roman reflexes were influenced by earlier Greek versions. [ 79 ] Linguist and comparativist Jaan Puhvel proposed that the characters of "Man" and "Twin" are present in Proto-Latin under the names of Remus (from *Yemo(no)s ) and Romulus . The latter was deified as the god Quirinus , a name he considered to be ultimately derived from *wihₓrós ('man'). [ 80 ] [ 81 ] [ b ] Baltic mythology records a fertility deity Jumis , [ 83 ] whose name means 'pair, double (of fruits)'. [ 84 ] [ 85 ] His name is also considered a cognate of Indo-Iranian Yama , and related to Sanskrit yamala 'in pairs, twice' and Prakrit yamala 'twins'. [ 86 ] Ranko Matasović cites the existence of Jumala as a female counterpart and sister of Jumis in Latvian dainas (folksongs), as another fertility deity, [ 87 ] [ 88 ] and in the same vein, Zmago Smitek mentioned the pair as having "pronounced vegetational characteristics".
[ 89 ] Jumis, whose name can also mean 'double ear of wheat', is also considered a Latvian chthonic deity that lived "beneath the plowed field", [ 90 ] or a vegetation spirit connected to the harvest. [ 91 ] [ 92 ] Following Puhvel's line of argument, Belarusian scholar Siarhiej Sanko attempted to find a Proto-Baltic related pair, possibly named Jumis ("twin") and Viras ("male, hero"). He saw a connection with (quasi-pseudo-)historical Prussian king Widewuto and his brother Bruteno. Related to them is a pair of figures named Wirschaitos [ c ] and Szwaybrutto (Iszwambrato, Schneybrato, Schnejbrato, Snejbrato) which he interprets as "Elder" and "His Brother", respectively. [ 94 ] These latter two would, in turn, be connected to the worship, by the Prussians , of stone statues erected during their expansion in the 12th and 13th centuries. [ 95 ] Later Iranian tradition ( Pahlavi ) attests a brother-sister pair named Jima (Yima) and Jimak (Yimak). [ 96 ] [ 97 ] Yimak, or Jamag, is described as Yima's twin sister in the Bundahishn from Central Iran. [ 89 ] [ 98 ] Yima consorts with his sister Yimak to produce humankind, but is later killed by Azi Dahaka. [ 99 ] The name Yama is attested as a compound in personal names of the historical Persepolis Administrative Archives , such as Yamakka and Yamakšedda (from Old Persian *Yama-xšaita- 'majestic Yama', modern Jamshid ). [ 100 ] Nuristani deity Imra is also considered a reflex of Indo-Iranian Yama. The name Imra is thought to derive from *Yama-raja "King Yama", [ 101 ] [ 102 ] [ 103 ] a name possibly cognate to the Bangani title Jim Raza 'god of the dead'. [ 104 ] He is also known as Mara "Killer, Death". [ 105 ] [ 106 ] This name may have left traces in other Nuristani languages : Waigali Yamrai , [ 107 ] Kalash (Urtsun) imbro , [ 108 ] Ashkun im'ra , Prasun yumr'a and Kati im'ro – all referring to a "creator god". 
[ 109 ] [ 110 ] This deity also acts as the guardian of the gates of hell (located in a subterranean realm), preventing the return to the world of the living, a motif that echoes the role of Yama as the king of the underworld. [ 111 ]
https://en.wikipedia.org/wiki/Indo-European_cosmogony
The indole test is a biochemical test performed on bacterial species to determine the ability of the organism to convert tryptophan into indole . This conversion is performed by a chain of different intracellular enzymes , a system generally referred to as " tryptophanase ". [ citation needed ] Indole is generated by reductive deamination from tryptophan via the intermediate molecule indolepyruvic acid. Tryptophanase catalyzes the deamination reaction, during which the amine (-NH 2 ) group of the tryptophan molecule is removed. Final products of the reaction are indole, pyruvic acid , ammonium (NH 4 + ) and energy. Pyridoxal phosphate is required as a coenzyme . Like many biochemical tests on bacteria, the result of an indole test is indicated by a change in color following a reaction with an added reagent. A pure bacterial culture must be grown in sterile tryptophan or peptone broth for 24–48 hours before performing the test. Following incubation, five drops of Kovac's reagent ( isoamyl alcohol , para-dimethylaminobenzaldehyde , concentrated hydrochloric acid ) are added to the culture broth. A positive result is shown by the presence of a red or reddish-violet color in the surface alcohol layer of the broth. A negative result appears yellow. A variable result, showing an orange color, can also occur. This is due to the presence of skatole , also known as methyl indole or methylated indole, another possible product of tryptophan degradation. The positive red color forms as a result of a series of reactions. The para-dimethylaminobenzaldehyde reacts with indole present in the medium to form a red rosindole dye. The isoamyl alcohol forms a complex with the rosindole dye, which causes it to precipitate . The remaining alcohol and the precipitate then rise to the surface of the medium.
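The color interpretations described above can be summarised in a small lookup table. This is an illustrative sketch only: the color labels, dictionary, and function name are invented for the example and are not part of any laboratory standard.

```python
# Illustrative mapping of the observed color of the surface alcohol layer
# (after adding Kovac's reagent) to an indole-test interpretation.
INDOLE_READINGS = {
    "red": "positive",
    "reddish-violet": "positive",
    "orange": "variable (possible skatole production)",
    "yellow": "negative",
}

def read_indole_test(ring_color: str) -> str:
    """Return the test interpretation for the color of the surface layer."""
    color = ring_color.strip().lower()
    if color not in INDOLE_READINGS:
        raise ValueError(f"unrecognised ring color: {ring_color!r}")
    return INDOLE_READINGS[color]
```

For example, an *E. coli* culture producing a red surface ring would read as positive, while a yellow layer reads as negative.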
A variation on this test using Ehrlich's reagent (using ethyl alcohol in place of isoamyl alcohol, developed by Paul Ehrlich ) is used when performing the test on nonfermenters and anaerobes . Bacteria that test positive for cleaving indole from tryptophan include: Aeromonas hydrophila , Aeromonas punctata , Bacillus alvei , Edwardsiella sp., Escherichia coli , Flavobacterium sp., Haemophilus influenzae , Klebsiella oxytoca , Proteus sp. (not P. mirabilis and P. penneri ), Plesiomonas shigelloides , Pasteurella multocida , Pasteurella pneumotropica , Vibrio sp., and Lactobacillus reuteri . Bacteria which give negative results for the indole test include: Actinobacillus spp., Aeromonas salmonicida , Alcaligenes sp., most Bacillus sp., Bordetella sp., Enterobacter sp., most Haemophilus sp., most Klebsiella sp., Neisseria sp., Mannheimia haemolytica , Pasteurella ureae , Proteus mirabilis , P. penneri , Pseudomonas sp., Salmonella sp., Serratia sp., Yersinia sp., and Rhizobium sp. The Indole test is one of the four tests of the IMViC series, which tests for evidence of an enteric bacterium. The other three tests include: the methyl red test [M], the Voges–Proskauer test [V] and the citrate test [C]. [ 1 ]
https://en.wikipedia.org/wiki/Indole_test
Indolicidin is an antimicrobial peptide isolated from the neutrophil blood cells of cows. The mature peptide is just 13 amino acids long, making it one of the smallest antimicrobial peptides known to be encoded as the primary product of its gene. [ 1 ] Indolicidin is active against bacterial pathogens, but has also been shown to kill fungi and even HIV . [ 2 ]
https://en.wikipedia.org/wiki/Indolicidin
Indolyl-3-acryloylglycine , also known as trans-indolyl-3-acryloylglycine, or IAG for short, is a compound consisting of an indole group attached to an acrylic acid moiety, which is in turn attached to a glycine molecule. This compound has been shown to isomerize when exposed to light. [ 1 ] It is likely a metabolic intermediate in the biosynthesis of tryptophan , [ 2 ] and is synthesized from tryptophan via indolepropionic acid and indoleacrylic acid (IAcrA). It is also likely that IAcrA is converted into IAG in the gut wall. [ 3 ] It may also be produced by certain elements of the mammalian gut microbiota by phenylalanine ammonia-lyase . [ 4 ] Identifiable in the urine by high-performance liquid chromatography , it may be a biomarker for autism spectrum disorders , as demonstrated by the research of Paul Shattock [ 5 ] [ 6 ] [ 7 ] and other researchers from Australia. [ 8 ] These researchers have reported that urinary levels of IAG are much higher in autistic children than in controls; however, other researchers have found no association between IAG concentrations in the urine and autism. [ 9 ] Its excretion in the urine may also be changed in Hartnup disease and celiac disease , [ 10 ] as well as photodermatosis , muscular dystrophy , and liver cirrhosis . [ 11 ]
https://en.wikipedia.org/wiki/Indolyl-3-acryloylglycine
Indolylpropylaminopentane ( IPAP ), also known as α,N-dipropyltryptamine ( α,N-DPT ), is a monoaminergic activity enhancer (MAE) that is closely related to benzofuranylpropylaminopentane (BPAP) and phenylpropylaminopentane (PPAP). [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] It is a tryptamine derivative and the corresponding analogue of PPAP and BPAP with an indole ring instead of a benzene ring or benzofuran ring, respectively. [ 3 ] IPAP is also a positional isomer of N,N-dipropyltryptamine ( N,N-DPT). [ 6 ] MAEs are agents that enhance the action potential -mediated release of monoamine neurotransmitters . [ 7 ] [ 8 ] [ 9 ] IPAP is an MAE of serotonin , norepinephrine , and dopamine . [ 3 ] [ 2 ] However, IPAP acts preferentially as an MAE of serotonin and is about 10-fold more potent in enhancing serotonin than in enhancing norepinephrine or dopamine. [ 3 ] [ 2 ] This is in contrast to BPAP, which is of similar potency as an MAE of serotonin and the catecholamines . [ 3 ] [ 2 ] It is also in contrast to PPAP and selegiline , which act exclusively as catecholaminergic activity enhancers (CAEs) and do not enhance serotonin. [ 7 ] [ 8 ] [ 9 ] Hence, IPAP is a representative selective serotonergic activity enhancer (SAE) at lower doses. [ 3 ] [ 2 ] IPAP is more potent as an MAE than PPAP and selegiline but is less potent than BPAP. [ 3 ] [ 2 ] As with BPAP and PPAP, the negative enantiomer (i.e., R(–)-IPAP) is more biologically active as an MAE and is often the employed compound. [ 3 ] The effects of MAEs appear to be mediated by intracellular TAAR1 agonism coupled with uptake by monoamine transporters into monoaminergic neurons . [ 3 ] [ 4 ] In contrast to amphetamines , IPAP has no classical monoamine releasing agent actions. [ 1 ] [ 3 ] It is a weak MAO-A inhibitor , similarly to BPAP. [ 1 ] [ 2 ] [ 5 ] IPAP was first described in the scientific literature in 2001, following BPAP in 1999. [ 2 ] [ 8 ] [ 10 ] It was discovered by József Knoll and colleagues.
[ 2 ] [ 8 ] [ 10 ]
https://en.wikipedia.org/wiki/Indolylpropylaminopentane
An indoor antenna is a type of radio or TV antenna placed indoors, as opposed to being mounted on the roof. Indoor antennas are a common solution for cord cutting , with a variety of commercial options. [ 1 ] They are usually considered a simple and cheap solution that may work well when the receiver is relatively near the broadcasting transmitter and the building walls do not shield the radio waves too much. [ citation needed ] Because it sits close to other electric or electronic equipment in the building, an indoor antenna is prone to picking up electrical noise that may interfere with clear (analog) reception. [ 2 ] For digital broadcasts , noise is less of a factor than for analog broadcasts , which has recently made this type of antenna a more popular solution. [ citation needed ] Indoor antennas are used for radio reception, particularly the folded dipole constructed from twin-lead, [ 3 ] which can be nailed to a skirting board .
https://en.wikipedia.org/wiki/Indoor_antenna
Indoor tanning involves using a device that emits ultraviolet radiation to produce a cosmetic tan . [ a ] Typically found in tanning salons, gyms, spas, hotels, and sporting facilities, and less often in private residences, the most common device is a horizontal tanning bed, also known as a sunbed or solarium. Vertical devices are known as tanning booths or stand-up sunbeds. First introduced in the 1960s, indoor tanning became popular with people in the Western world , particularly in Scandinavia , in the late 1970s. [ 2 ] The practice finds a cultural parallel in skin whitening in Asian countries, and both support multibillion-dollar industries. [ 3 ] Most indoor tanners are women, 16–25 years old, who want to improve their appearance or mood, acquire a pre-holiday tan, or treat a skin condition. [ 4 ] Since the connection between indoor tanning and skin cancer was confirmed, the number and use of indoor tanning facilities have declined, and many countries have either banned the practice outright or banned its use by people under the age of 18. Ultraviolet radiation (UVR) is part of the electromagnetic spectrum , just beyond visible light . Ultraviolet wavelengths are 100 to 400 nanometres (nm, billionths of a metre) and are divided into three bands: A, B and C. UVA wavelengths are the longest, 315 to 400 nm; UVB are 280 to 315 nm; and UVC wavelengths are the shortest, 100 to 280 nm. [ 5 ] [ 6 ] [ b ] About 95% of the UVR that reaches the earth from the sun is UVA and 5% UVB; no appreciable UVC reaches the earth. While tanning systems before the 1970s produced some UVC, modern tanning devices produce no UVC, a small amount of UVB and mostly UVA. [ 7 ] [ 8 ] Classified by the WHO as a group 1 carcinogen, [ 9 ] UV radiation has "complex and mixed effects on human health". While it causes skin cancer and other damage, including skin aging and creases such as wrinkles , it also triggers the synthesis of vitamin D and endorphins in the skin.
[ 6 ] In 1890 the Danish physician Niels Ryberg Finsen developed a carbon arc lamp ("Finsen's light" or a "Finsen lamp") that produced ultraviolet radiation for use in skin therapy, including to treat lupus vulgaris . [ 10 ] He won the 1903 Nobel Prize in Physiology or Medicine for his work. [ 11 ] [ 12 ] Until the 20th century in Europe and the United States, pale skin was a symbol of high social class among white people. Victorian women would carry parasols and wear wide-brimmed hats and gloves; their homes featured heavy curtains that kept out the sun. But as the working classes moved from country work to city factories, and to crowded, dark, unsanitary homes, pale skin became increasingly associated with poverty and ill health. [ 13 ] In 1923 Coco Chanel returned from a holiday in Cannes with a tan, later telling Vogue magazine: "A golden tan is the index of chic!" Tanned skin had become a fashion accessory. [ 14 ] [ 15 ] [ 16 ] In parallel physicians began advising their patients on the benefits of the "sun cure", citing its antiseptic properties. Sunshine was promoted as a treatment for depression, diabetes, constipation, pneumonia, high and low blood pressure, and many other ailments. [ 17 ] Home-tanning equipment was introduced in the 1920s in the form of "sunlamps" or "health lamps", UV lamps that emitted a large percentage of UVB, leading to burns. [ 18 ] Friedrich Wolff , a German scientist, began using UV light on athletes, and developed beds that emitted 95% UVA and 5% UVB, which reduced the likelihood of burning. The world's first tanning salon opened in 1977 in Berlin, [ 19 ] followed by tanning salons in Europe and North America in the late 1970s. [ 20 ] In 1978 Wolff's devices began selling in the United States, and the indoor tanning industry was born. [ 21 ] [ 22 ] Tanning lamps, also known as tanning bulbs or tanning tubes, produce the ultraviolet light in tanning devices. The performance (or output) varies widely between brands and styles. 
Most are low-pressure fluorescent tubes, but high-pressure bulbs also exist. The electronics systems and number of lamps affect performance, but to a lesser degree than the lamp itself. Tanning lamps are regulated separately from tanning beds in most countries, as they are the consumable portion of the system. Most tanning beds are horizontal enclosures with a bench and canopy (lid) that house long, low-pressure fluorescent bulbs (100–200 watt) under an acrylic surface. The tanner is surrounded by bulbs when the canopy is closed. Modern tanning beds emit mostly UVA (the sun emits around 95% UVA and 5% UVB). [ 23 ] One review of studies found that the UVB irradiance of beds was on average lower than the summer sun at latitudes 37°S to 35°N , but that UVA irradiance was on average much higher. [ 24 ] The user sets a timer (or it is set remotely by the salon operator), lies on the bed and pulls down the canopy. The maximum exposure time for most low-pressure beds is 15–20 minutes. In the US , maximum times are set by the manufacturer according to how long it takes to produce four "minimal erythema doses" (MEDs), an upper limit laid down by the FDA . [ 25 ] An MED is the amount of UV radiation that will produce erythema (redness of the skin) within a few hours of exposure. [ 26 ] High-pressure beds use smaller, higher-wattage quartz bulbs and emit a higher percentage of UVA. [ 27 ] They may emit 10–15 times more UVA than the midday sun, [ 20 ] and have a shorter maximum exposure time (typically 10–12 minutes). UVA gives an immediate, short-term tan by bronzing melanin in the skin, but no new melanin is formed. UVB has no immediate bronzing effect, but with a delay of 72 hours makes the skin produce new melanin, leading to tans of longer duration. UVA is less likely to cause burning or dry skin than UVB but is associated with wrinkling and loss of elasticity because it penetrates deeper. 
[ 27 ] Commercial tanning beds cost $6,000 to $30,000 as of 2006 [update] , with high-pressure beds at the high end. [ 28 ] Tanning booths (also known as stand-up sunbeds) are vertical enclosures; the tanner stands during exposure, hanging onto straps or handrails, and is surrounded by tanning bulbs. In most models, the tanner closes a door, but there are open designs too. Some booths use the same electronics and lamps as tanning beds, but most have more lamps and are likely to use 100–160 watt lamps. They often have a maximum session of 7–15 minutes. There are other technical differences, or degrees of intensity, but for all practical intents, their function and safety are the same as a horizontal bed. Booths have a smaller footprint, which some commercial operators find useful. Some tanners prefer booths out of concern for hygiene, since the only shared surface is the floor. Eye protection for indoor tanning, either in the form of goggles , or disposable eye protection [ 29 ] must be worn to avoid eye damage. [ 30 ] In one 2004 study, tanners said they avoided using indoor tanning eye protection at times to prevent leaving the appearance of pale skin around the eyes. [ 31 ] Indoor tanning is most popular with white females, 16–25 years old, with low-to-moderate skin sensitivity, who know other tanners. [ 32 ] Studies seeking to link indoor tanning to education level and income have returned inconsistent results. Prevalence was highest in one German study among those with a moderate level of education (neither high nor low). [ 33 ] The late teens to early–mid 20s is the highest-prevalence age group. [ 33 ] In a national survey of white teenagers in 2003 in the US (aged 13–19), 24% had used a tanning facility. [ 34 ] Indoor-tanning prevalence figures in the US vary from 30 million each year to just under 10 million (7.8 million women and 1.9 million men). 
[ c ] The figures in the US are in decline: according to the Centers for Disease Control and Prevention , usage in the 18–29 age group fell from 11.3 percent in 2010 to 8.6 percent in 2013, perhaps attributable in part to a 10% " tanning tax " introduced in 2010. [ 35 ] Attitudes toward tanning vary across states; in one study, doctors in the Northeast and Midwest of the country were more likely than those in the South or West to recommend tanning beds to treat vitamin D deficiency and depression . [ 36 ] Tanning bed use is more prevalent in northern countries. [ 36 ] In Sweden in 2001, 44% said they had used one (in a survey of 1,752 men and women aged 18–37). Use increased in Denmark between 1994 and 2002 from 35% to 50% (reported use in the previous two years). In Germany, between 29% and 47% had used one, and one survey found that 21% had done so in the previous year. In France, 15% of adults in 1994–1995 had tanned indoors; the practice was more common in the north of France. [ 37 ] In 2006, 12% of grade 9–10 students in Canada had used a tanning bed in the last year. [ 38 ] In 2004, 7% of 8–11-year-olds in Scotland said they had used one. [ 39 ] In the UK, tanning bed use is higher in the north of England. [ 37 ] One study found that the prevalence was lower in London than in less urban areas of the country. [ 36 ] Tanning facilities are ubiquitous in the US, although the figures are in decline. In a study in the US published in 2002, there was a higher density in colder areas with a lower median income and a higher proportion of whites. [ 40 ] A study in 1997 found an average of 50.3 indoor-tanning facilities in 20 US cities (13.89 facilities for every 100,000 residents); the highest was 134 in Minneapolis , MN, and the lowest was four in Honolulu , Hawaii. In 2006 a study of 116 cities in the US found 41.8 facilities on average, a higher density than either Starbucks or McDonald's .
[ 41 ] Of the country's 125 top colleges and universities in 2014, 12% had indoor-tanning facilities on campus and 42.4% in off-campus housing, 96% of the latter free of charge to the tenants. [ 42 ] There are fewer professional salons than tanning facilities; the latter includes tanning beds in gyms, spas and similar. According to the FDA, citing the Indoor Tanning Association, there were 25,000 tanning salons in 2010 in the US (population 308.7 million in 2010). [ d ] [ 43 ] Mailing-list data suggest there were 18,200 in September 2008 and 12,200 in September 2015, a decline of 30 percent. According to Chris Sternberg of the American Suntanning Association, the figures are 18,000 in 2009 and 9,500 in 2016. [ 44 ] The South West Public Health Observatory found 5,350 tanning salons in the UK in 2009: 4,492 in England (population 52.6 million in 2010), 484 in Scotland (5.3 million), 203 in Wales (3 million) and 171 in Northern Ireland (1.8 million). [ 45 ] [ 46 ] Reasons cited for indoor tanning include improving appearance, acquiring a pre-holiday tan, feeling good and treating a skin condition. [ 4 ] Tanners often cite feelings of well-being; exposure to tanning beds is reported to "increase serum beta-endorphin levels by 44%". Beta-endorphin is associated with feelings of relaxation and euphoria , including " runner's high ". [ 11 ] Improving appearance is the most-cited reason. Studies show that tanned skin has semiotic power, signifying health, beauty, youth and the ability to seduce. [ 47 ] Women, in particular, say not only that they prefer their appearance with tanned skin, but that they receive the same message from friends and family, especially from other women. They believe tanned skin makes them look thinner and more toned, and that it covers or heals skin blemishes such as acne . Other reasons include acquiring a base tan for further sunbathing; that a uniform tan is easier to achieve in a tanning unit than in the sun; and a desire to avoid tan lines . 
[ 48 ] [ 49 ] Proponents of indoor tanning say that tanning beds deliver more consistent, predictable exposure than the sun, but studies show that indoor tanners do suffer burns. In two surveys in the US in 1998 and 2004, 58% of indoor tanners said they had been burned during sessions. [ 50 ] [ 51 ] Vitamin D is produced when the skin is exposed to UVB, whether from sunlight or an artificial source. [ e ] It is needed for mineralization of bone and bone growth. Exposing arms and legs to a minimal 0.5 erythemal (mild sunburn) UVB dose is equal to consuming about 3000 IU of vitamin D3 . Adults who used tanning beds weekly had higher blood concentrations of 25(OH)D along with higher hip bone density compared to adults who did not use them. [ 53 ] Obtaining vitamin D from indoor tanning has to be weighed against the risk of developing skin cancer. [ 52 ] The indoor-tanning industry has stressed the relationship between tanning and the production of vitamin D. [ 6 ] According to the US National Institutes of Health , some researchers have suggested that "5–30 minutes of sun exposure between 10 AM and 3 PM at least twice a week to the face, arms, legs, or back without sunscreen usually lead to sufficient vitamin D synthesis and that the moderate use of commercial tanning beds that emit 2%–6% UVB radiation is also effective". [ 52 ] [ 54 ] Most researchers say the health risks outweigh the benefits, that the UVB doses produced by tanning beds exceed what is needed for adequate vitamin D production, and that adequate vitamin D levels can be achieved by taking supplements and eating fortified foods . [ 6 ] [ 55 ] [ 56 ] Certain skin conditions, including keratosis , psoriasis , eczema and acne , may be treated with UVB light therapy, including by using tanning beds in commercial salons. Using tanning beds allows patients to access UV exposure when dermatologist-provided phototherapy is not available. 
A systematic review of studies, published in Dermatology and Therapy in 2015, noted that moderate sunlight is a treatment recommended by the American National Psoriasis Foundation , and suggested that clinicians consider UV phototherapy and tanning beds as a source of that therapy. [ 57 ] Physicians have recommended tanning devices to treat skin conditions. [ 57 ] [ 58 ] When UV light therapy is used in combination with psoralen , an oral or topical medication, the combined therapy is referred to as PUVA . [ 59 ] [ 60 ] A concern with the use of commercial tanning is that beds that primarily emit UVA may not treat psoriasis effectively. One study found that plaque psoriasis is responsive to erythemogenic doses of either UVA or UVB, although more energy is required to reach erythemogenic dosing with UVA. [ 57 ] Exposure to ultraviolet radiation (UVR), whether from outdoor tanning under the sun or indoor tanning using tanning devices, is known to be a major cause of the three main types of skin cancer : non-melanoma skin cancer ( basal cell carcinoma and squamous cell carcinoma ) and melanoma . [ 61 ] [ 62 ] [ 63 ] Overexposure to UVR induces at least two types of DNA damage: cyclobutane–pyrimidine dimers (CPDs) and 6–4 photoproducts (6–4PPs). DNA repair enzymes can fix some of this damage, but if they are not sufficiently effective, a cell will acquire genetic mutations that may lead to cancer, aging or cell death. For example, squamous cell carcinoma can be caused by a UVB-induced mutation in the p53 gene . [ 64 ] Non-melanoma skin cancer includes squamous cell carcinoma (SCC) and basal cell carcinoma (BCC) and is more common than melanoma. With early detection and treatment, it is typically not life-threatening. [ 65 ] [ 66 ] Prevalence increases with age, cumulative exposure to UV, and proximity to the equator .
It is most prevalent in Australia, where the rate is 1,000 in 100,000 and where, as of 2000, it represented 75 percent of all cancers. [ 67 ] Melanoma accounts for approximately one percent of skin cancers, but causes most skin cancer-related deaths. [ 68 ] The average age of diagnosis is 63, [ 69 ] and it is the most common cancer in the 25–29 age group and the second most common in the 15–29 age group, which may be due in part to the increased sunlight UV exposure and use of indoor tanning observed in these populations. [ 70 ] [ 71 ] [ 72 ] In the United States, the melanoma incidence rate was 22.3 per 100,000, based on 2010–2014 data from the National Institutes of Health Surveillance, Epidemiology and End Results (SEER) Program, and the death rate was 2.7 per 100,000. [ 73 ] An estimated 9,730 people were expected to die of melanoma in the United States in 2017, and these numbers are anticipated to continue rising. [ 73 ] [ 74 ] [ f ] Although 91.7% of patients diagnosed with melanoma survive beyond 5 years, advanced melanoma is largely incurable, and only 19.9 percent of patients with metastatic disease survive beyond 5 years. [ 73 ] A meta-analysis of US, European and Australian data on tanning bed use and skin cancer estimated that annually, 450,000 cases of non-melanoma skin cancer and more than 10,000 cases of melanoma can be attributed to exposure to indoor tanning. [ 75 ] The age at which someone begins indoor tanning has a known impact on the later risk of developing cancers. A 2012 analysis of epidemiological studies found a 20 percent increase in the risk of melanoma (a relative risk of 1.20) among those who had ever used a tanning device compared to those who had not, and a 59 percent increase (a relative risk of 1.59) among those who had used one before age 35. [ 61 ] Additionally, a 2014 systematic review and meta-analysis found that indoor tanners had a 16 percent increased risk of developing melanoma, which increased to 23 percent for North Americans.
For those who started tanning indoors before age 25, their risk further increased to 35% compared to those who began after age 25. [ 76 ] Children and adolescents who use tanning beds are at greater risk because of biological vulnerability to UV radiation. Epidemiological studies have shown that exposure to artificial tanning increases the risk of malignant melanoma and that the longer the exposure, the greater the risk, particularly in individuals exposed before the age of 30 or who have been sunburned. [ 20 ] [ 77 ] One study conducted among college students found that awareness of the risks of tanning beds did not deter the students from using them. [ 78 ] Teenagers are frequent targets of tanning industry marketing, which includes offers of coupons and placing ads in high-school newspapers. [ 79 ] Members of the United States House Committee on Energy and Commerce commissioned a "sting" operation in 2012, in which callers posing as a 16-year-old girl who wanted to tan for the first time called 300 tanning salons in the US. Staff reportedly failed to follow FDA recommendations, denied the risks of tanning, and offered misleading information about benefits. [ 20 ] Developing a dependence on indoor tanning has been recognized as a psychiatric disorder. The disorder is characterized as excessive indoor tanning that causes the subject personal distress; it has been associated with anxiety , eating disorders and smoking . [ 20 ] [ 80 ] The media has described the disorder as tanorexia . [ 81 ] According to the Canadian Pediatric Society , "repeated UVR exposures, and the use of indoor tanning beds specifically, may have important systemic and behavioral consequences, including mood changes, compulsive disorders, pain and physical dependency." [ 82 ] Researchers at the Yale School of Public Health found evidence of dependence on tanning in a 2017 paper. 
[ 83 ] Exposure to UV radiation is associated with skin aging , wrinkle production, liver spots , loss of skin elasticity, erythema (reddening of the skin), sunburn, photokeratitis (snow blindness), [ 84 ] ocular melanoma (eye cancer), [ 9 ] and infections. [ 82 ] Tanning beds can contain many microbes , some of which are pathogens that can cause skin infections and gastric distress. In one study in New York in 2009, the most common pathogens found on tanning beds were Pseudomonas spp. ( aeruginosa and putida ), Bacillus spp., Klebsiella pneumoniae , Enterococcus species, Staphylococcus aureus , and Enterobacter cloacae . [ 85 ] Several prescription and over-the-counter drugs, including antidepressants , antibiotics , antifungals and anti-diabetic medication , can cause photosensitivity , which makes burning the skin while tanning more likely. This risk is increased by a lack of staff training in tanning facilities. [ 86 ] Since 1997, several countries and US states have banned persons under 18 years old from indoor tanning. [ 87 ] In Australia, commercial tanning services are banned in all states except the Northern Territory , where no salons are in operation. [ 88 ] Private ownership of tanning beds is permitted. [ 89 ] The commercial use of tanning beds was banned entirely in Australia in 2015. [ 88 ] Brazil's National Health Surveillance Agency banned the use of tanning beds for cosmetic purposes in 2009, making that country the first to enact such a ban. [ 90 ] It followed a 2002 ban on minors using the beds. [ 87 ] In Canada, indoor tanning is prohibited for under-18s in British Columbia, [ 91 ] Alberta, [ 92 ] Manitoba, [ 93 ] Saskatchewan, [ 94 ] Ontario, [ 95 ] Quebec, [ 96 ] [ 97 ] and Prince Edward Island; [ 98 ] and for under-19s in New Brunswick, [ 99 ] Nova Scotia, [ 100 ] Newfoundland and Labrador, [ 101 ] and the Northwest Territories. [ 102 ] Health Canada recommends against the use of tanning equipment.
[ 103 ] In 1997, France became the first country to ban minors from indoor tanning. Under-18s are similarly prohibited in Austria, Belgium, Germany, Ireland, Portugal, Spain, Norway, Poland and the United Kingdom. [ 104 ] [ 105 ] [ 87 ] [ 106 ] In addition, Ireland prohibits salons from offering " happy hour " discounts. The Netherlands also forbids the use of tanning beds by those under the age of 18. [ 106 ] In New Zealand, indoor tanning is regulated by a voluntary code of practice. Salons are asked to turn away under-18s, those with type 1 skin (fair skin that burns easily or never tans), people who experienced episodes of sunburn as children, and anyone who is taking certain medications, has several moles, or has had skin cancer. Tanners are asked to sign a consent form , which includes health information and advice about the importance of wearing goggles. Surveys have found a high level of non-compliance. [ 107 ] [ 108 ] The government has carried out bi-annual surveys of tanning facilities since 2012. [ 109 ] In the United States, the Food and Drug Administration (FDA) classifies tanning beds as "moderate risk" devices (changed in 2014 from "low risk"). It requires that devices carry a black box warning that they should not be used by individuals under the age of 18. There is no federal ban on indoor use by minors. [ 110 ] As of 1 January 2017 [update] , California , Delaware , the District of Columbia , Hawaii , Illinois , Kansas , Louisiana , Massachusetts , Minnesota , Nevada , New Hampshire , North Carolina , Oregon , Texas , Vermont and Washington have banned the use of tanning beds for minors under the age of 18. Other states strictly regulate indoor tanning under the age of 18, with most banning indoor tanning for persons under the age of 14 unless medically required, and some requiring the consent of a guardian for those aged 14–17. [ 111 ] Injuries caused by tanning devices lead to over 3,000 emergency room cases a year in the United States.
[ g ] In 2010, under the Affordable Care Act , a 10% excise tax on indoor tanning, dubbed a "tanning tax", was introduced; it is added to the fees charged by tanning facilities and was expected to raise $2.7 billion for health care over ten years. [ 113 ] Tanning beds are regulated in the United States by the federal government's Code of Federal Regulations (21 CFR 1040.20). [ 114 ] This is designed to ensure that the devices adhere to a set of safety rules, with the primary focus on sunbed and lamp manufacturers regarding maximum exposure times and product equivalence. Additionally, tanning salons must have a "Recommended Exposure Schedule" posted on the front of the tanning bed and included in the owner's manual, and must list the original lamp that was certified for that particular tanning bed. Salon owners are required to replace the lamps with either exactly the same lamp or a lamp certified by the manufacturer. States control regulations for salons regarding operator training, sanitization of sunbeds and eyewear, and additional warning signs. Many states also ban or regulate the use of tanning beds by minors under the age of 18. [ 111 ] American osteopathic physician Joseph Mercola was prosecuted in 2016 by the Federal Trade Commission (FTC) for selling tanning beds to "reverse your wrinkles" and "slash your risk of cancer". [ 115 ] [ 116 ] The settlement meant that consumers who had purchased the devices were eligible for refunds totaling $5.3 million. [ 116 ] Mercola had falsely claimed that the FDA "endorsed indoor tanning devices as safe", and had failed to disclose that he had paid the Vitamin D Council for its endorsement of his devices. The FTC said that it was deceptive for the defendants to fail to disclose that tanning is not necessary to produce vitamin D. [ 116 ] [ 117 ]
https://en.wikipedia.org/wiki/Indoor_tanning
PT Indosat Tbk , trading as Indosat Ooredoo Hutchison , abbreviated as IOH , is an Indonesian telecommunications provider which has been owned by Ooredoo Hutchison Asia, a joint venture between Ooredoo and Hutchison Asia Telecom Group (a part of CK Hutchison Holdings ), since 2022. [ 2 ] The company offers wireless services for mobile phones and, to a lesser extent, broadband internet lines for homes. Indosat operates its wireless services under two brands: IM3 and Three (3). These brands differ by their payment model (pre-paid vs. post-paid) as well as pricing. Indosat also provides other services such as IDD , fixed telecommunications, and multimedia. [ 3 ] In February 2013, Qtel, a majority stakeholder in Indosat, rebranded itself as Ooredoo. This was followed by a renaming of all its subsidiaries across multiple countries. [ 4 ] As such, Indosat was renamed Indosat Ooredoo on November 19, 2015. [ 5 ] As of Q4 2018 [update] , Indosat had 58 million subscribers, [ 6 ] a sharp decrease from 2017, when the number was reported as 110 million. Its market share was 16.5%, making it the second largest mobile network operator in the country. Its paired spectrum allocations (in MHz) include 1732.5–1742.5 / 1827.5–1837.5, 1742.5–1762.5 / 1837.5–1857.5, 1920–1935 / 2110–2125, and 1935–1945 / 2125–2135. Indosat was established as the first foreign investment company in Indonesia to provide international telecommunication services using an international satellite. It was owned by the American conglomerate ITT (through its subsidiary American Cable and Radio Corporation ) until 1980, when Indosat became the first international company to be acquired and 100% owned by the Indonesian Government. Indosat acquired Satelindo and SLI through majority shareholdings. It also established PT Indosat Multimedia Mobile (IM3) to provide and operate a nationwide GPRS network, a first for the country.
In 2003, Indosat merged with its three subsidiaries—Satelindo, IM3, and Bimagraha—and established itself as a mobile network operator . Indosat was granted a 3G network license and introduced a 3.5G service in Jakarta and Surabaya . In 2009, Qtel bought 24.19% of series B shares from the public and became Indosat's majority shareholder with 65% ownership. It was granted the use of additional 3G frequencies later that same year. Indosat also won the WiMAX bid from the government during this period. In 2014, Indosat launched and commercialized a 4G service at 900 MHz, with a download speed of up to 42 Mbit/s. The service was first rolled out in the major cities, with planned expansions to rural areas. In November 2015, Indosat rebranded itself as Indosat Ooredoo. In 2016, Indosat teamed up with the Swedish music streaming service Spotify to become the first operator to offer Spotify music services in Indonesia. [ 7 ] In January 2021, Indosat announced that it would exit the satellite business. [ 8 ] In September 2021, Indosat announced that it would merge with PT Hutchison 3 Indonesia (which operates the 3 -branded network in Indonesia), a joint venture between Hutchison Asia Telecom Group and Garibaldi Thohir, to form Indosat Ooredoo Hutchison (IOH). The merger was closed on 4 January 2022. Following are Indosat Ooredoo Hutchison shareholders (as of 4 January 2022):
https://en.wikipedia.org/wiki/Indosat
Induced-charge electrokinetics in physics is the electrically driven fluid flow and particle motion in a liquid electrolyte . [ 2 ] Consider a metal particle (which is neutrally charged but electrically conducting) in contact with an aqueous solution in a chamber or channel. If different voltages are applied to the ends of the chamber/channel, an electric field is generated within it. This applied electric field passes through the metal particle and causes the free charges inside the particle to migrate beneath its surface. As a result of this migration, the negative charges move to the side closer to the positive (or higher) voltage, while the positive charges move to the opposite side of the particle. These charges beneath the surface of the conducting particle attract the counter-ions of the aqueous solution; thus, an electric double layer (EDL) forms around the particle. The sign of the EDL on the surface of the conducting particle changes from positive to negative, and the distribution of the charges varies along the particle geometry. Because of these variations, the EDL is non-uniform and has different signs. Thus, the induced zeta potential around the particle, and consequently the slip velocity on the surface of the particle, vary as a function of the local electric field. Differences in the magnitude and direction of the slip velocity on the surface of the conducting particle affect the flow pattern around the particle and cause micro-vortices. Yasaman Daghighi and Dongqing Li were the first to experimentally demonstrate these induced vortices, around a 1.2 mm diameter carbon-steel sphere under a 40 V/cm direct current (DC) external electric field. [ 1 ] Chenhui Peng et al. [ 3 ] also experimentally showed the patterns of electro-osmotic flow around an Au sphere when an alternating current (AC) field is applied (E = 10 mV/μm, f = 1 kHz).
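The dependence of the slip velocity on the local field admits a simple scaling estimate (a standard sketch under the thin-double-layer assumption; the symbols below, such as the permittivity ε and viscosity μ, are not defined in the article): for a conducting sphere of radius a in a uniform field E, the induced zeta potential scales with Ea, and the Helmholtz–Smoluchowski relation then gives a slip velocity quadratic in the field.

```latex
% Induced zeta potential for a conducting sphere of radius a in field E:
\zeta_i \sim E\,a
% Helmholtz--Smoluchowski slip velocity
% (\varepsilon: electrolyte permittivity, \mu: dynamic viscosity):
u_s = -\frac{\varepsilon\,\zeta_i}{\mu}\,E
% Hence the induced-charge electro-osmotic flow scales quadratically
% with the applied field:
u_s \sim \frac{\varepsilon\,a\,E^2}{\mu}
```

The quadratic dependence on E distinguishes induced-charge electro-osmosis from classical fixed-charge electro-osmosis, which is linear in the applied field, and explains why AC fields can drive a steady time-averaged flow.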
Electrokinetics here refers to a branch of science concerned with the motion of charged particles in response to an applied electric field and the effects of that motion on the surrounding fluid. It is sometimes referred to as a non-linear electrokinetic phenomenon. [ citation needed ] Levich was one of the pioneers of the field of induced-charge electrokinetics. [ 2 ] He calculated the perturbed slip profile around a conducting particle in contact with an electrolyte, and theoretically predicted that vortices would be induced around such a particle once an electric field is applied. The size and strength of the induced vortices around a conducting particle are directly related to the applied electric field and to the size of the conducting surface. This has been confirmed experimentally and numerically in several studies. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The vortices grow as the external electric field increases, generating a "sinkhole" [ 1 ] at the center of each vortex while circulating the fluid faster. It has been demonstrated that increasing the size of the conducting surface forms larger induced vortices, as long as the geometry does not limit this growth. The induced vortices have many applications in various aspects of electrokinetic microfluidics. Many micro-mixers have been designed and fabricated based on the induced vortices in microfluidic devices. Such micro-mixers, which are used in biochemical, medical and biological applications, have no mechanical parts and use only conducting surfaces to generate induced vortices that mix different fluid streams. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] This phenomenon is even used to trap micron and submicron particles floating in the flow inside a micro-channel. The method can be used to manipulate, detect, handle and concentrate cells and viruses in the biomedical field, or for colloidal particle assembly.
In addition, the induced vortices around conducting surfaces in a microfluidic system can be used as micro-valves, micro-actuators, micro-motors and micro-regulators to control flow direction and fluid manipulation.
https://en.wikipedia.org/wiki/Induced-charge_electrokinetics
Induced-self antigen is a marker of abnormal self which can be recognized on infected (in particular, virus-infected) and transformed cells. The recognition of "induced self" is therefore an important strategy for the surveillance of infection or tumor transformation; it results in the elimination of the affected cells by activated NK cells or other immunological mechanisms. [ 1 ] Similarly, γδ T cells can recognize induced-self antigens expressed on cells under stress conditions. [ 2 ] Probably the most studied receptor involved in the recognition of induced-self antigens is NKG2D . It is an activating receptor expressed on NK cells and on subsets of T and NKT cells. NKG2D can bind surface proteins that are not normally expressed on most cells but that appear during a cellular stress response (e.g. induction of the DNA damage pathway). Other recognition targets also exist, for example ligands induced on human macrophages by TLR stimulation. [ 3 ] Ligands that bind the NKG2D receptor can be divided into two families of MHC class I-related proteins: MICs ( MICA , MICB ) and ULBPs (ULBP1, ULBP2, ULBP3, ULBP4, RAET1G, RAET1L). [ 4 ] Other receptors able to bind induced-self antigens are NKG2C , NKG2E and NKG2F (CD94), as well as some NCRs (e.g. NKp46 [ 5 ] ). A practical use of the knowledge of induced-self antigens is in targeting tumors for an immune response. As tumors are very often capable of escaping the immune system in many ways, upregulation of specific ligands on tumor cells could trigger effective immune mechanisms able to eliminate these cells. For example, upregulation of NKG2D ligands can stimulate NK cells, triggering cell-mediated cytotoxicity. [ 6 ]
https://en.wikipedia.org/wiki/Induced-self_antigen
Induced cell cycle arrest is the use of a chemical or genetic manipulation to artificially halt progression through the cell cycle . Cellular processes like genome duplication and cell division stop. [ 1 ] It can be temporary or permanent. [ 1 ] It is an artificial activation of naturally occurring cell cycle checkpoints , induced by exogenous stimuli controlled by an experimenter. In an academic research context, cell cycle arrest is typically performed in model organisms and cell extracts, such as Saccharomyces cerevisiae (yeast) or Xenopus oocytes (frog eggs). [ 2 ] [ 3 ] Frog egg cell extracts have been used extensively in cell cycle research because the eggs are relatively large, reaching a diameter of 1 mm, and so contain large amounts of protein, making protein levels more easily measurable. [ 4 ] There are a variety of reasons a researcher may want to temporarily or permanently prevent progress through the cell cycle. In some experiments, a researcher may want to control and synchronize the time when a group of cells progresses to the next phase of the cell cycle. [ 5 ] The cells can be induced to arrest as they arrive (at different time points) at a certain phase, so that when the arrest is lifted (for instance, by rescuing cell cycle progression with another chemical) all the cells resume cell cycle progression at the same time. In addition to acting as a scientific control for when the cells resume the cell cycle, this method can be used to investigate necessity and sufficiency . Another reason synchrony is important is to control for the amount of DNA content, which varies at different parts of the cell cycle depending on whether DNA replication has occurred since the last round of completed mitosis and cytokinesis. [ 6 ] Furthermore, synchronization of large numbers of cells into the same phase allows the collection of groups of cells large enough, and all in the same cycle, for use in other assays, such as western blot and RNA sequencing .
[ 7 ] Researchers may be investigating mechanisms of DNA damage repair . Given that some of the methods below of inducing cell cycle arrest involve damaging the DNA, this allows investigation into how the cell responds to damage of its genetic material. [ 8 ] Genetic engineering of cells with specific gene knockouts can also result in cells that arrest at different phases of the cell cycle. G 1 phase is the first of the four phases of the cell cycle, and is part of interphase . While in G 1 the cell synthesizes messenger RNA (mRNA) and proteins in preparation for subsequent steps of interphase leading to mitosis. In human somatic cells , the cell cycle lasts about 18 hours, and the G 1 phase makes up about 1 / 3 of that time. [ 13 ] On the other hand, in frog, sea urchin , and fruit fly embryos, the G 1 phase is extremely brief, amounting to a slight gap between cytokinesis and S phase. [ 13 ] α-factor is a pheromone secreted by Saccharomyces cerevisiae that arrests yeast cells in G 1 phase. It does so by inhibiting the enzyme adenylate cyclase . [ 2 ] The enzyme catalyzes the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate . [ 14 ] Contact inhibition is a method of arresting cells when neighboring cells come into contact with each other. It results in a single layer of arrested cells, and is a process that is notably missing in cancer cells . The suspected mechanism is dependent on p27 Kip1 , a cyclin-dependent kinase inhibitor . [ 15 ] p27 Kip1 protein levels are elevated in arresting cells. This natural process can be mimicked in a lab through the overexpression of p27 Kip1 , which results in induced cell cycle arrest in G 1 phase. [ 16 ] Mimosine is a plant amino acid that has been shown to reversibly inhibit progression beyond G 1 phase in some human cells, including lymphoblastoid cells .
[ 5 ] Its proposed mechanism of action is as an iron/zinc chelator that depletes iron within the cell. This induces double-strand breaks in the DNA, inhibiting DNA replication. This may involve blocking the action of an iron-dependent ribonucleotide reductase . It may also inhibit transcription of serine hydroxymethyltransferase , which is zinc-dependent. [ 17 ] In cell culture, serum is a component of the growth medium in which the cells are grown and supplies vital nutrients. Serum deprivation - partially or completely removing the serum and its nutrients - has been shown to arrest and synchronize cell cycle progression in G 0 phase , for example in neonatal mammalian astrocytes [ 18 ] and human foreskin fibroblasts . [ 19 ] Amino acid starvation is a similar approach. When grown in a medium lacking certain essential amino acids, such as methionine , some cells arrest in early G 1 phase. [ 5 ] S phase follows G 1 phase via the G 1 /S transition and precedes G 2 phase in interphase, and is the part of the cell cycle in which DNA is replicated. Since accurate duplication of the genome is critical to successful cell division, the processes that occur during S phase are tightly regulated and widely conserved. Pre-replication complexes assembled before S phase are converted into active replication forks . [ 20 ] Driving this conversion are Cdc7 and the S-phase cyclin-dependent kinases , which are both upregulated after the G 1 /S transition. [ 20 ] Aphidicolin is an antibiotic isolated from the fungus Cephalosporium aphidicola . It is a reversible inhibitor of eukaryotic nuclear DNA replication that blocks progression past S phase. Its mechanism is the inhibition of DNA polymerases α and δ . A structural study found that this is thought to occur through binding the active site of polymerase α and "rotating the template guanine," which prevents deoxycytidine triphosphate (dCTP) from binding. [ 21 ] This S phase block induces apoptosis in HeLa cells.
[ 5 ] Hydroxyurea (HU) is a small molecule drug that inhibits the enzyme ribonucleotide reductase (RNR), preventing the catalytic conversion of ribonucleotides to deoxyribonucleotides . It is hypothesized that there is a tyrosyl free radical within RNR that is disabled by HU. [ 6 ] [ 22 ] The free radicals are necessary for the reduction of the ribonucleotides and are scavenged by HU instead. [ 23 ] HU has been shown to arrest cells in both S phase (healthy cells) and immediately before cytokinesis (mutant cells). [ 22 ] 2-[3-(2,3-Dichlorophenoxy)propylamino]ethanol (2,3-DCPE) is a small molecule that induces S phase arrest. [ 24 ] This was demonstrated in cancer cell lines, where it downregulates expression of B-cell lymphoma-extra large ( Bcl-XL ), an anti-apoptotic protein that prevents the release of mitochondrial contents like cytochrome c . G 2 phase is the final part of interphase and directly precedes mitosis. In normal cells it is entered only if DNA replication in S phase has been completed successfully. It is a period of rapid cell growth and protein synthesis during which the cell prepares itself for mitosis. Cyclins are proteins that control progression through the cell cycle by activating cyclin-dependent kinases. Destruction of a cell's endogenous cyclin messenger RNA can arrest frog egg extracts in interphase and prevent them from entering mitosis. [ 3 ] Introduction of exogenous cyclin mRNA is also sufficient to rescue cell cycle progression. [ 3 ] One method of this destruction is the use of antisense oligonucleotides , pieces of RNA that bind to the cyclin mRNA and prevent it from being translated into cyclin protein. [ 25 ] This can be used to destroy phase-specific cyclins beyond just G 2 - for instance, destruction of cyclin D1 mRNA by antisense oligonucleotides prevents progression from G 1 phase to S phase. [ 26 ] Mitosis is the final part of the cell cycle and follows interphase.
It is composed of four phases - prophase , metaphase , anaphase , and telophase - and involves the condensation of the chromosomes in the nucleus , the dissolution of the nuclear envelope , and the separation of sister chromatids by spindle fibers . As mitosis concludes, the spindle fibers disappear and the nuclear membrane reforms around each of the two sets of chromosomes. After successful mitosis, the cell physically splits into two identical daughter cells in a process called cytokinesis , concluding a full round of the cell cycle. Each of these new cells can then potentially re-enter G 1 phase and begin the cell cycle again. [ citation needed ] Nocodazole is a chemical agent that interferes with the polymerization of microtubules. [ 27 ] Cells treated with nocodazole arrest with a G 2 or M phase DNA content, which can be verified with flow cytometry. Microscopy has shown that they do enter mitosis but cannot form the spindles necessary for metaphase because the microtubules cannot polymerize. [ 28 ] Research into the mechanism suggests it may prevent tubulin from forming its α/β heterodimer. [ 29 ] Taxol works in the opposite way to nocodazole, instead stabilizing the microtubule polymer and preventing its disassembly. It also causes M phase arrest, as the spindle that is supposed to pull apart sister chromatids is unable to disassemble. [ 30 ] [ 31 ] It acts through a specific binding site on the microtubule polymer, and as such does not require GTP or other cofactors to induce tubulin polymerization. [ 32 ] Temperature has been shown to regulate HeLa cell cycle progression. Mitosis was found to be the most temperature-sensitive part of the cell cycle. [ 33 ] Pre-cytokinesis mitotic arrest was visible through accumulation of cells in mitosis at below-normal temperatures between 24 and 31 °C (75.2-87.8 °F).
[ 33 ] There are several methods that can be used to verify that cells have been arrested in the proper phase. Flow cytometry is a technique for measuring physical and chemical characteristics of a population of cells using lasers and fluorophore dyes covalently linked to protein markers. [ 34 ] The stronger the signal, the more of a particular protein is present. Staining with the DNA dyes propidium iodide or 4′,6-diamidino-2-phenylindole (DAPI) allows delineation or sorting of cells between G 1 , S, and G 2 /M phases. [ 35 ] Immunoblotting is the detection of specific proteins in a tissue sample or extract. Primary antibodies recognize and bind the protein in question, and secondary antibodies are added that recognize the primary antibodies. The secondary antibody is then visualized through staining or immunofluorescence , allowing indirect detection of the original target protein. Immunoblotting can be performed to detect the presence of cyclins , proteins that regulate the cell cycle. [ 36 ] Different classes of cyclins are up- and down-regulated at different parts of the cell cycle. Measurement of the cyclins from an extract of an arrested cell can determine what phase the cell is in. For example, a peak of cyclin E protein would indicate the G 1 /S transition , a cyclin A peak would indicate late G 2 phase, and a cyclin B peak would indicate mitosis. [ 37 ] FUCCI is a system that takes advantage of cell cycle phase-specific expression of proteins and their degradation by the ubiquitin-proteasome pathway . Two fluorescent probes - Cdt1 and Geminin conjugated to fluorescent proteins - allow real-time visualization of the cell cycle phase a cell is in. [ 38 ]
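As a rough illustration of how DNA-content staining separates phases, the sketch below bins normalized fluorescence values into G1, S, and G2/M. The thresholds and the simple gating rule are hypothetical, chosen only for illustration; real flow cytometry analyses fit histogram models to the data rather than using fixed cutoffs.

```python
# Illustrative sketch (not instrument software): gating cells into cell cycle
# phases from DNA-stain fluorescence. A G2/M cell carries twice the DNA of a
# G1 cell, so its signal sits near twice the G1 peak.

def classify_phase(dna_signal, g1_peak=1.0, tolerance=0.15):
    """Assign a phase from DNA content normalized to the G1 (2N) peak."""
    g2m_peak = 2.0 * g1_peak          # 4N DNA content after replication
    if abs(dna_signal - g1_peak) <= tolerance * g1_peak:
        return "G1"
    if abs(dna_signal - g2m_peak) <= tolerance * g2m_peak:
        return "G2/M"
    if g1_peak < dna_signal < g2m_peak:
        return "S"                    # partially replicated DNA
    return "debris/aggregate"         # outside the expected 2N-4N range

signals = [0.98, 1.5, 2.05, 0.4]
print([classify_phase(s) for s in signals])
# → ['G1', 'S', 'G2/M', 'debris/aggregate']
```

A well-synchronized, arrested population would collapse into a single one of these bins, which is how arrest in the intended phase is verified.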
https://en.wikipedia.org/wiki/Induced_cell_cycle_arrest
Enzyme catalysis is the increase in the rate of a process by an " enzyme ", a biological molecule . Most enzymes are proteins, and most such processes are chemical reactions. Within the enzyme, catalysis generally occurs at a localized site called the active site . Most enzymes are made predominantly of proteins, either a single protein chain or many such chains in a multi-subunit complex . Enzymes often also incorporate non-protein components, such as metal ions or specialized organic molecules known as cofactors (e.g. adenosine triphosphate ). Many cofactors are vitamins, and their role as vitamins is directly linked to their use in the catalysis of biological processes within metabolism. Catalysis of biochemical reactions in the cell is vital since many, but not all, metabolically essential reactions have very low rates when uncatalysed. One driver of protein evolution is the optimization of such catalytic activities, although only the most crucial enzymes operate near catalytic efficiency limits, and many enzymes are far from optimal. Important factors in enzyme catalysis include general acid and base catalysis , orbital steering, entropic restriction, orientation effects (i.e. lock and key catalysis), as well as motional effects involving protein dynamics. [ 1 ] [ 2 ] Mechanisms of enzyme catalysis vary, but are all similar in principle to other types of chemical catalysis in that the crucial factor is a reduction of the energy barrier(s) separating the reactants (or substrates ) from the products. The reduction of activation energy ( E a ) increases the fraction of reactant molecules that can overcome this barrier and form the product. An important principle is that since enzymes only reduce energy barriers between products and reactants, they always catalyze reactions in both directions, and cannot drive a reaction forward or affect the equilibrium position – only the speed with which it is achieved.
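The relationship between barrier height and rate, and the point that a catalyst cannot shift the equilibrium, can be made quantitative with the Arrhenius equation. The 30 kJ/mol barrier reduction below is an assumed example value, not a figure from the text.

```python
import math

R = 8.314  # gas constant, J/(mol·K)
T = 298.0  # temperature, K

def rate_factor(delta_ea_joules):
    """Arrhenius rate enhancement from lowering the barrier by delta_ea."""
    return math.exp(delta_ea_joules / (R * T))

# Hypothetical barrier reduction of 30 kJ/mol, applied by the enzyme to the
# single transition state shared by the forward and reverse reactions.
delta_ea = 30_000.0
forward_speedup = rate_factor(delta_ea)
reverse_speedup = rate_factor(delta_ea)

# K_eq = k_forward / k_reverse: both rate constants scale by the same factor,
# so the equilibrium position is untouched; only the approach to it is faster.
print(f"speedup ~{forward_speedup:.2e}")   # on the order of 1e5 at 298 K
print(forward_speedup == reverse_speedup)  # → True
```

Because the exponent is linear in the barrier reduction, modest free-energy changes translate into very large rate factors, which is why enzymatic accelerations of many orders of magnitude are plausible.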
As with other catalysts, the enzyme is not consumed or changed by the reaction (as a substrate is) but is recycled, such that a single enzyme performs many rounds of catalysis. Enzymes are often highly specific and act on only certain substrates. Some enzymes are absolutely specific, meaning that they act on only one substrate, while others show group specificity and can act on similar but not identical chemical groups, such as the peptide bond in different molecules. Many enzymes have stereochemical specificity and act on one stereoisomer but not another. [ 3 ] The classic model for the enzyme- substrate interaction is the induced fit model. [ 4 ] This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. The advantages of the induced fit mechanism arise from the stabilizing effect of strong enzyme binding. There are two different mechanisms of substrate binding: uniform binding, which has strong substrate binding, and differential binding, which has strong transition state binding. The stabilizing effect of uniform binding increases both substrate and transition state binding affinity, while differential binding increases only transition state binding affinity. Both are used by enzymes and have been evolutionarily selected to minimize the activation energy of the reaction. Enzymes that are saturated, that is, have high-affinity substrate binding, require differential binding to reduce the energy of activation, whereas enzymes whose substrates are largely unbound may use either differential or uniform binding. [ 5 ] These effects have led to most proteins using the differential binding mechanism to reduce the energy of activation, so most substrates have high affinity for the enzyme while in the transition state.
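Saturation of the kind mentioned above is conventionally described by Michaelis-Menten kinetics, which the text does not spell out; the sketch below, with hypothetical Vmax and Km values, simply shows the rate growing nearly linearly at low substrate concentration and plateauing once substrate is far above Km.

```python
def mm_rate(s, vmax=1.0, km=0.5):
    """Michaelis-Menten rate law: v = Vmax*[S] / (Km + [S]).
    vmax and km here are hypothetical illustrative values."""
    return vmax * s / (km + s)

# At low [S] the rate is roughly proportional to [S]; far above Km the
# enzyme is saturated and the rate approaches Vmax asymptotically.
for s in (0.05, 0.5, 5.0, 50.0):
    print(f"[S]={s:>6}: v={mm_rate(s):.3f}")
```

At [S] equal to Km the rate is exactly half of Vmax, which is the usual operational definition of Km.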
Differential binding is carried out by the induced fit mechanism – the substrate first binds weakly, then the enzyme changes conformation, increasing its affinity for the transition state and stabilizing it, so reducing the activation energy to reach it. It is important to clarify, however, that the induced fit concept cannot be used to rationalize catalysis. That is, chemical catalysis is defined as the reduction of E a ‡ (when the system is already in the ES ‡ ) relative to E a ‡ in the uncatalyzed reaction in water (without the enzyme). Induced fit only suggests that the barrier is lower in the closed form of the enzyme but does not tell us what the reason for the barrier reduction is. Induced fit may be beneficial to the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. [ 6 ] These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. After binding takes place, one or more mechanisms of catalysis lowers the energy of the reaction's transition state , by providing an alternative chemical pathway for the reaction. There are six possible mechanisms of "over the barrier" catalysis as well as a "through the barrier" mechanism: Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since a reduction in overall entropy occurs when two reactants become a single product. However, this is a general effect and is also seen in non-addition or transfer reactions, where it occurs due to an increase in the "effective concentration" of the reagents.
This is understood by considering how increases in concentration lead to increases in reaction rate: essentially, when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experience the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality – which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state. However, the situation might be more complex, since modern computational studies have established that traditional examples of proximity effects cannot be related directly to enzyme entropic effects. [ 7 ] [ 8 ] [ 9 ] Also, the original entropic proposal [ 10 ] has been found to largely overestimate the contribution of orientation entropy to catalysis. [ 11 ] Proton donors and acceptors, i.e. acids and bases , may donate and accept protons in order to stabilize developing charges in the transition state. This is related to the overall principle of catalysis, that of reducing energy barriers, since in general transition states are high-energy states, and by stabilizing them this high energy is reduced, lowering the barrier. A key feature of enzyme catalysis over many non-biological catalysts is that both acid and base catalysis can be combined in the same reaction.
In many abiotic systems, acids (large [H+]) or bases (species that consume H+ or donate electron pairs) can increase the rate of the reaction; but of course the environment can only have one overall pH (a measure of acidity or basicity (alkalinity)). However, since enzymes are large molecules, they can position both acidic and basic groups in their active site to interact with their substrates, and employ both modes independently of the bulk pH. [ citation needed ] Often general acid or base catalysis is employed to activate nucleophile and/or electrophile groups, or to stabilize leaving groups. Many amino acids with acidic or basic groups are thus employed in the active site, such as glutamic and aspartic acid, histidine, cysteine, tyrosine, lysine and arginine, as well as serine and threonine. In addition, the peptide backbone, with carbonyl and amide N groups, is often employed. Cysteine and histidine are very commonly involved, since they both have a pKa close to neutral pH and can therefore both accept and donate protons. Many reaction mechanisms involving acid/base catalysis assume a substantially altered pKa. This alteration of pKa is possible through the local environment of the residue. [ citation needed ] pKa can also be influenced significantly by the surrounding environment, to the extent that residues which are basic in solution may act as proton donors, and vice versa. The modification of pKa's is a pure part of the electrostatic mechanism. [ 12 ] The catalytic effect of the above example is mainly associated with the reduction of the pKa of the oxyanion and the increase in the pKa of the histidine, while the proton transfer from the serine to the histidine is not catalyzed significantly, since it is not the rate-determining barrier. [ 13 ] Note that in the example shown, the histidine conjugate acid acts as a general acid catalyst for the subsequent loss of the amine from a tetrahedral intermediate.
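The point that residues with pKa near neutral pH can both accept and donate protons follows from the Henderson-Hasselbalch relation. The sketch below uses typical textbook pKa values for the free residues; as noted above, the active-site environment can shift these substantially.

```python
import math  # kept for clarity; 10.0 ** x suffices here

def fraction_protonated(ph, pka):
    """Henderson-Hasselbalch: fraction of the residue carrying the proton."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Approximate free-residue pKa values (illustrative, not from the article).
for name, pka in [("histidine", 6.0), ("cysteine", 8.3), ("aspartate", 3.9)]:
    print(f"{name:>10} at pH 7: {fraction_protonated(7.0, pka):.1%} protonated")
```

Histidine sits close enough to neutrality that appreciable amounts of both protonated and deprotonated forms coexist at pH 7, which is why it can serve as either a general acid or a general base; aspartate, in contrast, is almost entirely deprotonated in solution unless its environment raises its pKa.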
Evidence supporting this proposed mechanism (Figure 4 in Ref. 13) [ 14 ] has, however, been disputed. [ 15 ] Stabilization of charged transition states can also be achieved by residues in the active site forming ionic bonds (or partial ionic charge interactions) with the intermediate. These bonds can come either from acidic or basic side chains found on amino acids such as lysine , arginine , aspartic acid or glutamic acid , or from metal cofactors such as zinc . Metal ions are particularly effective and can reduce the pKa of water enough to make it an effective nucleophile. Systematic computer simulation studies have established that electrostatic effects give, by far, the largest contribution to catalysis, [ 12 ] and can increase the rate of reaction by a factor of up to 10 7 . [ 16 ] In particular, it has been found that enzymes provide an environment which is more polar than water, and that ionic transition states are stabilized by fixed dipoles. This is very different from transition state stabilization in water, where the water molecules must pay with "reorganization energy" [ 17 ] in order to stabilize ionic and charged states. Thus, catalysis is associated with the fact that the enzyme's polar groups are preorganized. [ 18 ] The magnitude of the electrostatic field exerted by an enzyme's active site has been shown to be highly correlated with the enzyme's catalytic rate enhancement. [ 19 ] Binding of substrate usually excludes water from the active site, thereby lowering the local dielectric constant to that of an organic solvent. This strengthens the electrostatic interactions between the charged/polar substrates and the active sites. In addition, studies have shown that the charge distributions about the active sites are arranged so as to stabilize the transition states of the catalyzed reactions.
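An electrostatic rate enhancement of up to 10^7, as quoted above, can be translated into free-energy terms: by transition state theory, a rate factor k_cat/k_uncat corresponds to a barrier reduction of RT ln(k_cat/k_uncat). The conversion below is a standard calculation, not a value taken from the cited simulations.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol·K)
T = 298.0     # temperature, K

def barrier_reduction(rate_enhancement):
    """Barrier lowering (kJ/mol) implied by a given rate enhancement,
    via the transition-state-theory relation ddG = RT * ln(k_cat/k_uncat)."""
    return R * T * math.log(rate_enhancement)

# A 10^7 electrostatic rate factor corresponds to roughly 40 kJ/mol of
# transition state stabilization at room temperature.
print(f"{barrier_reduction(1e7):.1f} kJ/mol")  # → ~39.9 kJ/mol
```

This magnitude, a few tens of kJ/mol, is comparable to a handful of hydrogen bonds or ion-dipole interactions, consistent with the picture of preorganized polar groups doing the stabilizing.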
In several enzymes, these charge distributions apparently serve to guide polar substrates toward their binding sites so that the rates of these enzymatic reactions are greater than their apparent diffusion-controlled limits [ citation needed ] . Covalent catalysis involves the substrate forming a transient covalent bond with residues in the enzyme active site or with a cofactor. This adds an additional covalent intermediate to the reaction, and helps to reduce the energy of later transition states of the reaction. The covalent bond must, at a later stage in the reaction, be broken to regenerate the enzyme. This mechanism is utilised by the catalytic triad of enzymes such as proteases like chymotrypsin and trypsin , where an acyl-enzyme intermediate is formed. An alternative mechanism is Schiff base formation using the free amine from a lysine residue, as seen in the enzyme aldolase during glycolysis . Some enzymes utilize non-amino acid cofactors such as pyridoxal phosphate (PLP) or thiamine pyrophosphate (TPP) to form covalent intermediates with reactant molecules. [ 20 ] [ 21 ] Such covalent intermediates function to reduce the energy of later transition states, similar to how covalent intermediates formed with active site amino acid residues allow stabilization, but the capabilities of cofactors allow enzymes to carry out reactions that amino acid side chains alone could not. Enzymes utilizing such cofactors include the PLP-dependent enzyme aspartate transaminase and the TPP-dependent enzyme pyruvate dehydrogenase . [ 22 ] [ 23 ] Rather than lowering the activation energy for a reaction pathway, covalent catalysis provides an alternative pathway for the reaction (via the covalent intermediate) and so is distinct from true catalysis. [ 12 ] For example, the energetics of the covalent bond to the serine molecule in chymotrypsin should be compared to the well-understood covalent bond to the nucleophile in the uncatalyzed solution reaction.
A true proposal of covalent catalysis (where the barrier is lower than the corresponding barrier in solution) would require, for example, a partial covalent bond to the transition state by an enzyme group (e.g., a very strong hydrogen bond), and such effects do not contribute significantly to catalysis. A metal ion in the active site participates in catalysis by coordinating charge stabilization and shielding. Because of a metal's positive charge, only negative charges can be stabilized through metal ions. [ 24 ] However, metal ions are advantageous in biological catalysis because they are not affected by changes in pH. [ 25 ] Metal ions can also act to ionize water by acting as a Lewis acid . [ 26 ] Metal ions may also be agents of oxidation and reduction. [ 27 ] Bond strain is the principal effect of induced fit binding, where the affinity of the enzyme for the transition state is greater than for the substrate itself. This induces structural rearrangements which strain substrate bonds into a position closer to the conformation of the transition state, so lowering the energy difference between the substrate and transition state and helping catalyze the reaction. However, the strain effect is, in fact, a ground state destabilization effect, rather than a transition state stabilization effect. [ 12 ] [ 28 ] [ page needed ] Furthermore, enzymes are very flexible and cannot apply large strain effects. [ 29 ] In addition to bond strain in the substrate, bond strain may also be induced within the enzyme itself to activate residues in the active site. These traditional "over the barrier" mechanisms have been challenged in some cases by models and observations of "through the barrier" mechanisms ( quantum tunneling ). Some enzymes operate with kinetics faster than what would be predicted by the classical ΔG ‡ . In "through the barrier" models, a proton or an electron can tunnel through activation barriers.
[ 31 ] [ 32 ] Quantum tunneling for protons has been observed in tryptamine oxidation by aromatic amine dehydrogenase . [ 33 ] Quantum tunneling does not appear to provide a major catalytic advantage, since the tunneling contributions are similar in the catalyzed and the uncatalyzed reactions in solution. [ 32 ] [ 34 ] [ 35 ] [ 36 ] However, the tunneling contribution (typically enhancing rate constants by a factor of ~1000 [ 33 ] compared to the rate of reaction for the classical 'over the barrier' route) is likely crucial to the viability of biological organisms. This emphasizes the general importance of tunneling reactions in biology. In 1971-1972 the first quantum-mechanical model of enzyme catalysis was formulated. [ 37 ] [ 38 ] [ independent source needed ] The binding energy of the enzyme-substrate complex cannot be considered as an external energy necessary for substrate activation. The enzyme of high energy content may first transfer some specific energetic group X 1 from the catalytic site of the enzyme to the final place of the first bound reactant; then another group X 2 from the second bound reactant (or from the second group of the single reactant) must be transferred to the active site to finish substrate conversion to product and enzyme regeneration. [ 39 ] We can present the whole enzymatic reaction as two coupled reactions: It may be seen from reaction ( 1 ) that the group X 1 of the active enzyme appears in the product due to the possibility of an exchange reaction inside the enzyme, avoiding both electrostatic inhibition and repulsion of atoms. So we represent the active enzyme as a powerful reactant of the enzymatic reaction. Reaction ( 2 ) shows incomplete conversion of the substrate because its group X 2 remains inside the enzyme. This idea had been proposed earlier, relying on hypothetical, extremely high enzymatic conversions (a catalytically perfect enzyme).
[ 40 ] The crucial point for the verification of the present approach is that the catalyst must be a complex of the enzyme with the transfer group of the reaction. This chemical aspect is supported by the well-studied mechanisms of several enzymatic reactions. Consider the reaction of peptide bond hydrolysis catalyzed by a pure protein, α-chymotrypsin (an enzyme acting without a cofactor), which is a well-studied member of the serine proteases family. [ 41 ] We present the experimental results for this reaction as two chemical steps: where S 1 is a polypeptide, and P 1 and P 2 are products. The first chemical step ( 3 ) includes the formation of a covalent acyl-enzyme intermediate. The second step ( 4 ) is the deacylation step. It is important to note that the group H+, initially found on the enzyme but not in water, appears in the product before the step of hydrolysis; therefore it may be considered an additional group of the enzymatic reaction. Thus, reaction ( 3 ) shows that the enzyme acts as a powerful reactant of the reaction. According to the proposed concept, the H transport from the enzyme promotes the first reactant conversion, the breakdown of the first initial chemical bond (between groups P 1 and P 2 ). The step of hydrolysis leads to a breakdown of the second chemical bond and regeneration of the enzyme. The proposed chemical mechanism does not depend on the concentration of the substrates or products in the medium. However, a shift in their concentration mainly causes free energy changes in the first and final steps of reactions ( 1 ) and ( 2 ), due to changes in the free energy content of every molecule, whether S or P, in water solution. This approach is in accordance with the following mechanism of muscle contraction. The final step of ATP hydrolysis in skeletal muscle is the product release caused by the association of myosin heads with actin.
[ 42 ] The closing of the actin-binding cleft during the association reaction is structurally coupled with the opening of the nucleotide-binding pocket on the myosin active site. [ 43 ] Notably, the final steps of ATP hydrolysis include the fast release of phosphate and the slow release of ADP. [ 44 ] [ 45 ] The release of a phosphate anion from the bound ADP anion into water solution may be considered an exergonic reaction because the phosphate anion has low molecular mass. Thus, we arrive at the conclusion that the primary release of the inorganic phosphate H 2 PO 4 − leads to transformation of a significant part of the free energy of ATP hydrolysis into the kinetic energy of the solvated phosphate, producing active streaming. This assumption of a local mechano-chemical transduction is in accord with Tirosh's mechanism of muscle contraction, where the muscle force derives from an integrated action of active streaming created by ATP hydrolysis. [ 46 ] [ 47 ] In reality, most enzyme mechanisms involve a combination of several different types of catalysis. Triose phosphate isomerase ( EC 5.3.1.1 ) catalyses the reversible interconversion of the two triose phosphate isomers dihydroxyacetone phosphate and D- glyceraldehyde 3-phosphate . Trypsin ( EC 3.4.21.4 ) is a serine protease that cleaves protein substrates after lysine or arginine residues, using a catalytic triad to perform covalent catalysis and an oxyanion hole to stabilise charge build-up on the transition states . Aldolase ( EC 4.1.2.13 ) catalyses the breakdown of fructose 1,6-bisphosphate (F-1,6-BP) into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate ( DHAP ). The advent of single-molecule studies in the 2010s led to the observation that the movement of untethered enzymes increases with increasing substrate concentration and increasing reaction enthalpy .
[ 48 ] Subsequent observations suggest that this increase in diffusivity is driven by transient displacement of the enzyme's center of mass , resulting in a "recoil effect that propels the enzyme". [ 49 ] Similarity between enzymatic reactions ( EC ) can be calculated by using bond changes, reaction centres or substructure metrics ( EC-BLAST ). [ 50 ]
https://en.wikipedia.org/wiki/Induced_fit
Induced gas flotation ( IGF ) is a water treatment process that clarifies wastewater (or other waters) by removing suspended matter such as oil or solids. The removal is achieved by injecting gas bubbles into the water or wastewater in a flotation tank or basin. The small bubbles adhere to the suspended matter, causing the suspended matter to float to the surface of the water, where it may then be removed by a skimming device. Induced gas flotation is very widely used in treating the industrial wastewater effluents from oil refineries , petrochemical and chemical plants , natural gas processing plants and similar industrial facilities. A very similar process, known as dissolved air flotation , is also used for wastewater treatment. Froth flotation is commonly used in the processing of mineral ores. IGF units in the oil industry do not use air as the flotation medium due to the explosion risk. These IGF units use natural gas or nitrogen to create the bubbles. The feed water to the IGF float tank is often (but not always) dosed with a coagulant (such as ferric chloride or aluminum sulfate) to flocculate the suspended matter. The bubbles may be generated by an impeller, eductors or a sparger. The bubbles adhere to the suspended matter, causing it to float to the surface and form a froth layer, which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the IGF unit. [ 1 ] Some IGF unit designs utilize parallel plate packing material to provide more separation surface and thereby enhance the separation efficiency of the unit.
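The mechanism described above can be illustrated with a small calculation: attached gas bubbles lower the mean density of a particle–bubble agglomerate until it is lighter than water and rises. This is an illustrative sketch, not taken from the article; the densities, volumes, and function names are assumed example values of my own.

```python
# Illustrative sketch of why gas bubbles make suspended matter float.
# All numbers are assumed example values, not from the article.

WATER_DENSITY = 1000.0  # kg/m^3

def agglomerate_density(particle_vol_m3: float, particle_density: float,
                        bubble_vol_m3: float, gas_density: float = 1.2) -> float:
    """Mean density of a solid/oil particle with gas bubbles attached."""
    total_vol = particle_vol_m3 + bubble_vol_m3
    total_mass = particle_vol_m3 * particle_density + bubble_vol_m3 * gas_density
    return total_mass / total_vol

def floats(particle_vol_m3: float, particle_density: float,
           bubble_vol_m3: float) -> bool:
    """A body floats once its mean density falls below that of water."""
    return agglomerate_density(particle_vol_m3, particle_density,
                               bubble_vol_m3) < WATER_DENSITY

# A fine solid slightly denser than water sinks on its own...
print(floats(1e-12, 1100.0, 0.0))    # False
# ...but floats once a modest volume of gas adheres to it.
print(floats(1e-12, 1100.0, 2e-13))  # True (mean density ≈ 917 kg/m^3)
```

The same logic explains why flocculating the suspended matter first (as with the coagulants mentioned above) helps: larger flocs present more surface for bubble attachment.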
https://en.wikipedia.org/wiki/Induced_gas_flotation
Induced pluripotent stem cells (also known as iPS cells or iPSCs ) are a type of pluripotent stem cell that can be generated directly from a somatic cell . The iPSC technology was pioneered by Shinya Yamanaka and Kazutoshi Takahashi in Kyoto , Japan , who together showed in 2006 that the introduction of four specific genes encoding transcription factors (named Myc , Oct3/4 , Sox2 and Klf4 ), collectively known as Yamanaka factors, could convert somatic cells into pluripotent stem cells. [ 1 ] Shinya Yamanaka was awarded the 2012 Nobel Prize in Physiology or Medicine along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent." [ 2 ] Pluripotent stem cells hold promise in the field of regenerative medicine . [ 3 ] Because they can propagate indefinitely, as well as give rise to every other cell type in the body (such as neurons, heart, pancreatic, and liver cells), they represent a single source of cells that could be used to replace those lost to damage or disease. The most well-known type of pluripotent stem cell is the embryonic stem cell . However, since the generation of embryonic stem cells involves destruction (or at least manipulation) [ 4 ] of the pre-implantation stage embryo, there has been much controversy surrounding their use. Patient-matched embryonic stem cell lines can now be derived using somatic cell nuclear transfer (SCNT). Since iPSCs can be derived directly from adult tissues, they not only bypass the need for embryos, but can be made in a patient-matched manner, which means that each individual could have their own pluripotent stem cell line. These unlimited supplies of autologous cells could be used to generate transplants without the risk of immune rejection. While the iPSC technology has not yet advanced to a stage where therapeutic transplants have been deemed safe, iPSCs are already being used in personalized drug discovery efforts and in understanding the patient-specific basis of disease.
[ 5 ] Yamanaka named iPSCs with a lower case "i" due to the popularity of the iPod and other products. [ 6 ] [ 7 ] [ 8 ] [ 9 ] In his Nobel seminar, Yamanaka cited the earlier seminal work of Harold Weintraub on the role of myoblast determination protein 1 (MyoD) in reprogramming cell fate to a muscle lineage as an important precursor to the discovery of iPSCs. [ 10 ] iPSCs are typically derived by introducing products of specific sets of pluripotency-associated genes, or "reprogramming factors", into a given cell type. The original set of reprogramming factors (also dubbed Yamanaka factors) are the transcription factors Oct4 (Pou5f1), Sox2 , Klf4 and cMyc . While this combination is most conventional in producing iPSCs, each of the factors can be functionally replaced by related transcription factors, miRNAs , small molecules, or even non-related genes such as lineage specifiers. [ 11 ] It is also clear that pro-mitotic factors such as C-MYC/L-MYC or repression of cell cycle checkpoints, such as p53, are conduits to creating a compliant cellular state for iPSC reprogramming. [ 12 ] iPSC derivation is typically a slow and inefficient process, taking one–two weeks for mouse cells and three–four weeks for human cells, with efficiencies around 0.01–0.1%. However, considerable advances have been made in improving the efficiency and the time it takes to obtain iPSCs. Upon introduction of reprogramming factors, cells begin to form colonies that resemble pluripotent stem cells, which can be isolated based on their morphology, conditions that select for their growth, or through expression of surface markers or reporter genes . Induced pluripotent stem cells were first generated by Shinya Yamanaka and Kazutoshi Takahashi at Kyoto University , Japan, in 2006. [ 1 ] They hypothesized that genes important to embryonic stem cell (ESC) function might be able to induce an embryonic state in adult cells. 
They chose twenty-four genes previously identified as important in ESCs and used retroviruses to deliver these genes to mouse fibroblasts . The fibroblasts were engineered so that any cells reactivating the ESC-specific gene, Fbx15 , could be isolated using antibiotic selection. Upon delivery of all twenty-four factors, ESC-like colonies emerged that reactivated the Fbx15 reporter and could propagate indefinitely. To identify the genes necessary for reprogramming, the researchers removed one factor at a time from the pool of twenty-four. By this process, they identified four factors, Oct4, Sox2, cMyc, and Klf4, which were each necessary and together sufficient to generate ESC-like colonies under selection for reactivation of Fbx15. In June 2007, three separate research groups, including Yamanaka's own, a Harvard / University of California, Los Angeles collaboration, and a group at MIT , published studies that substantially improved on the reprogramming approach, giving rise to iPSCs that were indistinguishable from ESCs. Unlike the first generation of iPSCs, these second generation iPSCs produced viable chimeric mice and contributed to the mouse germline, thereby achieving the 'gold standard' for pluripotent stem cells. These second-generation iPSCs were derived from mouse fibroblasts by retroviral-mediated expression of the same four transcription factors (Oct4, Sox2, cMyc, Klf4). However, instead of using Fbx15 to select for pluripotent cells, the researchers used Nanog , a gene that is functionally important in ESCs. By using this different strategy, the researchers created iPSCs that were functionally identical to ESCs. [ 13 ] [ 14 ] [ 15 ] [ 16 ] Reprogramming of human cells to iPSCs was reported in November 2007 by two independent research groups: Shinya Yamanaka of Kyoto University, Japan, who pioneered the original iPSC method, and James Thomson of University of Wisconsin-Madison, who was the first to derive human embryonic stem cells.
With the same principle used in mouse reprogramming, Yamanaka's group successfully transformed human fibroblasts into iPSCs with the same four pivotal genes, Oct4, Sox2, Klf4, and cMyc, using a retroviral system, [ 17 ] while Thomson and colleagues used a different set of factors, Oct4, Sox2, Nanog, and Lin28, with a lentiviral system. [ 18 ] Obtaining fibroblasts to produce iPSCs involves a skin biopsy, and there has been a push towards identifying cell types that are more easily accessible. [ 19 ] [ 20 ] In 2008, iPSCs were derived from human keratinocytes, which could be obtained from a single hair pluck. [ 21 ] [ 22 ] In 2010, iPSCs were derived from peripheral blood cells, [ 23 ] [ 24 ] and in 2012, iPSCs were made from renal epithelial cells in the urine. [ 25 ] Other considerations for starting cell type include mutational load (for example, skin cells may harbor more mutations due to UV exposure), [ 19 ] [ 20 ] the time it takes to expand the population of starting cells, [ 19 ] and the ability to differentiate into a given cell type. [ 26 ] The generation of induced pluripotent cells is crucially dependent on the transcription factors used for the induction. Oct-3/4 and certain products of the Sox gene family (Sox1, Sox2, Sox3, and Sox15) have been identified as crucial transcriptional regulators involved in the induction process whose absence makes induction impossible. Additional genes, however, including certain members of the Klf family (Klf1, Klf2, Klf4, and Klf5), the Myc family (c-myc, L-myc, and N-myc), Nanog , and LIN28 , have been found to increase the induction efficiency. Although the methods pioneered by Yamanaka and others have demonstrated that adult cells can be reprogrammed to iPS cells, there are still challenges associated with this technology. The table on the right summarizes the key strategies and techniques used to develop iPS cells in the first five years after Yamanaka et al.'s 2006 breakthrough.
Rows of similar colors represent studies that used similar strategies for reprogramming. One of the main strategies for avoiding problems (1) and (2) has been to use small molecules that can mimic the effects of transcription factors. These compounds can compensate for a reprogramming factor that does not effectively target the genome or fails at reprogramming for another reason; thus they raise reprogramming efficiency. They also avoid the problem of genomic integration, which in some cases contributes to tumorigenesis. Key studies using this strategy were conducted in 2008. Melton et al. studied the effects of the histone deacetylase (HDAC) inhibitor valproic acid. They found that it increased reprogramming efficiency 100-fold (compared to Yamanaka's traditional transcription factor method). [ 42 ] The researchers proposed that this compound was mimicking the signaling that is usually caused by the transcription factor c-Myc. A similar type of compensation mechanism was proposed to mimic the effects of Sox2 . In 2008, Ding et al. used the inhibition of histone methyl transferase (HMT) with BIX-01294 in combination with the activation of calcium channels in the plasma membrane in order to increase reprogramming efficiency. [ 43 ] Deng et al. of Beijing University reported in July 2013 that induced pluripotent stem cells can be created without any genetic modification. They used a cocktail of seven small-molecule compounds including DZNep to induce mouse somatic cells into stem cells, which they called CiPS cells, with an efficiency (0.2%) comparable to that of standard iPSC production techniques. The CiPS cells were introduced into developing mouse embryos and were found to contribute to all major cell types, proving their pluripotency. [ 44 ] [ 45 ] Ding et al. demonstrated an alternative to transcription factor reprogramming through the use of drug-like chemicals.
By studying the mesenchymal-epithelial transition (MET) process, in which fibroblasts are pushed to a stem-cell-like state, Ding's group identified two chemicals – the ALK5 inhibitor SB431542 and the MEK (mitogen-activated protein kinase kinase) inhibitor PD0325901 – which were found to increase the efficiency of the classical genetic method 100-fold. Adding a third compound known to be involved in the cell survival pathway, thiazovivin, further increased the efficiency 200-fold. Using the combination of these three compounds also shortened the reprogramming of human fibroblasts from four weeks to two weeks. [ 46 ] [ 47 ] In April 2009, it was demonstrated that generation of iPS cells is possible without any genetic alteration of the adult cell: repeated treatment of the cells with certain proteins channeled into the cells via poly-arginine anchors was sufficient to induce pluripotency. [ 48 ] The acronym given for those iPSCs is piPSCs (protein-induced pluripotent stem cells). Another key strategy for avoiding problems such as tumorigenesis and low throughput has been to use alternate forms of vectors: adenoviruses , plasmids , and naked DNA or protein compounds. In 2008, Hochedlinger et al. used an adenovirus to transport the requisite four transcription factors into the DNA of skin and liver cells of mice, resulting in cells identical to ESCs. The adenovirus differs from other vectors such as retroviruses because it does not incorporate any of its own genes into the targeted host, and so avoids the potential for insertional mutagenesis. [ 43 ] In 2009, Freed et al. demonstrated successful reprogramming of human fibroblasts to iPS cells. [ 49 ] Another advantage of using adenoviruses is that they need only be present for a brief time in order for effective reprogramming to take place. Also in 2008, Yamanaka et al. found that they could transfer the four necessary genes with a plasmid.
[ 35 ] The Yamanaka group successfully reprogrammed mouse cells by transfection with two plasmid constructs carrying the reprogramming factors; the first plasmid expressed c-Myc, while the second expressed the other three factors ( Oct4 , Klf4 , and Sox2 ). Although the plasmid methods avoid viruses, they still require cancer-promoting genes to accomplish reprogramming. The other main issue with these methods is that they tend to be much less efficient than retroviral methods. Furthermore, transfected plasmids have been shown to integrate into the host genome, and therefore they still pose the risk of insertional mutagenesis. Because non-retroviral approaches have demonstrated such low efficiency levels, researchers have attempted to effectively rescue the technique with what is known as the PiggyBac Transposon System . Several studies have demonstrated that this system can effectively deliver the key reprogramming factors without leaving footprint mutations in the host cell genome. The PiggyBac Transposon System involves the re-excision of exogenous genes, which eliminates the issue of insertional mutagenesis. In January 2014, two articles were published claiming that a type of pluripotent stem cell can be generated by subjecting the cells to certain types of stress (bacterial toxin, a low pH of 5.7, or physical squeezing); the resulting cells were called STAP cells, for stimulus-triggered acquisition of pluripotency . [ 50 ] In light of difficulties that other labs had replicating the results of the surprising study, in March 2014, one of the co-authors called for the articles to be retracted. [ 51 ] On 4 June 2014, the lead author, Haruko Obokata, agreed to retract both papers [ 52 ] after she was found to have committed 'research misconduct', as concluded in an investigation by RIKEN on 1 April 2014. [ 53 ] MicroRNAs are short RNA molecules that bind to complementary sequences on messenger RNA and block expression of a gene.
Measuring variations in microRNA expression in iPS cells can be used to predict their differentiation potential. [ 54 ] Addition of microRNAs can also be used to enhance iPS potential. Several mechanisms have been proposed. [ 54 ] ES cell-specific microRNA molecules (such as miR-291, miR-294 and miR-295) enhance the efficiency of induced pluripotency by acting downstream of c-Myc. [ 55 ] MicroRNAs can also block expression of repressors of Yamanaka's four transcription factors, and additional mechanisms may induce reprogramming even in the absence of added exogenous transcription factors. [ 54 ] Induced pluripotent stem cells are similar to natural pluripotent stem cells, such as embryonic stem cells, in many aspects, such as the expression of certain stem cell genes and proteins, chromatin methylation patterns, doubling time, embryoid body formation, teratoma formation, viable chimera formation, and potency and differentiability, but the full extent of their relation to natural pluripotent stem cells is still being assessed. [ 1 ] Gene expression and genome-wide H3K4me3 and H3K27me3 were found to be extremely similar between ES and iPS cells. [ 56 ] The generated iPSCs were remarkably similar to naturally isolated pluripotent stem cells (such as mouse and human embryonic stem cells, mESCs and hESCs, respectively) in the following respects, thus confirming the identity, authenticity, and pluripotency of iPSCs relative to naturally isolated pluripotent stem cells. The task of producing iPS cells continues to be challenging due to the six problems mentioned above. A key tradeoff to overcome is that between efficiency and genomic integration. Most methods that do not rely on the integration of transgenes are inefficient, while those that do rely on the integration of transgenes face the problems of incomplete reprogramming and tumorigenesis, although a vast number of techniques and methods have been attempted.
Another broad strategy is to perform a proteomic characterization of iPS cells. [ 58 ] Further studies and new strategies should generate optimal solutions to the five main challenges. One approach might attempt to combine the positive attributes of these strategies into an ultimately effective technique for reprogramming cells to iPS cells. Another approach is the use of iPS cells derived from patients to identify therapeutic drugs able to rescue a phenotype. For instance, iPS cell lines derived from patients affected by ectodermal dysplasia syndrome (EEC), in which the p63 gene is mutated, display abnormal epithelial commitment that could be partially rescued by a small compound. [ 67 ] An attractive feature of human iPS cells is the ability to derive them from adult patients to study the cellular basis of human disease. Since iPS cells are self-renewing and pluripotent, they represent a theoretically unlimited source of patient-derived cells which can be turned into any type of cell in the body. This is particularly important because many other types of human cells derived from patients tend to stop growing after a few passages in laboratory culture. iPS cells have been generated for a wide variety of human genetic diseases, including common disorders such as Down syndrome and polycystic kidney disease. [ 68 ] [ 69 ] [ 70 ] In many instances, the patient-derived iPS cells exhibit cellular defects not observed in iPS cells from healthy subjects, providing insight into the pathophysiology of the disease. [ 71 ] [ 72 ] An international collaborative project, StemBANCC, was formed in 2012 to build a collection of iPS cell lines for drug screening for a variety of diseases. Managed by the University of Oxford , the effort pooled funds and resources from 10 pharmaceutical companies and 23 universities. The goal is to generate a library of 1,500 iPS cell lines which will be used in early drug testing by providing a simulated human disease environment.
[ 73 ] Furthermore, combining hiPSC technology and small-molecule or genetically encoded voltage and calcium indicators provided a large-scale and high-throughput platform for cardiovascular drug safety screening. [ 74 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] A proof-of-concept of using induced pluripotent stem cells (iPSCs) to generate a human organ for transplantation was reported by researchers from Japan. Human 'liver buds' (iPSC-LBs) were grown from a mixture of three different kinds of stem cells: hepatocytes (for liver function) coaxed from iPSCs; endothelial stem cells (to form the lining of blood vessels ) from umbilical cord blood ; and mesenchymal stem cells (to form connective tissue ). This new approach allows different cell types to self-organize into a complex organ, mimicking the process in fetal development . After growing in vitro for a few days, the liver buds were transplanted into mice, where the 'liver' quickly connected with the host blood vessels and continued to grow. Most importantly, it performed regular liver functions including metabolizing drugs and producing liver-specific proteins. Further studies will monitor the longevity of the transplanted organ in the host body (ability to integrate or avoid rejection ) and whether it will transform into tumors . [ 79 ] [ 80 ] In 2021, a switchable Yamanaka factor- reprogramming -based approach for regeneration of the damaged heart without tumor formation was demonstrated in mice and was successful if the intervention was carried out immediately before or after a heart attack. [ 81 ] Embryonic cord-blood cells were induced into pluripotent stem cells using plasmid DNA. Using the cell surface endothelial/pericytic markers CD31 and CD146 , researchers identified 'vascular progenitors', high-quality, multipotent vascular stem cells. After the iPS cells were injected directly into the vitreous of the damaged retina of mice, the stem cells engrafted into the retina, grew and repaired the blood vessels .
[ 82 ] [ 83 ] Labelled iPSCs-derived NSCs injected into laboratory animals with brain lesions were shown to migrate to the lesions and some motor function improvement was observed. [ 84 ] Beating cardiac muscle cells, iPSC-derived cardiomyocytes , can be mass-produced using chemically defined differentiation protocols. [ 85 ] [ 86 ] These protocols typically modulate the same developmental signaling pathways required for heart development . [ 87 ] These iPSC-cardiomyocytes can recapitulate genetic arrhythmias and cardiac drug responses, since they exhibit the same genetic background as the patient from which they were derived. [ 88 ] [ 89 ] [ 90 ] [ 91 ] In June 2014, Takara Bio received technology transfer from iHeart Japan, a venture company from Kyoto University's iPS Cell Research Institute, to make it possible to exclusively use technologies and patents that induce differentiation of iPS cells into cardiomyocytes in Asia. The company announced the idea of selling cardiomyocytes to pharmaceutical companies and universities to help develop new drugs for heart disease. [ 92 ] On March 9, 2018, the Specified Regenerative Medicine Committee of Osaka University officially approved the world's first clinical research plan to transplant a "myocardial sheet" made from iPS cells into the heart of patients with severe heart failure. Osaka University announced that it had filed an application with the Ministry of Health, Labor and Welfare on the same day. On May 16, 2018, the clinical research plan was approved by the Ministry of Health, Labor and Welfare's expert group with a condition. [ 93 ] [ 94 ] In October 2019, a group at Okayama University developed a model of ischemic heart disease using cardiomyocytes differentiated from iPS cells. [ 95 ] Although a pint of donated blood contains about two trillion red blood cells and over 107 million blood donations are collected globally, there is still a critical need for blood for transfusion. 
In 2014, type O red blood cells were synthesized at the Scottish National Blood Transfusion Service from iPSCs. The cells were induced to become mesoderm, then blood cells, and then red blood cells. The final step was to make them eject their nuclei and mature properly. Type O can be transfused into all patients. Human clinical trials were not expected to begin before 2016. [ 96 ] The first human clinical trial using autologous iPSCs was approved by the Japanese Ministry of Health and was to be conducted in 2014 at the Riken Center for Developmental Biology in Kobe . However, the trial was suspended after Japan's new regenerative medicine laws came into effect in November 2015. [ 97 ] More specifically, an existing set of guidelines was strengthened to have the force of law (previously mere recommendations). [ 98 ] iPSCs derived from skin cells of six patients with wet age-related macular degeneration were differentiated into retinal pigment epithelial (RPE) cells. The cell sheet would be transplanted into the affected retina where the degenerated RPE tissue was excised. Safety and vision restoration monitoring were to last one to three years. [ 99 ] [ 100 ] In March 2017, a team led by Masayo Takahashi completed the first successful transplant of iPS-derived retinal cells from a donor into the eye of a person with advanced macular degeneration. [ 101 ] However, complications were later reported. [ 102 ] The benefits of using autologous iPSCs are that there is theoretically no risk of rejection and that it eliminates the need to use embryonic stem cells. However, the iPSCs used in this transplant were derived from another person. [ 100 ] New clinical trials involving iPSCs are now ongoing not only in Japan, but also in the US and Europe. [ 103 ] Research in 2021 on the trial registry Clinicaltrials.gov identified 129 trial listings mentioning iPSCs, but most were non-interventional.
[ 104 ] To make iPSC-based regenerative medicine technologies available to more patients, it is necessary to create universal iPSCs that can be transplanted independently of HLA haplotypes. The current strategy for the creation of universal iPSCs has two main goals: to remove HLA expression and to prevent the NK cell attacks that deletion of HLA would otherwise trigger. Deletion of the B2M and CIITA genes using the CRISPR/Cas9 system has been reported to suppress the expression of HLA class I and class II, respectively. To avoid NK cell attacks, transduction of ligands that inhibit NK cells, such as HLA-E and CD47 , has been used. [ 105 ] HLA-C is left unchanged, since the 12 common HLA-C alleles are enough to cover 95% of the world's population. [ 105 ] A multipotent mesenchymal stem cell, when induced into pluripotency, holds great promise to slow or reverse aging phenotypes. Such anti-aging properties were demonstrated in early clinical trials in 2017. [ 106 ] In 2020, Stanford University researchers concluded after studying elderly mice that old human cells, when subjected to the Yamanaka factors, might rejuvenate and become nearly indistinguishable from their younger counterparts. [ 107 ]
https://en.wikipedia.org/wiki/Induced_pluripotent_stem_cell
Induced radioactivity , also called artificial radioactivity or man-made radioactivity , is the process of using radiation to make a previously stable material radioactive . [ 1 ] The husband-and-wife team of Irène Joliot-Curie and Frédéric Joliot-Curie discovered induced radioactivity in 1934, and they shared the 1935 Nobel Prize in Chemistry for this discovery. [ 2 ] Irène Curie began her research with her parents, Marie Curie and Pierre Curie , studying the natural radioactivity found in radioactive isotopes . Irène branched off from the Curies to study turning stable isotopes into radioactive isotopes by bombarding the stable material with alpha particles (denoted α). The Joliot-Curies showed that when lighter elements, such as boron and aluminium , were bombarded with α-particles, the lighter elements continued to emit radiation even after the α-source was removed. They showed that this radiation consisted of particles carrying one unit positive charge with mass equal to that of an electron, now known as a positron . Neutron activation is the main form of induced radioactivity. It occurs when an atomic nucleus captures one or more free neutrons . This new, heavier isotope may be either stable or unstable (radioactive), depending on the chemical element involved. Because free neutrons decay within minutes outside an atomic nucleus, they can be obtained only from nuclear decay , nuclear reactions , and high-energy interactions such as cosmic radiation or particle accelerator emissions. Neutrons that have been slowed through a neutron moderator ( thermal neutrons ) are more likely to be captured by nuclei than fast neutrons. A less common form of induced radioactivity results from removing a neutron by photodisintegration . In this reaction, a high-energy photon (a gamma ray ) strikes a nucleus with an energy greater than the binding energy of the nucleus, which releases a neutron.
This reaction has a minimum cutoff of 2 MeV (for deuterium ) and around 10 MeV for most heavy nuclei. [ 3 ] Many radionuclides do not produce gamma rays with energy high enough to induce this reaction. The isotopes used in food irradiation ( cobalt-60 , caesium-137 ) both have energy peaks below this cutoff and thus cannot induce radioactivity in the food. [ 4 ] The conditions inside certain types of nuclear reactors with high neutron flux can induce radioactivity. The components in those reactors may become highly radioactive from the radiation to which they are exposed. Induced radioactivity increases the amount of nuclear waste that must eventually be disposed of, but it is not referred to as radioactive contamination unless it is uncontrolled. Further research originally done by Irène and Frédéric Joliot-Curie has led to modern techniques to treat various types of cancer. [ 5 ] After World War I , with support from Constantin Kirițescu , Ștefania Mărăcineanu obtained a fellowship that allowed her to travel to Paris to further her studies. In 1919 she took a course on radioactivity at the Sorbonne with Marie Curie . [ 6 ] Afterwards, she pursued research with Curie at the Radium Institute until 1926, receiving her Ph.D. there. At the institute, Mărăcineanu researched the half-life of polonium and devised methods of measuring alpha decay . This work led her to believe that radioactive isotopes could be formed from atoms as a result of exposure to polonium's alpha rays, an observation which would lead to the Joliot-Curies' 1935 Nobel Prize. [ 7 ] In 1935, Frédéric and Irène Joliot-Curie (the latter the daughter of Pierre Curie and Marie Curie) won the Nobel Prize in Chemistry for the discovery of artificial radioactivity. Ștefania Mărăcineanu expressed her dismay that Irène Joliot-Curie had used a large part of her observations regarding artificial radioactivity without mentioning it.
Mărăcineanu publicly claimed that she discovered artificial radioactivity during her years of research in Paris, as evidenced by her doctoral dissertation, presented more than 10 years earlier. According to the book A Devotion to Their Science: Pioneer Women of Radioactivity , "Mărăcineanu wrote to Lise Meitner in 1936, expressing her disappointment that Irène Joliot-Curie, without her knowledge, used much of her work, especially that related to artificial radioactivity, in her work." Historians, however, have cast doubt on Mărăcineanu's claims. [ 8 ]
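The threshold argument for food irradiation made earlier in this article can be checked numerically. The gamma-ray energies below are standard reference values (not taken from this article), and the 2.22 MeV deuterium photoneutron threshold is the commonly cited figure that the article rounds to 2 MeV; the function name is my own.

```python
# Illustrative check: gamma-ray peaks of common food-irradiation isotopes
# versus photodisintegration thresholds. Energies are standard reference
# values assumed here, not taken from the article.
DEUTERIUM_THRESHOLD_MEV = 2.22    # lowest photoneutron threshold of any nuclide
TYPICAL_HEAVY_NUCLEUS_MEV = 10.0  # rough threshold for most heavy nuclei

GAMMA_PEAKS_MEV = {
    "cobalt-60": [1.17, 1.33],
    "caesium-137": [0.662],
}

def can_induce_photoneutrons(isotope: str) -> bool:
    """True if any gamma peak exceeds even the lowest (deuterium) threshold."""
    return any(e > DEUTERIUM_THRESHOLD_MEV for e in GAMMA_PEAKS_MEV[isotope])

for iso in GAMMA_PEAKS_MEV:
    print(iso, can_induce_photoneutrons(iso))  # both print False
```

Since neither isotope's gamma rays exceed even the lowest threshold in nature, they cannot knock neutrons out of nuclei in the food, which is why irradiated food does not become radioactive.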
https://en.wikipedia.org/wiki/Induced_radioactivity
An induced thymic epithelial cell (iTEC) is a cell that has been induced to become a thymic epithelial cell . [ 1 ]
https://en.wikipedia.org/wiki/Induced_thymic_epithelial_cell
In molecular biology , an inducer is a molecule that regulates gene expression . [ 1 ] An inducer can function in either of two ways, described below. Because a small inducer molecule is required, the increased expression of the target gene is called induction . [ 2 ] The lactose operon is one example of an inducible system. [ 2 ] Repressor proteins bind to the DNA strand and prevent RNA polymerase from being able to attach to the DNA and synthesize mRNA. Inducers bind to repressors, causing them to change shape and preventing them from binding to DNA. Therefore, they allow transcription, and thus gene expression, to take place. For a gene to be expressed, its DNA sequence must be copied (in a process known as transcription ) to make a smaller, mobile molecule called messenger RNA (mRNA), which carries the instructions for making a protein to the site where the protein is manufactured (in a process known as translation ). Many different types of proteins can affect the level of gene expression by promoting or preventing transcription. In prokaryotes (such as bacteria), these proteins often act on a portion of DNA known as the operator at the beginning of the gene. Transcription begins when RNA polymerase , the enzyme that copies the genetic sequence and synthesizes the mRNA, attaches to the DNA strand, which it does at a spot called a promoter . Some genes are modulated by activators , which have the opposite effect on gene expression from repressors. Inducers can also bind to activator proteins, allowing them to bind to the operator DNA, where they promote RNA transcription. Ligands that bind to deactivate activator proteins are not, in the technical sense, classified as inducers, since they have the effect of preventing transcription. The inducer in the lac operon is allolactose . [ 2 ] If lactose is present in the medium, then a small amount of it will be converted to allolactose by a few molecules of β-galactosidase that are present in the cell.
[ 3 ] Allolactose binds to the repressor and decreases the repressor's affinity for the operator site. [ 3 ] However, when lactose and glucose are both available in the system, the lac operon is repressed. This is because glucose actively prevents the induction of lacZYA . [ 2 ] In the ara operon (also known as the araBAD operon), the regulatory protein AraC acts as both an activator and a repressor. When arabinose is present, it binds allosterically to AraC, which then helps to recruit RNA polymerase for transcription. Index inducers (or simply inducers ) predictably induce metabolism via a given pathway and are commonly used in prospective clinical drug-drug interaction studies. [ 4 ] Strong, moderate, and weak inducers are drugs that decrease the AUC of sensitive index substrates of a given metabolic pathway by ≥80%, ≥50% to <80%, and ≥20% to <50%, respectively. [ 4 ]
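The induction-strength thresholds above translate directly into a classification rule. The sketch below (the function name and example value are illustrative, not from the source) maps the ratio of a substrate's AUC with and without the inducer onto the strong/moderate/weak categories:

```python
def classify_inducer(auc_ratio):
    """Classify a drug's induction strength from the AUC ratio of a
    sensitive index substrate (AUC with inducer / AUC without).

    Thresholds follow the text: a >=80% decrease is a strong inducer,
    >=50% to <80% moderate, >=20% to <50% weak.
    """
    decrease = 1.0 - auc_ratio  # fractional decrease in substrate AUC
    if decrease >= 0.80:
        return "strong"
    if decrease >= 0.50:
        return "moderate"
    if decrease >= 0.20:
        return "weak"
    return "not an inducer"

# Hypothetical exposure drop: substrate AUC falls to 10% of baseline.
print(classify_inducer(0.10))  # strong
```

A ratio of 0.10 means a 90% AUC decrease, which clears the ≥80% threshold for a strong inducer.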
https://en.wikipedia.org/wiki/Inducer
In first-order arithmetic , the induction principles , bounding principles , and least number principles are three related families of first-order principles, which may or may not hold in nonstandard models of arithmetic . These principles are often used in reverse mathematics to calibrate the axiomatic strength of theorems. Informally, for a first-order formula of arithmetic φ ( x ) {\displaystyle \varphi (x)} with one free variable, the induction principle for φ {\displaystyle \varphi } expresses the validity of mathematical induction over φ {\displaystyle \varphi } , while the least number principle for φ {\displaystyle \varphi } asserts that if φ {\displaystyle \varphi } has a witness , it has a least one. For a formula ψ ( x , y ) {\displaystyle \psi (x,y)} in two free variables, the bounding principle for ψ {\displaystyle \psi } states that, for a fixed bound k {\displaystyle k} , if for every n < k {\displaystyle n<k} there is m n {\displaystyle m_{n}} such that ψ ( n , m n ) {\displaystyle \psi (n,m_{n})} , then we can find a bound on the m n {\displaystyle m_{n}} 's. Formally, the induction principle for φ {\displaystyle \varphi } is the sentence: [ 1 ] There is a similar strong induction principle for φ {\displaystyle \varphi } : [ 1 ] The least number principle for φ {\displaystyle \varphi } is the sentence: [ 1 ] Finally, the bounding principle for ψ {\displaystyle \psi } is the sentence: [ 1 ] More commonly, we consider these principles not just for a single formula, but for a class of formulae in the arithmetical hierarchy . For example, I Σ 2 {\displaystyle {\mathsf {I}}\Sigma _{2}} is the axiom schema consisting of I φ {\displaystyle {\mathsf {I}}\varphi } for every Σ 2 {\displaystyle \Sigma _{2}} formula φ ( x ) {\displaystyle \varphi (x)} in one free variable. 
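The sentences referred to above can be written out in the usual notation (a standard formulation, with parameters suppressed for readability; not reproduced from the source's own display):

```latex
\begin{align*}
\mathsf{I}\varphi:\quad &\bigl[\varphi(0) \land \forall x\,(\varphi(x) \to \varphi(x+1))\bigr] \to \forall x\,\varphi(x)\\
\mathsf{I}'\varphi:\quad &\forall x\,\bigl[(\forall y<x)\,\varphi(y) \to \varphi(x)\bigr] \to \forall x\,\varphi(x)\\
\mathsf{L}\varphi:\quad &\exists x\,\varphi(x) \to \exists x\,\bigl(\varphi(x) \land (\forall y<x)\,\neg\varphi(y)\bigr)\\
\mathsf{B}\psi:\quad &\forall u\,\bigl[(\forall x<u)(\exists y)\,\psi(x,y) \to (\exists v)(\forall x<u)(\exists y<v)\,\psi(x,y)\bigr]
\end{align*}
```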
It may seem that the principles I φ {\displaystyle {\mathsf {I}}\varphi } , I ′ φ {\displaystyle {\mathsf {I}}'\varphi } , L φ {\displaystyle {\mathsf {L}}\varphi } , B ψ {\displaystyle {\mathsf {B}}\psi } are trivial, and indeed, they hold for all formulae φ {\displaystyle \varphi } , ψ {\displaystyle \psi } in the standard model of arithmetic N {\displaystyle \mathbb {N} } . However, they become more relevant in nonstandard models. Recall that a nonstandard model of arithmetic has the form N + Z ⋅ K {\displaystyle \mathbb {N} +\mathbb {Z} \cdot K} for some linear order K {\displaystyle K} . In other words, it consists of an initial copy of N {\displaystyle \mathbb {N} } , whose elements are called finite or standard , followed by many copies of Z {\displaystyle \mathbb {Z} } arranged in the shape of K {\displaystyle K} , whose elements are called infinite or nonstandard . Now, considering the principles I φ {\displaystyle {\mathsf {I}}\varphi } , I ′ φ {\displaystyle {\mathsf {I}}'\varphi } , L φ {\displaystyle {\mathsf {L}}\varphi } , B ψ {\displaystyle {\mathsf {B}}\psi } in a nonstandard model M {\displaystyle {\mathcal {M}}} , we can see how they might fail. For example, the hypothesis of the induction principle I φ {\displaystyle {\mathsf {I}}\varphi } only ensures that φ ( x ) {\displaystyle \varphi (x)} holds for all elements in the standard part of M {\displaystyle {\mathcal {M}}} ; it may not hold for the nonstandard elements, which cannot be reached by iterating the successor operation from zero. Similarly, the bounding principle B ψ {\displaystyle {\mathsf {B}}\psi } might fail if the bound u {\displaystyle u} is nonstandard, as then the (infinite) collection of y {\displaystyle y} could be cofinal in M {\displaystyle {\mathcal {M}}} . 
The following relations hold between the principles (over the weak base theory P A − + I Σ 0 {\displaystyle {\mathsf {PA}}^{-}+{\mathsf {I}}\Sigma _{0}} ): [ 1 ] [ 2 ] Over P A − + I Σ 0 + e x p {\displaystyle {\mathsf {PA}}^{-}+{\mathsf {I}}\Sigma _{0}+{\mathsf {exp}}} , Slaman proved that B Σ n ≡ L Δ n ≡ I Δ n {\displaystyle {\mathsf {B}}\Sigma _{n}\equiv {\mathsf {L}}\Delta _{n}\equiv {\mathsf {I}}\Delta _{n}} . [ 3 ] The induction, bounding and least number principles are commonly used in reverse mathematics and second-order arithmetic . For example, I Σ 1 {\displaystyle {\mathsf {I}}\Sigma _{1}} is part of the definition of the subsystem R C A 0 {\displaystyle {\mathsf {RCA}}_{0}} of second-order arithmetic. Hence, I ′ Σ 1 {\displaystyle {\mathsf {I}}'\Sigma _{1}} , L Σ 1 {\displaystyle {\mathsf {L}}\Sigma _{1}} and B Σ 1 {\displaystyle {\mathsf {B}}\Sigma _{1}} are all theorems of R C A 0 {\displaystyle {\mathsf {RCA}}_{0}} . The subsystem A C A 0 {\displaystyle {\mathsf {ACA}}_{0}} proves all the principles I φ {\displaystyle {\mathsf {I}}\varphi } , I ′ φ {\displaystyle {\mathsf {I}}'\varphi } , L φ {\displaystyle {\mathsf {L}}\varphi } , B ψ {\displaystyle {\mathsf {B}}\psi } for arithmetical φ {\displaystyle \varphi } , ψ {\displaystyle \psi } . The infinite pigeonhole principle is known to be equivalent to B Π 1 {\displaystyle {\mathsf {B}}\Pi _{1}} and B Σ 2 {\displaystyle {\mathsf {B}}\Sigma _{2}} over R C A 0 {\displaystyle {\mathsf {RCA}}_{0}} . [ 4 ]
https://en.wikipedia.org/wiki/Induction,_bounding_and_least_number_principles
An induction furnace is an electrical furnace in which the heat is applied by induction heating of metal . [ 1 ] [ 2 ] [ 3 ] Induction furnace capacities range from less than one kilogram to one hundred tons, and are used to melt iron and steel , copper , aluminum , and precious metals . The advantage of the induction furnace is a clean, energy-efficient and well-controlled melting process, compared to most other means of metal melting. Most modern foundries use this type of furnace, and many iron foundries are replacing cupola furnaces with induction furnaces to melt cast iron , as the former emit much dust and other pollutants . [ 4 ] Induction furnaces do not require an arc, as in an electric arc furnace , or combustion, as in a blast furnace . As a result, the temperature of the charge (the material entered into the furnace for heating, not to be confused with electric charge ) is no higher than required to melt it; this can prevent the loss of valuable alloying elements. [ 5 ] The one major drawback to induction furnace usage in a foundry is the lack of refining capacity: charge materials must be free of oxides and be of a known composition, and some alloying elements may be lost due to oxidation, so they must be re-added to the melt. In the coreless type, [ 6 ] metal is placed in a crucible surrounded by a water-cooled alternating current solenoid coil. A channel-type induction furnace has a loop of molten metal, which forms a single-turn secondary winding through an iron core. [ 7 ] [ 8 ] An induction furnace consists of a nonconductive crucible holding the charge of metal to be melted, surrounded by a coil of copper wire. A powerful alternating current flows through the wire. The coil creates a rapidly reversing magnetic field that penetrates the metal. The magnetic field induces eddy currents , circular electric currents, inside the metal, by electromagnetic induction . 
[ 9 ] The eddy currents, flowing through the electrical resistance of the bulk metal, heat it by Joule heating . In ferromagnetic materials like iron , the material may also be heated by magnetic hysteresis , the reversal of the molecular magnetic dipoles in the metal. Once the metal has melted, the eddy currents cause vigorous stirring of the melt, assuring good mixing. An advantage of induction heating is that the heat is generated within the furnace's charge itself rather than applied by a burning fuel or other external heat source, which can be important in applications where contamination is an issue. Operating frequencies range from utility frequency (50 or 60 Hz ) to 400 kHz or higher, usually depending on the material being melted, the capacity (volume) of the furnace and the melting speed required. Generally, the smaller the volume of the melt, the higher the frequency of the furnace used; this is due to the skin depth , which is a measure of the distance an alternating current can penetrate beneath the surface of a conductor . For the same conductivity, higher frequencies have a shallower skin depth, that is, less penetration into the melt. Lower frequencies can generate stirring or turbulence in the metal. A preheated, one-ton furnace melting iron can melt cold charge to tapping readiness within an hour. Power supplies range from 10 kW to 42 MW, with melt sizes of 20 kg to 65 tons of metal respectively. [ citation needed ] An operating induction furnace usually emits a hum or whine (due to fluctuating magnetic forces and magnetostriction ), the pitch of which can be used by operators to identify whether the furnace is operating correctly or at what power level. [ citation needed ] A disposable refractory lining is used during casting.
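The frequency dependence described above can be made concrete with the standard skin-depth formula δ = √(ρ / (π f μ)). The material values below are illustrative assumptions (a round resistivity for molten iron, relative permeability of 1 above the Curie point), not figures from the source:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Skin depth (m): the depth at which the induced eddy-current
    density falls to 1/e of its surface value."""
    return math.sqrt(resistivity / (math.pi * frequency * MU0 * mu_r))

# Molten iron above the Curie point (mu_r ~ 1); resistivity is an
# assumed round value of ~1.4e-6 ohm-m.
for f in (60, 1_000, 10_000):
    print(f"{f:>6} Hz: {skin_depth(1.4e-6, f) * 1000:.1f} mm")
```

The output shows the depth shrinking roughly as 1/√f, which is why small melts call for higher frequencies while low frequencies penetrate deeply enough to stir the bath.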
https://en.wikipedia.org/wiki/Induction_furnace
An induction heater is a key piece of equipment used in all forms of induction heating . Typically an induction heater operates at either medium frequency (MF) or radio frequency (RF) ranges. [ 1 ] Four main component systems form the basis of a modern induction heater. Induction heating is a non-contact method of heating a conductive body by utilising a strong magnetic field . Supply (mains) frequency 50 Hz or 60 Hz induction heaters incorporate a coil directly fed from the electricity supply, typically for lower power industrial applications where lower surface temperatures are required. Some specialist induction heaters operate at 400 Hz , the aerospace power frequency. Induction heating should not be confused with induction cooking, as the two heating systems are physically very different from each other. Notably, induction heating systems work by applying an alternating magnetic field to a ferrous material to induce an alternating current in the material, thereby exciting the atoms in the material and heating it up. [ citation needed ] An induction heater typically consists of three elements. The power unit, often referred to as the inverter or generator, is used to take the mains frequency and increase it to anywhere between 10 Hz and 400 kHz . Typical output power of a unit system is from 2 kW to 500 kW . [ 2 ] The work head contains a combination of capacitors and transformers and is used to mate the power unit to the work coil. [ 3 ] Also known as the inductor, the coil is used to transfer the energy from the power unit and work head to the work piece. Inductors range in complexity from a simple wound solenoid consisting of a number of turns of copper tube wound around a mandrel, to a precision item machined from solid copper, brazed and soldered together. As the inductor is the area where the heating takes place, coil design is one of the most important elements of the system and is a science in itself. 
[ 4 ] Radio frequency ( RF ) induction generators work in the frequency range from 100 kHz up to 10 MHz . Most induction heating devices (with induction frequency control) have a frequency range of 100 kHz to 200 kHz. The output range typically incorporates 2.5 kW to 40 kW. Induction heaters in this range are used for smaller components and applications such as induction hardening an engine valve. [ 5 ] MF induction generators work from 1 kHz to 10 kHz. The output range typically incorporates 50 kW to 500 kW. Induction heaters within these ranges are used on medium to larger components and applications such as the induction forging of a shaft. [ 1 ] Mains (or supply ) frequency induction coils are driven directly from the standard AC supply. Most mains-frequency induction coils are designed for single phase operation, and are low-current devices intended for localised heating, or low-temperature surface area heating, such as in a drum heater . The basic principle involved in induction heating was discovered by Michael Faraday as early as 1831. Faraday's work involved the use of a switched DC supply provided by a battery and two windings of copper wire wrapped around an iron core. It was noted that when the switch was closed a momentary current flowed in the secondary winding, which could be measured by means of a galvanometer . If the circuit remained energized then the current ceased to flow. On opening the switch a current again flowed in the secondary winding, but in the opposite direction. Faraday concluded that since no physical link existed between the two windings, the current in the secondary coil must be caused by a voltage that was induced from the first coil, and that the current produced was directly proportional to the rate of change of the magnetic flux . [ 6 ] Initially the principles were put to use in the design of transformers , motors and generators where undesirable heating effects were controlled by the use of a laminated core . 
Early in the 20th century engineers started to look for ways to harness the heat-generating properties of induction for the purpose of melting steel. This early work used motor generators to create the medium frequency (MF) current, but the lack of suitable alternators and capacitors of the correct size held back early attempts. However, by 1927 the first MF induction melting system had been installed by EFCO in Sheffield, England. At around the same time engineers at Midvale Steel and The Ohio Crankshaft Company in America were attempting to use the surface-heating effect of the MF current to produce localized surface case hardening in crankshafts . Much of this work took place at the frequencies of 1920 and 3000 Hz as these were the easiest frequencies to produce with the equipment available. As with many technology-based fields, it was the advent of World War II which led to huge developments in the utilization of induction heating in the production of vehicle parts and munitions. [ 7 ] Over time, the technology advanced, and units in the 3 to 10 kHz frequency range with power outputs of up to 600 kW became commonplace in induction forging and large induction hardening applications. The motor generator would remain the mainstay of MF power generation until the advent of high voltage semiconductors in the late 1960s and early 1970s. Early in the evolutionary process it became obvious to engineers that the ability to produce a higher radio frequency range of equipment would result in greater flexibility and open up a whole range of alternative applications. Methods were sought to produce these higher RF power supplies to operate in the 200 to 400 kHz range. Development in this particular frequency range has always mirrored that of the radio transmitter and television broadcasting industry and indeed has often used component parts developed for this purpose. 
Early units utilised spark gap technology, but due to limitations the approach was rapidly superseded by the use of multi-electrode thermionic triode (valve) based oscillators. Indeed, many of the pioneers in the industry were also very involved in the radio and telecommunications industry, and companies such as Phillips , English Electric and Redifon were all involved in manufacturing induction heating equipment in the 1950s and 1960s. The use of this technology survived until the early 1990s, at which point it was all but replaced by power MOSFET and IGBT solid state equipment. However, many valve oscillators are still in existence, and at extreme frequencies of 5 MHz and above they are often the only viable approach and are still produced. [ 8 ] Mains frequency induction heaters are still widely used throughout manufacturing industry due to their relatively low cost and thermal efficiency compared to radiant heating, where piece parts or steel containers need to be heated as part of a batch process line. Due to its flexibility and potential frequency range, the valve oscillator based induction heater was until recent years widely used throughout industry. [ 9 ] Readily available in powers from 1 kW to 1 MW and in a frequency range from 100 kHz to many MHz, this type of unit found widespread use in thousands of applications including soldering and brazing, induction hardening, tube welding and induction shrink fitting . The unit consists of three basic elements: The DC ( direct current ) power supply consists of a standard air or water cooled step-up transformer and a high voltage rectifier unit capable of generating voltages typically between 5 and 10 kV to power the oscillator. The unit needs to be rated at the correct kilovolt-ampere (kVA) to supply the necessary current to the oscillator. 
Early rectifier systems featured valve rectifiers such as the GXU4 (a high power, high voltage, half wave rectifier), but these were ultimately superseded by high voltage solid state rectifiers. [ 10 ] The oscillator circuit is responsible for creating the elevated frequency electric current, which, when applied to the work coil, creates the magnetic field which heats the part. The basic elements of the circuit are an inductance (tank coil), a capacitance (tank capacitor) and an oscillator valve. Basic electrical principles dictate that if a voltage is applied to a circuit containing a capacitor and inductor, the circuit will oscillate in much the same way as a swing which has been pushed. Continuing the swing analogy, if we do not push again at the right time, the swing will gradually stop; the same is true of the oscillator. The purpose of the valve is to act as a switch which allows energy to pass into the oscillator at the correct time to maintain the oscillations. In order to time the switching, a small amount of energy is fed back to the grid of the triode, effectively blocking or firing the device, allowing it to conduct at the correct time. This so-called grid bias can be derived capacitively, conductively or inductively, depending on whether the oscillator is a Colpitts, Hartley oscillator , Armstrong tickler or a Meissner. [ 11 ] Power control for the system can be achieved by a variety of methods. Many latter day units feature thyristor power control, which works by means of a full wave AC ( alternating current ) drive varying the primary voltage to the input transformer. More traditional methods include three phase variacs ( autotransformer ) or motorised Brentford type voltage regulators to control the input voltage. Another very popular method was to use a two part tank coil with a primary and secondary winding separated by an air gap. 
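The swing analogy corresponds to the natural frequency of the tank circuit, f = 1/(2π√(LC)); the valve "pushes" once per cycle at this frequency to sustain the oscillation. A minimal sketch (the component values are illustrative assumptions, not from the source):

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Natural frequency (Hz) of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 10 uH tank coil with a 0.25 uF tank
# capacitor rings at roughly 100 kHz, in the RF induction range.
print(f"{resonant_frequency(10e-6, 0.25e-6):,.0f} Hz")
```

Halving either L or C raises the frequency by √2, which is how a tank circuit is retuned for a different work coil.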
Power control was effected by varying the magnetic coupling of the two coils by physically moving them relative to each other. [ 12 ] In the early days of induction heating, the motor-generator was used extensively for the production of MF power up to 10 kHz. While it is possible to generate multiples of the supply frequency, such as 150 Hz, using a standard induction motor driving an AC generator, there are limitations. This type of generator featured rotor mounted windings, which limited the peripheral speed of the rotor due to the centrifugal forces on these windings. This had the effect of limiting the diameter of the machine, and therefore its power and the number of poles which could be physically accommodated, which in turn limited the maximum operating frequency. [ 13 ] To overcome these limitations the induction heating industry turned to the inductor-generator. This type of machine features a toothed rotor constructed from a stack of punched iron laminations. The excitation and AC windings are both mounted on the stator; the rotor is therefore a compact solid construction which can be rotated at higher peripheral speeds than the standard AC generator above, thus allowing it to be greater in diameter for a given RPM . This larger diameter allows a greater number of poles to be accommodated and, when combined with complex slotting arrangements such as Lorenz or Guy slotting, allows the generation of frequencies from 1 to 10 kHz. As with all rotating electrical machines, high rotation speeds and small clearances are utilised to maximise flux variations. This necessitates that close attention is paid to the quality of the bearings utilised and the stiffness and accuracy of the rotor. Drive for the alternator is normally provided by a standard induction motor for convenience and simplicity. Both vertical and horizontal configurations are utilised, and in most cases the motor rotor and generator rotor are mounted on a common shaft with no coupling. 
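The frequency of an inductor-generator follows from its geometry: each rotor tooth passing a stator pole produces one flux pulsation per revolution, so the output frequency is the tooth count times the shaft speed. A small sketch (the tooth count and speed are illustrative assumptions):

```python
def inductor_alternator_frequency(rotor_teeth, rpm):
    """Output frequency (Hz) of an inductor alternator: each rotor
    tooth passing a stator pole gives one flux pulsation, so
    f = teeth * revolutions per second."""
    return rotor_teeth * rpm / 60.0

# Illustrative: 100 rotor teeth spun at 3,000 rpm gives 5 kHz,
# squarely in the MF induction range.
print(inductor_alternator_frequency(100, 3000))  # 5000.0
```

This is why the larger rotor diameter matters: more circumference means room for more teeth, hence higher frequency at the same shaft speed.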
The whole assembly is then mounted in a frame containing the motor stator and generator stator. The whole construction is mounted in a cubicle which features a heat exchanger and water cooling systems as required. The motor-generator became the mainstay of medium frequency power generation until the advent of solid state technology in the early 1970s. In the early 1970s the advent of solid state switching technology saw a shift from the traditional methods of induction heating power generation. Initially this was limited to the use of thyristors for generating the 'MF' range of frequencies using discrete electronic control systems. State of the art units now employ SCR ( silicon-controlled rectifier ), [ 14 ] IGBT or MOSFET technologies for generating the 'MF' and 'RF' current. The modern control system is typically a digital microprocessor based system utilising PIC or PLC ( programmable logic controller ) technology and surface mount manufacturing techniques for production of the printed circuit boards. Solid state now dominates the market, and units from 1 kW to many megawatts, in frequencies from 1 kHz to 3 MHz, including dual frequency units, are now available. [ 8 ] A whole range of techniques are employed in the generation of MF and RF power using semiconductors; the actual technique employed often depends on a complex range of factors. The typical generator will employ either a current-fed or a voltage-fed topology. The actual approach employed will be a function of the required power, frequency, individual application, the initial cost and subsequent running costs. Irrespective of the approach employed, however, all units tend to feature four distinct elements: [ 15 ] The rectifier takes the mains supply voltage at the supply frequency of 50 or 60 Hz and converts it to DC. It can supply a variable DC voltage, a fixed DC voltage or a variable DC current. In the case of variable systems, these are used to provide overall power control for the system. 
Fixed voltage rectifiers need to be used in conjunction with an alternative means of power control. This can be done by utilising a switch mode regulator or by using a variety of control methods within the inverter section. The inverter converts the DC supply to a single phase AC output at the relevant frequency. It features the SCRs, IGBTs or MOSFETs and in most cases is configured as an H-bridge . The H-bridge has four legs, each with a switch, and the output circuit is connected across the centre of the devices. When the relevant two switches are closed, current flows through the load in one direction; these switches then open and the opposing two switches close, allowing current to flow in the opposite direction. By precisely timing the opening and closing of the switches, it is possible to sustain oscillations in the load circuit. The output circuit has the job of matching the output of the inverter to that required by the coil. In its simplest form this can be a capacitor, or in some cases it will feature a combination of capacitors and transformers. The control section monitors all the parameters in the load circuit and the inverter, and supplies switching pulses at the appropriate time to supply energy to the output circuit. Early systems featured discrete electronics with variable potentiometers to adjust switching times, current limits, voltage limits and frequency trips. However, with the advent of microcontroller technology, the majority of advanced systems now feature digital control. The voltage-fed inverter features a filter capacitor on the input to the inverter and a series resonant output circuit. The voltage-fed system is extremely popular and can be used with SCRs up to frequencies of 10 kHz, IGBTs up to 100 kHz, and MOSFETs up to 3 MHz. A voltage-fed inverter with a series connection to a parallel load is also known as a third order system. 
Basically this is similar to solid state, but in this system the series connected internal capacitor and inductor are connected to a parallel output tank circuit. The principal advantage of this type of system is the robustness of the inverter due to the internal circuit effectively isolating the output circuit making the switching components less susceptible to damage due to coil flash-overs or mismatching. [ 16 ] The current-fed inverter is different from the voltage-fed system in that it utilizes a variable DC input followed by a large inductor at the input to the inverter bridge. The power circuit features a parallel resonant circuit and can have operating frequencies typically from 1 kHz to 1 MHz. As with the voltage-fed system, SCRs are typically used up to 10 kHz with IGBTs and MOSFETs being used at the higher frequencies. [ 17 ] Suitable materials are those with high permeability (100-500) which are heated below the Curie temperature of that material.
https://en.wikipedia.org/wiki/Induction_heater
An induction period in chemical kinetics is an initial slow stage of a chemical reaction ; after the induction period, the reaction accelerates. [ 1 ] Ignoring induction periods can lead to runaway reactions . In some catalytic reactions, a pre- catalyst needs to undergo a transformation to form the active catalyst before the catalyst can take effect. Time is required for this transformation, hence the induction period. For example, with Wilkinson's catalyst , one triphenylphosphine ligand must dissociate to give the coordinatively unsaturated 14-electron species which can participate in the catalytic cycle. Similarly, for an autocatalytic reaction , where one of the reaction products catalyzes the reaction itself, the rate of reaction is low initially until sufficient products have formed to catalyze the reaction. Reactions generally accelerate when heat is applied. Where a reaction is exothermic , the rate of the reaction may initially be low. As the reaction proceeds, heat is generated, and the rate of reaction increases. This type of reaction often exhibits an induction period as well. The reactions to form Grignard reagents are notorious for having induction periods, usually for two reasons. Firstly, the thin film of oxide on the magnesium reagent must be removed before the bulk magnesium can react. Secondly, Grignard reactions, while exothermic, are typically conducted at low temperature for better selectivity. For these two reasons, Grignard reactions can often have a long induction period followed by a thermal runaway , even causing the reaction solvent to boil off.
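The autocatalytic case can be illustrated numerically: with a rate law dP/dt = k[A][P], the rate stays near zero while the catalysing product P is scarce (the induction period), then accelerates sharply once enough P has accumulated. A minimal Euler-integration sketch (the rate constant and concentrations are arbitrary illustrative values):

```python
def autocatalytic_progress(k=1.0, a0=1.0, p0=1e-3, dt=0.01, steps=2000):
    """Euler integration of the autocatalytic rate law dP/dt = k*A*P,
    with A consumed as P is formed (A + P conserved). Returns the
    product concentration at each step."""
    a, p = a0, p0
    history = []
    for _ in range(steps):
        rate = k * a * p
        a -= rate * dt
        p += rate * dt
        history.append(p)
    return history

p = autocatalytic_progress()
# The reaction is barely underway early on, then takes off and
# approaches complete conversion:
print(round(p[199], 4), round(p[1999], 4))
```

Plotting `history` against time gives the characteristic sigmoid curve: a long flat induction period followed by rapid acceleration and a plateau.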
https://en.wikipedia.org/wiki/Induction_period
Induction sealing is the process of bonding thermoplastic materials by induction heating . This involves the controlled heating of an electrically conducting object (usually aluminum foil) by electromagnetic induction , through heat generated in the object by eddy currents . Induction sealing is used in many types of manufacturing. In packaging , it is used for package fabrication such as forming tubes from flexible materials, attaching plastic closures to package forms, etc. Perhaps the most common use of induction sealing is cap sealing , a non-contact method of heating an inner seal [ 1 ] [ 2 ] to hermetically seal the top of plastic and glass containers. This sealing process takes place after the container has been filled and capped. [ 3 ] The closure is supplied to the bottler with an aluminum foil layer liner already inserted. Although there are various liners to choose from, a typical induction liner is multi-layered. The top layer is a paper pulp that is generally spot-glued to the cap. The next layer is wax that is used to bond a layer of aluminum foil to the pulp. The bottom layer is a polymer film laminated to the foil. After the cap or closure is applied, the container passes under an induction coil , which emits an oscillating electromagnetic field . As the container passes under the induction coil (sealing head), the conductive aluminum foil liner begins to heat as a result of the eddy currents being induced. The heat melts the wax, which is absorbed into the pulp backing and releases the foil from the cap. The polymer film also heats and flows onto the lip of the container. When cooled, the polymer creates a bond with the container, resulting in a hermetically sealed product. Neither the container nor its contents are negatively affected, and the heat generated does not harm the contents. [ 4 ] It is possible to overheat the foil and thereby cause damage to the seal layer and to any protective barriers. 
This could result in faulty seals, even weeks after the initial sealing process, so proper sizing of the induction sealing is vital to determine the exact system necessary to run a particular product. Sealing can be done with either a handheld unit or on a conveyor system. A more recent development (which better suits a small number of applications) allows for induction sealing to be used to apply a foil seal to a container without the need for a closure. In this case, foil is supplied pre-cut or in a reel. Where supplied in a reel, it is die cut and transferred onto the container neck. When the foil is in place, it is pressed down by the seal head, the induction cycle is activated, and the seal is bonded to the container. This process is known as direct application or sometimes "capless" induction sealing. There are a variety of reasons companies choose to use induction sealing: With the U.S. Food and Drug Administration (FDA) regulations concerning tamper-resistant packaging , pharmaceutical packagers must find ways to comply as outlined in Sec. 450.500 Tamper-Resistant Packaging Requirements for Certain over-the-counter (OTC) Human Drug Products (CPG 7132a.17). Induction sealing systems meet or exceed these government regulations. As stated in section 6 of Packaging Systems: "...6. CONTAINER MOUTH INNER SEALS. Paper, thermal plastic, plastic film, foil, or a combination thereof, is sealed to the mouth of a container (e.g., bottle) under the cap. The seal must be torn or broken to open the container and remove the product. The seal cannot be removed and reapplied without leaving visible evidence of entry. Seals applied by heat induction to plastic containers appear to offer a higher degree of tamper-resistance than those that depend on an adhesive to create the bond..." Some shipping companies require liquid chemical products to be sealed prior to shipping to prevent hazardous chemicals from spilling on other shipments. 
Induction sealing keeps unwanted pollutants from seeping into food products and may assist in extending shelf life of certain products. Induction-sealed containers help prevent the product from being broken into by leaving a noticeable residue on plastic containers from the liner itself. Pharmaceutical companies purchase liners that will purposely leave liner film/foil residue on bottles. Food companies that use induction seals do not want the liner residue as it could potentially interfere with the product itself upon dispensing. They, in turn, put a notice on the product that it has been induction-sealed for their protection; letting the consumer know it was sealed upon leaving the factory and they should check for an intact seal before using. In some applications, induction sealing can be considered to contribute towards sustainability goals by allowing lower bottle weights as the pack relies on the presence of an induction foil seal for its security, rather than a mechanically strong bottle neck and closure. [ 5 ] Some manufacturers have produced devices which can monitor the magnetic field strength present at the induction head (either directly or indirectly via such mechanisms as pick up coils), dynamically predicting the heating effect in the foil. Such devices provide quantifiable data post-weld in a production environment where uniformity – particularly in parameters such as foil peel-off strength, is important. Analysers may be portable or designed to work in conjunction with conveyor belt systems. High speed power analysis techniques (voltage and current measurement in near real time) can be used to intercept power delivery from mains to generator or generator to head in order to calculate energy delivered to the foil and the statistical profile of that process. 
As the thermal capacity of the foil is typically static, such information as true power, apparent power and power factor may be used to predict foil heating with good relevance to final weld parameters and in a dynamic manner. Many other derivative parameters may be calculated for each weld, yielding a confidence in a production environment that is notably more difficult to achieve in conduction transfer systems, where analysis, if present, is generally post-weld, as the relatively large combined thermal mass of the heating and conduction elements impairs rapid temperature change. Inductive heating with quantitative feedback, such as that provided by power analysis techniques, further allows for the possibility of dynamic adjustments in the energy delivery profile to the target. This opens the possibility of feed-forward systems where the induction generator properties are adjusted in near real time as the heating process proceeds, allowing a specific heating profile to be tracked with subsequent compliance feedback – something that is not generally practical for conduction heating processes. Conduction sealing requires a hard metal plate to make perfect contact with the container being sealed. Conduction sealing systems delay production time because of required system warm-up time. [ citation needed ] They also have complex temperature sensors and heaters. Unlike conduction sealing systems, induction sealing systems require very little power, deliver instant startup, and have a sealing head which can conform to "out of specification" containers when sealing. [ citation needed ] Induction sealing also offers advantages when sealing to glass: using a conduction sealer to seal a simple foil structure to glass gives no tolerance or compressibility to allow for any irregularity in the glass surface finish, whereas with an induction sealer the contact face can be of a compressible material, ensuring a perfect bond each time.
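The true power / apparent power / power factor calculation described above can be sketched in a few lines; the waveform amplitudes, sampling rate and phase angle below are illustrative assumptions, not figures from any particular analyser:

```python
# Sketch of high-speed power analysis for an induction weld, assuming
# simultaneously sampled voltage (V) and current (A) waveforms.
import math

def power_metrics(volts, amps, dt):
    """True power, apparent power, power factor and delivered energy
    from sampled waveforms captured at interval dt (seconds)."""
    n = len(volts)
    true_power = sum(v * i for v, i in zip(volts, amps)) / n  # mean of v*i
    v_rms = math.sqrt(sum(v * v for v in volts) / n)
    i_rms = math.sqrt(sum(i * i for i in amps) / n)
    apparent_power = v_rms * i_rms
    power_factor = true_power / apparent_power
    energy = true_power * n * dt  # joules over the capture window
    return true_power, apparent_power, power_factor, energy

# Example: a 50 kHz sinusoid with current lagging voltage by 30 degrees,
# captured for exactly ten full cycles.
f, dt = 50e3, 1e-7
ts = [k * dt for k in range(2000)]
volts = [100 * math.sin(2 * math.pi * f * t) for t in ts]
amps = [5 * math.sin(2 * math.pi * f * t - math.pi / 6) for t in ts]
p, s, pf, e = power_metrics(volts, amps, dt)
```

For this waveform pair the power factor comes out at cos(30°) ≈ 0.866, with apparent power of 250 VA, illustrating how the foil heating (driven by true power) can be predicted dynamically from the sampled electrical quantities.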
[ citation needed ] Induction sealing is broadly used to preserve the freshness and integrity of various products, such as:
https://en.wikipedia.org/wiki/Induction_sealing
Induction shrink fitting refers to the use of induction heating technology to pre-heat metal components to between 150 °C (302 °F) and 300 °C (572 °F), thereby causing them to expand and allowing for the insertion or removal of another component. [ 1 ] Typically the lower end of the temperature range is used for metals such as aluminium, and the higher end for metals such as low/medium carbon steels. The process avoids changing the mechanical properties of the components while allowing them to be worked. Metals typically expand in response to heating and contract on cooling; this dimensional response to temperature change is expressed as a coefficient of thermal expansion. [ 2 ] Induction heating is a non-contact heating process which uses the principle of electromagnetic induction to produce heat in a work-piece. In this case, thermal expansion is used in a mechanical application to fit parts over one another; e.g., a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. By placing a conductive material into a strong alternating magnetic field, electric current can be made to flow in the metal, thereby creating heat due to the I²R losses in the material. The current generated flows predominantly in the surface layer; the depth of this layer is dictated by the frequency of the alternating field and the permeability of the material. [ 3 ] Induction heaters for shrink fitting fall into two broad categories. Often referred to as a bearing heater, the mains frequency unit employs standard transformer principles for its operation. An internal winding is wound around a laminated core, similar to a standard mains transformer. The core is then passed through the work-piece, and when the primary coil is energised, a magnetic flux is created around the core.
The work-piece acts as a short-circuited secondary of the transformer thus created, and, due to the laws of induction, a current flows in the work-piece and heat is generated. The core is normally hinged or clamped in some way to allow loading or unloading, which is usually a manual operation. To cover variations in part diameter, the majority of units will have spare cores available which help to optimise performance. Once the part is heated to the correct temperature, assembly can take place either by hand or in the relevant jig or machine press. [ 4 ] Bearing heaters typically range from 1 kVA to 25 kVA and are used to heat parts from 1 to 650 kg (2.2 to 1,433.0 lb), dependent upon the application. The power required is a function of the weight, target temperature and cycle time; to aid selection, many manufacturers publish graphs and charts. Because a core must be inserted and, to be effective, must be in relatively close proximity to the bore of the part to be heated, there are many applications in which the bearing heater approach is not feasible. In those cases where operational complexities negate the use of a cored mains frequency approach, a standard RF or MF induction heater can be used. This type of unit uses turns of copper tube wound into an electromagnetic coil. [ 5 ] No cores are required; the coil simply needs to surround, or be inserted into, the part to be heated, which makes automating the process straightforward. A further advantage is the ability not only to shrink fit parts but also to remove them. The RF and MF heaters used for induction shrink fitting vary in power from a few kilowatts to many megawatts and, depending on the component geometry/diameter/cross section, can vary in frequency from 1 kHz to 200 kHz, although the majority of applications use the range between 1 kHz and 100 kHz.
[ 5 ] In general terms, it is best to use the lowest practical frequency and a low power density when undertaking shrink fitting, as this will generally provide more evenly distributed heat. The exception to this rule is when using heat to remove parts from shafts. In these cases it is often best to shock the component with rapid heat; this also has the advantage of shortening the time cycle and preventing heat build-up in the shaft, which can lead to problems with both parts expanding. In order to select the correct power, it is necessary to first calculate the thermal energy required to raise the material to the required temperature in the time allotted. This can be done using the heat content of the material, which is normally expressed in kWh per tonne, the weight of metal to be processed, and the time cycle. [ 6 ] Once this has been established, other factors such as radiated losses from the component, coil losses and other system losses need to be factored in. Traditionally this process involved lengthy and complex calculations combined with a mixture of practical experience and empirical formulae. Modern techniques use finite element analysis and other computer-aided manufacturing techniques; however, as with all such methods, a thorough working knowledge of the induction heating process is still required. When deciding on the correct approach, it is often necessary to consider the overall size and thermal conductivity of the work-piece and its expansion characteristics in order to ensure that enough soak time is allowed to create an even heat throughout the component. As shrink fitting requires uniform heating of the component to be expanded, it is best to use the lowest practical frequency. Again, the exception to this rule can be when removing parts from shafts.
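The first-pass power selection described above (thermal energy from heat content, weight and cycle time, plus an allowance for losses) might be sketched as follows; the heat-content figure and the flat 25% loss allowance are illustrative assumptions, not manufacturer data:

```python
# Rough induction heater power sizing from heat content, mass and cycle time.
def heater_power_kw(mass_kg, heat_content_kwh_per_tonne, cycle_time_s,
                    loss_allowance=0.25):
    """Thermal energy to reach the target temperature, divided by the
    cycle time, with a flat allowance for radiated, coil and system losses."""
    energy_kwh = (mass_kg / 1000.0) * heat_content_kwh_per_tonne
    power_kw = energy_kwh * 3600.0 / cycle_time_s  # kWh over seconds -> kW
    return power_kw * (1.0 + loss_allowance)

# e.g. a 50 kg steel part, ~40 kWh/tonne assumed to reach temperature,
# heated in a 120 s cycle:
p = heater_power_kw(50.0, 40.0, 120.0)  # 60 kW thermal + 25% losses = 75 kW
```

In practice, as the text notes, this first estimate would be refined with component-specific loss terms or finite element analysis.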
There are a huge number of industries and applications which benefit from induction shrink fitting or removal using solid state RF and MF heaters. In practice, the methodology employed can vary from a simple manual approach where an operator assembles or disassembles the parts to fully automatic pneumatic and hydraulic press arrangements. [ 7 ] Advantages: The main disadvantage of this process is that, in general, it is limited to components which have a cylindrical shape. [ 4 ]
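The shrink-fit arithmetic that follows from the coefficient of thermal expansion described earlier can be illustrated numerically; the bore size, interference and clearance below are assumed example values:

```python
# Temperature rise needed so a heated bore clears the shaft it will be
# fitted over: expansion = alpha * bore * dT, so dT = expansion / (alpha * bore).
ALPHA_STEEL = 12e-6  # assumed coefficient of thermal expansion, carbon steel, 1/K

def required_temperature_rise(bore_mm, interference_mm, clearance_mm, alpha):
    """Temperature rise for the bore to expand past the interference fit
    plus a working assembly clearance."""
    expansion_needed = interference_mm + clearance_mm
    return expansion_needed / (alpha * bore_mm)

# 100 mm bore, 0.08 mm interference fit, 0.05 mm assembly clearance:
dT = required_temperature_rise(100.0, 0.08, 0.05, ALPHA_STEEL)
```

The result here is a rise of roughly 108 K, comfortably inside the 150–300 °C pre-heat window quoted for steels once ambient temperature is added.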
https://en.wikipedia.org/wiki/Induction_shrink_fitting
Inductionism is the scientific philosophy where laws are "induced" from sets of data. As an example, one might measure the strength of electrical forces at varying distances from charges and induce the inverse square law of electrostatics. This concept is considered one of the two pillars of the old view of the philosophy of science, together with verifiability. [ 1 ] An application of inductionism can show how experimental evidence can confirm or inductively justify the belief in generalization and the laws of nature. [ 2 ] Some aspects of induction have been credited to Aristotle. For example, in Prior Analytics, he proposed an inductive syllogism, which served to establish the primary and immediate proposition. [ 3 ] For scholars, this constitutes the principle of demonstrative science. [ 3 ] The Greek philosopher, however, did not develop a detailed theory of induction. [ 4 ] Some sources even state that the Aristotelian conceptualization of induction is different from its modern mainstream interpretations due to its position that inductive arguments are deductively valid. [ 5 ] The early form of modern inductionism is associated with the philosophies of thinkers such as Francis Bacon. [ 6 ] This can be demonstrated in the way Bacon favored the steady and incremental collection of empirical evidence using a method that derives general principles from the senses and particulars, gradually leading to the most general principles. [ 5 ] Inductionism is also said to be based on Newtonian physics. [ 1 ] This is evident in Isaac Newton's Rules of Reasoning in Philosophy, which articulated his belief that it is imperative to discover the unobservably small features of the world through a methodology that has a strong empirical base. [ 7 ] Here, the speculative hypothesis was replaced by induction from premises obtained through observation and experiment.
[ 7 ] It is noted that no law of science can be considered a mere inductive generalization of facts, because no law exists in isolation. [ 8 ] This is, for instance, demonstrated by thinkers such as John Stuart Mill, who maintained that inductionism is the initial act in the formulation of a general law using the deductive approaches to science. [ 9 ] There are thinkers who propose a model that is considered anti-inductionism. These include Karl Popper, who argued that science could progress without making any use of induction [ 10 ] and that there is a fundamental asymmetry between induction and deduction. [ 11 ] This philosophy of science-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Inductionism
Inductive cleavage, in organic chemistry, is the charge-initiated counterpart to radical-initiated alpha-cleavage. Since inductive cleavage does not require unpairing and re-pairing electrons, it can occur at both radical cationic and cationic sites. This topic is generally discussed when covering mass spectrometry, and the cleavage generally occurs by the same mechanisms. [ 1 ] [ 2 ] To neutralize the positive charge on the ionization site, a single two-electron transfer must be made. Neutralization of the positive charge at the ionization site is performed at the expense of the atom adjacent to the ionization site, transferring the positive charge to this atom as a result of the bond cleavage. This article about analytical chemistry is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Inductive_cleavage
In electrical engineering, two conductors are said to be inductively coupled or magnetically coupled [ 1 ] when they are configured such that a change in current through one wire induces a voltage across the ends of the other wire through electromagnetic induction. A changing current through the first wire creates a changing magnetic field around it by Ampère's circuital law. The changing magnetic field induces an electromotive force (EMF) in the second wire by Faraday's law of induction. The amount of inductive coupling between two conductors is measured by their mutual inductance. The coupling between two wires can be increased by winding them into coils and placing them close together on a common axis, so the magnetic field of one coil passes through the other coil. Coupling can also be increased by a magnetic core of a ferromagnetic material, such as iron or ferrite, in the coils, which increases the magnetic flux. The two coils may be physically contained in a single unit, as in the primary and secondary windings of a transformer, or may be separated. Coupling may be intentional or unintentional. Unintentional inductive coupling can cause signals from one circuit to be induced into a nearby circuit; this is called cross-talk and is a form of electromagnetic interference. An inductively coupled transponder consists of a solid-state transceiver chip connected to a large coil that functions as an antenna. When brought within the oscillating magnetic field of a reader unit, the transceiver is powered up by energy inductively coupled into its antenna and transfers data back to the reader unit inductively. Magnetic coupling between two magnets can also be used to mechanically transfer power without contact, as in the magnetic gear. [ 2 ] Inductive coupling is widely used throughout electrical technology; examples include: Low-frequency induction can be a dangerous form of inductive coupling when it happens inadvertently.
For example, if a long-distance metal pipeline is installed along a right of way in parallel with a high-voltage power line, the power line can induce currents in the pipe. Since the pipe is a conductor, insulated from the earth by its protective coating, it acts as the secondary winding of a long, drawn-out transformer whose primary winding is the power line. Voltages induced on the pipe are then a hazard to people operating valves or otherwise touching metal parts of the pipeline. [ citation needed ] Reducing low-frequency magnetic fields may be necessary when dealing with electronics, as sensitive circuits in close proximity to an instrument with a power transformer may pick up the mains frequency. Twisted wires (e.g. in networking cables) are an effective way of reducing the interference, as signals induced in successive twists cancel. Magnetic shielding is also an effective way of reducing unwanted inductive coupling, though moving the source of the magnetic field away from sensitive electronics is the simplest solution if possible. [ 3 ] Although induced currents can be harmful, they can also be helpful. Electrical distribution line engineers use inductive coupling to tap power for cameras on towers and at substations that allow remote monitoring of the facilities. Using this, they can watch from anywhere without needing to worry about changing camera batteries or maintaining solar panels. [ citation needed ]
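The quantities introduced above (mutual inductance, the EMF induced by a changing current, and the degree of coupling between two coils) can be sketched in a few lines; the component values below are illustrative assumptions:

```python
# Minimal sketch of inductive coupling quantities.
import math

def coupling_coefficient(M, L1, L2):
    """k = M / sqrt(L1 * L2); ranges from 0 (no coupling) to 1 (perfect),
    as in an ideal transformer."""
    return M / math.sqrt(L1 * L2)

def induced_emf(M, dI_dt):
    """EMF induced in the second coil by a changing current in the first
    (Faraday's law of induction, expressed via mutual inductance)."""
    return -M * dI_dt

M = 5e-3               # mutual inductance, H (assumed)
L1, L2 = 10e-3, 10e-3  # self-inductances of the two coils, H (assumed)
k = coupling_coefficient(M, L1, L2)  # 0.5: loosely coupled coils
emf = induced_emf(M, 1000.0)         # a 1 kA/s current ramp induces -5 V
```

Adding a ferromagnetic core or bringing the coils closer, as the text describes, corresponds to raising M and hence k toward 1.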
https://en.wikipedia.org/wiki/Inductive_coupling
In organic chemistry, the inductive effect in a molecule is a local change in the electron density due to electron-withdrawing or electron-donating groups elsewhere in the molecule, resulting in a permanent dipole in a bond. [ 1 ] It is present in a σ (sigma) bond, unlike the electromeric effect, which is present in a π (pi) bond. The halogen atoms in an alkyl halide are electron-withdrawing, while the alkyl groups have electron-donating tendencies. If an electronegative atom is joined to a chain of atoms, typically carbon, it withdraws electron density, so that a partial positive charge is relayed along the other atoms in the chain. This is the electron-withdrawing inductive effect, also known as the − I effect. In short, alkyl groups tend to donate electrons, leading to the + I effect. Its experimental basis is the ionization constant. It is distinct from, and often opposite to, the mesomeric effect. Covalent bonds can be polarized depending on the relative electronegativity of the two atoms forming the bond. The electron cloud in a σ-bond between two unlike atoms is not uniform and is slightly displaced towards the more electronegative of the two atoms. This causes a permanent state of bond polarization, where the more electronegative atom has a fractional negative charge (δ − ) and the less electronegative atom has a fractional positive charge (δ + ). For example, the water molecule H₂O has an electronegative oxygen atom that attracts electron density. This is indicated by a δ − in the water molecule in the vicinity of the O atom, as well as by a δ + next to each of the two H atoms. The vector addition of the individual bond dipole moments results in a net dipole moment for the molecule. A polar bond is a covalent bond in which there is a separation of charge between one end and the other – in other words, in which one end is slightly positive and the other slightly negative. Examples include most covalent bonds.
The hydrogen–chlorine bond in HCl or the hydrogen–oxygen bonds in water are typical. The effect of the sigma-electron displacement towards the more electronegative atom, by which one end becomes positively charged and the other end negatively charged, is known as the inductive effect. The − I effect is a permanent effect and is generally represented by an arrow on the bond. [ citation needed ] The inductive effect of alkyl groups has long been a source of misunderstanding. Due to early experimentation, before an understanding of hyperconjugation, results such as the more rapid nitration of toluene compared to benzene were deduced as being due to an inductively donating effect of alkyl groups. Effects such as the lower acidity of alcohols and higher basicity of substituted amines further deepened the misunderstanding, despite these being due to solvent or polarisability effects. [ 2 ] As the induced change in polarity is less than the original polarity, the inductive effect rapidly dies out and is significant only over a short distance. Moreover, the inductive effect is permanent but feeble, since it involves the shift of strongly held σ-bond electrons, and other stronger factors may overshadow it. Recent research combining wave function theory calculations with experimental results (gas-phase acidities, ion-specific effects in thermoresponsive polymers, and NMR spectroscopy) has re-examined haloacetic acids and salts. The study found that in trihaloacetates, the trichloro group – despite being less electronegative than fluoro groups – reduces the carboxylate oxygen charge density the most. This inversion of the traditional electronegativity–charge density relationship suggests that other factors beyond the simple inductive effect (such as hyperconjugation) may significantly influence acidity trends. [ 3 ] Relative inductive effects have been experimentally measured through the resulting pKa values of a nearby carboxylic acid group (see § Carboxylic acids).
In increasing order of − I effect, or decreasing order of + I effect, common functional groups are: [ 4 ] Hydrogen substituents also exhibit an isotope effect following the same order, where H is hydrogen, D deuterium, and T tritium. The strength of the inductive effect is also dependent on the distance between the substituent group and the main group that reacts; the longer the distance, the weaker the effect. Inductive effects can be expressed quantitatively through the Hammett equation, which describes the relationship between reaction rates and equilibrium constants with respect to substituents. The inductive effect can be used to determine the stability of a molecule depending on the charge present on the atom and the groups bonded to the atom. For example, if an atom has a positive charge and is attached to a − I group, its charge becomes 'amplified' and the molecule becomes more unstable. Similarly, if an atom has a negative charge and is attached to a + I group, its charge becomes 'amplified' and the molecule becomes more unstable. In contrast, if an atom has a negative charge and is attached to a − I group, its charge becomes 'de-amplified' and the molecule becomes more stable than if the I -effect was not taken into consideration. Similarly, if an atom has a positive charge and is attached to a + I group, its charge becomes 'de-amplified' and the molecule becomes more stable than if the I -effect was not taken into consideration. The explanation for the above is given by the fact that more charge on an atom decreases stability and less charge on an atom increases stability. The inductive effect also plays a vital role in deciding the acidity and basicity of a molecule. Groups having a + I effect attached to a molecule increase the overall electron density on the molecule, and the molecule is able to donate electrons, making it basic.
Similarly, groups having a − I effect attached to a molecule decrease the overall electron density on the molecule, making it electron-deficient, which results in its acidity. As the number of − I groups attached to a molecule increases, its acidity increases; as the number of + I groups on a molecule increases, its basicity increases. The strength of a carboxylic acid depends on the extent of its ionization: the more ionized it is, the stronger it is. As an acid becomes stronger, the numerical value of its pKa drops. In acids, the electron-releasing inductive effect of the alkyl group increases the electron density on oxygen and thus hinders the breaking of the O–H bond, which consequently reduces the ionization. Due to its greater ionization, formic acid ( pKa = 3.74 ) is stronger than acetic acid ( pKa = 4.76 ). Monochloroacetic acid ( pKa = 2.82 ), though, is stronger than formic acid, due to the electron-withdrawing effect of chlorine promoting ionization. In benzoic acid, the carbon atoms which are present in the ring are sp² hybridised. As a result, benzoic acid ( pKa = 4.20 ) is a stronger acid than cyclohexanecarboxylic acid ( pKa = 4.87 ). Also, in aromatic carboxylic acids, electron-withdrawing groups substituted at the ortho and para positions can enhance the acid strength. Since the carboxyl group is itself an electron-withdrawing group, dicarboxylic acids are, in general, stronger acids than their monocarboxyl analogues. The inductive effect also contributes to the polarization of bonds, making certain carbon or other atom positions electron-rich or electron-poor.
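The pKa comparisons above translate directly into ratios of ionization constants, since Ka = 10^(−pKa); a short sketch using the values quoted in the text:

```python
# Relative acid strength from pKa values: Ka = 10**(-pKa), so the ratio of
# ionization constants is 10**(pKa_weaker - pKa_stronger).
def relative_strength(pka_weaker, pka_stronger):
    """How many times more ionized the stronger acid is."""
    return 10 ** (pka_weaker - pka_stronger)

# Monochloroacetic acid (pKa 2.82) vs acetic acid (pKa 4.76): the
# electron-withdrawing -I effect of chlorine makes it ~87x stronger.
ratio_cl = relative_strength(4.76, 2.82)

# Formic acid (pKa 3.74) vs acetic acid (pKa 4.76): removing the
# electron-donating +I methyl group gives a ~10x increase.
ratio_h = relative_strength(4.76, 3.74)
```

This makes concrete the claim that each unit of pKa corresponds to a tenfold difference in ionization.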
https://en.wikipedia.org/wiki/Inductive_effect
Inductive reasoning refers to a variety of methods of reasoning in which the conclusion of an argument is supported not with deductive certainty, but with some degree of probability. [ 1 ] Unlike deductive reasoning (such as mathematical induction), where the conclusion is certain, given that the premises are correct, inductive reasoning produces conclusions that are at best probable, given the evidence provided. [ 2 ] [ 3 ] The types of inductive reasoning include generalization, prediction, statistical syllogism, argument from analogy, and causal inference. There are also differences in how their results are regarded. A generalization (more accurately, an inductive generalization) proceeds from premises about a sample to a conclusion about the population. [ 4 ] The observation obtained from this sample is projected onto the broader population. [ 4 ] For example, if there are 20 balls – either black or white – in an urn: to estimate their respective numbers, a sample of four balls is drawn, of which three are black and one is white. An inductive generalization may be that there are 15 black and five white balls in the urn. However, this is only one of 17 possibilities as to the actual number of each color of balls in the urn (the population) – there may, of course, have been 19 black and just one white ball, or only three black balls and 17 white, or any mix in between. The probability of each possible distribution being the actual numbers of black and white balls can be estimated using techniques such as Bayesian inference, where prior assumptions about the distribution are updated with the observed sample, or maximum likelihood estimation (MLE), which identifies the distribution most likely given the observed sample. How much the premises support the conclusion depends upon the number in the sample group, the number in the population, and the degree to which the sample represents the population (which, for a static population, may be achieved by taking a random sample).
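The urn example can be worked through numerically with a Bayesian update over a uniform prior and a hypergeometric likelihood; this sketch recovers the 17 possible compositions and the 15-black maximum-likelihood estimate described above:

```python
# Bayesian inference for the urn: 20 balls, a sample of 4 drawn without
# replacement yields 3 black and 1 white. Prior: all compositions
# consistent with the sample are equally likely.
from math import comb

N, n, k = 20, 4, 3  # population size, sample size, black balls in sample

def likelihood(B):
    """Hypergeometric probability of drawing k black in n when the urn
    holds B black balls out of N."""
    return comb(B, k) * comb(N - B, n - k) / comb(N, n)

support = range(k, N - (n - k) + 1)  # 3..19 black balls remain possible
weights = {B: likelihood(B) for B in support}
total = sum(weights.values())
posterior = {B: w / total for B, w in weights.items()}
mle = max(posterior, key=posterior.get)  # most probable composition
```

With a uniform prior the posterior mode coincides with the MLE, and the composition of 15 black and 5 white balls indeed maximizes the likelihood, while the posterior also quantifies how plausible the other 16 compositions remain.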
The greater the sample size relative to the population and the more closely the sample represents the population, the stronger the generalization is. The hasty generalization and the biased sample are generalization fallacies. A statistical generalization is a type of inductive argument in which a conclusion about a population is inferred using a statistically representative sample . For example: The measure is highly reliable within a well-defined margin of error provided that the selection process was genuinely random and that the numbers of items in the sample having the properties considered are large. It is readily quantifiable. Compare the preceding argument with the following. "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small. Statistical generalizations are also called statistical projections [ 5 ] and sample projections . [ 6 ] An anecdotal generalization is a type of inductive argument in which a conclusion about a population is inferred using a non-statistical sample. [ 7 ] In other words, the generalization is based on anecdotal evidence . For example: This inference is less reliable (and thus more likely to commit the fallacy of hasty generalization) than a statistical generalization, first, because the sample events are non-random, and second because it is not reducible to a mathematical expression. Statistically speaking, there is simply no way to know, measure and calculate the circumstances affecting performance that will occur in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. 
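The claim above that a statistical generalization is readily quantifiable can be illustrated with the standard normal-approximation margin of error for a sample proportion; the sample sizes below are illustrative, with the ten-person case mirroring the weak book-club example:

```python
# 95% confidence margin of error for a sampled proportion, using the
# normal approximation z * sqrt(p*(1-p)/n).
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the confidence interval around a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A genuinely random sample of 1000 with 60% showing the property:
moe_large = margin_of_error(0.60, 1000)  # about +/- 3 percentage points

# The non-random "sample" of 10 gives roughly +/- 30 points - far too
# wide to support a generalization about people at large, even setting
# aside the selection bias.
moe_small = margin_of_error(0.60, 10)
```

The contrast shows why the well-conducted poll is highly reliable within its stated margin while the book-club inference is weak on sample size alone, before its non-randomness is even considered.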
Arguments that tacitly presuppose this uniformity are sometimes called Humean after the philosopher who was first to subject them to philosophical scrutiny. [ 8 ] An inductive prediction draws a conclusion about a future, current, or past instance from a sample of other instances. Like an inductive generalization, an inductive prediction relies on a data set consisting of specific instances of a phenomenon. But rather than conclude with a general statement, the inductive prediction concludes with a specific statement about the probability that a single instance will (or will not) have an attribute shared (or not shared) by the other instances. [ 9 ] A statistical syllogism proceeds from a generalization about a group to a conclusion about an individual. For example: This is a statistical syllogism . [ 10 ] Even though one cannot be sure Bob will attend university, the exact probability of this outcome is fully assured (given no further information). Two dicto simpliciter fallacies can occur in statistical syllogisms: " accident " and " converse accident ". The process of analogical inference involves noting the shared properties of two or more things and from this basis inferring that they also share some further property: [ 11 ] Analogical reasoning is very frequent in common sense , science , philosophy , law , and the humanities , but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning . [ 12 ] This is analogical induction , according to which things alike in certain ways are more prone to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in his System of Logic , where he states, "[t]here can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favor of the conclusion." [ 13 ] See Mill's Methods . 
Some thinkers contend that analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events. [ citation needed ] Analogical induction requires an auxiliary examination of the relevancy of the characteristics cited as common to the pair. In the preceding example, if a premise were added stating that both stones were mentioned in the records of early Spanish explorers, this common attribute would be extraneous to the stones and would not contribute to their probable affinity. A pitfall of analogy is that features can be cherry-picked: while objects may show striking similarities, two things juxtaposed may respectively possess other characteristics not identified in the analogy that are sharply dissimilar. Thus, analogy can mislead if not all relevant comparisons are made. A causal inference draws a conclusion about a possible or probable causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship. [ citation needed ] The two principal methods used to reach inductive generalizations are enumerative induction and eliminative induction. [ 14 ] [ 15 ] Enumerative induction is an inductive method in which a generalization is constructed based on the number of instances that support it. The more supporting instances, the stronger the conclusion. [ 14 ] [ 15 ] The most basic form of enumerative induction reasons from particular instances to all instances and is thus an unrestricted generalization. [ 16 ] If one observes 100 swans, and all 100 are white, one might infer a probable universal categorical proposition of the form All swans are white. As this reasoning form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference.
The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central in philosophy of science , as enumerative induction has a pivotal role in the traditional model of the scientific method . This is enumerative induction , also known as simple induction or simple predictive induction . It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For the preceding argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the conclusion All is a bold assertion. A single contrary instance foils the argument. And last, quantifying the level of probability in any mathematical form is problematic. [ 17 ] By what standard do we measure our Earthly sample of known life against all (possible) life? Suppose we do discover some new organism—such as some microorganism floating in the mesosphere or an asteroid—and it is cellular. Does the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes", and for a good many this "yes" is not only reasonable but incontrovertible. So then just how much should this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all with or without numerical quantification. This is enumerative induction in its weak form . It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive. 
Eliminative induction , also called variative induction, is an inductive method first put forth by Francis Bacon ; [ 18 ] in it, a generalization is constructed based on the variety of instances that support it. Unlike enumerative induction, eliminative induction reasons from the various kinds of instances that support a conclusion, rather than the number of instances that support it. As the variety of instances increases, more of the possible conclusions based on those instances can be identified as incompatible and eliminated. This, in turn, increases the strength of any conclusion that remains consistent with the various instances. In this context, confidence is a function of how many incompatible alternatives have been identified and eliminated. This confidence is expressed as the Baconian probability i|n (read as "i out of n"), where n reasons for finding a claim incompatible have been identified and i of these have been eliminated by evidence or argument. [ 18 ] There are three ways of attacking an argument; these ways, known as defeaters in the defeasible reasoning literature, are rebutting, undermining, and undercutting. Rebutting defeats by offering a counter-example, undermining defeats by questioning the validity of the evidence, and undercutting defeats by pointing out conditions under which a conclusion is not true even when the inference holds. This approach builds confidence by identifying defeaters and proving them wrong. [ 18 ] This type of induction may use different methodologies such as quasi-experimentation, which tests and, where possible, eliminates rival hypotheses. [ 19 ] Different evidential tests may also be employed to eliminate possibilities that are entertained. [ 20 ] Eliminative induction is crucial to the scientific method and is used to eliminate hypotheses that are inconsistent with observations and experiments. [ 14 ] [ 15 ] It focuses on possible causes instead of observed actual instances of causal connections.
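The Baconian confidence measure described above can be sketched in code. This is a hypothetical illustration of the i|n bookkeeping only; the claim, the defeater descriptions, and their elimination status are invented for the example.

```python
# Minimal sketch of Baconian probability i|n for eliminative induction.
# The defeaters and their status below are hypothetical examples.

def baconian_confidence(defeaters):
    """Return 'i|n': i defeaters eliminated out of n identified."""
    n = len(defeaters)
    i = sum(1 for eliminated in defeaters.values() if eliminated)
    return f"{i}|{n}"

# The three kinds of defeaters named in the defeasible-reasoning literature,
# mapped to whether evidence or argument has eliminated each one:
defeaters = {
    "rebutting: a counter-example exists": True,           # eliminated
    "undermining: the evidence is invalid": True,          # eliminated
    "undercutting: the inference fails in some condition": False,  # open
}

print(baconian_confidence(defeaters))  # -> 2|3
```

Confidence grows as more entries flip to eliminated; unlike a numeric probability, 2|3 simply records how many identified ways of attacking the claim have been defeated.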
[ 21 ] For a move from particular to universal, Aristotle in the 300s BCE used the Greek word epagogé , which Cicero translated into the Latin word inductio . [ 22 ] Aristotle's Posterior Analytics covers the methods of inductive proof in natural philosophy and in the social sciences. The first book of Posterior Analytics describes the nature and science of demonstration and its elements, including definition, division, intuitive reason of first principles, particular and universal demonstration, affirmative and negative demonstration, the difference between science and opinion, etc. The ancient Pyrrhonists were the first Western philosophers to point out the Problem of induction : that induction cannot, according to them, justify the acceptance of universal statements as true. [ 22 ] The Empiric school of ancient Greek medicine employed epilogism as a method of inference. 'Epilogism' is a theory-free method that looks at history through the accumulation of facts without major generalization and with consideration of the consequences of making causal claims. [ 23 ] Epilogism is an inference that moves entirely within the domain of visible and evident things; it tries not to invoke unobservables . The Dogmatic school of ancient Greek medicine employed analogismos as a method of inference. [ 24 ] This method used analogy to reason from what was observed to unobservable forces. In 1620, the early modern philosopher Francis Bacon repudiated the value of mere experience and enumerative induction alone. His method of inductivism required that minute and many-varied observations uncovering the natural world's structure and causal relations be coupled with enumerative induction in order to have knowledge beyond the present scope of experience. Inductivism therefore required enumerative induction as a component.
The empiricist David Hume 's 1740 stance found enumerative induction to have no rational, let alone logical, basis; instead, induction was the product of instinct rather than reason, a custom of the mind and an everyday requirement to live. While observations, such as the motion of the sun, could be coupled with the principle of the uniformity of nature to produce conclusions that seemed to be certain, the problem of induction arose from the fact that the uniformity of nature is not a logically valid principle. It cannot be defended as deductively rational; neither can it be defended as inductively rational by appealing to the fact that the uniformity of nature has accurately described the past and therefore will likely accurately describe the future, for that appeal is itself an inductive argument and therefore circular, since induction is what needs to be justified. Since Hume first wrote about the dilemma between the invalidity of deductive arguments and the circularity of inductive arguments in support of the uniformity of nature, this supposed dichotomy between merely two modes of inference, deduction and induction, has been contested with the discovery of a third mode of inference known as abduction, or abductive reasoning , which was first formulated and advanced by Charles Sanders Peirce , in 1886, where he referred to it as "reasoning by hypothesis." [ 25 ] Inference to the best explanation is often, yet arguably, treated as synonymous with abduction; it was first identified by Gilbert Harman in 1965, where he referred to it as "abductive reasoning," yet his definition of abduction slightly differs from Peirce's. [ 26 ] Regardless, if abduction is in fact a third mode of inference rationally independent from the other two, then either the uniformity of nature can be rationally justified through abduction, or Hume's dilemma is more of a trilemma.
Hume was also skeptical of the application of enumerative induction and reason to reach certainty about unobservables and especially the inference of causality from the fact that modifying an aspect of a relationship prevents or produces a particular outcome. Awakened from "dogmatic slumber" by a German translation of Hume's work, Kant sought to explain the possibility of metaphysics . In 1781, Kant's Critique of Pure Reason introduced rationalism as a path toward knowledge distinct from empiricism . Kant sorted statements into two types. Analytic statements are true by virtue of the arrangement of their terms and meanings , thus analytic statements are tautologies , merely logical truths, true by necessity . Synthetic statements, in contrast, hold meanings that refer to states of facts, contingencies . Against both rationalist philosophers like Descartes and Leibniz as well as against empiricist philosophers like Locke and Hume , Kant's Critique of Pure Reason is a sustained argument that in order to have knowledge we need both a contribution of our mind (concepts) as well as a contribution of our senses (intuitions). Knowledge proper is for Kant thus restricted to what we can possibly perceive ( phenomena ), whereas objects of mere thought (" things in themselves ") are in principle unknowable due to the impossibility of ever perceiving them. Reasoning that the mind must contain its own categories for organizing sense data , making experience of objects in space and time ( phenomena ) possible, Kant concluded that the uniformity of nature was an a priori truth. [ 27 ] A class of synthetic statements that was not contingent but true by necessity was then synthetic a priori . Kant thus saved both metaphysics and Newton's law of universal gravitation . On the basis of the argument that what goes beyond our knowledge is "nothing to us," [ 28 ] he discarded scientific realism .
Kant's position that knowledge comes about by a cooperation of perception and our capacity to think ( transcendental idealism ) gave birth to the movement of German idealism . Hegel 's absolute idealism subsequently flourished across continental Europe and England. Positivism , developed by Henri de Saint-Simon and promulgated in the 1830s by his former student Auguste Comte , was the first late modern philosophy of science . In the aftermath of the French Revolution , fearing society's ruin, Comte opposed metaphysics . Human knowledge had evolved from religion to metaphysics to science, said Comte, which had flowed from mathematics to astronomy to physics to chemistry to biology to sociology —in that order—describing increasingly intricate domains. All of society's knowledge had become scientific, with questions of theology and of metaphysics being unanswerable. Comte found enumerative induction reliable as a consequence of its grounding in available experience. He asserted the use of science, rather than metaphysical truth, as the correct method for the improvement of human society. According to Comte, scientific method frames predictions, confirms them, and states laws—positive statements—irrefutable by theology or by metaphysics . Regarding experience as justifying enumerative induction by demonstrating the uniformity of nature , [ 27 ] the British philosopher John Stuart Mill welcomed Comte's positivism, but thought scientific laws susceptible to recall or revision, and Mill also withheld assent from Comte's Religion of Humanity . Comte was confident in treating scientific law as an irrefutable foundation for all knowledge , and believed that churches, honouring eminent scientists, ought to focus public mindset on altruism —a term Comte coined—to apply science for humankind's social welfare via sociology , Comte's leading science.
During the 1830s and 1840s, while Comte and Mill were the leading philosophers of science, William Whewell found enumerative induction not nearly as convincing, and, despite the dominance of inductivism, formulated "superinduction". [ 29 ] Whewell argued that "the peculiar import of the term Induction " should be recognised: "there is some Conception superinduced upon the facts", that is, "the Invention of a new Conception in every inductive inference". The creation of Conceptions is easily overlooked and prior to Whewell was rarely recognised. [ 29 ] Whewell explained: "Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception, men can no longer easily restore them back to the detached and incoherent condition in which they were before they were thus combined." [ 29 ] These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termed consilience —that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes used the phrase "logic of induction", despite the fact that induction lacks rules and cannot be trained. [ 29 ] In the 1870s, the originator of pragmatism , C S Peirce performed vast investigations that clarified the basis of deductive inference as mathematical proof (as, independently, did Gottlob Frege ). Peirce recognized induction but always insisted on a third type of inference that Peirce variously termed abduction or retroduction or hypothesis or presumption .
[ 30 ] Later philosophers termed Peirce's abduction, etc., Inference to the Best Explanation (IBE). [ 31 ] Having highlighted Hume's problem of induction , John Maynard Keynes posed logical probability as its answer, or as near a solution as he could arrive at. [ 32 ] Bertrand Russell found Keynes's Treatise on Probability the best examination of induction, and believed that if read with Jean Nicod 's Le Probleme logique de l'induction as well as R B Braithwaite 's review of Keynes's work in the October 1925 issue of Mind , that would cover "most of what is known about induction", although the "subject is technical and difficult, involving a good deal of mathematics". [ 33 ] Two decades later, Russell followed Keynes in regarding enumerative induction as an "independent logical principle". [ 34 ] [ 35 ] [ 36 ] Russell found: "Hume's skepticism rests entirely upon his rejection of the principle of induction. The principle of induction, as applied to causation, says that, if A has been found very often accompanied or followed by B , then it is probable that on the next occasion on which A is observed, it will be accompanied or followed by B . If the principle is to be adequate, a sufficient number of instances must make the probability not far short of certainty. If this principle, or any other from which it can be deduced, is true, then the causal inferences which Hume rejects are valid, not indeed as giving certainty, but as giving a sufficient probability for practical purposes. If this principle is not true, every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's skepticism is inescapable for an empiricist. The principle itself cannot, of course, without circularity, be inferred from observed uniformities, since it is required to justify any such inference. It must, therefore, be, or be deduced from, an independent principle not based on experience.
To this extent, Hume has proved that pure empiricism is not a sufficient basis for science. But if this one principle is admitted, everything else can proceed in accordance with the theory that all our knowledge is based on experience. It must be granted that this is a serious departure from pure empiricism, and that those who are not empiricists may ask why, if one departure is allowed, others are forbidden. These, however, are not questions directly raised by Hume's arguments. What these arguments prove—and I do not think the proof can be controverted—is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible." [ 36 ] In a 1965 paper, Gilbert Harman explained that enumerative induction is not an autonomous phenomenon, but is simply a disguised consequence of Inference to the Best Explanation (IBE). [ 31 ] IBE is otherwise synonymous with C S Peirce 's abduction . [ 31 ] Many philosophers of science espousing scientific realism have maintained that IBE is the way that scientists develop approximately true scientific theories about nature. [ 37 ] Inductive reasoning is a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of the premises are true. [ 38 ] This difference between deductive and inductive reasoning is reflected in the terminology used to describe deductive and inductive arguments. In deductive reasoning, an argument is " valid " when, assuming the argument's premises are true, the conclusion must be true. If the argument is valid and the premises are true, then the argument is "sound" . In contrast, in inductive reasoning, an argument's premises can never guarantee that the conclusion must be true. Instead, an argument is "strong" when, assuming the argument's premises are true, the conclusion is probably true. 
If the argument is strong and the premises are thought to be true, then the argument is said to be "cogent". [ 39 ] Less formally, the conclusion of an inductive argument may be called "probable", "plausible", "likely", "reasonable", or "justified", but never "certain" or "necessary". Logic affords no bridge from the probable to the certain. The futility of attaining certainty through some critical mass of probability can be illustrated with a coin-toss exercise. Suppose someone tests whether a coin is either a fair one or two-headed. They flip the coin ten times, and ten times it comes up heads. At this point, there is a strong reason to believe it is two-headed. After all, the chance of ten heads in a row is .000976: less than one in one thousand. Then, after 100 flips, every toss has come up heads. Now there is "virtual" certainty that the coin is two-headed, and one can regard it as "true" that the coin is probably two-headed. Still, one can neither logically nor empirically rule out that the next toss will produce tails. No matter how many times in a row it comes up heads, this remains the case. If one programmed a machine to flip a coin over and over continuously, at some point the result would be a string of 100 heads. In the fullness of time, all combinations will appear. As for the slim prospect of getting ten out of ten heads from a fair coin, the outcome that made the coin appear biased, many may be surprised to learn that any particular sequence of ten heads or tails (e.g., H-H-T-T-H-T-H-H-H-T) is equally unlikely, yet some such sequence occurs in every trial of ten tosses. That means all results for ten tosses have the same probability as getting ten out of ten heads, which is 0.000976. Whatever the result, if one records the exact heads-tails sequence obtained, that sequence had a chance of 0.000976. An argument is deductive when the conclusion is necessary given the premises. That is, the conclusion must be true if the premises are true.
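Returning to the coin tosses above, the probabilities can be checked directly; a short sketch using only arithmetic:

```python
# Probability of ten heads in a row from a fair coin, and of any one
# specific ten-toss sequence: both equal (1/2)**10.
p_ten_heads = 0.5 ** 10
print(round(p_ten_heads, 6))  # -> 0.000977 (the text truncates to 0.000976)

# Every specific sequence, e.g. H-H-T-T-H-T-H-H-H-T, has the same chance:
sequence = "HHTTHTHHHT"
p_sequence = 0.5 ** len(sequence)
assert p_sequence == p_ten_heads

# After 100 heads in a row the fair-coin probability is astronomically
# small, yet never exactly zero: tails next toss stays logically possible.
p_hundred_heads = 0.5 ** 100
print(p_hundred_heads > 0)  # -> True
```

The point of the exercise survives the computation: however small `p_hundred_heads` gets, it never reaches zero, so probability never hardens into certainty.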
For example, after getting 10 heads in a row one might deduce that the coin had met some statistical criterion to be regarded as probably two-headed, a conclusion that would not be falsified even if the next toss yielded tails. If a deductive conclusion follows duly from its premises, then it is valid; otherwise, it is invalid (that an argument is invalid is not to say its conclusions are false; it may have a true conclusion, just not on account of the premises). An examination of the following examples will show that the relationship between premises and conclusion is such that the truth of the conclusion is already implicit in the premises. Bachelors are unmarried because we say they are; we have defined them so. Socrates is mortal because we have included him in a set of beings that are mortal. The conclusion for a valid deductive argument is already contained in the premises since its truth is strictly a matter of logical relations. It cannot say more than its premises. Inductive premises, on the other hand, draw their substance from fact and evidence, and the conclusion accordingly makes a factual claim or prediction. Its reliability varies proportionally with the evidence. Induction wants to reveal something new about the world. One could say that induction wants to say more than is contained in the premises. To better see the difference between inductive and deductive arguments, consider that it would not make sense to say: "all rectangles so far examined have four right angles, so the next one I see will have four right angles." This would treat logical relations as something factual and discoverable, and thus variable and uncertain. Likewise, speaking deductively we may permissibly say: "All unicorns can fly; I have a unicorn named Charlie; thus Charlie can fly." This deductive argument is valid because the logical relations hold; we are not interested in their factual soundness. The conclusions of inductive reasoning are inherently uncertain .
It only deals with the extent to which, given the premises, the conclusion is "credible" according to some theory of evidence. Examples include a many-valued logic , Dempster–Shafer theory , or probability theory with rules for inference such as Bayes' rule . Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues with this may arise however; for example, the second axiom of probability is a closed-world assumption). [ 40 ] Another crucial difference between these two types of argument is that deductive certainty is impossible in non-axiomatic or empirical systems such as reality , leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems. [ 41 ] Given that "if A is true then that would cause B , C , and D to be true", an example of deduction would be " A is true therefore we can deduce that B , C , and D are true". An example of induction would be " B , C , and D are observed to be true therefore A might be true". A is a reasonable explanation for B , C , and D being true. For example, an asteroid impact would explain the mass extinction of the non-avian dinosaurs. Note, however, that the asteroid explanation for the mass extinction is not necessarily correct. Other events with the potential to affect global climate also coincide with the extinction of the non-avian dinosaurs ; one example is the release of volcanic gases (particularly sulfur dioxide ) during the formation of the Deccan Traps in India . Another example of an inductive argument is that all biological life forms so far discovered depend on liquid water, so all biological life probably depends on liquid water. This argument could have been made every time a new biological life form was found, and would have had a correct conclusion every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered.
As a result, the argument may be stated as: all biological life forms known so far depend on liquid water, so if a new biological life form is discovered, it will probably depend on liquid water as well. A classical example of an "incorrect" statistical syllogism was presented by John Vickers: all observed swans are white; therefore, all swans are white. The conclusion fails because the population of swans then known was not actually representative of all swans. A more reasonable conclusion would be that we might reasonably expect all swans in England to be white, at least in the short term. Succinctly put: deduction is about certainty/necessity ; induction is about probability . [ 10 ] Any single assertion will answer to one of these two criteria. Another approach to the analysis of reasoning is that of modal logic , which deals with the distinction between the necessary and the possible in a way not concerned with probabilities among things deemed possible. The philosophical definition of inductive reasoning is more nuanced than a simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms). Note that the definition of inductive reasoning described here differs from mathematical induction , which, in fact, is a form of deductive reasoning. Mathematical induction is used to provide strict proofs of the properties of recursively defined sets. [ 42 ] The deductive nature of mathematical induction derives from its basis in a non-finite number of cases, in contrast with the finite number of cases involved in an enumerative induction procedure like proof by exhaustion . Both mathematical induction and proof by exhaustion are examples of complete induction . Complete induction is a masked type of deductive reasoning.
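To illustrate why mathematical induction is deductive rather than inductive, here is the standard two-step schema applied to a classic textbook identity (the example is not drawn from the article itself):

```latex
% Claim: for all n >= 1,  1 + 2 + ... + n = n(n+1)/2.
% Base case (n = 1):  1 = 1(1+1)/2.
% Inductive step: assuming the claim for n = k,
%   1 + 2 + ... + k + (k+1) = k(k+1)/2 + (k+1) = (k+1)(k+2)/2,
% which is the claim for n = k+1. Hence the claim holds for every n.
\[
\sum_{i=1}^{n} i = \frac{n(n+1)}{2},
\qquad
\underbrace{\frac{k(k+1)}{2} + (k+1)}_{\text{inductive step}}
  = \frac{(k+1)(k+2)}{2}.
\]
```

Nothing here is probabilistic: the base case and the inductive step jointly entail the conclusion for every natural number, which is exactly the deductive character the text describes.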
Although philosophers at least as far back as the Pyrrhonist philosopher Sextus Empiricus have pointed out the unsoundness of inductive reasoning, [ 43 ] the classic philosophical critique of the problem of induction was given by the Scottish philosopher David Hume . [ 44 ] Although the use of inductive reasoning demonstrates considerable success, the justification for its application has been questionable. Recognizing this, Hume highlighted the fact that our mind often draws conclusions from relatively limited experiences that appear correct but which are actually far from certain. In deduction, the truth value of the conclusion is based on the truth of the premise. In induction, however, the dependence of the conclusion on the premise is always uncertain. For example, let us assume that all ravens are black. The fact that there are numerous black ravens supports the assumption. Our assumption, however, becomes invalid once it is discovered that there are white ravens. Therefore, the general rule "all ravens are black" is not the kind of statement that can ever be certain. Hume further argued that it is impossible to justify inductive reasoning: this is because it cannot be justified deductively, so our only option is to justify it inductively. Since this argument is circular, with the help of Hume's fork he concluded that our use of induction is not logically justifiable . [ 45 ] Hume nevertheless stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism , Hume advocated a practical skepticism based on common sense , where the inevitability of induction is accepted. [ 46 ] Bertrand Russell illustrated Hume's skepticism in a story about a chicken who, fed every morning without fail and following the laws of induction, concluded that this feeding would always continue, until his throat was eventually cut by the farmer. [ 47 ] In 1963, Karl Popper wrote, "Induction, i.e. 
inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure." [ 48 ] [ 49 ] Popper's 1972 book Objective Knowledge —whose first chapter is devoted to the problem of induction—opens, "I think I have solved a major philosophical problem: the problem of induction ". [ 49 ] In Popper's schema, enumerative induction is "a kind of optical illusion" cast by the steps of conjecture and refutation during a problem shift . [ 49 ] An imaginative leap, the tentative solution is improvised, lacking inductive rules to guide it. [ 49 ] The resulting, unrestricted generalization is deductive, an entailed consequence of all explanatory considerations. [ 49 ] Controversy continued, however, with Popper's putative solution not generally accepted. [ 50 ] Donald A. Gillies argues that rules of inference related to inductive reasoning are overwhelmingly absent from science, and describes most scientific inferences as "involv[ing] conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules." [ 51 ] Gillies also provides a rare counterexample "in the machine learning programs of AI ." [ 51 ] Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. [ citation needed ] As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic , confirmation bias , and the predictable-world bias . The availability heuristic is regarded as causing the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them.
For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around them. Confirmation bias is based on the natural tendency to confirm rather than deny a hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is, in fact, a sociable individual. The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and therefore believe that they are able to predict outcomes based on what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth. 
[ 52 ] As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by considering an exhaustive list of possibilities, a definite probabilistic characterisation of each of them (in terms of likelihoods), and precise prior probabilities for them (e.g. based on logic or induction from previous experience). When faced with evidence, we adjust the strength of our belief in the given hypotheses in a precise manner using Bayesian logic to yield candidate 'a posteriori probabilities', taking no account of the extent to which the new evidence may happen to give us specific reasons to doubt our assumptions. Otherwise it is advisable to review and repeat as necessary the consideration of possibilities and their characterisation until, perhaps, a stable situation is reached. [ 53 ] Around 1960, Ray Solomonoff founded the theory of universal inductive inference , a theory of prediction based on observations, for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference rests on solid philosophical foundations and can be considered as a mathematically formalized Occam's razor , although it has been argued that it 'seems to be an inadequate tool for dealing with any reasonably complex or real-world environment'. [ 54 ] Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity . Inductive inference typically considers hypothesis classes of countable size. A recent advance [ 55 ] established a necessary and sufficient condition for inductive inference: a finite error bound is guaranteed if and only if the hypothesis class is a countable union of online learnable classes.
Notably, this condition allows the hypothesis class to have an uncountable size while remaining learnable within this framework.
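The Bayesian adjustment described at the start of this section can be sketched with the coin example used earlier in the article. The 50/50 prior and the ten-heads observation are assumptions chosen for illustration:

```python
# Bayesian update: is the coin fair or two-headed, after ten heads in a row?
# Hypotheses with an assumed uniform prior:
prior = {"fair": 0.5, "two-headed": 0.5}

# Likelihood of observing ten heads under each hypothesis:
likelihood = {"fair": 0.5 ** 10, "two-headed": 1.0}

# Bayes' rule: posterior is proportional to prior * likelihood,
# normalized over the exhaustive list of hypotheses.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior["two-headed"])  # about 0.999: strong, but not certain
print(posterior["fair"] > 0)    # -> True: the fair coin is never ruled out
```

This mirrors the text's point: the evidence makes "two-headed" virtually certain, yet the posterior for "fair" remains strictly positive, so induction delivers probability rather than certainty.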
https://en.wikipedia.org/wiki/Inductive_reasoning
Inductive reasoning aptitude (also called differentiation or inductive learning ability ) measures how well a person can identify a pattern within a large amount of data. It involves applying the rules of logic when inferring general principles from a constellation of particulars. Measurement is generally done in a timed test by showing four pictures or words and asking the test taker to identify which of the pictures or words does not belong in the set. The test taker is shown a large number of such sets at various degrees of difficulty, and the measurement is made by timing how many of them the person can properly identify in a set period of time. The test resembles the game "Which of These Is Not Like the Others". Inductive reasoning is very useful for scientists, auto mechanics, system integrators , lawyers, network engineers, medical doctors, system administrators, and members of all fields where substantial diagnostic or data interpretation work is needed. Inductive reasoning aptitude is also useful for learning a graphical user interface quickly, because highly inductive people are very good at seeing others' categorization schemes. Here is an example question: find the set of letters that does not belong with the other sets. The correct answer is "A", as it is the only set with four letters in sequential order, although set "D" arguably differs by both lacking a vowel and being separated from the previous set by more than one intervening letter.
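The rule used to justify the answer above (four letters in sequential alphabetical order) can be captured in a small check. The candidate sets below are invented stand-ins, since the original question's letter sets are not reproduced here:

```python
# Odd-one-out by the 'sequential letters' rule cited in the answer above.
# The candidate sets are hypothetical examples, not the article's question.

def is_sequential(letters):
    """True if the letters run in consecutive alphabetical order."""
    codes = [ord(c) for c in letters]
    return all(b - a == 1 for a, b in zip(codes, codes[1:]))

candidates = {"A": "BCDE", "B": "KPT", "C": "FHM", "D": "QSV"}
odd_ones = [label for label, s in candidates.items() if is_sequential(s)]
print(odd_ones)  # -> ['A']: the only set whose letters are in sequence
```

As the article notes, such questions can admit competing categorization schemes; the code fixes one rule, whereas a test taker must induce which rule the test designer intended.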
https://en.wikipedia.org/wiki/Inductive_reasoning_aptitude
The finest locally convex topological vector space (TVS) topology on X ⊗ Y, the tensor product of two locally convex TVSs, making the canonical map ⋅ ⊗ ⋅ : X × Y → X ⊗ Y (defined by sending (x, y) ∈ X × Y to x ⊗ y) separately continuous is called the inductive topology or the ι-topology. When X ⊗ Y is endowed with this topology, it is denoted by X ⊗ι Y and called the inductive tensor product of X and Y. [ 1 ] Throughout, let X, Y, and Z be locally convex topological vector spaces and L : X → Y be a linear map. Suppose that Z is a locally convex space and that I is the canonical map from the space of all bilinear mappings of the form X × Y → Z into the space of all linear mappings X ⊗ Y → Z. [ 1 ] When the domain of I is restricted to B(X, Y; Z) (the space of separately continuous bilinear maps), the range of this restriction is the space L(X ⊗ι Y; Z) of continuous linear operators X ⊗ι Y → Z. In particular, the continuous dual space of X ⊗ι Y is canonically isomorphic to the space B(X, Y) of separately continuous bilinear forms on X × Y.
If τ is a locally convex TVS topology on X ⊗ Y (X ⊗ Y with this topology will be denoted by X ⊗τ Y), then τ is equal to the inductive tensor product topology if and only if it has the following property: [ 5 ]
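The identifications above can be restated compactly in LaTeX; this is only a summary of the statements already given, not an additional result:

```latex
% Restriction of the canonical map I identifies separately continuous
% bilinear maps on X x Y with continuous linear maps out of the
% inductive tensor product:
\[
  \mathcal{B}(X, Y; Z) \;\cong\; L\!\left(X \otimes_{\iota} Y;\, Z\right),
\]
% and, taking Z to be the scalar field, the continuous dual space:
\[
  \left(X \otimes_{\iota} Y\right)' \;\cong\; \mathcal{B}(X, Y).
\]
```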
https://en.wikipedia.org/wiki/Inductive_tensor_product
An inductively coupled plasma ( ICP ) or transformer coupled plasma ( TCP ) [ 1 ] is a type of plasma source in which the energy is supplied by electric currents which are produced by electromagnetic induction , that is, by time-varying magnetic fields . [ 2 ] There are three types of ICP geometries: planar (Fig. 3 (a)), cylindrical [ 4 ] (Fig. 3 (b)), and half-toroidal (Fig. 3 (c)). [ 5 ] In planar geometry, the electrode is a length of flat metal wound like a spiral (or coil). In cylindrical geometry, it is like a helical spring. In half-toroidal geometry, it is a toroidal solenoid cut along its main diameter into two equal halves. When a time-varying electric current is passed through the coil, it creates a time-varying magnetic field around it, with flux Φ = πr²H = πr²H₀ cos ωt, where r is the distance to the center of the coil (and of the quartz tube). According to Faraday's law of induction , this creates an azimuthal electromotive force in the rarefied gas: U = −dΦ/dt, which corresponds to an electric field strength of E = U/(2πr) = (ωrH₀/2) sin ωt, [ 6 ] leading to the formation of the electron trajectories [ 5 ] that sustain plasma generation. The dependence on r suggests that the gas ion motion is most intense in the outer region of the flame, where the temperature is the greatest. In the real torch, the flame is cooled from the outside by the cooling gas, so the hottest outer part is at thermal equilibrium. The temperature there reaches 5,000–6,000 K. [ 7 ] For a more rigorous description, see the Hamilton–Jacobi equation in electromagnetic fields. The frequency of the alternating current used in the RLC circuit containing the coil is usually 27–41 MHz . To induce plasma, a spark is produced at the electrodes at the gas outlet.
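The induced-field expression above can be evaluated numerically. The coil field amplitude H₀ and the radius used below are illustrative assumptions, not values from the article:

```python
import math

def azimuthal_e_field(r, h0, freq_hz, t):
    """Azimuthal electric field E = (omega * r * H0 / 2) * sin(omega * t),
    following from the flux Phi = pi * r**2 * H0 * cos(omega * t),
    U = -dPhi/dt, and E = U / (2 * pi * r)."""
    omega = 2 * math.pi * freq_hz
    return 0.5 * omega * r * h0 * math.sin(omega * t)

# Illustrative values: r = 1 cm, H0 = 1 A/m, 27 MHz drive.
freq = 27e6
quarter_period = 1 / (4 * freq)          # sin(omega * t) = 1 at this instant
peak = azimuthal_e_field(0.01, 1.0, freq, quarter_period)
print(f"{peak:.3e} V/m")                 # peak field for these assumed values
```

The linear growth of the peak field with r mirrors the article's point that induction heating is strongest in the outer region of the flame.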
Argon is one example of a commonly used rarefied gas. The high temperature of the plasma allows the atomization of molecules and thus the determination of many elements; in addition, for about 60 elements the degree of ionization in the torch exceeds 90%. The ICP torch consumes c. 1250–1550 W of power, depending on the element composition of the sample (due to different ionization energies ). [ 7 ] ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density. Transition from E to H heating mode occurs with external inputs. [ 8 ] Plasma electron temperatures can range between ~6,000 K and ~10,000 K [ citation needed ] and are usually several orders of magnitude greater than the temperature of the neutral species [ citation needed ] . Temperatures of argon ICP plasma discharge are typically ~5,500 to 6,500 K [ citation needed ] and are therefore comparable to those reached at the surface ( photosphere ) of the sun (~4,500 K to ~6,000 K). ICP discharges are of relatively high electron density, on the order of 10¹⁵ cm⁻³ [ citation needed ] . As a result, ICP discharges have wide applications wherever a high-density plasma (HDP) is needed. Another benefit of ICP discharges is that they are relatively free of contamination, because the electrodes are completely outside the reaction chamber. By contrast, in a capacitively coupled plasma (CCP), the electrodes are often placed inside the reactor chamber and are thus exposed to the plasma and to subsequent reactive chemical species .
https://en.wikipedia.org/wiki/Inductively_coupled_plasma
Inductively coupled plasma atomic emission spectroscopy ( ICP-AES ), also referred to as inductively coupled plasma optical emission spectroscopy (ICP-OES), is an analytical technique used for the detection of chemical elements. It is a type of emission spectroscopy that uses the inductively coupled plasma to produce excited atoms and ions that emit electromagnetic radiation at wavelengths characteristic of a particular element . [ 1 ] The plasma is a high-temperature source of ionised gas (often argon). The plasma is sustained and maintained by inductive coupling from electrical coils at megahertz frequencies. The source temperature is in the range of 6,000 to 10,000 K. The intensity of the emission at each wavelength is proportional to the concentration of the corresponding element within the sample. The ICP-AES instrument is composed of two parts: the ICP and the optical spectrometer . The ICP torch consists of 3 concentric quartz glass tubes. [ 2 ] The output or "work" coil of the radio frequency (RF) generator surrounds part of this quartz torch. Argon gas is typically used to create the plasma . ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density; the E to H heating mode transition occurs with external inputs. [ 3 ] The torch is operated in the H mode. When the torch is turned on, an intense electromagnetic field is created within the coil by the high-power radio frequency signal flowing in the coil. This RF signal is created by the RF generator, which is, effectively, a high-power radio transmitter driving the "work coil" the same way a typical radio transmitter drives a transmitting antenna. Typical instruments run at either 27 or 40 MHz. [ 4 ] The argon gas flowing through the torch is ignited with a Tesla unit that creates a brief discharge arc through the argon flow to initiate the ionization process. Once the plasma is "ignited", the Tesla unit is turned off.
The argon gas is ionized in the intense electromagnetic field and flows in a particular rotationally symmetrical pattern towards the magnetic field of the RF coil. A stable, high-temperature plasma of about 7000 K is then generated as the result of the inelastic collisions between the neutral argon atoms and the charged particles. [ 5 ] A peristaltic pump delivers an aqueous or organic sample into an analytical nebulizer, where it is changed into mist and introduced directly inside the plasma flame. The sample immediately collides with the electrons and charged ions in the plasma and is itself broken down into charged ions . The various molecules break up into their respective atoms, which then lose electrons and recombine repeatedly in the plasma, giving off radiation at the characteristic wavelengths of the elements involved. In some designs, a shear gas, typically nitrogen or dry compressed air, is used to 'cut' the plasma at a specific spot. One or two transfer lenses are then used to focus the emitted light on a diffraction grating, where it is separated into its component wavelengths in the optical spectrometer. In other designs, the plasma impinges directly upon an optical interface which consists of an orifice from which a constant flow of argon emerges, deflecting the plasma and providing cooling while allowing the emitted light from the plasma to enter the optical chamber. Still other designs use optical fibers to convey some of the light to separate optical chambers. Within the optical chamber(s), after the light is separated into its different wavelengths (colours), the light intensity is measured with a photomultiplier tube or tubes physically positioned to "view" the specific wavelength(s) for each element line involved, or, in more modern units, the separated colours fall upon an array of semiconductor photodetectors such as charge-coupled devices (CCDs).
In units using these detector arrays, the intensities of all wavelengths (within the system's range) can be measured simultaneously, allowing the instrument to analyze for every element to which the unit is sensitive all at once. Thus, samples can be analyzed very quickly. The intensity of each line is then compared to previously measured intensities of known concentrations of the elements, and their concentrations are then computed by interpolation along the calibration lines (use of a calibration curve ). In addition, special software generally corrects for interferences caused by the presence of different elements within a given sample matrix. The first published attempt to use plasma emissions as a source for spectroscopic analysis was in 1956 by Eugen Bădărău . [ 6 ] In 1964, Stanley Greenfield, working at Albright & Wilson, was the first to use ICP for non-experimental analysis. [ 6 ] The first commercial machine was produced by KONTRON in 1975. [ 6 ] Examples of the application of ICP-AES include the determination of metals in wine, [ 7 ] arsenic in food, [ 8 ] and trace elements bound to proteins. [ 9 ] ICP-AES methods are used to test for metals contamination in drinking water and wastewater. [ 10 ] ICP-AES is widely used in minerals processing to provide data on the grades of various streams, for the construction of mass balances. In 2008, the technique was used at Liverpool University to demonstrate that a Chi Rho amulet found in Shepton Mallet , previously believed to be among the earliest evidence of Christianity in England , [ 11 ] dated only to the nineteenth century. [ 12 ] [ 13 ] [ 14 ] ICP-AES is often used for analysis of trace elements in soil, and for that reason it is often used in forensics to ascertain the origin of soil samples found at crime scenes or on victims, etc.
Taking one sample from a control and determining its metal composition, and then determining the metal composition of the sample obtained from evidence, allows a comparison to be made. While soil evidence may not stand alone in court, it certainly strengthens other evidence. It is also fast becoming the analytical method of choice for the determination of nutrient levels in agricultural soils. This information is then used to calculate the amount of fertiliser required to maximise crop yield and quality. ICP-AES is used for motor oil analysis. Analyzing used motor oil reveals a great deal about how the engine is operating. Parts that wear in the engine deposit traces in the oil, which can be detected with ICP-AES. ICP-AES analysis can help to determine whether parts are failing. In addition, ICP-AES can determine what amount of certain oil additives remain and therefore indicate how much service life the oil has remaining. Oil analysis is often used by fleet managers or automotive enthusiasts who have an interest in finding out as much about their engine's operation as possible. ICP-AES is also used during the production of motor oils (and other lubricating oils) for quality control and compliance with production and industry specifications.
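The calibration-curve interpolation described above can be sketched as follows: fit emission intensity against known standard concentrations, then invert the fitted line to estimate an unknown concentration. The standard concentrations and intensities below are illustrative, not real instrument data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Calibration standards: concentration (mg/L) vs. measured line intensity (counts).
conc = [0.0, 1.0, 2.0, 5.0, 10.0]
intensity = [12.0, 1010.0, 2008.0, 5015.0, 10020.0]
slope, intercept = fit_line(conc, intensity)

def concentration_of(measured_intensity):
    """Interpolate along the calibration line to recover concentration."""
    return (measured_intensity - intercept) / slope

print(round(concentration_of(3010.0), 2))  # about 3.0 mg/L for these data
```

A real instrument would add matrix-interference corrections on top of this, as the passage notes, but the inversion of a per-element calibration line is the core of the quantification step.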
https://en.wikipedia.org/wiki/Inductively_coupled_plasma_atomic_emission_spectroscopy