Columns: id (string, 1–4 chars) · question (string, 6–1.87k chars) · context (list of 5 strings) · choices (list of 2–18 strings) · answer (string, 1–840 chars)
5
Which of the following scheduler policies are preemptive?
[ "Fixed-priority preemptive scheduling is a scheduling system commonly used in real-time systems. With fixed-priority preemptive scheduling, the scheduler ensures that at any given time, the processor executes the highest-priority task of all those tasks that are currently ready to execute. The preemptive scheduler has a clock interrupt task that can provide the scheduler with options to switch after the task has had a given period to execute—the time slice. This scheduling system has the advantage of making sure no task hogs the processor for any time longer than the time slice. However, this scheduling scheme is vulnerable to process or thread lockout: since priority is given to higher-priority tasks, the lower-priority tasks could wait an indefinite amount of time. One common method of arbitrating this situation is aging, which gradually increments the priority of waiting processes and threads, ensuring that they will all eventually execute. Most real-time operating systems (RTOSs) have preemptive schedulers. Turning off time slicing effectively gives a non-preemptive RTOS. Preemptive scheduling is often differentiated from cooperative scheduling, in which a task can run continuously from start to end without being preempted by other tasks. To have a task switch, the task must explicitly call the scheduler. Cooperative scheduling is used in a few RTOSs such as Salvo or TinyOS.", "Run-to-completion scheduling or nonpreemptive scheduling is a scheduling model in which each task runs until it either finishes or explicitly yields control back to the scheduler. Run-to-completion systems typically have an event queue which is serviced either in strict order of admission by an event loop, or by an admission scheduler which is capable of scheduling events out of order, based on other constraints such as deadlines. 
Some preemptive multitasking scheduling systems behave as run-to-completion schedulers in regard to scheduling tasks at one particular process priority level, at the same time as those processes still preempt other lower priority tasks and are themselves preempted by higher priority tasks. See also Preemptive multitasking Cooperative multitasking", "jobs to be interrupted (paused and resumed later) 39–40 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current job is finished (run-to-completion) OR • Other way: non-preemptive schedulers only switch to another process if the current process gives up the CPU voluntarily • Preemptive schedulers can take control of the CPU at any time, switching to another process according to the scheduling policy • The OS relies on timer interrupts and context switches to preempt processes/jobs 41 Shortest time to completion first (STCF) • STCF extends SJF by adding preemption • Any time a new job enters the system: a. the STCF scheduler determines which of the remaining jobs (including the new job) has the least time left b. 
STCF then schedules the shortest job first 42 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C) = 10 • Tturnaround(A) = 120 • Tturnaround(B) = (20 - 10) = 10 • Tturnaround(C) = (30 - 10) = 20 Average turnaround time = (120 + 10 + 20) / 3 = 50 [timeline figure: A runs 0–10, B and C arrive at 10 and run to 30, A resumes 30–120]", "ed to Q2 (R 5) • The same procedure happens until 100 ms • Processes B and C also join Q2 • A is scheduled for 10 ms • B is scheduled, then C; both issue IO requests as well MLFQ does not starve long-running jobs and gives equal time to all jobs [figure: A, B, C with periodic priority boosts] Putting it together: the “uptime” utility - combines CPU and IO contention 70 71 Summary • Context switching and preemption are fundamental mechanisms that allow the OS to remain in control and to implement higher-level scheduling policies • Schedulers need to optimize for different metrics: utilization, turnaround time, response time, fairness • FIFO: simple, non-preemptive scheduler • SJF: non-preemptive, prevents process jams • STCF: preemptive, prevents jamming of late processes • RR: preemptive, great response time, bad turnaround time • MLFQ: preemptive, most realistic Insight: past behavior is a good predictor of future behavior", "the implementation of the higher-level scheduler. A compromise has to be made involving the following variables: Response time: A process should not be swapped out for too long. Then some other process (or the user) will have to wait needlessly long. If this variable is not considered, resource starvation may occur and a process may not complete at all. 
Size of the process: Larger processes must be subject to fewer swaps than smaller ones because they take a longer time to swap. Because they are larger, fewer other processes can share the memory with the process. Priority: The higher the priority of the process, the longer it should stay in memory so that it completes faster. References Andrew S. Tanenbaum, Albert Woodhull, Operating Systems: Design and Implementation, p. 92" ]
[ "FIFO (First In, First Out)", "SJF (Shortest Job First)", "STCF (Shortest Time to Completion First)", "RR (Round Robin)" ]
['STCF (Shortest Time to Completion First)', 'RR (Round Robin)']
6
Which of the following are correct implementations of the acquire function? Assume 0 means UNLOCKED and 1 means LOCKED. Initially l->locked = 0.
[ "is in the lexical scope of the anonymous function. Higher-order functions • Functions that operate on other functions, either by taking them as arguments or by returning them, are called higher-order functions. function myFunc() { const anotherFunc = function() { console.log(\"inner\"); } return anotherFunc; } const innerFunc = myFunc(); innerFunc(); // \"inner\" myFunc()(); // \"inner\" function forEach(array, callback) {... } Closures • A closure is the combination of a function and the lexical environment within which that function was declared. • The function defined in the closure ‘remembers’ the environment in which it was created. function greaterThan(n) { return function(m) { return m > n; }; } const greaterThan10 = greaterThan(10); greaterThan10(11); // true let counter = (function() { let privateCounter = 0; function changeBy(val) { privateCounter += val; } return { increment: function() { changeBy(1); }, decrement: function() { changeBy(-1); }, value: function() { return privateCounter; } }; })(); console.log(counter.value()); // logs 0 counter.increment(); counter.increment(); console.log(counter.value()); // logs 2 counter.decrement(); console.log(counter.value()); // logs 1 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures [ViralPatel.net] Arrow functions • An arrow function expression has a shorter syntax than a function expression let counter = (function() { let privateCounter = 0; changeBy = (val) => { return privateCounter += val; } return { increment: () => changeBy(1), // one liner can remove return and {} decrement: () => changeBy(-1), value: () => privateCounter, reset: (val=0) => { privateCounter = val; }, }", "1[A, B]: def apply(x: A): B So functions are objects with apply methods. There are also traits Function2, Function3,... for functions which take more parameters. 
Expansion of Function Values An anonymous function such as (x: Int) => x * x is expanded to: new Function1[Int, Int]: def apply(x: Int) = x * x This anonymous class can itself be thought of as a block that defines and instantiates a local class: { class $anonfun() extends Function1[Int, Int]: def apply(x: Int) = x * x $anonfun() } Expansion of Function Calls A function call, such as f(a, b), where f is a value of some class type, is expanded to f.apply(a, b) So the OO-translation of val f = (x: Int) => x * x f(7) would be val f = new Function1[Int, Int]: def apply(x: Int) = x * x f.apply(7) Functions and Methods Note that a method such as def f(x: Int): Boolean =. is not itself a function value. But if f is used in a place where a Function type is expected, it is converted automatically to the function value (x: Int) => f(x) or, expanded: new Function1[Int, Boolean]: def apply(x: Int) = f(x) Exercise In package week3, define an object IntSet:. with 3 functions in it so that users can create IntSets of lengths 0-2 using syntax IntSet() // the empty set IntSet(1) // the set with single", "q ~f(L)\\cap f(R)~} is strict: ∅ = f(∅) = f(L ∩ R) ≠ f(L) ∩ f(R) = {y} ∩ {y} = {y} In words: functions might not distribute over set intersection ∩ (which can be defined as the set subtraction of two sets: L ∩ R = L ∖ (L ∖ R)). 
What the set operations in these four examples have in common is that they either are set subtraction ∖ (examples (1) and (2)) or else they can naturally be defined as the set subtraction of two sets (examples (3) and (4)). Mnemonic: In fact, for each of the above four set formulas for which equality is not guaranteed, the direction of the containment (that is, whether to use ⊆ or ⊇) can always be deduced by imagining the function f as being constant and the two sets (L and R) as being non-empty disjoint subsets of its domain. This is because every equality fails for such a function and sets: one side will always be ∅ and the other non-empty; from this fact, the correct choice of ⊆ or ⊇ can be deduced by answering: \"which side is empty?\" For example, to decide if the", "(LL) = l.l rx, mem[addr] o Store-conditional (SC) = s.c rx, mem[addr] u Interacts with cache-coherence protocol to guarantee no intervening writes to [addr] u Used in MIPS, DEC Alpha, and all ARM cores Alternative: Load-Locked & Store Conditional CS 307 – Fall 2018 Lec.07 - Slide 58 u Recall the incorrect first attempt: o Two cores could both see the lock as free, and enter the critical section u How does LL/SC solve the problem? How is LL/SC Atomic? 
Lock: Unlock: ld r1, mem[addr] // load word into r1 cmp r1, #0 // if 0, store 1 bnz Lock // else, try again st mem[addr], #1 st mem[addr], #0 // store 0 to address CS 307 – Fall 2018 Lec.07 - Slide 59 u LL puts the address and flag into a link register Remember and Validate the Address P0 Cache ll X BusRd X Link Register CS 307 – Fall 2018 Lec.07 - Slide 60 u LL puts the address and flag into a link register o Invalidations or evictions for that address clear the flag, and the SC will then fail o Signals that another core modified the address Remember and Validate the Address P0 Cache BusInv 0 Link Register CS 307 – Fall 2018 Lec.07 - Slide 61 u Consider the following case: o Processors 0 and 1 both execute the following code, with cache block X beginning in Shared o Both ll [X] read 0 o Both begin to issue sc [X] o Will we break mutual exclusion? Simultaneous SCs? Lock: ll r2, [X] cmp r2, #0 bnz Lock // if 1, spin addi r2, #1 sc [X], r2 CS 307 – Fall 2018 Lec.07 - Slide 62 u Will we break mutual exclusion? o Answer: No! Why? § Remember, cache coherence ensures the propagation of values to a single address. o So, when both processors try to BusInv, one of them will “win”, and clear the other’s link register flag § e.g., Say P1 wins", "t) of the signal acquisition system (filters), generally leading to convolution noise. •= Limited frequency bandwidth (for example in the case of telephone lines, where the transmitted frequencies are naturally limited to between about 350Hz and 3200Hz). •= Unusual or altered elocution, including among other things: the Lombard effect (which refers to all the modifications, often inaudible, of the acoustic signal when speaking in a noisy environment), physical or emotional stress, an unusual speaking rate, as well as lip or breathing noises. 
Some systems can be more robust than others to one or another of these disturbances, but as a general rule, current speech recognizers remain too sensitive to these parameters. 4.3 General principles The problem of automatic speech recognition consists of extracting the information contained in a speech signal (the electrical signal obtained at the output of a microphone, typically sampled at 8kHz in the case of telephone lines or between 10 and 16kHz in the case of capture by microphone). Although this also raises the problem of speech understanding, we will limit ourselves here to discussing the problem of recognizing the words contained in a sentence. 4.3.1 Recognition by comparison with examples The first successes in speech recognition were obtained in the 1970s using a paradigm of word recognition “by example”. The idea, very simple in principle, consists of having the speaker pronounce one or more examples of each of the words to be recognized, and recording them as acoustic vectors (typically: a vector of LPC or similar coefficients every 10 ms). Since this sequence of acoustic vectors completely characterizes the evolution of the spectral envelope of the recorded signal, one can say that it corresponds to a recording of a spectrogram. The recognition step itself then consists of analyzing the unknown signal under the
[ "c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n if(cas(&l->locked, 0, 1) == 0)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n for(;;)\n if(cas(&l->locked, 1, 0) == 1)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n if(l->locked == 0) \n return;\n }" ]
['c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }']
11
In which of the following cases does JOS acquire the big kernel lock?
[ "@1ntext Persa Now, suppose Persa is a packet switch that forwards Alice’s traffic to Bob. Suppose Persa *modifies* the {plaintext, ciphertext} pair sent by Alice. But because Persa does not know *Alice’s private key*, she cannot produce a “consistent” pair: when Bob decrypts [click] Persa’s ciphertext he will not get Alice’s plaintext, and he will not get Persa’s plaintext; he will get something that doesn’t make any sense. So, Bob will know that it is not Alice who sent this {plaintext, ciphertext} pair. As before: There is something silly about this approach: Alice sends twice the amount of data to Bob (relative to what she wants to actually say to him). Because the ciphertext is as large as the plaintext. → Is there a better way to achieve authenticity and data integrity? That does not require to send that much extra data to Bob? Bob 75 hash function Alice-key- Alice-key+ Alice plaintext hash ciphertext plaintext plaintext encryption algorithm ciphertext decryption algorithm hash hash hash function hash plaintext Alice and Bob can combine encryption/decryption with a cryptographic hash function*: - Alice provides her plaintext as input [click] to a hash function and obtains a hash [click]. - She provides the hash as input to an encryption algorithm (together with her private key) and obtains a ciphertext [click]. - She sends to Bob both the plaintext and the ciphertext [click]. - Bob provides the ciphertext as input to a decryption algorithm (together with *Alice’s public key*), and obtains the hash [click]. - Then Bob provides the plaintext that Alice sent as input [click] to his hash function, and obtains the same hash [click]. Bob knows that it is Alice who sent the {plaintext, ciphertext} pair, because only someone who knows Alice’s private key can produce a pair where the plaintext and the ciphertext yield the same hash. We call this ciphertext a... 
Bob 58 hash function Alice-key- Alice-key+ Alice", "that it is not Alice who sent this {plaintext, ciphertext} pair. However: There is something silly about this approach: Alice sends twice the amount of data to Bob (relative to what she wants to actually say to him). Because the ciphertext is as large as the plaintext. → Is there a better way to achieve authenticity and data integrity? That does not require to send that much extra data to Bob? Bob 68 hash function hash function key key Alice plaintext hash << ciphertext hash hash plaintext plaintext Instead of encryption/decryption algorithms, Alice and Bob can use a *cryptographic hash function*. - Alice provides her plaintext as input [click] to her hash function (together with the shared secret key), and she obtains a *hash* [click] of her plaintext. By definition, a hash is smaller (typically significantly smaller) than the input to the hash function. So, Alice obtains a hash that is (typically significantly) smaller than her plaintext. - Alice sends *both* the plaintext and the hash to Bob [click]. - Bob provides the plaintext as input to his hash function (together with the shared secret key), and obtains a hash. Bob knows that it is Alice who sent the {plaintext, MAC} pair, because only someone who knows the shared secret key can produce a pair where the plaintext yields the MAC if hashed with this particular key. We call this hash... Bob 53 hash function hash function key key Alice plaintext MAC MAC plaintext MAC plaintext Message Authentication Code or MAC. Bob 54 hash function hash function key key Alice plaintext |Ç#@ M@C Persa plaintext pla1nt3xt MAC Now, suppose Persa is a packet switch that forwards Alice’s traffic to Bob. Suppose Persa *modifies* the {plaintext, MAC} pair sent by Alice. 
But because Persa does not know the shared secret key, she cannot produce a “consistent” pair: when Bob hashes [click] Persa’s plaintext he will not get Alice’s MAC, and he will not get Persa’s MAC; he will get something that doesn’t make any sense. So, Bob will know that it", "the film faced delays due to rewrites and the COVID-19 pandemic. Spielberg was initially set to direct but stepped down in 2020, with Mangold taking over. Filming began in June 2021 in various locations including the United Kingdom, Italy, and Morocco, wrapping in February 2022. Franchise composer John Williams returned to score the film, earning nominations for Best Original Score at the 96th Academy Awards and Best Score Soundtrack for Visual Media at the 66th Annual Grammy Awards. Williams won the Grammy Award for Best Instrumental Composition for \"Helena's Theme\". Indiana Jones and the Dial of Destiny premiered out of competition at the 76th Cannes Film Festival on May 18, 2023, and was theatrically released in the United States on June 30, by Walt Disney Studios Motion Pictures. The film received mixed reviews and grossed $384 million worldwide, becoming a box-office bomb due to a lack of wide audience appeal and being one of the most expensive films ever made, with an estimated loss of $143 million for Disney. Plot Toward the end of World War II, Nazis capture Indiana Jones and Oxford archaeologist Basil Shaw as they attempt to retrieve the Lance of Longinus from a castle in the French Alps. Astrophysicist Jürgen Voller informs his superiors the Lance is fake, but he has found half of Archimedes' Dial, an Antikythera mechanism built by the ancient Syracusan mathematician Archimedes which reveals time fissures, thereby allowing for possible time travel. Jones escapes onto a Berlin-bound train filled with looted antiquities and frees Basil. He obtains the Dial piece, and the two escape just before Allied forces derail the train. 
In 1969, Jones, who is retiring from Hunter College in New York City, has been separated from his wife Marion Ravenwood since their son Mutt's death in the Vietnam War. Jones' goddaughter, archaeologist Helena Shaw, unexpectedly visits and wants to research the Dial. Jones warns that her late father, Basil, became obsessed with studying the Dial before relinquishing it to Jones to destroy, which he never did. As Jones and Helena retrieve the Dial half from the college archives, Voller's accomplices attack them", "\\mathcal {C}}_{YY}^{\\pi }=\\mathbb {E} [\\varphi (Y)\\otimes \\varphi (Y)].} In practical implementations, the kernel chain rule takes the following form C ^ X Y π = C ^ X ∣ Y C ^ Y Y π = Υ ( G + λ I ) − 1 G ~ diag ⁡ ( α ) Φ ~ T {\\displaystyle {\\widehat {\\mathcal {C}}}_{XY}^{\\pi }={\\widehat {\\mathcal {C}}}_{X\\mid Y}{\\widehat {\\mathcal {C}}}_{YY}^{\\pi }={\\boldsymbol {\\Upsilon }}(\\mathbf {G} +\\lambda \\mathbf {I} )^{-1}{\\widetilde {\\mathbf {G} }}\\operatorname {diag} ({\\boldsymbol {\\alpha }}){\\boldsymbol {\\widetilde {\\Phi }}}^{T}} Kernel Bayes' rule In probability theory, a posterior distribution can be expressed in terms of a prior distribution and a likelihood function as Q ( Y ∣ x ) = P ( x ∣ Y ) π ( Y ) Q ( x ) {\\displaystyle Q(Y\\mid x)={\\frac {P(x\\mid Y)\\pi (Y)}{Q(x)}}} where Q ( x ) = ∫ Ω P ( x ∣ y ) d π ( y ) {\\displaystyle Q(x)=\\int _{\\Omega }P(x\\mid y)\\,\\mathrm {d} \\pi (y)} The analog of this rule in the kernel embedding framework expresses the kernel embedding of the conditional distribution in terms of conditional embedding operators which are modified by the prior distribution μ Y ∣ x π = C Y ∣ X π φ ( x ) = C Y X π ( C X X π ) − 1 φ ( x ) {\\displaystyle \\mu _{Y\\mid x}^{\\pi }={\\mathcal {C}}_{Y\\mid X", "the boot process. The secure boot process begins with secure flash, which ensures that unauthorized changes cannot be made to the firmware. 
Authorized releases of Junos OS carry a digital signature produced by either Juniper Networks directly or one of its authorized partners." ]
[ "Processor traps in user mode", "Processor traps in kernel mode", "Switching from kernel mode to user mode", "Initialization of application processor" ]
['Processor traps in user mode', 'Initialization of application processor']
15
In an x86 multiprocessor running JOS, how many Bootstrap Processors (BSPs) is it possible to have at most? And how many Application Processors (APs) at most?
use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. In modern systems, the term has been extended to refer to packages that only deal with one processor, not the whole motherboard. Windows CE and Android also use a BSP. Example The Wind River Systems board support package for the ARM Integrator 920T single-board computer contains, among other things, these elements: A config.h file, which defines constants such as ROM_SIZE and RAM_HIGH_ADRS. A Makefile, which defines binary versions of VxWorks ROM images for programming into flash memory. A boot ROM file, which defines the boot line parameters for the board. A target.ref file, which describes board-specific information such as switch and jumper settings, interrupt levels, and offset bias. A VxWorks image. Various C files, including: flashMem.c—the device driver for the board's flash memory pciIomapShow.c—mapping file for the PCI bus primeCellSio.c—TTY driver sysLib.c—system-dependent routines specific to this board romInit.s—ROM initialization module for the board; contains entry code for images that start running from ROM Additionally, the BSP is supposed to perform the following operations: Initialize the processor Initialize the board Initialize the RAM Configure the segments Load and run OS from flash See also BIOS UEFI", "In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific boot loaders, device drivers, and sometimes operating system kernels, and other routines that allow a given embedded operating system, for example a real-time operating system (RTOS), to function in a given hardware environment (a motherboard), integrated with the embedded operating system. 
The board support package is usually provided by the SoC manufacturer (such as Qualcomm), and it can be modified by the OEM. Software Third-party hardware developers who wish to support a given embedded operating system must create a BSP that allows that embedded operating system to run on their platform. In most cases, the embedded operating system image and software license, the BSP containing it, and the hardware are bundled together by the hardware vendor. BSPs are typically customizable, allowing the user to specify which drivers and routines should be included in the build based on their selection of hardware and software options. For instance, a particular single-board computer might be paired with several peripheral chips; in that case the BSP might include drivers for peripheral chips supported; when building the BSP image the user would specify which peripheral drivers to include based on their choice of hardware. Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and utilities to configure the device (while running) along with the BSP. Many embedded operating system providers provide template BSP's, developer assistance, and test suites to aid BSP developers to set up an embedded operating system on a new hardware platform. History The term BSP has been in use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. In modern systems, the term has been extended to refer to packages that only deal with one processor, not the whole motherboard. Windows CE and Android also use a BSP. 
Example The Wind River Systems board support package for the ARM", "processors that have limited high-level language options such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures. Code that must interact directly with the hardware, for example in device drivers and interrupt handlers. In an embedded processor or DSP, high-repetition interrupts require the shortest number of cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second. Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition. Stand-alone executables that are required to execute without recourse to the run-time components or libraries associated with a high-level language, such as the firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, and security systems. Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264). Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor. Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. 
Such systems must eliminate sources of unpredictable delays, which may be created by interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details. Cryptographic algorithms that must", "even included more than one processor core to work in parallel. Other DSPs from 1995 are the TI TMS320C541 or the TMS 320C80. The fourth generation is best characterized by the changes in the instruction set and the instruction encoding/decoding. SIMD extensions were added, and VLIW and the superscalar architecture appeared. As always, the clock-speeds have increased; a 3 ns MAC now became possible. Modern DSPs Modern signal processors yield greater performance; this is due in part to both technological and architectural advancements like lower design rules, fast-access two-level cache, (E)DMA circuitry, and a wider bus system. Not all DSPs provide the same speed and many kinds of signal processors exist, each one of them being better suited for a specific task, ranging in price from about US$1.50 to US$300. Texas Instruments produces the C6000 series DSPs, which have clock speeds of 1.2 GHz and implement separate instruction and data caches. They also have an 8 MiB 2nd level cache and 64 EDMA channels. The top models are capable of as many as 8000 MIPS (millions of instructions per second), use VLIW (very long instruction word), perform eight operations per clock-cycle and are compatible with a broad range of external peripherals and various buses (PCI/serial/etc). TMS320C6474 chips each have three such DSPs, and the newest generation C6000 chips support floating point as well as fixed point processing. Freescale produces a multi-core DSP family, the MSC81xx. The MSC81xx is based on StarCore Architecture processors and the latest MSC8144 DSP combines four programmable SC3400 StarCore DSP cores. 
Each SC3400 StarCore DSP core has a clock speed of 1 GHz. XMOS produces a multi-core multi-threaded line of processors well suited to DSP operations. They come in various speeds ranging from 400 to 1600 MIPS. The processors have a multi-threaded architecture that allows up to 8 real-time threads per core, meaning that a 4-core device would support up to 32
\"Programming Languages: History and Future\". Communications of the ACM. 15 (7): 601–610. doi:10.1145/361454.361485. S2CID 2003242. Richard L. Wexelblat (ed.): History of Programming Languages, Academic Press 1981. Thomas" ]
[ "BSP: 0, AP: 4", "BSP: 1, AP: 4", "BSP: 2, AP: 4", "BSP: 0, AP: infinite", "BSP: 1, AP: infinite", "BSP: 2, AP: infinite" ]
['BSP: 1, AP: infinite']
20
Assume a user program executes the following tasks. Select all options that will use a system call.
[ "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: 
restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", ", functions (routines, subroutines, procedures, methods, etc.) are used to encapsulate code and make it reusable. Calling a function involves these steps: 1. Place arguments where the called function can access them. 2. Jump to the function. 3. Acquire storage resources the function needs. 4. Perform the desired task of the function. 5. Communicate the result value back to the calling program. 6. Release any local storage resources. 7. Return control to the calling program. 2.3.1 Jump to the Function/Return control to the calling program The too-simple, not-working approach A simple (not working) approach for creating functions would be to do this: 19 CHAPTER 2. PART I(B) - ISA, FUNCTIONS, AND STACK - W 1.2 With this approach the function doesn’t know where to return to after being called (back2 or back) For the next part, remember, the Program Counter is distinct from general-purpose registers. It is dedicated to managing the flow of instruction execution, while general registers are used for data manipulation. The Good Approach The right approach involves using the Jump and Link instruction jal, here loading PC + 4 (remember: 4 bytes per instruction) into x1 as a way to come back from the function. main: ... jal x1, sqrt ... jal x1, sqrt sqrt: ... jr x1 Both times x1 was used to store the return address, and there is a reason for that (Register Conventions section). 2.3.2 Jump Instructions There are only two core real jump instructions in RISC-V, jal (jump and link) and jalr (jump and link register); the rest are pseudo-instructions using them. 
20 Notes by Ali EL AZDI 2.3.3 Register Conventions Register conventions are rules that dictate how registers are used in a program, here are the ones we’ve seen for now 2.3.4 Back to the good (not so good) approach There’s still a problem with the previous approach, say for example you want to call a function from another function. Here the allocated space for the return address is overwritten by the second function call, and", "program. For that, the OS needs to create a new process and create a new address space to load the program Let’s divide and conquer: • fork() creates a new process (replica) with a copy of its own address space • exec() replaces the old program image with a new program image fork() exec() exit() wait() Why do we need fork() and exec()? 38 Multiple programs can run simultaneously Better utilization of hardware resources Users can perform various operations between fork() and exec() calls to enable various use cases: • To redirect standard input/output: • fork, close/open file descriptors, exec • To switch users: • fork, setuid, exec • To start a process with a different current directory: • fork, chdir, exec fork() exec() exit() wait() Why do we need fork() and exec()? open/close are special file-system calls Set user ID (change user who can be the owner of the process) Go to a specified directory 39 wait(): Waiting for a child process • Child processes are tied to their parent • There exists a hierarchy among processes on forking A parent process uses wait() to suspend its execution until one of its children terminates. 
The parent process then gets the exit status of the terminated child pid_t wait (int *status); • If no child is running, then the wait() call has no effect at all • Else, wait() suspends the caller until one of its children terminates • Returns the PID of the terminated child process fork() exec() exit() wait() 40 exit(): Terminating a process When a process terminates, it executes exit(), either directly on its own, or indirectly via library code void exit (int status); • The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 41 • Scenarios under which a process terminates • By calling exit() itself • OS terminat", "• The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 41 • Scenarios under which a process terminates • By calling exit() itself • OS terminates a misbehaving process • Terminated process exists as a zombie • When a parent process calls wait(), the zombie child is cleaned up or “reaped” • If a parent terminates before child, the child becomes an orphan • init (pid: 1) process adopts orphans and reaps them fork() exec() exit() wait() P1 C1 wait() P1 reaps C1 Waiting for children to die... 
42 • Scenarios under which a process terminates • By calling exit() itself • OS terminates a misbehaving process • Terminated process exists as a zombie • When a parent process calls wait(), the zombie child is cleaned up or “reaped” • If a parent terminates before child, the child becomes an orphan • init (pid: 1) process adopts orphans and reaps them fork() exec() exit() wait() P1 init C1 wait() P1 reaps C1 P1 C1 init eventually reaps C1 43 Process state transition (full lifecycle) Running Ready Blocked Descheduled Scheduled I/O done A process can be in one of several states during its life cycle: • Running • Ready • Blocked • Zombie I/O start fork() exit() A tree of processes 44 • Each process has a parent process • init is the first process (pid: 1) without any parent process • A process can have many child processes • Each process again can have child processes L02.3: fork() illustrated 3 examples, step-by-step CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU 46 1 #include <stdio.h> 2 #include <stdlib.h> 3 #include <unistd.h> 4 5 int main(int argc, char *arg", "6 Task switching mechanism: context switch • The OS can be in the kernel mode, it cannot return back to the same process • Process is finished or must be terminated (e.g., invalid operations) • Process did a system call and it is waiting for it to complete (IO operation) • The OS does not want to run the same process • The process has run for too long • There are other processes present and they should be scheduled The OS performs a context switch to stop running one process and start running another, i.e., switch from one process to another 7 Task switching mechanism: context switch 8 Reminder: Process state transitions Running Ready Blocked Descheduled Scheduled I/O done I/O start 9 Context switch A context switch is a mechanism that allows the OS to store the current process state and switch to some other, previously stored context. 
• The context of the process is represented in the process control block (PCB) • The OS maintains the PCB for each process • The process control block (PCB) includes hardware registers • All registers available to user code (e.g. x86 general registers) • All process-specific registers (e.g. on x86 cr3 -- the base of the page table) • Stored in the PCB when the process is not currently running 10 Context switch procedure The OS does the following operations during the context switch: 1. Saves the running process’ execution state in the PCB 2. Selects the next thread 3. Restores the execution state of the next process 4. Passes the control using return from trap to resume next process Process 0 OS (CPU0) Process 1 Interrupt / system call Save state into PCB0 Reload state from PCB1 Save state into PCB1 Reload state from PCB0 Interrupt / system call executing executing de-sched. executing Context switch Context switch PCB: Process control block Note: the de-scheduled process is in either Ready or Blocked state de-sched. de-sched. 11 Preemption for process scheduling* • A process may never give up control, exits, or performs IO • This leads to the process running forever and the OS cannot gain control • OS sets a timer before scheduling a process • Hardware generates an" ]
[ "Read the user's input \"Hello world\" from the keyboard.", "Write \"Hello world\" to a file.", "Encrypt \"Hello world\" by AES.", "Send \"Hello world\" to another machine via Network Interface Card." ]
['Read the user\'s input "Hello world" from the keyboard.', 'Write "Hello world" to a file.', 'Send "Hello world" to another machine via Network Interface Card.']
22
What is the content of the inode?
[ "assigned to the file by the file system • Note: Inodes are unique for a file system but not globally • Recycled after deletion • An inode contains metadata of a file • Permissions, length, access times • Location of data blocks and indirection blocks • Each file has exactly one associated inode OS view: Inode (persistent ID) • Storage space is split into inode table and data storage • Files are statically allocated • Require inode number to access file content Inode table: Metadata (location, size) per inode; data F1, data F2, data F3 • Idea: Use a dedicated place at the beginning of the storage medium, mostly the initial block 1. Inode and device number (persistent ID) 2. Path (human readable) 3. File descriptor (process view) The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Each file has a human readable format: file name • Humans are better at remembering names than numbers • Files are organized into hierarchies of directories: pathname • Humans like to organize things logically • A filename is unique locally to a directory; a full pathname is globally unique • Modern file systems mostly use untyped files: array of bytes • File is a sequence of bytes • OS/file system neither understands nor cares about contents User view: file name • A special file (directory) stores the mapping between file names and inodes • Extend to hierarchy: Mark if a file maps to a regular file • Access ‘/tmp/test.txt’ in 3 steps: ‘tmp’, ‘test.txt’, contents Path → inode: Metadata (location, size); ‘tmp’, ‘etc’, ‘test.txt’: ‘Hello world’ • Inode does NOT contain the file name • Each directory is a file (stored like regular files) • Flag in the inode separates directories from regular files • Flag restricts API to processes (e.g., cannot write to a directory) • Contains array of {filename, inode} • Multiple file names can map to the same inode • inode has a reference count • Called shortcut in Windows, or hard link in UNIX/Linux Inodes and Directories: A special file that stores the mapping between human-friendly names of", "Path → inode: Metadata (location, size); ‘tmp’, ‘etc’, ‘test.txt’: ‘Hello world’ • Inode does NOT contain the file name • Each directory is a file (stored like regular files) • Flag in the inode separates directories from regular files • Flag restricts API to processes (e.g., cannot write to a directory) • Contains array of {filename, inode} • Multiple file names can map to the same inode • inode has a reference count • Called shortcut in Windows, or hard link in UNIX/Linux Inodes and Directories: A special file that stores the mapping between human-friendly names of files and their inode numbers • Contains subdirectories: list of directories, files • / indicates the root (typically inode: 1) The path abstraction: Directory / bin ls home sanidhya linuxbrew • ‘.’ maps to the current directory • ‘..’ maps to the parent directory More about directories • Nine characters (after ‘d’ or ‘-’) are permission bits • rwx for owner, group, everyone • Owner can read and write; group and others can just read • x set on a file means that the file is executable • x set on a directory: user/group/others are allowed to cd to that directory Permission bits 1. Inode and device number (persistent ID) 2. Path name (human readable) 3. File descriptor (process view) The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • The combination of file name and inode/device IDs is sufficient to implement persistent storage • Drawback: constant lookups from file name to inode/device IDs are costly • Idea: do the expensive tree traversal once, store the final inode/device number in a per-process table • Also keep additional information such as the file offset • Per-process table of open files • Uses linear numbers (fds), reused when freed Process view: file descriptor int fd = open(‘out.txt’, ...); read(fd, buf, ...); Example: Operations on a file fd table: offset, inode X, device Y, location A, size B • Each process has its own fd table • 0, 1, 2 are mapped to STDIN, STDOUT and STDERR • fd is 3 and inode is X • read updates the offset • int fd = open(‘mydir/out.txt’, ...); int fd = open(‘out.txt’, ...); Example: Operations on a file fd table: offset, inode X, device Y, location A, size B • Each process has its own fd table • 0, 1, 2 are mapped to STDIN, STDOUT and", "into hierarchies of directories: pathname • Humans like to organize things logically • A filename is unique locally to a directory; a full pathname is globally unique • Modern file systems mostly use untyped files: array of bytes • File is a sequence of bytes • OS/file system neither understands nor cares about contents 23 User view: file name • A special file (directory) stores mapping between file names and inodes • Extend to hierarchy: Mark if a file maps to a regular file • Access ‘/tmp/test.txt’ in 3 steps: ‘tmp’, ‘test.txt’, contents 24 Path → inode Metadata location size=18 location size location size=12 location size=12 location size 0 1 2 3 4 ‘tmp’: 2, ‘etc’: 15,...
‘test.txt’: 3 ‘Hello world!’ • Inode does NOT contain the file name • Each directory is a file (stored like regular files) • Flag in the Inode separates directories from regular files • Flag restricts API to processes (e.g., cannot write to a directory) • Contains array of { filename, inode } • Multiple file names can map to the same inode • → inode has a reference count • Called shortcut in Windows, or hard link in UNIX/Linux 25 Inodes and Directories • A special file that stores the mapping between human-friendly names of files and their inode numbers • Contains subdirectories: • List of directories, files • / indicates the root (typically inode: 1) 26 The path abstraction: Directory / bin ls home sanidhya linuxbrew • “.” maps to the current directory • “..” maps to the parent directory 27 More about directories • Nine characters (after ‘d’ or ‘-’) are permission bits • rwx for owner, group, everyone • Owner can read and write; group and others can just read • x set on a file means that the file is executable • x set on a directory: user/group/others are allowed to cd to that directory 28 Permission bits 1. Inode and device number (pers
ext4 on Linux 56 Modern file systems Operating Systems wear multiple hats ●General-purpose abstractions and implementations ●Good performance for a wide range of operations. 57 Alternative view point : bypass the kernel Alternative design ●Expose resource directly to applications ●“Raw IO” -- direct access to the disk ○No file system, no buffer cache, no indirection Approach favored by high-end transactional databases ●Caching, logging, buffering, indexing, etc is all done by the database application, not the operating system 58 Summary • Overlap IO and computation as much as possible! • Use interrupts • Use DMA • Driver classes provide common interface • Storage: read/write/seek of blocks • File system design is informed by IO performance • Eliminate IO, batch IO, delay IO • Carefully schedule IOs on slow devices (minimize seek time on rotating HDD)", "Path (human readable) 3. File descriptor (process view) 18 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Low-level unique ID assigned to the file by the file system • Note: Inodes are unique for a file system but not globally • Recycled after deletion • An inode contains metadata of a file • Permissions, length, access times • Location of data blocks and indirection blocks • Each file has exactly one associated inode 19 OS view: Inode (persistent ID) • Storage space is split into inode table and data storage • Files are statically allocated • Require inode number to access file content 20 Inode table Metadata location size=18 location size location size=12 location size=12 location size 0 1 2 3 4 data F1 data F2 data F3 • Storage space is split into inode table and data storage • Files are statically allocated • Require inode number to access file content Idea: Use a dedicated place at the beginning of the storage media, mostly initial block 21 Inode table Metadata location size=18 location size location size=12 location 
size=12 location size 0 1 2 3 4 data F1 data F2 data F3 1. Inode and device number (persistent ID) 2. Path (human readable) 3. File descriptor (process view) 22 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Each file has a human readable format: file name • Humans are better at remembering names than numbers • Files are organized into hierarchies of directories: pathame • Humans like to organize things logically • A filename is unique locally to a directory; a full pathname is globally unique • Modern file systems mostly use untyped files: array of bytes • File is a sequence of bytes • OS/file system does neither understand nor care about contents 23 User view: file name • A special file (directory) stores mapping between file names and inodes • Extend to hierarchy: Mark if a file maps to a regular file • Access ‘" ]
[ "Filename", "File mode", "Hard links counter", "String with the name of the owner", "File size", "Capacity of the whole file system", "Index structure for data blocks" ]
['File mode', 'Hard links counter', 'File size', 'Index structure for data blocks']
23
In x86, what are the possible ways to transfer arguments when invoking a system call? For example, in the following code, string and len are sys_cputs’s arguments.
[ "a letter through the postal system, you have to follow certain rules. - You need to put your letter in an envelope and write a correct address on a particular part of the envelope. - You need to drop your letter in a mailbox. These rules are the “interface” between you and the postal system -- your only way of using the postal system successfully. Similarly, when a process wants to send a message over the Internet, it has to use certain syscalls in a certain way. So, these syscalls are the “interface” between the process and the Internet, this is why they are called an “Application Programming Interface”. You will learn how to write code that uses this API in the second half of your project with Jean-Cédric. → What happens when a process does a system call? Alice’s computer IO controller DMA controller NIC NIC controller memory data data data CPU user mode kernel mode Process running Makes syscall More network functions OS interacts with NIC Syscall handler runs, calls network functions NIC does its thing Embeds data into physical signal NIC interacts with physical communication medium data Consider a general-purpose computer; its CPU, main memory, and Network Interface Card (NIC). The NIC has a NIC controller (same way a disk has a disk controller). Somewhere in there, there is also the I/O controller (through which the CPU communicates with the peripheral devices), and the DMA controller (which manages data transfer between main memory and peripheral devices). Suppose there is a process running (the CPU is in user mode, executing the instructions of this process). At some point, the process creates some data, and it 48 wants to send it somewhere over the Internet. To do that, it makes a network syscall (the instructions of the process include a trap instruction; when the CPU executes that instruction, it switches to kernel mode). As a result, the syscall handler for the given syscall starts running. 
This invokes network-related functions, which typically add metadata (the yellow and green chunks) to the data created by the process. When the OS finishes preparing the data, it interacts with the NIC and, as a result, the data is copied from main memory into the NIC’", "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 
Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", "strings (no escaping), or to disable or enable variable interpolation, but has other uses, such as distinguishing character sets. Most often this is done by changing the quoting character or adding a prefix or suffix. This is comparable to prefixes and suffixes to integer literals, such as to indicate hexadecimal numbers or long integers. One of the oldest examples is in shell scripts, where single quotes indicate a raw string or \"literal string\", while double quotes have escape sequences and variable interpolation. For example, in Python, raw strings are preceded by an r or R – compare 'C:\\\\Windows' with r'C:\\Windows' (though, a Python raw string cannot end in an odd number of backslashes). Python 2 also distinguishes two types of strings: 8-bit ASCII (\"bytes\") strings (the default), explicitly indicated with a b or B prefix, and Unicode strings, indicated with a u or U prefix. while in Python 3 strings are Unicode by default and bytes are a separate bytes type that when initialized with quotes must be prefixed with a b. C#'s notation for raw strings is called @-quoting. While this disables escaping, it allows double-up quotes, which allow one to represent quotes within the string: C++11 allows raw strings, unicode strings (UTF-8, UTF-16, and UTF-32), and wide character strings, determined by prefixes. It also adds literals for the existing C++ string, which is generally preferred to the existing C-style strings. In Tcl, brace-delimited strings are literal, while quote-delimited strings have escaping and interpolation. 
Perl has a wide variety of strings, which are more formally considered operators, and are known as quote and quote-like operators. These include both a usual syntax (fixed delimiters) and a generic syntax, which allows a choice of delimiters; these include: REXX uses suffix characters to specify characters or strings using their hexadecimal or binary code. E.g., all yield the space character, avoiding the function call X2C(20). Str", "as arguments a pointer to the message, its length, and (within a special data structure) the destination IP address and destination port number. In response, the transport layer starts putting together a packet: the message that the process is sending [click], the destination IP address [click] and port number [click] that the process passed as arguments through the sendto syscall, and the source IP address [click] and port number [click] that are associated with this socket. - To be precise, the transport layer creates only the transport-layer header (which contains the source and destination port numbers, not the source and destination IP addresses), but it still keeps track of the source and destination IP addresses, because it needs to provide them to the network layer, which will create the network-layer header. - If the process does not need the socket any more, it makes a “close” sys call [click], i.e., asks the transport layer to close it. In response, the transport layer deletes [click] the socket. EPFL CS202 Computer Systems Process R 5 int sockedId = socket (..., UDP); int ret = bind (socketId, [IP address: 5.5.5.5, port: 5000],...); for process R IP address: 5.5.5.5 port: 5000 UDP socket recvfrom (socketId, message, length,...); message Source port: 1000 Dest. port: 5000 Source IP address: 1.1.1.1 Dest. 
IP address: 5.5.5.5 close (socketId ); application layer transport layer Now consider the receiving end: A process R [click] wants to use UDP to receive a message from a remote process: - First, the process asks the transport layer to open a UDP socket [click]. In response, the transport layer creates [click] a UDP socket and associates it with this process. - Second, the process asks the transport layer to bind [click] the socket to a particular local IP address and port number. In response, the transport layer adds [click, click] this information to the socket. - At this point, the process is ready to receive a message through this socket. To do this, it makes a “recvfrom” syscall [click" ]
[ "Stack", "Registers", "Instructions" ]
['Stack', 'Registers']
26
What is the worst-case complexity of listing files in a directory? The file system implements directories as hash tables.
[ "In computer science, a hash list is typically a list of hashes of the data blocks in a file or set of files. Lists of hashes are used for many different purposes, such as fast table lookup (hash tables) and distributed databases (distributed hash tables). A hash list is an extension of the concept of hashing an item (for instance, a file). A hash list is a subtree of a Merkle tree. Root hash Often, an additional hash of the hash list itself (a top hash, also called root hash or master hash) is used. Before downloading a file on a p2p network, in most cases the top hash is acquired from a trusted source, for instance a friend or a web site that is known to have good recommendations of files to download. When the top hash is available, the hash list can be received from any non-trusted source, like any peer in the p2p network. Then the received hash list is checked against the trusted top hash, and if the hash list is damaged or fake, another hash list from another source will be tried until the program finds one that matches the top hash. In some systems (for example, BitTorrent), instead of a top hash the whole hash list is available on a web site in a small file. Such a \"torrent file\" contains a description, file names, a hash list and some additional data. Applications Hash lists can be used to protect any kind of data stored, handled and transferred in and between computers. An important use of hash lists is to make sure that data blocks received from other peers in a peer-to-peer network are received undamaged and unaltered, and to check that the other peers do not \"lie\" and send fake blocks. Usually a cryptographic hash function such as SHA-256 is used for the hashing. If the hash list only needs to protect against unintentional damage unsecured checksums such as CRCs can be used. 
Hash lists are better than a simple hash of the entire file since, in the case of a data block being damaged, this is noticed, and only the damaged block needs to be redownloaded. With", "of a fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. Hash functions are often used in combination with a hash table, a common data structure used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. hash table In computing, a hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. heap A specialized tree-based data structure which is essentially an almost complete tree that satisfies the heap property: if P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. The node at the \"top\" of the heap (with no parents) is called the root node. heapsort A comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum. human-computer interaction (HCI) Researches the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. 
As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. I identifier In computer languages, identifiers are tokens (also called symbols) which name language entities. Some of the kinds of entities an identifier might denote include variables, types", "syntax. A table is a set of key and data pairs, where the data is referenced by key; in other words, it is a hashed heterogeneous associative array. Tables are created using the {} constructor syntax. Tables are always passed by reference (see Call by sharing). A key (index) can be any value except nil and NaN, including functions. A table is often used as structure (or record) by using strings as keys. Because such use is very common, Lua features a special syntax for accessing such fields. By using a table to store related functions, it can act as a namespace. Tables are automatically assigned a numerical key, enabling them to be used as an array data type. The first automatic index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed). A numeric key 1 is distinct from a string key \"1\". The length of a table t is defined to be any integer index n such that t[n] is not nil and t[n+1] is nil; moreover, if t[1] is nil, n can be zero. For a regular array, with non-nil values from 1 to a given n, its length is exactly that n, the index of its last value. If the array has \"holes\" (that is, nil values between other non-nil values), then #t can be any of the indices that directly precedes a nil value (that is, it may consider any such nil value as the end of the array). A table can be an array of objects. Using a hash map to emulate an array is normally slower than using an actual array; however, Lua tables are optimized for use as arrays to help avoid this issue. 
Metatables Extensible semantics is a key feature of Lua, and the metatable concept allows powerful customization of tables. The following example demonstrates an \"infinite\" table. For any n, fibs[n] will give the n-th Fibonacci number using dynamic programming and memoization. Object-oriented programming Although Lua does not have a built-in concept of classes, object-oriented programming can be", "File verification is the process of using an algorithm for verifying the integrity of a computer file, usually by checksum. This can be done by comparing two files bit-by-bit, but requires two copies of the same file, and may miss systematic corruptions which might occur to both files. A more popular approach is to generate a hash of the copied file and comparing that to the hash of the original file. Integrity verification File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted by a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on. Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is often negligible with random corruption. Authenticity verification It is often desirable to verify that a file hasn't been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify the authenticity, a classical hash function is not enough as they are not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a preimage attack. 
For this purpose, cryptographic hash functions are employed often. As long as the hash sums cannot be tampered with — for example, if they are communicated over a secure channel — the files can be presumed to be intact. Alternatively, digital signatures can be employed to assure tamper resistance. File formats A checksum file is a small file that contains the checksums of other files. There are a few well-known checksum file formats. Several utilities, such as md5deep, can use such checksum files to automatically verify an entire directory", "-peer network are received undamaged and unaltered, and to check that the other peers do not \"lie\" and send fake blocks. Usually a cryptographic hash function such as SHA-256 is used for the hashing. If the hash list only needs to protect against unintentional damage unsecured checksums such as CRCs can be used. Hash lists are better than a simple hash of the entire file since, in the case of a data block being damaged, this is noticed, and only the damaged block needs to be redownloaded. With only a hash of the file, many undamaged blocks would have to be redownloaded, and the file reconstructed and tested until the correct hash of the entire file is obtained. Hash lists also protect against nodes that try to sabotage by sending fake blocks, since in such a case the damaged block can be acquired from some other source. Hash lists are used to identify CSAM online. Protocols using hash lists Rsync Zsync Bittorrent See also Hash tree Hash table Hash chain Ed2k: URI scheme, which uses an MD4 top hash of an MD4 hash list to uniquely identify a file Cryptographic hash function List" ]
[ "$O(1)$", "$O(number of direntries in the directory)$", "$O(size of the file system)$", "$O(number of direntries in the file system)$", "$O(log(number of direntries in the directory))$" ]
['$O(number of direntries in the directory)$']
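The answer can be made concrete with a small sketch (my own illustration, not any real file system's code): looking up one name in a hash-table directory touches a single bucket, but listing must visit every bucket and therefore every direntry.

```python
# Sketch of a directory implemented as a fixed-size hash table.
# Lookup of one name is O(1) on average; *listing* is O(number of
# direntries in the directory), since every bucket must be scanned.
class HashDirectory:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def add(self, name, inode):
        self.buckets[hash(name) % len(self.buckets)].append((name, inode))

    def lookup(self, name):
        # Average O(1): only one bucket is examined.
        for entry_name, inode in self.buckets[hash(name) % len(self.buckets)]:
            if entry_name == name:
                return inode
        raise FileNotFoundError(name)

    def list_entries(self):
        # O(n): every bucket and every entry is visited exactly once.
        return sorted(name for bucket in self.buckets for name, _ in bucket)

d = HashDirectory()
for i, name in enumerate(["a.txt", "b.txt", "c.txt"]):
    d.add(name, i)
print(d.list_entries())  # ['a.txt', 'b.txt', 'c.txt']
```

The hashing only speeds up by-name access; enumeration still has to walk the whole table, which is why the worst case is linear in the number of direntries.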
40
In JOS, suppose one Env sends a page to another Env. Is the page copied?
[ "tag probability e pronunciation b oa r d z etc set of record identi ed by a reference e g a database with primary key Words Tokens Introduction Words Tokens Lexicon N gram Conclusion c EPFL J C Chappelier Field representation External v Internal structure i e serialization v memory representation Internal structure suited for an ef cient implementation of the two access method by value and by reference for each eld not necessarily the same for all eld not even necessarily the same for the two method of a given eld Words Tokens Introduction Words Tokens Lexicon N gram Conclusion c EPFL J C Chappelier eld by value access f o value by reference access fo reference reference surface form PoS lemma prononc board Ns board Ns b oa r d board Np board Ns b oa r d z y Vx y Vx f l i y Ns y Ns f l i by valuesurface y by refPoS Np All PoS tag for fly by refPoS", "and Bob have already shared a secret key [click] “out of band,” i.e., without using the network. E.g., one may have given it physically to the other. Alice has a message (a “plaintext”) [click] for Bob: - She provides it as input to her encryption algorithm (together with the shared key) and obtains a “ciphertext” [click] (a “jumbled” version of the plaintext). - Alice sends the ciphertext to Bob [click]. - Bob provides the ciphertext as input to his decryption algorithm and obtains the plaintext [click]. Now, suppose Eve [click] is an evil packet switch that forward’s Alice’s traffic to Bob. She sees the packet(s) carrying Alice’s message, but she cannot read the message – as long as she does not know the shared secret key. This is how to achieve confidentiality using *symmetric-key cryptography*, where Alice and Bob have shared a secret key. 60 encryption algorithm decryption algorithm Bob-key+ Bob-key- Alice Bob plaintext ciphertext plaintext ciphertext Eve Confidentiality can also be achieved through *asymmetric-key cryptography*. 
In asymmetric-key cryptography, each entity has two keys: a public (which may be known by everyone) and a private one (which should be known only by the entity itself). When someone encrypts something with Bob’s public key, Bob decrypts it with his private key. Conversely, when Bob encrypts something with his private key, somebody else can decrypt it with Bob’s public key. The setup is similar to the previous slide. However, Alice and Bob have not shared a secret key; Bob has his private key (Bob-key-), and Alice knows Bob’s public key (Bob-key+). [click] Alice has a plaintext for Bob: - She provides it as input to her encryption algorithm (together with Bob’s public key) and obtains a ciphertext. - Alice sends the ciphertext to Bob. - Bob provides the ciphertext as input to his decryption algorithm (together with his private key) and obtains the plaintext. Once again, Eve cannot read the", "Kapralov and Ola Svensson EPFL Notes by Joachim Favre Quantum science and engineering master Semester Autumn I made this document for my own use but I thought that typed note might be of interest to others There are mistake it is impossible not to make any If you find some please feel free to share them with me grammatical and vocabulary error are of course also welcome You can contact me at the following e mail address joachim favre epfl ch If you did not get this document through my GitHub repository then you may be interested by the fact that I have one on which I put those typed note and their LATEX code Here is the link make sure to read the README to understand how to download the file you re interested in http github com JoachimFavre EPFLNotesIN Please note that the content doe not belong to me I have made some structural change reworded some part and added some personal note but the wording and explanation come mainly from the Professor and from the book on which they based their course I think it is worth mentioning that in order to get these note typed up I took my 
note in LATEX during the course and then made some correction I do not think typing handwritten note is doable in term of the amount of work To take note in LATEX I took my inspiration from the following link written by Gilles Castel If you want more detail feel free to contact me at my e mail address mentioned hereinabove http castel dev post lecture note I would also like to specify that the word trivial and simple do not have in this course the definition you find in a dictionary We are at EPFL nothing we do is trivial Something trivial is something that a random person in the street would be able to do In our context understand these word more a simpler than the rest Also it is okay if you take a while to understand something that is said to be trivial especially a I love using this word everywhere hihi Since you are reading this I will give you a little advice Sleep is a much more powerful tool than you may imagine so do not neglect a good night of sleep in favour of studying especially the night before an exam I wish you to have fun during your exam Version To Gilles Castel whose work ha inspired me this note taking method Rest in peace nobody deserves to go so young Contents Summary by lecture Greed", "I made this document for my own use but I thought that typed note might be of interest to others There are mistake it is impossible not to make any If you find some please feel free to share them with me grammatical and vocabulary error are of course also welcome You can contact me at the following e mail address joachim favre epfl ch If you did not get this document through my GitHub repository then you may be interested by the fact that I have one on which I put those typed note and their LATEX code Here is the link make sure to read the README to understand how to download the file you re interested in http github com JoachimFavre EPFLNotesIN Please note that the content doe not belong to me I have made some structural change reworded some part and added 
some personal note but the wording and explanation come mainly from the Professor and from the book on which they based their course I think it is worth mentioning that in order to get these note typed up I took my note in LATEX during the course and then made some correction I do not think typing handwritten note is doable in term of the amount of work To take note in LATEX I took my inspiration from the following link written by Gilles Castel If you want more detail feel free to contact me at my e mail address mentioned hereinabove http castel dev post lecture note I would also like to specify that the word trivial and simple do not have in this course the definition you find in a dictionary We are at EPFL nothing we do is trivial Something trivial is something that a random person in the street would be able to do In our context understand these word more a simpler than the rest Also it is okay if you take a while to understand something that is said to be trivial especially a I love using this word everywhere hihi Since you are reading this I will give you a little advice Sleep is a much more powerful tool than you may imagine so do not neglect a good night of sleep in favour of studying especially the night before an exam I wish you to have fun during your exam Version To Gilles Castel whose work ha inspired me this note taking method Rest in peace nobody deserves to go so young Contents Summary by lecture Structural complexity Recalls P complexity class NP complexity class NP completeness Cook Levin theorem Time", "? The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication. The sender encodes and sends a packet of information. The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again. 
This two-way process continues until all the packets in the message have been transferred successfully. Certain networks, such as ones used for cellular wireless broadcasting, do not have a feedback channel. Applications on these networks still require reliability. Fountain codes in general, and LT codes in particular, get around this problem by adopting an essentially one-way communication protocol. The sender encodes and sends packet after packet of information. The receiver evaluates each packet as it is received. If there is an error, the erroneous packet is discarded. Otherwise the packet is saved as a piece of the message. Eventually the receiver has enough valid packets to reconstruct the entire message. When the entire message has been received successfully the receiver signals that transmission is complete. As mentioned above, the RaptorQ code specified in IETF RFC 6330 outperforms an LT code in practice. LT encoding The encoding process begins by dividing the uncoded message into n blocks of roughly equal length. Encoded packets are then produced with the help of a pseudorandom number generator. The degree d, 1 ≤ d ≤ n, of the next packet is chosen at random. Exactly d blocks from the message are randomly chosen. If Mi is the i-th block of the message, the data portion of the next packet is computed as M i 1 ⊕ M i 2 ⊕ ⋯ ⊕ M i d {\\displaystyle M_{i_{1}}\\oplus M_{i_{2}}\\oplus \\cdots \\oplus M_{i_{d}}\\,} where {i1, i2,..., id} are the randomly chosen indices for the d blocks included in this packet. A prefix is appen" ]
[ "Yes", "No" ]
['No']
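Why "No": in JOS, sending a page over IPC maps the same physical page into the receiving Env's address space rather than copying its contents. A toy model (my own sketch with invented names, not JOS source code) of that sharing:

```python
# Toy model of JOS-style IPC page transfer: the receiver's page table
# gets a mapping to the *same* physical page, so no data is copied and
# writes by one Env are visible to the other.
class PhysPage:
    def __init__(self):
        self.data = bytearray(4096)

class Env:
    def __init__(self):
        self.page_table = {}  # virtual address -> PhysPage

def ipc_send_page(sender, src_va, receiver, dst_va):
    # Share the mapping; nothing is duplicated.
    receiver.page_table[dst_va] = sender.page_table[src_va]

a, b = Env(), Env()
a.page_table[0x1000] = PhysPage()
ipc_send_page(a, 0x1000, b, 0x2000)

a.page_table[0x1000].data[0] = 42
print(b.page_table[0x2000].data[0])  # 42: both Envs see the same page
```

The two virtual addresses differ, but they resolve to one physical page; copying would instead allocate a second `PhysPage` and duplicate its bytes.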
41
In JOS and x86, please select all valid options for a system call.
[ "extra wv2 x2x xalan-java xbill xbitmaps xcb-proto xclip xerces2-java xf86-input-acecad xf86-input-aiptek xf86-input-joystick xf86-input-keyboard xf86-input-mouse xf86-input-synaptics xf86-input-vmmouse xf86-input-void xf86-video-apm xf86-video-ark xf86-video-ast xf86-video-chips xf86-video-cirrus xf86-video-dummy xf86-video-fbdev xf86-video-glint xf86-video-i128 xf86-video-i740 xf86-video-mach64 xf86-video-mga xf86-video-neomagic xf86-video-nv xf86-video-r128 xf86-video-rendition xf86-video-s3 xf86-video-s3virge xf86-video-savage xf86-video-siliconmotion xf86-video-sis xf86-video-sisusb xf86-video-tdfx xf86-video-trident xf86-video-tseng xf86-video-unichrome xf86-video-v4l xf86-video-vesa xf86-video-vmware xf86-video-voodoo xf86-video-xgi xf86-video-xgixp xf86dgaproto xf86vidmodeproto xfce4-taskmanager xfwm4-themes xineramaproto xkeyboard-config xmahjongg xorg-apps xorg-bdftopcf xorg-setxkbmap xorg-xcalc xorg-xcmsdb xorg-xdriinfo xorg-xev xorg-xkbcomp xorg-xkbevd xorg-xlsatoms xorg-xlsclients xorg-xlsfonts xorg-xmodmap xorg-xrefresh xorg-xset xorg", "–16 bytes to set up the values to be loaded. More common is to use the loop setup instruction (represented in assembly as either LOOP with pseudo-instruction LOOP_BEGIN and LOOP_END, or in a single line as LSETUP), which optionally initializes LCx and sets LTx and LBx to the desired values. This only requires 4–6 bytes, but can only set LTx and LBx within a limited range relative to where the loop setup instruction is located. x86 The x86 assembly language REP prefixes implement zero-overhead loops for a few instructions (namely MOVS/STOS/CMPS/LODS/SCAS). Depending on the prefix and the instruction, the instruction will be repeated a number of times with (E)CX holding the repeat count, or until a match (or non-match) is found with AL/AX/EAX or with DS:[(E)SI]. 
This can be used to implement some types of searches and operations on null-terminated strings.", "Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time On the x86, the return will pop the program counter, flags, and general-purpose registers off the per-process stack 32 Putting everything together for a system call OS Kernel mode User mode Process trap Time Process 1. A system call is a trap instruction 2. OS saves registers to per-process stack 3. Change mode from Ring 3 to Ring 0 1. Execute privileged operations 1. Change mode from Ring 0 to Ring 3 2. Restore the state of the process by popping registers in the return from trap Return- from- trap L03.4: Traps and Interrupts CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU 34 Traps/Exceptions • Traps are also referred as exceptions • Handle internal program errors • Overflow, division by zero, accessing not allowed memory region • Exceptions are produced by the CPU while executing instructions • Exceptions are synchronous: CPU invokes them only after terminating the invocation of an instruction Question How does a trap know which code to run in the OS? 35 36 OS configures hardware at boot time During boot... 
• The OS tells hardware what code to run when certain exceptional events occur • OS configures specific handlers that hardware remembers • Hardware then know what to do when certain exceptional events occur • System call Code to run when a hard disk interrupt occurs Code to run when a keyboard interrupt occurs Code to run for a system call Trap table Trap entries 37 Requesting OS services using system call numbers Code to run for a system call Trap table Trap entries Only one handler routine for system call, but multiples of system calls are possible! • Each system call has a specific number •", "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it 
raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", "86-video-dummy xf86-video-fbdev xf86-video-glint xf86-video-i128 xf86-video-i740 xf86-video-mach64 xf86-video-mga xf86-video-neomagic xf86-video-nv xf86-video-r128 xf86-video-rendition xf86-video-s3 xf86-video-s3virge xf86-video-savage xf86-video-siliconmotion xf86-video-sis xf86-video-sisusb xf86-video-tdfx xf86-video-trident xf86-video-tseng xf86-video-unichrome xf86-video-v4l xf86-video-vesa xf86-video-vmware xf86-video-voodoo xf86-video-xgi xf86-video-xgixp xf86dgaproto xf86vidmodeproto xfce4-taskmanager xfwm4-themes xineramaproto xkeyboard-config xmahjongg xorg-apps xorg-bdftopcf xorg-setxkbmap xorg-xcalc xorg-xcmsdb xorg-xdriinfo xorg-xev xorg-xkbcomp xorg-xkbevd xorg-xlsatoms xorg-xlsclients xorg-xlsfonts xorg-xmodmap xorg-xrefresh xorg-xset xorg-xwd xorg-xwininfo xorg-xwud xpdf-arabic xpdf-chinese-simplified xpdf-chinese-traditional xpdf-cyrillic xpdf-greek xpdf-hebrew xpdf-japanese xpdf-korean xpdf-latin2 xpdf-thai xpdf-turkish xsnow yasm yelp-xsl zd1211-firmware zile zope-interface konq-plugins tidyhtml python-nose python-pip python-virtualenv gnome-python-desktop apache-ant junit perl-xml" ]
[ "A system call is for handling interrupts like dividing zero error and page fault.", "In user mode, before and after a system call instruction(such as int 0x30), the stack pointer(esp in x86) stays the same.", "During the execution of a system call, when transfering from user mode to kernel mode, the stack pointer(esp in x86) stays the same." ]
['In user mode, before and after a system call instruction (such as int 0x30), the stack pointer (esp in x86) stays the same.']
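The save/restore sequence described in the lecture context can be modeled in a few lines. This is a toy model of the mechanism (my own sketch; the addresses and field names are invented, not x86 or JOS definitions): on a trap the CPU switches to a per-process kernel stack and pushes the user program counter, flags, and esp there; return-from-trap pops them back, so the user-mode esp is identical before and after the system call, even though esp changed while the kernel ran.

```python
# Toy model of trap / return-from-trap state handling.
class CPU:
    def __init__(self):
        self.eip, self.flags, self.esp = 0x08048000, 0x202, 0xBFFFF000
        self.kernel_stack = []  # per-process kernel stack

    def trap(self):  # e.g. `int 0x30` in JOS
        # Hardware pushes user state onto the kernel stack...
        self.kernel_stack.append((self.eip, self.flags, self.esp))
        # ...and the kernel now runs on its own stack, so esp changes.
        self.esp = 0xEFFFF000

    def return_from_trap(self):
        # Pop the saved user state; esp is restored exactly.
        self.eip, self.flags, self.esp = self.kernel_stack.pop()

cpu = CPU()
user_esp_before = cpu.esp
cpu.trap()                 # kernel mode: esp now points at the kernel stack
assert cpu.esp != user_esp_before
cpu.return_from_trap()     # back in user mode
print(hex(cpu.esp))        # 0xbffff000 — unchanged across the system call
```

This is exactly why the second option is valid (esp is the same before and after, as seen from user mode) while the third is not (esp switches to the kernel stack during the transition).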
44
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
[ "Run-to-completion scheduling or nonpreemptive scheduling is a scheduling model in which each task runs until it either finishes, or explicitly yields control back to the scheduler. Run-to-completion systems typically have an event queue which is serviced either in strict order of admission by an event loop, or by an admission scheduler which is capable of scheduling events out of order, based on other constraints such as deadlines. Some preemptive multitasking scheduling systems behave as run-to-completion schedulers in regard to scheduling tasks at one particular process priority level, at the same time as those processes still preempt other lower priority tasks and are themselves preempted by higher priority tasks. See also Preemptive multitasking Cooperative multitasking", "background - where foreground processes are given high priority) to understand non pre-emptive and pre-emptive multilevel scheduling in depth with FCFS algorithm for both the queues: See also Fair-share scheduling Lottery scheduling", "Fixed-priority preemptive scheduling is a scheduling system commonly used in real-time systems. With fixed priority preemptive scheduling, the scheduler ensures that at any given time, the processor executes the highest priority task of all those tasks that are currently ready to execute. The preemptive scheduler has a clock interrupt task that can provide the scheduler with options to switch after the task has had a given period to execute—the time slice. This scheduling system has the advantage of making sure no task hogs the processor for any time longer than the time slice. However, this scheduling scheme is vulnerable to process or thread lockout: since priority is given to higher-priority tasks, the lower-priority tasks could wait an indefinite amount of time. One common method of arbitrating this situation is aging, which gradually increments the priority of waiting processes and threads, ensuring that they will all eventually execute. 
Most real-time operating systems (RTOSs) have preemptive schedulers. Also turning off time slicing effectively gives you the non-preemptive RTOS. Preemptive scheduling is often differentiated with cooperative scheduling, in which a task can run continuously from start to end without being preempted by other tasks. To have a task switch, the task must explicitly call the scheduler. Cooperative scheduling is used in a few RTOS such as Salvo or TinyOS.", "jobs to be interrupted (paused and resumed later) 39 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily 40 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily • Preemptive schedulers can take the control of CPU at any time, switching to another process according to the the scheduling policy • OS relies on timer interrupts and context switch for preemptive process/jobs 41 Shortest time to completion first (STCF) • STCF extends the SJF by adding preemption • Any time a new job enters the system: a. STCF scheduler determines which of the remaining jobs (including new job) has the least time left b. 
STCF then schedules the shortest job first 42 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C) = 10 • Tturnaround(A) = 120 • Tturnaround(B) = (20 - 10) = 10 • Tturnaround(C) = (30 - 10) = 20 Average turnaround time = (120 + 10 + 20) / 3 = 50 0 20 40 60 80 100 120 A B C [B, C arrive] A 43 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C", "on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation. In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling. Starvation is normally caused by deadlock in that it causes a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free. A possible solution to starvation is to use a scheduling algorithm with priority queue that also uses the aging technique. 
Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. See also Dining philosophers problem" ]
[ "It can lead to starvation especially for those real-time tasks", "Less computational resources need for scheduling and takes shorted time to suspend the running task and switch the context.", "Bugs in one process can cause a machine to freeze up", "It can lead to poor response time for processes" ]
['It can lead to starvation, especially for real-time tasks', 'Bugs in one process can cause a machine to freeze up', 'It can lead to poor response time for processes']
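The STCF example worked through in the context above (A needs 100s and arrives at t=0; B and C each need 10s and arrive at t=10) can be reproduced with a tiny simulation, and running the same workload non-preemptively shows the drawback directly: B and C sit behind A, so their turnaround times balloon. This is an illustrative sketch, not code from the course.

```python
def stcf(jobs):
    """Preemptive shortest-time-to-completion-first, simulated in 1s ticks.

    jobs: {name: (arrival, burst)} -> {name: turnaround time}
    """
    remaining = {n: burst for n, (arrival, burst) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:                     # idle until the next arrival
            t = min(jobs[n][0] for n in remaining)
            continue
        n = min(ready, key=lambda j: remaining[j])  # least time left runs
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    return {n: finish[n] - jobs[n][0] for n in jobs}

def fifo(jobs):
    """Non-preemptive run-to-completion, in arrival order."""
    finish, t = {}, 0
    for n, (arrival, burst) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        t = max(t, arrival) + burst
        finish[n] = t
    return {n: finish[n] - jobs[n][0] for n in jobs}

jobs = {"A": (0, 100), "B": (10, 10), "C": (10, 10)}
print(stcf(jobs))  # {'A': 120, 'B': 10, 'C': 20} -> average 50
print(fifo(jobs))  # {'A': 100, 'B': 100, 'C': 110} -> average ~103.3
```

The preemptive run matches the slide's numbers (average turnaround 50), while run-to-completion more than doubles the average; with a buggy A that never finishes, the non-preemptive scheduler would never run B or C at all.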
45
Select valid answers about file descriptors (FD):
[ "\"Everything is a file\" is an approach to interface design in Unix derivatives. While this turn of phrase does not as such figure as a Unix design principle or philosophy, it is a common way to analyse designs, and informs the design of new interfaces in a way that prefers, in rough order of import: representing objects as file descriptors in favour of alternatives like abstract handles or names, operating on the objects with standard input/output operations returning byte streams to be interpreted by applications (rather than explicitly structured data), and allowing the usage or creation of objects by opening or creating files in the global filesystem name space. The lines between the common interpretations of \"file\" and \"file descriptor\" are often blurred when analysing Unix, and nameability of files is the least important part of this principle; thus, it is sometimes described as \"Everything is a file descriptor\". This approach is interpreted differently with time, philosophy of each system, and the domain to which it's applied. The rest of this article demonstrates notable examples of some of those interpretations, and their repercussions. Objects as file descriptors Under Unix, a directory can be opened like a regular file, containing fixed-size records of (i-node, filename), but directories cannot be written to directly, and are modified by the kernel as a side-effect of creating and removing files within the directory. Some interfaces only follow a subset of these guidelines, for example pipes do not exist on the filesystem — pipe() creates a pair of unnameable file descriptors. The later invention of named pipes (FIFOs) by POSIX fills this gap. 
This does not mean that the only operations on an object are reading and writing: ioctl() and similar interfaces allow for object-specific operations (like controlling tty characteristics), directory file descriptors can be used to alter path look-ups (with a growing number of *at() system call variants like openat()) or to change the working directory to the one represented by the file descriptor, in both cases preventing race conditions and being faster than the alternative of looking up the entire", "df -- report filesystem disk space df. 40 Benefit of using mount points / bin ls home sanidhya linuxbrew A single name space! ●Uniform access with the same API Important commands ●mount <device> <dir> mount /dev/cdrom /media/cdrom mount -t ext4 /dev/sda5 /home ●df -- report filesystem disk space df. 41 Benefit of using mount points / bin ls home sanidhya linuxbrew A file can be moved efficiently within a filesystem ●Keep the inode and blocks on disk ●Requires a copy+delete across filesystems Command “mv” handles this transparently to users 42 The mount point is an abstraction Built by adding a level of indirection ●Mostly transparent to users ●... except when it is not 43 The mount point is an abstraction A file can be moved efficiently within a filesystem ●Keep the inode and blocks on disk ●Requires a copy+delete across filesystems Command “mv” handles this transparently to users Very different filesystem types can co-exist in the same namespace ●ext3, ext4, NTFS → filesystems optimised for hard disks ●iso96000 → filesystem standards for CDROM ●FAT → the legacy, universal MS-DOS filesystem 44 Benefits of mounting: pseudo filesystems The filesystem abstraction can be used to manage non-persistent content ●tmpfs (/run) -- uses memory; cleared at reboot ●procfs (/proc) -- exposes process state as a set of files. 
L07.4 file system implementation CS-202 Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU • File system manages data for users • Given: a large set (N) of blocks • Need: data structures to encode file hierarchy and per file metadata • Overhead (metadata vs file data size) should be low • Internal fragmentation should be low • Efficient access of file contents: external fragmentation, # metadata access • Implement file system APIs • Several choices are available (simi", "ering options (setvbuf) • Unix/Linux system calls • open,read, write, lseek • Operate on file descriptors 12 < (3) → library function part of libc > man fread or man 3 fread ●Benefits of using FILE* calls ○ Portability across operating systems ○ Higher-level abstractions such as buffering 13 (3) → library function part of libc $ man fread or man 3 fread ●Benefits of using FILE* calls ○ Portability across operating systems ○ Higher-level abstractions such as buffering (2) → system calls $ man read or man 2 read ●Uses file descriptors ●Same code works for files, pipes, and sockets (covered in networking lectures) < 14 Example L07.2 The file system abstraction CS-202 Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU • Addresses need for long-term information storage: • Store large amounts of information • Do it in a way that outlives the program • Can support concurrent accesses from multiple processes • Presents applications with persistent, named data • Two main components: • Files • Directories 16 File system abstraction • A file is named collection of related information that is recorded in secondary storage • Or, a linear persistent array of bytes • Has two parts: • Data: what a user or application puts in it • Array of bytes • Metadata: Information added and managed by the OS • Size, owner, security information, modification time, etc. 17 The file abstraction: File 1. 
Inode and device number (persistent ID) 2. Path (human readable) 3. File descriptor (process view) 18 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Low-level unique ID assigned to the file by the file system • Note: Inodes are unique for a file system but not globally • Recycled after deletion • An inode contains metadata of a file • Permissions, length, access times • Location of data blocks and indirection blocks • Each file has exactly one associated inode 19 OS view:", "A self-extracting archive (SFX or SEA) is a computer executable program which combines compressed data in an archive file with machine-executable code to extract the information. Running on a compatible operating system, it does not need a suitable extractor in the target computer to extract the data. The executable part of the file is known as a decompressor stub. Self-extracting files are used to share compressed files with a party that may not have the software needed to decompress a regular archive. Users can also use self-extracting archives to distribute their own software. For example, the WinRAR installation program is made using the graphical GUI RAR self-extracting module Default.sfx. Overview Self-extracting archives contain an executable file module, which is used to run uncompressed files from compressed files. The latter does not require an external program to decompress the contents of the self-extracting file and can run the operation itself. However, file archivers like WinRAR can still treat a self-extracting file as if it were any other type of compressed file. By using a file archiver, users can view or decompress self-extracting files they received without running executable code (for example, if they are concerned about viruses). A self-extracting archive is extracted and stored on a disk when executed under an operating system that supports it. 
Many embedded self-extractors support a number of command-line arguments, such as specifying the target location or selecting only specific files. Unlike self-extracting archives, non-self-extracting archives only contain archived files and must be extracted with a program that is compatible with them. While some formats of self-extracting archives cannot be extracted under another operating system, non-self-extracting ones can usually still be opened using a suitable extractor. This tool will disregard the executable part of the file and extract only the archive resource. The self-extracting executable may need to be renamed to contain a file extension associated with the corresponding packer; archive file formats known to support this include ARJ and ZIP. Typically, self", "not consider that fact dispositive. \"By providing a website with... well-developed search functions, easy uploading and storage possibilities, and with a tracker linked to the website, the accused have incited the crimes that the filesharers have committed,\" the court said in a statement. See also Bandwidth Copyright aspects of hyperlinking and framing Download manager Digital distribution HADOPI law Music download Peer-to-peer Progressive download Sideloading List of download managers (includes tools like Downr.org) References External links Media related to Download icons at Wikimedia Commons" ]
[ "The value of FD is unique for every file in the operating system.", "FD is usually used as an argument for read and write.", "FD is constructed by hashing the filename.", "FDs are preserved after fork() and can be used in the new process pointing to the original files." ]
['FD is usually used as an argument for read and write.', 'FDs are preserved after fork() and can be used in the new process pointing to the original files.']
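The correct options can be demonstrated with a small sketch using Python's `os` module, which wraps the underlying Unix system calls; it assumes a Unix-like system (`/dev/null` exists). It shows that `read`/`write` take an FD, and that FD values are per-process table indices rather than unique per-file identifiers.

```python
# Minimal sketch of file-descriptor behaviour (Unix-like systems only).
import os

# A pipe is a pair of FDs with no name on the filesystem.
r, w = os.pipe()
os.write(w, b"hello")        # write() takes an FD, not a filename
data = os.read(r, 5)         # so does read()
assert data == b"hello"

# FDs index a per-process table, so they are not unique per file:
# opening the same file twice yields two distinct descriptors.
fd1 = os.open("/dev/null", os.O_WRONLY)
fd2 = os.open("/dev/null", os.O_WRONLY)
assert fd1 != fd2

for fd in (r, w, fd1, fd2):
    os.close(fd)

# After fork(), the child inherits a copy of the parent's FD table,
# so inherited descriptors still refer to the original open files.
```

This also explains the wrong options: an FD is a small integer handed out by the kernel (not a hash of the filename), and the same file can be behind different FD values in different processes.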
46
What is the default block size for a traditional file system, e.g. ext3/4?
[ "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3" ]
[ "32 bits", "32 bytes", "512 bits", "512 bytes", "4096 bits", "4096 bytes" ]
['4096 bytes']
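Since the block is the allocation unit, a file's on-disk footprint is rounded up to a whole number of blocks; the unused remainder of the last block is slack space. A short sketch of that arithmetic for the 4096-byte default (the constant and helper names are illustrative; on Linux the actual value can be queried with `os.statvfs(path).f_bsize`):

```python
# Sketch: slack-space arithmetic for a 4096-byte block size.

BLOCK = 4096  # default block size for ext3/ext4

def blocks_needed(size, block=BLOCK):
    """Blocks allocated for a file of `size` bytes (ceiling division)."""
    return (size + block - 1) // block

def slack(size, block=BLOCK):
    """Bytes wasted in the partially filled last block."""
    return blocks_needed(size, block) * block - size

print(blocks_needed(5000))  # a 5000-byte file occupies 2 blocks
print(slack(5000))          # 3192 bytes of the second block are wasted
```

This is the trade-off noted in the context: large blocks waste space on small files (on average about half a block per file) but reduce bookkeeping overhead and fragmentation.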
47
Suppose a file system is used only for reading immutable files in a random fashion. What is the best block allocation strategy?
[ "at the beginning of the partition Data blocks Data blocks Data blocks Data blocks Data blocks Data blocks Inodes Free lists • One logical superblock per file system • Stores the metadata about the file system • Number of inodes • Number of data blocks • Where the inode table begins • May contain information to manage free inodes/data blocks • Read first when mounting a file system 53 FS superblock • Various ways to allocate data to files: • Contiguous allocation: All bytes together, in order • Linked structure: Blocks ends with the next pointer • File allocation table: Table that contains block references • Multi-level indexed: Tree of pointers • Which approach is better? • Fragmentation, sequential access / random access, metadata overhead, ability to grow/shrink files, large files, small files 54 File allocation • All data blocks of each file is allocated contiguously • Simple: Only need start block and size • Efficient: One seek to read an entire file • Fragmentation: external fragmentation (can be serious) • Usability: User needs to know file’s size at the time of creation • Great for read-only file systems (CD/DVD/BlueRay) 55 File allocation: Contiguous file1 file2 file3 file4 Physical block • Each file consists of a linked list of blocks • Usually first word of each block points to the next block • In the above illustration, showing the next block pointer at the end • The rest of the block is data 56 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next 
File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and meta", "block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and metadata: Keep linked list information in a single table Instead of storing the next pointer at the end of the block, store all next pointer in a central table 58 File allocation: File allocation table (FAT) File block 0 next File block 1 next File block 2 next File block 3 next File block 4 next 7 8 33 17 4 4 7 8 17 33 Data Metadata Proposed by Microsoft, in late 70s ● Still widely used today ○ Thumb drives, CD ROMs • Separate data and metadata • Space utilization: No external fragmentation • No conflating data and metadata in the same block • Simple: Only need to find the first block of a file • Performance: Poor random access • Overhead: Limited metadata • Many file seeks unless entire FAT is stored in memory: Example: 1TB (240 bytes) disk, 4 KB block size, FAT has 256 million entries 4 bytes per entry → 1 GB of main memory required for FS 59 File allocation: File allocation table (FAT) Have a mix of direct, indirect, double direct, and triple indirect pointers for data 60 File allocation: Multi-level indexing S I I I I I I I I i-node blocks Remaining blocks Inode array ● Inode array is present at a known location on disk ● file number = inode number = inode in the array • Each file is a fixed, asymmetric tree, with fixed sized data blocks (e.g., 4 KB) as its leaves • The root of the tree is the file’s inode, containing: • metadata • A set of 15 pointers • First 12 
pointers point to data blocks • Last three point to intermedi", "block Each file consists of a linked list of block Space utilization No external fragmentation Simple Only need to find the first block of a file Performance Random access is slow high seek cost on the disk Implementation Blocks mix data and metadata Overhead One pointer per block metadata is required File allocation Linked block File block file next File block next File block next File block next File block next Physical block Decouple data and metadata Keep linked list information in a single table Instead of storing the next pointer at the end of the block store all next pointer in a central table File allocation File allocation table FAT File block next File block next File block next File block next File block next Data Metadata Proposed by Microsoft in late s Still widely used today Thumb drive CD ROMs Separate data and metadata Space utilization No external fragmentation No conflating data and metadata in the same block Simple Only need to find the first block of a file Performance Poor random access Overhead Limited metadata Many file seek unless entire FAT is stored in memory Example TB byte disk KB block size FAT ha million entry byte per entry GB of main memory required for FS File allocation File allocation table FAT Have a mix of direct indirect double direct and triple indirect pointer for data File allocation Multi level indexing S I I I I I I I I i node block Remaining block Inode array Inode array is present at a known location on disk file number inode number inode in the array Each file is a fixed asymmetric tree with fixed sized data block e g KB a it leaf The root of the tree is the file s inode containing metadata A set of pointer First pointer point to data block Last three point to intermediate block themselves containing pointer pointer to a block containing pointer to data block double indirect pointer triple indirect pointer File structure for multi level indexing File 
allocation Multi level indexing S I I I I I I I I i node block Remaining block Inode array File metadata I node Data block Indirect block Double indirect block Triple indirect block x KB KB K x KB MB K x K x KB GB K x K x K x KB TB Key idea Tree structure Efficient in finding block High degree Efficient in sequential read Once", ", setting the date for the transition from 512 to 4096 byte sectors as January 2011 for all manufacturers, and Advanced Format drives soon became prevalent. Related units Sectors versus blocks While sector specifically means the physical disk area, the term block has been used loosely to refer to a small chunk of data. Block has multiple meanings depending on the context. In the context of data storage, a filesystem block is an abstraction over disk sectors possibly encompassing multiple sectors. In other contexts, it may be a unit of a data stream or a unit of operation for a utility. For example, the Unix program dd allows one to set the block size to be used during execution with the parameter bs=bytes. This specifies the size of the chunks of data as delivered by dd, and is unrelated to sectors or filesystem blocks. In Linux, disk sector size can be determined with sudo fdisk -l | grep \"Sector size\" and block size can be determined with sudo blockdev --getbsz /dev/sda. Sectors versus clusters In computer file systems, a cluster (sometimes also called allocation unit or block) is a unit of disk space allocation for files and directories. To reduce the overhead of managing on-disk data structures, the filesystem does not allocate individual disk sectors by default, but contiguous groups of sectors, called clusters. On a disk that uses 512-byte sectors, a 512-byte cluster contains one sector, whereas a 4-kibibyte (KiB) cluster contains eight sectors. A cluster is the smallest logical amount of disk space that can be allocated to hold a file. 
Storing small files on a filesystem with large clusters will therefore waste disk space; such wasted disk space is called slack space. For cluster sizes which are small versus the average file size, the wasted space per file will be statistically about half of the cluster size; for large cluster sizes, the wasted space will become greater. However, a larger cluster size reduces bookkeeping overhead and fragmentation, which may improve reading and writing speed overall. Typical cluster sizes range from 1 sector (512 B) to 128 sectors (64 KiB). A cluster need not be", "a free block no allocation policy All that is required is a pointer to the border between the allocated and free area of from space Therefore allocation in a copying GC is a fast a stack allocation Forwarding pointer Objects must be copied to to space only once This is obtained by storing a forwarding pointer in the from space version of the object once it ha been copied checking for the presence of a forwarding pointer when visiting an object and copying it if no forwarding pointer is found using the forwarding pointer otherwise Cheney's copying GC Copying can be done by depth first traversal of the reachability graph but this can lead to stack overflow Cheney s copying GC doe a breadth first traversal of the reachability graph requires only one pointer a additional state Cheney's copying GC Breadth first traversal requires remembering the set of object that have been visited but whose child haven't been visited Cheney's observation This set can be represented a a pointer into to space called scan that partition pointer to object that have been visited and pointer to object that haven't been visited Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free 
From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From" ]
[ "Linked-list allocation", "Contiguous allocation", "Index allocation with B-tree", "Index allocation with Hash-table" ]
['Contiguous allocation']
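The access-cost difference behind this answer can be sketched in a few lines. The block numbers below are illustrative, loosely following the FAT chain in the context (file blocks mapped to physical blocks 4 → 7 → 8 → 17 → 33): a linked/FAT scheme must walk i next-pointers to reach file block i, while contiguous allocation computes the address directly, which is why it is ideal for read-only random access.

```python
# Sketch: random-access lookup cost, linked/FAT vs contiguous allocation.

FAT = {4: 7, 7: 8, 8: 17, 17: 33, 33: None}  # next-pointer table
FIRST = 4                                    # first physical block of the file

def fat_lookup(i):
    """Linked/FAT allocation: reaching file block i takes i pointer hops."""
    blk = FIRST
    for _ in range(i):
        blk = FAT[blk]
    return blk

def contiguous_lookup(start, i):
    """Contiguous allocation: one addition, no chain walk."""
    return start + i

print(fat_lookup(3))              # walks 3 links to physical block 17
print(contiguous_lookup(100, 3))  # 103, computed directly
```

Contiguous allocation's usual drawbacks (external fragmentation, fixed size at creation) do not apply here because the files are immutable, matching the context's note that it is "great for read-only file systems (CD/DVD/BlueRay)".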
50
Which of the following operations would switch the user program from user space to kernel space?
[ "'s basic functions, such as scheduling processes and controlling peripherals. In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times. The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor. Kernel Program The kernel's main purpose is to manage the limited resources of a computer: The kernel program should perform process scheduling, which is also known as a context switch. The kernel creates a process control block when a computer program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency. The kernel program should perform memory management. When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. 
The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes", "printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. 
A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer", "memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable. To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file completely. The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address. The kernel allocates memory from the heap upon request by a process. 
When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory. The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes. The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files. The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors,", "return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does the OS stop running a process and switch to another one, which is required for virtualizing the CPU? 19 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does the OS stop running a process and switch to another one, which is required for virtualizing the CPU? 20 Limited direct execution: Two problems! 
Problem #1 Restricted operations How does the OS ensure that a process does not execute/run a privileged code, while running it efficiently? 21 Limited direct execution with dual mode Operating system kernel mode executes User process executes Kernel mode User mode CPU can execute only regular instructions Can execute regular and privileged instructions 22 Different names for different architectures A simplified table* not accounting for hardware virtualization and legacy modes Architecture Kernel mode User mode x86-64 Ring 0 Ring 3 Arm EL1 EL0 RISC-V S-mode U-mode 23 Privileged instructions Common examples of privileged instructions: ●Change the MMU register that controls page tables (mov %cr3 on x86-64) ●Enable or disable interrupts ●Access I/O devices ●Change privilege levels ●... When the CPU attempts to execute a privileged instruction from user mode: ●The instruction does not execute ●The instruction traps (#General protection fault on x86) ●The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to", "arity of control that the operating environment has over privileges for an individual process. In practice, it is rarely possible to control a process's access to memory, processing time, I/O device addresses or modes with the precision needed to facilitate only the precise set of privileges a process will require. The original formulation is from Jerome Saltzer: Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job. Peter J. 
Denning, in his paper \"Fault Tolerant Operating Systems\", set it in a broader perspective among \"The four fundamental principles of fault tolerance\". \"Dynamic assignments of privileges\" was earlier discussed by Roger Needham in 1972. Historically, the oldest instance of (least privilege) is probably the source code of login.c, which begins execution with super-user permissions and—the instant they are no longer necessary—dismisses them via setuid() with a non-zero argument as demonstrated in the Version 6 Unix source code. Implementation The kernel always runs with maximum privileges since it is the operating system core and has hardware access. One of the principal responsibilities of an operating system, particularly a multi-user operating system, is management of the hardware's availability and requests to access it from running processes. When the kernel crashes, the mechanisms by which it maintains state also fail. Therefore, even if there is a way for the CPU to recover without a hard reset, security continues to be enforced, but the operating system cannot properly respond to the failure because it was not possible to detect the failure. This is because kernel execution either halted or the program counter resumed execution from somewhere in an endless, and—usually—non-functional loop. This would be akin to either experiencing amnesia (kernel execution failure) or being trapped in a closed maze that always returns to the starting point (closed loops). If execution picks up after the crash by loading and running trojan code, the author of the trojan code can usurp control of all processes. The principle of least privilege forces code to run with the lowest privilege/permission level possible. This means that the code that resumes the code execution-whether trojan" ]
[ "Dividing integer by 0.", "Calling sin() in math library.", "Invoking read() syscall.", "Jumping to an invalid address." ]
['Dividing integer by 0.', 'Invoking read() syscall.', 'Jumping to an invalid address.']
61
Which flag prevents user programs from reading and writing kernel data?
[ "s was done without Institutional Review Board (IRB) approval. Despite undergoing review by the conference, this breach of ethical responsibilities was not detected during the paper's review process. This incident sparked criticism from the Linux community and the broader cybersecurity community. Greg Kroah-Hartman, one of the lead maintainers of the kernel, banned both the researchers and the university from making further contributions to the Linux project, ultimately leading the authors and the university to retract the paper and issue an apology to the community of Linux kernel developers. In response to this incident, IEEE S&P committed to adding a ethics review step in their paper review process and improving their documentation surrounding ethics declarations in research papers.", "vilniusquartet.com/\" \"Mozilla/55 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55\" 14.76.5.24 - [16/Jan/2018:02:19:06 +0200] \"GET /css/style.css HTTP/1.1\" 200 2480 \"http://www.krom.org/\" \"Mozilla/55 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55\" 132.29.235.184 - [16/Jan/2018:03:56:13 +0200] \"GET /vvk/ HTTP/1.1\" 200 5073 \"-\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1\" Logs of an Apache web server com-402 - Netops & Secops - Protecting History (logging) 40 Things you should not log Passwords! 
source: Bleeping computer com-402 - Netops & Secops - Protecting History (logging) 41 Things you should not log Swiss federal act on data protection requires strict security mechanisms for log containing sensitive personal information religious, ideological, political or trade union-related views or activities, health, the intimate sphere or the racial origin, social security measures, administrative or criminal proceedings and sanctions; Basically, the content of potentially private e-mail and Internet access logs can contain sensitive information Internet access logs should only be generated in an anonymous way. nominal analysis of Internet access is only allowed if there are tangible signs of abuse Mailboxes and logs should be protected against unauthorized access ProtectingData(backups) Netops & Secops com-402 - Netops & Secops - Protecting Data (backups) 42 Backups source: Gitlab Timeline 2017/01/31 6pm UTC: Spammers are hammering Git- Lab’s database, causing a lockup. 2017/01/31 10pm UTC: DB replication effectively stops. 2017/01/31 11pm-ish UTC: team-member-1 starts re- moving db1.cluster.gitlab.com by accident. 2017/01/31 11:27pm UTC: team-member", "printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. 
Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. 
The digital logic level is the boundary between computer", "only affects performance – Memory view is split into two views: control plane and data plane ∗The control plane is a view that only contains code pointers (and transitively all related pointers) ∗The data plane contains only data; code pointers are left empty (void/unused data) – The two planes must be separated, and data in the control plane must be protected from pointer dereferences in the data plane – CPI protects pointers and sensitive pointers ∗CPI enforces memory safety for select data • Sandboxing – Kernel isolates process memory ∗The kernel provides the most well-known form of sandboxing ∗Processes are sandboxed and cannot access privileged instructions directly ∗To access resources, they must go through a system call that elevates privileges and asks the kernel to handle the access ∗The kernel can then enforce security, fairness, and access policies ∗Sandboxing is enabled through HW, namely different privilege levels – chroot / containers isolate processes from each other ∗Containers are a lightweight form of virtualization ∗They isolate a group of processes from all other processes ∗Root is restricted to the container but not the full system ∗Sandboxing is powered in SW, through kernel data structures – seccomp restricts a process from interacting with the kernel ∗Seccomp restricts the system calls and parameters accessible by a single process ∗Processes are sandboxed based on a policy ∗In the most constrained case, the allowed system calls are only: read, write, close, exit, sigreturn ∗Sandboxing is powered in SW, through kernel data structures – Software-based Fault Isolation isolates components in a process ∗SFI restricts code execution/data access inside a single process ∗Application and untrusted code run in the same address space ∗The untrusted code may only read/write the untrusted data segment ∗Sandboxing is enabled through SW instrumentation Finding bugs Testing • Testing is the process of analyzing a program to find
errors • An error is a deviation between observed behaviour and specified behaviour • “Testing can only show the
In past meetings, the IEEE Symposium on Security and Privacy has considered papers on topics like web security, online abuse, blockchain security, hardware security, malware analysis and artificial intelligence. The conference follows a single-track model for its proceedings, meaning only one session takes place at any given time. This approach deviates from the multi-track format commonly used in other security and privacy conferences, where multiple sessions on different topics run concurrently. Papers submitted for consideration to the conference are reviewed using a double-blind process to ensure fairness. However, this model constrains the conference in the number of papers it can accept, resulting in a low acceptance rate often in the single digits, unlike" ]
[ "PTE_P", "PTE_U", "PTE_D", "PTE_W" ]
['PTE_U']
62
In JOS, after finishing the execution of a user-level page fault handler, how is the program control flow transferred back to the program? (You may get insights from the code snippet of _pagefault_upcall.)
[ "s call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts. Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for better static analysis of programs.", "instruction for debugging. Page-fault exceptions: Instruction page fault, indicating a fault during instruction fetch due to a virtual-memory issue; Load page fault, raised during a load operation when a page-related fault occurs; Store page fault, raised during a store operation when a page-related fault occurs. Understanding and properly handling these interrupts and exceptions is crucial for effective RISC-V programming and system design. Possible Undefined Instruction Handler: below is a possible implementation of an undefined instruction handler in RISC-V assembly. Notes by Ali EL AZDI. RISC-V Machine-Mode Interrupt Handling: in the RISC-V architecture, machine-mode interrupt handling is managed through three key control and status registers: mie, mip and mstatus. These registers play distinct roles in enabling, monitoring and controlling interrupts. mie (Machine Interrupt Enable): determines which interrupts the processor can take and which it must ignore. Key bits include MEIE (enables machine-level external interrupts) and MTIE (enables machine-level timer interrupts). mip (Machine Interrupt Pending): lists the interrupts that are currently pending. Key bits include MEIP (indicates a pending machine-level external interrupt) and MTIP (indicates a pending machine-level timer interrupt). mstatus (Machine Status): contains the global interrupt-enable flag and other state information. Important fields include MIE (globally enables interrupts when set and disables them when cleared) and MPIE (holds the value of MIE prior to a trap). The diagram below illustrates the structure of these registers. These
registers provide the foundation for interrupt handling in machine mode, ensuring efficient and precise interrupt management. The Stack Problem: a few weeks ago we discussed a potential issue with the stack. What should we do when the stack hits its limit? We might be able to find a solution to this problem now. Stack Full Detection: to detect when the stack is full, we can use a watchpoint. Writing Handlers is Very, Very Tricky: to write the exception handler for stack-full detection, we cannot use the stack. Writing interrupt or exception handlers is inherently complex, particularly due to the restriction that the stack cannot be used. Additionally, many registers may be untouchable during execution. This necessitates careful design to handle these constraints. Challenges: stack usage (direct stack usage is prohibited, necessitating an alternative storage mechanism); register constraints (in many cases, touching any general-purpose register is disallowed). Solutions", "TE “read-only” in parent and child trees; increment reference count of all frames ● Later, handle page faults due to disallowed write ● If the refcount >1, allocate a new frame, copy content; decrement original refcount; update PTE as writable with new PFN ● If the refcount = 1, update PTE as writable ● Invalidate corresponding TLB entry 57 Swapping: when main memory runs out • Observation: Main memory may not be enough for all memory of all processes Working set: Amount of memory that process needs at a given point in time. Can vary!
• Idea: Store unused pages on the disk • Allows the OS to reclaim memory when necessary • Allows the OS to over-provision (hand out more memory than physically available) • When needed, the OS finds and pushes unused pages to disk • OS can create a special file or designate a region on the disk to store unused pages 58 Swapping: page fault • MMU translates virtual to physical addresses using the OS provided data structures (page tables) • The present bit for each page table entry at each level indicates if the reference is valid • MMU checks present bit during translation • If a page is not present then MMU triggers a page fault (exception) • OS then enforces its policy to handle the page fault 59 Swapping: page fault handling • Page fault handler checks where the fault occurred: • Which process? (locate data structure) • What address? (Search page in page table) • If the page is on the disk, OS issues a request to load the faulted page and tells the scheduler to switch to another process • If the page is not swapped out, the OS creates the mapping and updates data structures • OS then resumes the faulting process by re-executing the faulting instruction 60 Swapping out • OS mechanism is straightforward: • Copy the victim page to disk • Keep track of the on-disk location in the process structure • Invalidate the PTE (all PTEs) that point to the victim page • OS policy is much more complex • Selecting the wrong victims can have catastrophic performance impact (thrashing) • Selecting “good” victims - prediction based", "unknown to the adversary Stack canary are inserted in the stack helping to detect overflow attack Windows also us safe exception handler which aim at keeping the system safe even after error This countermeasure make sure that after an error there is no undefined behavior but the system only can execute a pre defined set of error handling function Bugs prevention is hard Current system deploy known countermeasure that have reasonable impact on 
performance. These countermeasures, however, are far from ensuring that there are no vulnerabilities. We will now study another approach to increase the security of software: software testing, which can be used to find bugs and then fix them. Software testing executes code under different circumstances with the goal of finding configurations that raise an error. An error is a deviation between how we expect the program to function and what actually happens. This can be an error regarding functionality (the program does not provide the expected result) or an error regarding operation (the program crashes, is too slow, or even never terminates). But what about security? We have learned in the first lecture of the course that testing for security is hard: it cannot prove the absence of bugs. Still, finding as many bugs as possible helps increase the safety of software. Ideally, we would like to test all possible: Control flows, i.e. all possible paths through the program, meaning all possible outcomes of branches in a program (if/else clauses, for clauses, while clauses, etc.); Data flows, i.e. all possible values for the variables/locations that are used by the program. Of course, testing all possible paths and data values is impossible: these are too many states. Here we have an example program. The values a and a cover all flows. The former implies that a a is True and the instruction within the if is executed. The latter implies that a a is False and the instruction within the if is not executed. However, even if all statements are executed and both flows are explored, not all data flows are considered. In particular, the data flow a is not considered, but it is the one that would raise a bug: in this case a a would be True, but x is not reserved for the program (x has positions starting in position). Thus a would make the program crash. There are two ways of testing for security properties. Manual review, in which the test to be carried out is defined by a human trying to identify corner cases that may appear in reality. Whether these corner cases could trigger a bug can be
investigated by code review in which human read each others code to search for programming error or by implementing test case so that the check can be", "program. For that, the OS needs to create a new process and create a new address space to load the program Let’s divide and conquer: • fork() creates a new process (replica) with a copy of its own address space • exec() replaces the old program image with a new program image fork() exec() exit() wait() Why do we need fork() and exec()? 38 Multiple programs can run simultaneously Better utilization of hardware resources Users can perform various operations between fork() and exec() calls to enable various use cases: • To redirect standard input/output: • fork, close/open file descriptors, exec • To switch users: • fork, setuid, exec • To start a process with a different current directory: • fork, chdir, exec fork() exec() exit() wait() Why do we need fork() and exec()? open/close are special file-system calls Set user ID (change user who can be the owner of the process) Go to a specified directory 39 wait(): Waiting for a child process • Child processes are tied to their parent • There exists a hierarchy among processes on forking A parent process uses wait() to suspend its execution until one of its children terminates. The parent process then gets the exit status of the terminated child pid_t wait (int *status); • If no child is running, then the wait() call has no effect at all • Else, wait() suspends the caller until one of its children terminates • Returns the PID of the terminated child process fork() exec() exit() wait() 40 exit(): Terminating a process When a process terminates, it executes exit(), either directly on its own, or indirectly via library code void exit (int status); • The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 
41 • Scenarios under which a process terminates • By calling exit() itself • OS terminat" ]
[ "The control flow will be transferred to kernel first, then to Env that caused the page fault.", "The control flow will be transferred to Env that caused the page fault directly." ]
['The control flow will be transferred to Env that caused the page fault directly.']
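The kernel is not re-entered: the tail of _pagefault_upcall (lib/pfentry.S) restores the trap-time state saved in the UTrapframe and branches straight back to the faulting instruction. A commented, pseudocode-level sketch of the usual lab solution, written from memory (offsets assume the UTrapframe layout in inc/trap.h: fault_va, err, eight saved registers, eip, eflags, esp; treat the exact offsets as approximate, not the verbatim lab code):

```asm
# On entry, %esp points at the UTrapframe on the user exception stack.
movl 40(%esp), %eax     # trap-time %eip
movl 48(%esp), %ebx     # trap-time %esp
subl $4, %ebx           # reserve a word on the trap-time stack...
movl %eax, (%ebx)       # ...and store the trap-time %eip there
movl %ebx, 48(%esp)     # write the adjusted %esp back into the frame
addl $8, %esp           # skip utf_fault_va and utf_err
popal                   # restore the eight trap-time general registers
addl $4, %esp           # skip the utf_eip slot (already copied above)
popfl                   # restore trap-time %eflags
popl %esp               # switch to the adjusted trap-time stack
ret                     # pops the stored %eip: resumes at the fault
```

The trick is the word pushed below the trap-time stack: ret consumes it as a return address, so the Env resumes exactly where it faulted, with no further kernel involvement.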
64
What is the content of the superblock in the JOS file system?
[ "JEAN was a dialect of the JOSS programming language developed for and used on ICT 1900 series computers in the late 1960s and early 1970s; it was implemented under the MINIMOP operating system. It was used at universities including the University of Southampton. The name was an acronym derived from \"JOSS Extended and Adapted for Nineteen-hundred\". It was operated interactively from a Teletype terminal, as opposed to using batch processing. JEAN programs could include expressions (such as A*(B+C)), commands (such as TYPE to display the result of a calculation) and clauses (such as FOR, appended to an expression to evaluate it repeatedly).", "at the beginning of the partition Data blocks Data blocks Data blocks Data blocks Data blocks Data blocks Inodes Free lists • One logical superblock per file system • Stores the metadata about the file system • Number of inodes • Number of data blocks • Where the inode table begins • May contain information to manage free inodes/data blocks • Read first when mounting a file system 53 FS superblock • Various ways to allocate data to files: • Contiguous allocation: All bytes together, in order • Linked structure: Blocks ends with the next pointer • File allocation table: Table that contains block references • Multi-level indexed: Tree of pointers • Which approach is better? 
• Fragmentation, sequential access / random access, metadata overhead, ability to grow/shrink files, large files, small files 54 File allocation • All data blocks of each file is allocated contiguously • Simple: Only need start block and size • Efficient: One seek to read an entire file • Fragmentation: external fragmentation (can be serious) • Usability: User needs to know file’s size at the time of creation • Great for read-only file systems (CD/DVD/BlueRay) 55 File allocation: Contiguous file1 file2 file3 file4 Physical block • Each file consists of a linked list of blocks • Usually first word of each block points to the next block • In the above illustration, showing the next block pointer at the end • The rest of the block is data 56 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and meta", "the OS course from Cornell EPFL IITB UCB UMASS and UU File system manages data for user Given a large set N of block Need data structure to encode file hierarchy and per file metadata Overhead metadata v file data size should be low Internal fragmentation should be low Efficient access of file content external fragmentation metadata access Implement file system APIs Several choice are available similar to virtual memory File system implementation File system is stored on disk Disk can be divided into one or more partition Sector of disk master boot record MBR which contains Bootstrap 
code loaded and executed by the firmware Partition table address of where partition start and end First block of each partition ha a boot block Loaded by executing code in MBR and executed on boot File system layout Partition Partition Partition Boot block Superblock Free space management Inodes Files and directory Entire disk MBR Partition table Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Data block Data block Data block Data block Data block Data block Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D", "posites called 'Items' which are the unit of storage and retrieval. Higher-level structures that combine these Items are client-devised, and include for example unlimited size records of an unlimited number of columns or attributes, with complex attribute values of unlimited size. Keys may then be a composition of components. Attribute values can be ordered sets of composite components, character large objects (CLOB's), binary large objects (BLOB's), or unlimited sparse arrays. Other higher-level structures built of multiple Items include key/value associations like ordered maps, ordered sets, Entity-Attribute-Value nets of quadruples, trees, DAG's, taxonomies, or full-text indexes. 
Mixtures of these can occur along with other custom client-defined structures. Any ItemSpace may be represented as an extended JSON document, and JSON printers and parsers are provided. JSON documents are not native but are mapped to sets of Items when desired, at any scale determined by an Item prefix that represents the path to the sub-document. Hence, the entire database or any subtree of it down to a single value can be represented as extended JSON. Because Items are always kept sorted, the JSON keys of an object are always in order. Data encoding An 'ItemSpace' represents the entire database, and it is a simple ordered set of Items, with no other state. An Item is actually stored with each component encoded in variable-length binary form in a char array, with components being self-describing in a standard format which sorts correctly. Programmers deal with the components only as primitives, and the stored data is strongly typed. Data is not stored as text to be parsed with weak typing as in JSON or XML, nor is it parsed out of programmer-defined binary stream representations. There are no custom client-devised binary formats that can grow brittle, and which can have security, documentation, upgrade, testing, versioning, scaling, and debugging problems, such as is the case with Java Object serialization. 
Performance", "D D D D D D D D D Data block Data block Data block Data block Data block Data block Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Data block Data block Data block Data block Data block Data block Inodes Data block Peeking inside a partition storage block i d I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Bitmap tracking free inodes and data block free list Data block Data block Data block Data block Data block Data block Inodes Free list Data block Peeking inside a partition storage block B S i d I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Bitmap tracking free inodes and data block free list Boot block and superblock are at the beginning of the partition Data block Data block Data block Data block Data block Data block Inodes Free list One logical superblock per file system Stores the metadata about the file system Number of" ]
[ "List of all directories", "List of all files", "List of all blocks", "List of all inodes", "Total number of blocks on disk", "Magic number identifying the file system", "Node with the root directory ('/')" ]
['Total number of blocks on disk', 'Magic number identifying the file system', "Node with the root directory ('/')"]
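The marked answers match what the context describes: one logical superblock per file system, holding metadata about the file system as a whole (magic number, total block count, root inode) rather than per-file or per-directory lists. A minimal sketch; the field names and the magic value are illustrative assumptions, not any real on-disk format:

```python
from dataclasses import dataclass

# Hypothetical superblock layout -- field names are illustrative,
# not any real on-disk format.
@dataclass
class Superblock:
    magic: int          # magic number identifying the file system
    total_blocks: int   # total number of blocks on disk
    inode_count: int    # size of the inode array
    root_inode: int     # inode holding the root directory '/'

EXPECTED_MAGIC = 0xEF53  # assumed value for this sketch

def mount_check(sb: Superblock) -> bool:
    """A mount first validates the magic number before trusting anything else."""
    return sb.magic == EXPECTED_MAGIC

sb = Superblock(magic=0xEF53, total_blocks=64, inode_count=80, root_inode=2)
assert mount_check(sb)
assert not mount_check(Superblock(0x0000, 64, 80, 2))
```

Note that none of the wrong choices (lists of all files, directories, or blocks) appear here: those live in directories, inodes, and the free-space structures, not in the superblock.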
69
In which of the following cases does the TLB need to be flushed?
[ "walks” the page table 49 Translation lookaside buffer (TLB): a cache of recent virtual-address-to-physical-address mappings. Translating a virtual address to a physical address: 1. The MMU first looks up the TLB 2. If TLB hit: the physical address can be used directly 3. Only if TLB miss: the MMU “walks” the page table • TLB misses are expensive (multiple memory accesses) • Locality of reference helps to achieve a high hit rate • TLB entries may become invalid on context switch and change of page tables 50 TLB: memory access cost (number of memory accesses to read/write at a memory location X, i.e., a process accessing a virtual address X): no paging — 1 with a TLB hit, 1 without a TLB; 1-level page table — 1 with a TLB hit, 2 without; 2-level — 1 with, 3 without; 3-level — 1 with, 4 without • Assume we have a 64-bit address space, and all page table levels are cached when a TLB is present (i.e., TLB hit) Key: the TLB is NOT in memory, but rather a special circuit 51 How does the CPU execute a read/write operation? • The CPU issues a load for a virtual address (as part of a memory load/store) • The MMU checks the TLB for the virtual address • TLB miss: the MMU executes a page walk • If the page table entry (PTE) is not present: page fault, switch to the OS, which raises a segfault • If the PTE is present: update the TLB, continue • TLB hit: obtain the physical address, fetch the memory location and return to the CPU • Note: the TLB also checks the protection bits 52 TLB invalidations: page tables are in physical memory ● The PTBR (base register) is a PFN; it changes on context switch, and HW invalidates the TLB when the PTBR changes ● In a multi-level tree, each outer PTE points to the PFN of the next-level entry; the outer PTE holds the PFN used by the CPU, and PTEs change on page faults and other conditions ● The TLB is inside the CPU, a special circuit called a content-addressable memory (CAM); entries are copied from the outer PTE after the (expensive) walk ● The OS must (selectively) invalidate the TLB after changing PTE entries 53 Summary - page tables What is the typical content of the page table? 
• Page table entries (P", "BJ Flush CS 307 – Fall 2018 Lec.08 - Slide 80 TM Recovery u Run from recovery code ROB BR LD/ST queue CS 307 – Fall 2018 Lec.08 - Slide 81 TM Implementations u Several decades worth of research o Originally most done in software o Software overhead for detecting conflicts is high o Many HW and HW/SW hybrids have emerged o See the book by Rajwar & Larus u Real implementations in Intel, IBM CPUs o Keep speculative data in cache hierarchies (not just pipeline) CS 307 – Fall 2018 Lec.08 - Slide 82 Extension: Data in Caches u Recall: we assumed addresses tracked in LSQ o How can we extend that to storing it in the caches and store buffer? u Simple idea: add some bits to mark certain cache lines as speculative o Same coherence mechanism to detect conflicts Cache Tag Coherence State Speculative Bit CS 307 – Fall 2018 Lec.08 - Slide 83 Detection Policy u Check for conflict at every operation o Use coherence actions (e.g., BusRd, BusRdX, BusInv) o Intuition: “I suspect conflicts might happen, so always check to see if one has occurred upon each coherence operation.” u There are other options, TM open research area o See me if you are interested! 
CS 307 – Fall 2018 Lec.08 - Slide 84 A note on Software Transactional Memory u SW for speculation, buffering and detection in o No hardware support o Huge in the academic community o Mostly a research testbed before HTM emerged o Too slow for real-world deployment CS 307 – Fall 2018 Lec.08 - Slide 85 Summary u HW speculation can simplify software problems o Instead of focusing on finest grain locking (hard to program), place it conservatively and HW can elide u HW can enable declarative concurrency control o Transforms implementation (locks) into intention (transactions) o Uses the same hardware as lock elision o Programs can more easily manage concurrency u Open problems: detection, recovery policies", "send [R] to all forall pj wait until either recieve [R, (snj, idj), vj] or suspect pj v = vj with the highest (snj, idj) (sn, id) = highest (snj, idj) send [W, (sn, id), v] to all forall pj wait until either receive [W, (sn, id), ack] or detect [pj] return v At pi T1 : when receive [W] from pj send [W, sn] to pj when receive [R] from pj send [R, (sn, id), vi] to pj T2 : when receive [W, (snj, idj), v] from pj if (snj, idj) > (sn, id) then vi = v (sn, id) = (snj, idj) send [W, (sn, id), ack] to pj when receive [W, (snj, idj), v] from pj if (snj, idj) > (sn, id) then vi = v (sn, id) = (snj, idj) send [W, (sn, id), ack] to pj • From fail-stop to fail-silent – We assume a mojority of correct processes – In the 1-N algorithm, the writer writes in a majority using a timestamp determined locally and the reader selects a value from a majority and then imposes this value on a majority – In the N-N algorithm, the writers determines first the timestamp using a majority Terminating Reliable Broadcast (trb) • Like reliable broadcast, terminating reliable broadcast (TRB) is a communication primitive used to disseminate a message among a set of processes in a reliable way • TRB is however strictly stronger than (uniform) reliable broadcast • Like with reliable broadcast, correct 
processes in TRB agree on the set of messages they deliver • Like with (uniform) reliable broadcast, every correct process in TRB delivers every message delivered by any correct process • Unlike with reliable broadcast, every correct process delivers a message, event if the broadcaster crashes 11 • The", "∗ {\\displaystyle \\ast } Implication by the operation ⇒ {\\displaystyle \\Rightarrow } (which is called the residuum of ∗ {\\displaystyle \\ast } ) Weak conjunction and weak disjunction by the lattice operations ∧ {\\displaystyle \\wedge } and ∨, {\\displaystyle \\vee,} respectively (usually denoted by the same symbols as the connectives, if no confusion can arise) The truth constants zero (top) and one (bottom) by the constants 0 and 1 The equivalence connective is interpreted by the operation ⇔ {\\displaystyle \\Leftrightarrow } defined as x ⇔ y ≡ ( x ⇒ y ) ∧ ( y ⇒ x ) {\\displaystyle x\\Leftrightarrow y\\equiv (x\\Rightarrow y)\\wedge (y\\Rightarrow x)} Due to the prelinearity condition, this definition is equivalent to one that uses ∗ {\\displaystyle \\ast } instead of ∧, {\\displaystyle \\wedge,} thus x ⇔ y ≡ ( x ⇒ y ) ∗ ( y ⇒ x ) {\\displaystyle x\\Leftrightarrow y\\equiv (x\\Rightarrow y)\\ast (y\\Rightarrow x)} Negation is interpreted by the definable operation − x ≡ x ⇒ 0 {\\displaystyle -x\\equiv x\\Rightarrow 0} With this interpretation of connectives, any evaluation ev of propositional variables in L uniquely extends to an evaluation e of all well-formed formulae of MTL, by the following inductive definition (which generalizes Tarski's truth conditions), for any formulae A, B, and any propositional variable p: e ( p ) = e v ( p ) e ( ⊥ ) = 0 e ( <unk> ) = 1 e ( A <unk> B ) = e ( A ) ∗ e ( B ) e ( A → B ) = e ( A ) ⇒ e ( B ) e ( A ∧ B ) = e ( A ) ∧ e ( B ) e ( A ∨ B ) = e ( A ) ∨ e ( B", ", then Γ <unk>t1 : Bool and Γ <unk>t2, t3 : R. 4. If Γ <unk>x : R, then Inversion Lemma: 1. If Γ <unk>true : R, then R = Bool. 2. If Γ <unk>false : R, then R = Bool. 
3. If Γ <unk>if t1 then t2 else t3 : R, then Γ <unk>t1 : Bool and Γ <unk>t2, t3 : R. 4. If Γ <unk>x : R, then x:R ∈Γ. Inversion Lemma: 1. If Γ <unk>true : R, then R = Bool. 2. If Γ <unk>false : R, then R = Bool. 3. If Γ <unk>if t1 then t2 else t3 : R, then Γ <unk>t1 : Bool and Γ <unk>t2, t3 : R. 4. If Γ <unk>x : R, then x:R ∈Γ. 5. If Γ <unk>λx:T1.t2 : R, then Inversion Lemma: 1. If Γ <unk>true : R, then R = Bool. 2. If Γ <unk>false : R, then R = Bool. 3. If Γ <unk>if t1 then t2 else t3 : R, then Γ <unk>t1 : Bool and Γ <unk>t2, t3 : R. 4. If Γ <unk>x : R, then x:R ∈Γ. 5. If Γ <unk>λx:T1.t2 : R, then R = T1→R2 for some R2 with Γ, x:T1 <unk> t2 : R2. Inversion Lemma: 1. If Γ <unk>true : R, then R = Bool. 2. If Γ <unk>false : R, then R = Bool. 3. If Γ <unk>if t1 then t2 else t3 : R, then Γ <unk>t1 : Bool and Γ <unk>t2, t3 : R. 4. If Γ <unk>x : R, then x:R ∈Γ. 5. If Γ <unk>λx:T1.t2 : R, then R = T1→R2 for some R2 with Γ, x:T1 <unk> t2 : R2. 6. If Γ <unk>t1 t2 : R" ]
[ "Inserting a new page into the page table for a user-space application.", "Deleting a page from the page table.", "Changing the read/write permission bit in the page table.", "Inserting a new page into the page table for kernel." ]
['Deleting a page from the page table.', 'Changing the read/write permission bit in the page table.']
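A toy model of why the two marked cases need a flush: the TLB caches page-table entries, so deleting a mapping or tightening its permissions leaves a stale cached entry, while inserting a brand-new mapping merely produces a TLB miss that the page walk resolves. This is a sketch of the caching behavior only, not of any real MMU:

```python
page_table = {}   # VPN -> (PFN, writable); the "in-memory" page table
tlb = {}          # cached subset of page_table

def translate(vpn):
    """MMU: a TLB hit returns the cached entry; a miss walks the page table."""
    if vpn in tlb:
        return tlb[vpn]          # no page walk on a hit
    entry = page_table[vpn]      # "page walk" (KeyError would be a page fault)
    tlb[vpn] = entry
    return entry

# Map VPN 1 and warm the TLB.
page_table[1] = (7, True)
assert translate(1) == (7, True)

# Inserting a NEW page needs no flush -- the first access simply misses.
page_table[2] = (5, False)
assert translate(2) == (5, False)

# Deleting a page WITHOUT a flush: the TLB still serves the stale entry.
del page_table[1]
assert translate(1) == (7, True)   # stale hit -- the bug a flush prevents

# Changing permissions WITHOUT a flush: stale protection bits.
page_table[2] = (5, True)
assert translate(2) == (5, False)  # TLB still claims read-only

# The fix the OS applies after changing or removing a PTE:
tlb.clear()
assert translate(2) == (5, True)
```

The same reasoning covers the kernel case in the choices: kernel pages are typically mapped identically in every address space, so a new kernel mapping is also first seen via an ordinary miss.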
71
Select all valid answers about UNIX-like shell.
[ "le. As all subjects assigned to a role are the same to the system, the system does not have the means to see if there are one or two users to enforce the separation of privilege. Another option, instead of looking at similarity between users, is to look at of permissions that are often needed together to run the system. These permissions are then put together in so‐called groups. Sometimes it makes sense for a subject to belong to a group, but this subject may not be allowed to access one of the resources in the group by the security policy. In this case we can implement what are called negative permissions, which indicate that a particular subject does not have a particular permission on an object. For instance in the example Alice needs access to file2 and file3, so group1 makes sense for her; but she should not read or write on file1. Instead of creating a new group, or breaching the security policy, we can add a negative permission that indicates that Negative permissions should always be tested first. If there is a negative permission, there is no need to check anything else. It guarantees that there is no error when checking and obtaining some positive permission (fail safe: if something is incorrect, the subject cannot access). In UNIX systems principals are users. Each user has an identity UID. There are some reserved UIDs, which we will see in the following slides. Users belong to groups, with identity GID. User accounts are defined in a file /etc/passwd. Each line defines a user as username:password:UID:GID:info:home:shell info is a comment field that can contain some information about the user; home, the absolute path to the directory where the user will appear when they log in; and shell the absolute path to the default shell of the user If users belong to more groups, those appear in the file /etc/group As in any group‐based access control, each group the user belongs to provides new permissions to the user. In UNIX, everything is a file. 
Files are created by users (they can be created by the root user) The system uses Discretionary access control. Each user owns their files and has access to them. UNIX has a very simple way of defining who else has access by defining three groups: owner:", "; and shell the absolute path to the default shell of the user If users belong to more groups, those appear in the file /etc/group As in any group‐based access control, each group the user belongs to provides new permissions to the user. In UNIX, everything is a file. Files are created by users (they can be created by the root user) The system uses Discretionary access control. Each user owns their files and has access to them. UNIX has a very simple way of defining who else has access by defining three groups: owner: the group is formed just by the file owner group: the file’s group other: anyone that is not the owner or on one of the owner’s group(s) The diference between sudo and su is that sudo – executes one action as super user su – changes the current user to another user. If there is no argument, it changes to root, the super user <unk>doing this is very dangerous, as any action you would realize would not undergo security checks To allow users to access systems files and services, there is the suid mechanism that we will see in a couple of slides Permission bits (see next slide) provide permissions for the 3 groups in UNIX: the user, the user’s group,others. 
The three permissions are read, write (modify or delete), execute When the file represents a directory the permissions change semantics: ‐ Read ‐> the user has the right to list the files inside the directory ‐ Write ‐> the user has the right to create a file in the directory (by creating a new file or moving a file ‐ Execute ‐> the user has the right to “move into” the directory, i.e., to execute the command “cd” Besides the 9 permission bits, there can be three attributes: suid/sgid – see slide below sticky bit – it only applies to directories, and it indicates that 1) the directory can only be deleted by the directory owner or the super user 2) files in the directory can only be renamed by the directory owner or the super user The sticky bit is on in /tmp, a folder share by all users, so that only owners or super users can", "for joining files horizontally Strip (Unix) – Shell command for removing non-essential information from executable code files References External links strings – Shell and Utilities Reference, The Single UNIX Specification, Version 5 from The Open Group strings(1) – Plan 9 Programmer's Manual, Volume 1 strings(1) – Inferno General commands Manual", "Brian W. Kernighan 1996 The Software Tools Users Group (Dennis E. Hall, Deborah Scherrer, Joe Sventek) 1995 The Creation of USENET by Jim Ellis, Steven M. Bellovin, and Tom Truscott 1994 Networking Technologies 1993 Berkeley UNIX See also AUUG LISA (conference) Marshall Kirk McKusick LISA SIG: Formerly SAGE (organization) Unix References External links USENIX: The Advanced Computing Systems Association Official USENIX YouTube Channel", "Brian W. Kernighan 1996 The Software Tools Users Group (Dennis E. Hall, Deborah Scherrer, Joe Sventek) 1995 The Creation of USENET by Jim Ellis, Steven M. 
Bellovin, and Tom Truscott 1994 Networking Technologies 1993 Berkeley UNIX See also AUUG LISA (conference) Marshall Kirk McKusick LISA SIG: Formerly SAGE (organization) Unix References External links USENIX: The Advanced Computing Systems Association Official USENIX YouTube Channel" ]
[ "The shell is a program, that runs in user-space.", "The shell is a program, that runs in kernel-space.", "The shell is a program, which reads from standard input.", "The shell is a function inside kernel.", "The shell is the layer, which has to be always used for communicating with kernel.", "The shell must run only in a single instance. Multiple running instances cause memory corruption.", "The shell is a user interface for UNIX-like systems." ]
['The shell is a program, that runs in user-space.', 'The shell is a program, which reads from standard input.', 'The shell is a user interface for UNIX-like systems.']
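Consistent with the marked answers: the shell is an ordinary user-space program that reads commands from standard input and asks the kernel to run them through system calls (fork/exec/wait on UNIX). A minimal sketch using Python's subprocess module, which is itself a user-space wrapper around those calls; it assumes a POSIX system where `true` and `false` exist:

```python
import shlex
import subprocess
import sys

def run_command(line: str) -> int:
    """Parse one input line and run it as a child process.

    The shell never enters kernel space itself; it only crosses into the
    kernel via system calls, which subprocess.run issues on our behalf.
    """
    argv = shlex.split(line)
    if not argv:
        return 0
    return subprocess.run(argv).returncode

def repl(stream=sys.stdin):
    """Read-eval loop: the shell reads commands from standard input."""
    for line in stream:
        if line.strip() == "exit":
            break
        run_command(line)

# Nothing forces a single instance: each shell is just another process.
assert run_command("true") == 0
assert run_command("false") != 0
```

This also shows why the "always used for communicating with kernel" choice is wrong: any program can issue system calls directly; the shell is merely a convenient user interface.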
74
In x86, select all synchronous exceptions?
[ "Automatic mutual exclusion is a parallel computing programming paradigm in which threads are divided into atomic chunks, and the atomic execution of the chunks automatically parallelized using transactional memory. References See also Bulk synchronous parallel", "synchronous\", serial link. If you have an external modem attached to your home or office computer, the chances are that the connection is over an asynchronous serial connection. Its advantage is that it is simple — it can be implemented using only three wires: Send, Receive and Signal Ground (or Signal Common). In an RS-232 interface, an idle connection has a continuous negative voltage applied. A 'zero' bit is represented as a positive voltage difference with respect to the Signal Ground and a 'one' bit is a negative voltage with respect to signal ground, thus indistinguishable from the idle state. This means you need to know when a 'one' bit starts to distinguish it from idle. This is done by agreeing in advance how fast data will be transmitted over a link, then using a start bit to signal the start of a byte — this start bit will be a 'zero' bit. Stop bits are 'one' bits i.e. negative voltage. Actually, more things will have been agreed in advance — the speed of bit transmission, the number of bits per character, the parity and the number of stop bits (signifying the end of a character). So a designation of 9600-8-E-2 would be 9,600 bits per second, with eight bits per character, even parity and two stop bits. A common set-up of an asynchronous serial connection would be 9600-8-N-1 (9,600 bit/s, 8 bits per character, no parity and 1 stop bit) - a total of 10 bits transmitted to send one 8 bit character (one start bit, the 8 bits making up the byte transmitted and one stop bit). 
This is an overhead of 20%, so a 9,600 bit/s asynchronous serial link will not transmit data at 9600/8 bytes per second (1200 byte/s) but actually, in this case 9600/10 bytes per second (960 byte/s), which is considerably slower than expected. It can get worse. If parity is specified and we use 2 stop bits, the overhead for carrying one 8 bit character is 4 bits (one start bit, one parity bit and two stop bits) - or 50%! In this case a 9600 bit/s connection", "– all txn reads will see a consistent snapshot of the database – the txn successfully commits only if no updates it has made conflict with any concurrent updates made since that snapshot. • SI does not guarantee serializability! – SerializableSI: Stronger, more conservative protocol • Implemented in Oracle, MS SQL Server, Postgres. 61 Snapshot isolation • Conceptually, txn works on a copy of the db made at txn start time. – Very expensive à not implemented that way but still expensive. – Guarantees that reads in the txn see a consistent version of the db. • At commit time, verify that the values changed by the transaction have not been changed by other transactions since the snapshot was taken. • Write skew anomaly – Not serializable, but permitted by snapshot isolation! 62 T1: R(X)R(Y) W(X) C T2: R(X)R(Y) W(Y) C Write skew – (more concrete) example 63 [Source: Martin Kleppmann] Discussion • SI is related to optimistic CC, in that – Conceptually, snapshots are created at txn start. – There is an analysis phase at the end to decide whether a transaction may commit (do writesets overlap?). • Multiversion CC is a way to implement (a stronger) snapshot isolation. 64", "one-copy-serializability model. The \"CORBA Fault Tolerant Objects standard\" is based on the virtual synchrony model. 
Virtual synchrony was also used in developing the New York Stock Exchange fault-tolerance architecture, the French Air Traffic Control System, the US Navy AEGIS system, IBM's Business Process replication architecture for WebSphere and Microsoft's Windows Clustering architecture for Windows Longhorn enterprise servers. Systems that support virtual synchrony Virtual synchrony was first supported by Cornell University and was called the \"Isis Toolkit\". Cornell's most current version, Vsync was released in 2013 under the name Isis2 (the name was changed from Isis2 to Vsync in 2015 in the wake of a terrorist attack in Paris by an extremist organization called ISIS), with periodic updates and revisions since that time. The most current stable release is V2.2.2020; it was released on November 14, 2015; the V2.2.2048 release is currently available in Beta form. Vsync aims at the massive data centers that support cloud computing. Other such systems include the Horus system the Transis system, the Totem system, an IBM system called Phoenix, a distributed security key management system called Rampart, the \"Ensemble system\", the Quicksilver system, \"The OpenAIS project\", its derivative the Corosync Cluster Engine and several products (including the IBM and Microsoft ones mentioned earlier). Other existing or proposed protocols Data Distribution Service Pragmatic General Multicast (PGM) QuickSilver Scalable Multicast Scalable Reliable Multicast SMART Multicast Library support JGroups (Java API) Spread: C/C++ API, Java API RMF (C# API) hmbdc open source (headers only) C++ middleware, ultra-low latency/high throughput, scalable and reliable inter-thread, IPC and network messaging References Further reading Reliable Distributed Systems: Technologies, Web Services and Applications. K.P. Birman. Springer Verlag (1997). Textbook, covers a broad spectrum of distributed computing concepts, including virtual synchrony. Distributed Systems: Principles and Paradigms (2nd Edition). 
Andrew S. Tanenbaum, Maarten van Steen (2002). Textbook, covers a broad spectrum of distributed computing concept", "in a blocking state. Upon the completion of the task, the server is notified by a callback. The server unblocks the client and transmits the response back to the client. In case of thread starvation, clients are blocked waiting for threads to become available. See also Asynchronous system Asynchronous circuit" ]
[ "Divide error", "Timer", "Page Fault", "Keyboard" ]
['Divide error', 'Page Fault']
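The distinction behind the marked answers: a synchronous exception is raised by the instruction currently executing (a divide error by the faulting `div`, a page fault by the faulting memory access), while timer and keyboard interrupts arrive asynchronously from devices, unrelated to any particular instruction. A small classification table plus a software analogy; the vector numbers are the conventional x86 assignments, stated here as an assumption:

```python
# x86 exception/interrupt classification -- vector numbers follow the
# conventional assignment (0 = #DE, 14 = #PF); device IRQ vectors vary.
EVENTS = {
    "Divide error": ("synchronous", 0),     # caused by the faulting instruction
    "Page Fault":   ("synchronous", 14),    # caused by the faulting memory access
    "Timer":        ("asynchronous", None), # raised by a device, not an instruction
    "Keyboard":     ("asynchronous", None),
}

def is_synchronous(name: str) -> bool:
    return EVENTS[name][0] == "synchronous"

assert [n for n in EVENTS if is_synchronous(n)] == ["Divide error", "Page Fault"]

# Software analogy: a divide error is delivered at the exact operation that
# caused it -- the "handler" runs synchronously with the faulting operation.
def div(a, b):
    try:
        return a // b
    except ZeroDivisionError:
        return None

assert div(6, 3) == 2
assert div(1, 0) is None
```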
76
Once paging is enabled, does a load instruction / the CR3 register / a page table entry use a virtual or physical address?
[ "3 to physical page 2 • Virtual page 0 to physical page 3 • Virtual page 2 to physical page 5 • Virtual page 1 to physical page 7 16 Page table stores the address-translation information Physical memory OS mem Unused Pg 3 (P1) Pg 0 (P1) 64 B 48 B 32 B 16 B 0 B Unused Pg 2 (P1) Unused Pg 1 (P1) 128 B 112 B 96 B 80 B Frame 0 Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Frame 6 Frame 7 Paging example (4): virtual address translation • Suppose P1 process references a virtual memory location using movl instruction • movl <virtual addr>, %eax 17 Page 0 Page 1 Page 2 Page 3 64 B 48 B 32 B 16 B 0 B Process (P1) (Logical view) Physical memory OS mem Unused Pg 3 (P1) Pg 0 (P1) 64 B 48 B 32 B 16 B 0 B Unused Pg 2 (P1) Unused Pg 1 (P1) 128 B 112 B 96 B 80 B Frame 0 Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Frame 6 Frame 7 Paging example (4): virtual address translation • Suppose P1 process references a virtual memory location using movl instruction • movl <virtual addr>, %eax • Example: movl 21, %eax • Move 4 bytes into %eax register from page 1 of P1 virtual address space 18 Page 0 Page 1 Page 2 Page 3 64 B 48 B 32 B 16 B 0 B Process (P1) (Logical view) Physical memory OS mem Unused Pg 3 (P1) Pg 0 (P1) 64 B 48 B 32 B 16 B 0 B Unused Pg 2 (P1) Unused Pg 1 (P1) 128 B 112 B 96 B 80 B Frame 0 Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Frame 6 Frame 7 Paging example (4): virtual address translation • Suppose P1 process references a virtual memory location using movl instruction • movl <virtual addr>, %eax • Example: movl 21, %eax • Move 4 bytes into %eax register from page 1 of P1 virtual address space 19 OS mem Unused Pg 3 (P1) Pg", "Process (P1) (Virtual address) 1 1 0 1 1 0 1 1 0 1 1 Translation Physical address • The physical page is 7 at which P1 can find the byte for address 21 • Binary value of 7 is 111 Final physical address OS mem Unused Pg 3 (P1) Pg 0 (P1) 64 B 48 B 32 B 16 B 0 B Unused Pg 2 (P1) Unused Pg 1 (P1) 128 B 112 B 96 B 80 B Frame 0 Frame 1 Frame 2 Frame 3 Frame 4 
Frame 5 Frame 6 Frame 7 L06.2 Page Tables CS-202 Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU 26 Page table • A page table stores virtual-to-physical address translations • Stores address translations for each virtual page in the address space • Allows the MMU to know where in physical memory each virtual page resides • Every process has one page table • The page table resides in memory and is managed by the OS • The pointer to the page table is stored in a special register • Also called the page-table base register (PTBR) / %cr3 by Intel • The value of the PTBR is saved and restored in the process control block (PCB) on context switch 27 A page table consists of page table entries Page Table Entry (PTE) ● Index i corresponds to virtual page number (VPN) i ● Contains the page frame number (PFN) ● Additionally contains information bits: • Present bit: indicates whether the address translation is valid • Initially, stack and heap are marked valid, while the rest of the (unused) address space is invalid • Protection bits: a page can have read/write/execute permission • U/S bit: whether a page may be accessible by user-level code (or only kernel code) • Dirty bit: whether a page has been modified • Access/Reference bit: whether a page has been accessed recently; tracks page popularity 28 32-bit Intel PTE format: Present, Protection, User / Supervisor (kernel), Accessed, Dirty, Hardware caching for pages, Page frame number 29 Organizing PTEs 30 Linear page table • Easiest implementation of a page table: the linear page table • MMU
A PAR is used by a single process and is only used for pages which are frequently referenced (though these pages may change as the process's behaviour changes in accordance with the principle of locality). An example computer which made use of PARs is the Atlas. See also Translation Lookaside Buffer (TLB)", "s virtual pages to physical pages. For instance: Program #2 Page 0 →Physical Page 8, Page 1 →Physical Page 9 3. Use the mapping to locate the physical address. In this example, the virtual address resides in Page 0, which maps to Physical Page 8. Combining the physical page base address with the offset yields the final physical memory address. Sumarized, to find the physical address, we need to extract the virtual page number and the page offset from the virtual address. The virtual page number is used to look up the corresponding physical frame number in the page table. Finally, the physical address is computed using the formula: Physical Address = (Physical Frame Number × Page Size) + Page Offset where the page offset is directly derived from the virtual address, and the physical frame number is obtained from the page table. The page size is often a power of 2 (for example, 4 KB = 212), which makes extracting the page offset straightforward as it corresponds to the lower-order bits of the virtual address. 121 CHAPTER 12. PART III(A) - MEMORY HIERARCHY - VIRTUAL MEMORY - W.7.2 12.5.2 Virtual Adress Translation in a Paged MMU In a paged MMU, the virtual address generated by the processor is translated into a physical address using the page table stored in memory. Page Table: The page table, residing in main memory, contains: – Control Bits: Indicate the validity of a page and access permissions. – Physical Page Numbers: Map virtual pages to physical pages. 12.5.3 Memory Allocation is Easy Now Virtual memory systems simplify memory allocation by allowing noncontiguous physical memory to be mapped to contiguous virtual memory addresses. 
This enables efficient utilization of physical memory without requiring large, contiguous blocks. Virtual Memory: Each program operates in its own virtual address space, making it unaware of the physical memory layout. Virtual addresses are mapped to physical addresses using a page table. Physical Memory: Physical memory is divided into fixed-size blocks called pages. Any empty page in physi- cal memory can be allocated to a program’s virtual page. Advantages: – Programs can use noncontiguous memory", "ly protection at the page level The allocated physical memory can be non-contiguous on a page basis, while the virtual address space appears contiguous 7 What is the size of a page A page is the minimal unit of an address space • should be small enough to minimize internal fragmentation (4 KB–16 KB) Super pages • Multiple of page size (2MB, 1GB) • Useful to minimize cost of page translation 8 Paging: memory management scheme Page 0 Page 1 Page 2 Page 3 Page 4 Frame 0 Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Process (Logical view) Physical view • Paging eliminates the need for contiguous physical memory allocation • Logical pages can be mapped to available page frames: OS decides the allocation of frames • OS also maintains the mapping between logical pages and frames, called page table 9 Paging: address representation Virtual address has two components: 10 Paging: address representation Virtual address has two components: Virtual page number Page offset Number of bits specify the number of pages in virtual address (VA) space Number of bits specify the size of the page 11 Paging: virtual address example Virtual address has two components: 20 bits 12 bits 2^20 pages Page size: 2^12 = 4 KiB 12 Paging: address translation • MMU translates from virtual address to physical address; • High order bits designate page number • Low order bits designate offset in a page • Note: size of virtual and physical address may be different 12 Virtual page number (first p bits) Page offset 
(last o bits) Frame number Page offset HW Translation Process (Logical view) Physical view HW: Hardware 13 Paging: accessing a byte 13 Virtual page number (first p bits) Page offset (last o bits) Frame number Page offset HW Translation Process (Logical view) Physical view HW: Hardware • Extract the virtual page number (first p bits) • Map the virtual page number into a frame number using a page table • Extract the offset (last o bits) • Convert to physical memory location: access byte at the offset in the frame Paging example (1): virtual address space • Consider a minuscule virtual address space • 64 by" ]
[ "Physical / Physical / Physical", "Physical / Physical / Virtual", "Virtual / Physical / Physical", "Virtual / Virtual / Virtual", "Virtual / Virtual / Physical" ]
['Virtual / Physical / Physical']
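A toy model of why the answer is Virtual / Physical / Physical: the program's load supplies a virtual address, but CR3 and the frame numbers inside page-table entries must be physical, otherwise the MMU would need a translation just to find the table that defines translations. The sketch uses a single-level table and 16-byte pages (sizes are illustrative, mirroring the 64-byte example in the context where virtual address 21 lands in page 1, frame 7):

```python
PAGE_SIZE = 16  # illustrative; real x86 pages are 4 KiB

# Simulated physical memory: frame number -> 16 "bytes" of content.
phys_mem = {
    0: [3, 7, 5, 2] + [0] * 12,   # frame 0 holds the page table: PTE[vpn] = PFN
    7: list(range(100, 116)),      # a data page living in frame 7
}

cr3 = 0  # CR3 holds the PHYSICAL frame of the page table -- no translation needed

def load(vaddr):
    """Execute a load: the PROGRAM supplies a virtual address."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    pfn = phys_mem[cr3][vpn]       # page walk: read the PTE from physical memory
    return phys_mem[pfn][offset]   # the PTE contains a physical frame number

# Virtual address 21 = page 1, offset 5; PTE[1] = 7, so frame 7, offset 5.
assert load(21) == phys_mem[7][5] == 105
```

If either CR3 or the PTE held virtual addresses, `load` would have to call itself to resolve them, which is exactly the circularity the hardware design avoids.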
78
Which of the following executions of an application are possible on a single-core machine?
[ "machine with limitation CPUs execute an endless stream of instruction All system memory is a contiguous blob that can be accessed using load store instruction by CPUs The disk is a finite set of block Processor Memory Storage Hardware Program Infinite processor Infinite memory Storage manager Network manager Graphical interface Meanwhile from user perspective Multiple program execute on a single machine Running program do not want hardware limitation Also program should work across various hardware Executable file An executable file contains Executable code CPU instruction Data Information manipulated by these instruction Obtained by compiling a program Compiler Compiled program Executable image instruction and data From program to process Running a program creating a process When we run an executable the OS creates a process A process is an instance of an executable What constitutes a process A unique identifier Process ID PID Memory image Code and data static Stack and heap dynamic CPU context register Program counter current operand stack pointer File descriptor pointer to open file and descriptor stack heap data text PC SP x xffffffff Process memory PC Program counter aka IP Instruction pointer SP Stack pointer Logical view of a process memory layout stack heap data text max x xffffffff segment Call stack The heap grows from low address towards higher address Data segment statically known at compile time allocates global variable and data structure Read only text segment contains code and constant This is the program executable machine instruction Temporary data such a function parameter local variable and return address The stack grows from high address towards lower address Heap is used for dynamic memory allocation during program runtime How the OS creates a process stored in the disk Executable image instruction and data Physical memory $./cpu code data heap stack • Memory allocation: Allocate process memory regions (heap and stack) • Initialization: 
Initialize tasks related to IO (setting up STDIN, STDOUT, STDERR) • Ready: OS sets the stage for running the process by transferring the CPU control at beginning of the program’s entry point (e.g., main() function) • Loading: OS loads the static code and data into memory 11 Process ≠ Program A program consists of static code and data, e.g.", "possible with some restrictions. The direct execution is simple: Run the program directly on the CPU without any restrictions! 15 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Limited direct execution enables programs to execute as fast as possible with some restrictions. The direct execution is simple: Run the program directly on the CPU without any restrictions! 16 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list OS cleans up the resources used by the process after it exits Time This approach seems quite straightforward! BUT, there are problems... 17 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Problem #1 Restricted operations How does the OS ensure that a process does not execute/run a privileged code, while running it efficiently? 
18 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does the OS stop running a process and switch to another one, which is required for virtualizing the CPU? 19 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does", "We have already discussed in detail the problems in executing transactions on a single machine However as you can imagine your banking account is not stored solely in one machine ☺ In this week's lecture we will discuss 1) What kind of architectures we can have when we are starting to have multiple instance sharing and inevitable fighting for resources 2) Subsequently we will discuss about concurrency control and how we can make sure that the results of a distributed transaction is durable 3) Then how we can make sure that we are failsafe 4) Eventual consistency or what you have probably heard about the “NoSQL” move... Here are the possible architectures for parallel databases. The first one is the simplest one: essentially you have a machine in a single-box possibly with multiple sockets and cores sharing memory disk network. CPUs have access to common memory address space via a fast interconnect. 
Each processor has a global view of all the in-memory data structures • Each DBMS instance on a processor has to “know” about the other instances Example: Oracle RAC -> network fabric to expose all the (shared) memory to all SQL Server papers on this too All CPUs can access a single logical disk directy via an interconnect, but each have their own private memories. • Can scale execution layer independently from the storage layer. • Have to send messages between CPUs to learn about their current state. Examples: Presto / Impala / NuoDB / Redshift / Hortonworks Stinger / Hbase / Snowflake / Spanner / Aurora HDFS can be thought of as a “shared-disk” backend Each DBMS instance has its own CPU, memory, and disk. Nodes only communicate with each other via network • Easy to increase capacity • Hard to ensure consistency Examples: MongoDB / cassandra / greenplum / memsql / coucbase / vertica AiAS Parallel architectures Distributed transactions - Concurrency control - Commit Replication Eventual Consistency Outline In a single node transaction All data required for the transaction is present on one node. Completing a transaction does not need any coordination with other nodes for such transactions Transaction protocol for the single node is sufficient In a distributed transaction data from multiple nodes are read and/or written.", "a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. booting The procedures implemented in starting up a computer or computer appliance until it can be used. It can be initiated by hardware such as a button press or by a software command. After the power is switched on, the computer is relatively dumb and can read only part of its storage called read-only memory. There, a small program is stored called firmware. It does power-on self-tests and, most importantly, allows access to other types of memory like a hard disk and main memory. 
The firmware loads bigger programs into the computer's main memory and runs it. C callback Any executable code that is passed as an argument to other code that is expected to \"call back\" (execute) the argument at a given time. This execution may be immediate, as in a synchronous callback, or it might happen at a later time, as in an asynchronous callback. central processing unit (CPU) The electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions. The computer industry has used the term \"central processing unit\" at least since the early 1960s. Traditionally, the term \"CPU\" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. character A unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. CI/CD See: continuous integration (CI) / continuous delivery (CD). cipher Also cypher. In cryptography, an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. class In object-oriented programming, an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods). In many language", "ction Multiple Thread u Simple execution hierarchy o GPUs comprised of SMs, which run TBs, which are comprised of warps u CUDA programming o Create a kernel, copy memory, invoke, copy back CS 307 – Fall 2018 Lec.10 - Slide 82 Assignment 4 Suggestions u Programming on GPUs is often tricky o Start early. o Check correctness often. 
u Make it work before you make it fast o Check thread indexing in array edges and corners o If you are using synchronization, be careful! u Next week, we will talk about GPU memory and performance optimizations as well" ]
[ "Concurrent execution", "Parallel execution", "Both concurrent and parallel execution", "Neither concurrent nor parallel execution" ]
['Concurrent execution']
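The answer above reflects that a single core can only interleave tasks (concurrency); true parallelism requires multiple cores. A minimal Python sketch (all names are mine, with cooperative generators standing in for time-sliced processes) of one "core" interleaving two tasks:

```python
def task(name, steps):
    """A toy 'process': yields once per unit of work."""
    for i in range(steps):
        yield f"{name}:{i}"

def run_concurrently(tasks):
    """Round-robin scheduler: a single 'core' interleaves the tasks."""
    log = []
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            log.append(next(t))   # run t for one time slice
            queue.append(t)       # preempt t and requeue it
        except StopIteration:
            pass                  # task finished, drop it
    return log

# Both tasks make progress, but strictly one at a time, never simultaneously:
print(run_concurrently([task("A", 2), task("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaved log is exactly what "concurrent but not parallel" means: at any instant only one task is executing.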
79
Which of the following lock acquisition orders (locks are acquired from left to right), for thread 1 (T1) and thread 2 (T2), will result in a deadlock? Assume that A, B, C, D are lock instances.
[ "about locks mainly from a software perspective o What to lock, and when u In this lecture, we focus on how locks work and their interactions with hardware/OS o Ties back to cache coherence protocol u Because locks require communication... o Use memory locations to implement them! Focus on Implementation CS 307 – Fall 2018 Lec.07 - Slide 11 u Low latency o Processors should be able to acquire free locks quickly u Low traffic o Waiting for lock should generate little/no traffic o A busy lock should be handed off between processors with as little traffic as possible u Scalability o Latency/traffic should scale reasonably with number of processors u Low storage cost u Fairness o Avoid starvation or substantial unfairness o Ideal: processors should acquire lock in order they request access Desirable Lock Characteristics CS 307 – Fall 2018 Lec.07 - Slide 12 u Goal of locking is to force threads to access a location one at a time o This is called mutual exclusion because threads both exclude each other from executing u To implement: use a construct called a lock o Just a memory location where threads communicate wrt. who can execute and who must wait Conceptual Model of Locks Thread 1 Thread 2 Current Holder : - Lock Free CS 307 – Fall 2018 Lec.07 - Slide 13 u Goal of locking is to force threads to access a location one at a time o This is called mutual exclusion because threads both exclude each other from executing u To implement: use a construct called a lock o Just a memory location where threads communicate wrt. 
who can execute and who must wait Conceptual Model of Locks Thread 1 Thread 2 Current Holder : T1 Lock Acquired CS 307 – Fall 2018 Lec.07 - Slide 14 u Simple idea: o If memory location holds 0, lock is free o Store 1 into it to “acquire” the lock, other threads have to wait until the holder stores 0 again u Lock: u Unlock: First Try Creating a Lock while (lock!= 0); lock = 1; lock = 0; CS 307 – Fall 2018 Lec.07 - Slide 15 u Simple idea: o If memory location holds 0, lock is free o Store 1 into it to “acquire” the lock, other threads have to wait until", "graph and do not affect serializability. The wait-for-graph scheme is not applicable to a resource allocation system with multiple instances of each resource type. An arc from a transaction T1 to another transaction T2 represents that T1 waits for T2 to release a lock (i.e., T1 acquired a lock which is incompatible with a previously acquired lock from T2). A lock is incompatible with another if they are on the same object, one is a write, and they are from different transactions. A deadlock occurs in a schedule if and only if there is at least one cycle in the wait-for graph. 
Not every cycle necessarily represents a distinct deadlock instance.", "a memory location where thread communicate wrt who can execute and who must wait Conceptual Model of Locks Thread Thread Current Holder T Lock Acquired CS Fall Lec Slide u Simple idea o If memory location hold lock is free o Store into it to acquire the lock other thread have to wait until the holder store again u Lock u Unlock First Try Creating a Lock while lock lock lock CS Fall Lec Slide u Simple idea o If memory location hold lock is free o Store into it to acquire the lock other thread have to wait until the holder store again u Does this work Hint Think cache coherence First Try Creating a Lock Lock Unlock ld r mem addr load word into r cmp r if store bnz Lock else try again st mem addr st mem addr store to address CS Fall Lec Slide u Simple idea Load and check if lock is o Does this work Hint Think cache coherence o Answer No Instructions from load to store not executed atomically Two core can load believe the lock is free and we lose mutual exclusion First Try Creating a Lock Lock Unlock ld r mem addr load word into r cmp r if store bnz Lock else try again st mem addr st mem addr store to address CS Fall Lec Slide u Create a new instruction test and set TS o t reg mem addr u Atomically load memory location into reg and set content of location to o Exercise Sketch out how to create a lock using TS Need Atomics in Hardware CS Fall Lec Slide u Create a new instruction test and set TS o t reg mem addr u Atomically load memory location into reg and set content of location to o Exercise Sketch out how to create a lock using TS o Answer Need Atomics in Hardware Lock Unlock t r mem addr load word into r bnz Lock if lock obtained fall through critical section st mem addr store to address CS Fall Lec Slide u High level pseudo code o Actual code in assembly u Lock u Unlock Simple TS Lock void Lock int lock while Test and Set lock void Unlock volatile int lock lock CS Fall Lec Slide u x o XCHG 
swap memory value o LOCK prefix Add to ADD ADC AND BTC BTR BTS CMPXCHG CMPXCH B DEC INC NEG NOT OR SBB SUB XOR XADD when destination operand is memory Example TS in x", "A) R(A) S(B) R(B) T2: X(C) R(C) W(C) S(D) R(D) Lock-Based Concurrency Control Two-Phase Locking (2PL) Protocol • Rule 1: Each txn obtains – S (shared) lock before reading – X (exclusive) lock before writing – Sometimes also called read/write locks • Rule 2: A txn cannot request additional locks once it releases any locks. • 2PL allows only schedules whose precedence graph is acyclic => serializable. Strict Two-phase Locking (Strict 2PL) Protocol • Rule 3: All locks released when the txn completes. • Strict 2PL additionally simplifies transaction aborts – (Non-strict) 2PL involves more complex abort processing. 21 Deadlocks • Deadlock: Cycle of transactions waiting for locks to be released by each other. • Two ways of dealing with deadlocks – Deadlock detection: detect and resolve deadlocks when they are created. – Deadlock prevention: never let deadlocks happen. 22 Deadlock Detection • If a lock request cannot be satisfied, the transaction is suspended and must wait until the resource becomes available. • Create a waits-for graph: – Nodes are transactions – Edge from Ti to Tj if Ti is waiting for Tj to release a lock • Periodically check for cycles in the waits-for graph 23 Deadlock Detection (Continued) Example: T1: S(A) R(A) S(B) T2: X(B) W(B) X(C) T3: S(C) R(C) X(A) T4: X(B) T1 T2 T4 T3 T1 T2 T4 T3 24 Deadlock Prevention • Assign priorities based on timestamps. – Earlier timestamp à higher priority • Assume Ti wants a lock that Tj holds. Two policies: – Wait-Die: It Ti has higher priority, Ti waits for Tj. Otherwise Ti aborts – Wound-wait: If Ti has higher priority, Tj aborts. 
Otherwise Ti waits • If a transaction re", "thread int tid omp get thread num int i for i i i omp set lock lock printf T d begin locked region n tid printf T d end locked region n tid omp unset lock lock omp destroy lock lock CS Fall Lec Slide OpenMP Locks Example omp lock t lock omp init lock lock pragma omp parallel num thread int tid omp get thread num int i j for i i i omp set lock lock printf s T d begin locked region n tid printf s T d end locked region n tid omp unset lock lock omp destroy lock lock T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region T begin locked region T end locked region CS Fall Lec Slide u Building fast lock with hardware support o TS TTS LL SC o All of them directly interact with coherence protocol and consistency model too o Good exam understanding prep how to build one type of locking primitive out of another see exercise released today u Barriers an example of point to point synch o Use lock a a part of their implementation o Need to be careful of correctness and traffic Conclusion" ]
[ "T1: A,B,C,D T2: A,B,C,D", "T1: A,D,C,B T2: A,D,C,B", "T1: A,B,C,D T2: D,C,B,A", "T1: A,B,C,D T2: A,B,E,F", "T1: A,B,C,D T2: E,B,A,F" ]
['T1: A,B,C,D T2: D,C,B,A', 'T1: A,B,C,D T2: E,B,A,F']
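The two flagged options are exactly those containing an inverted pair: some shared locks X and Y that T1 acquires in the order X→Y while T2 acquires them Y→X, creating a circular-wait candidate. A small Python sketch (the function name is mine, not from the question) that checks this condition over the five options:

```python
def can_deadlock(t1_order, t2_order):
    """True if the two acquisition orders take some pair of shared locks
    in opposite relative order (a circular-wait candidate)."""
    shared = set(t1_order) & set(t2_order)
    pos1 = {lock: i for i, lock in enumerate(t1_order)}
    pos2 = {lock: i for i, lock in enumerate(t2_order)}
    return any(
        pos1[x] < pos1[y] and pos2[y] < pos2[x]
        for x in shared for y in shared if x != y
    )

# The five options from the question:
print(can_deadlock("ABCD", "ABCD"))  # → False (identical global order)
print(can_deadlock("ADCB", "ADCB"))  # → False (identical global order)
print(can_deadlock("ABCD", "DCBA"))  # → True  (every shared pair inverted)
print(can_deadlock("ABCD", "ABEF"))  # → False (A and B taken in the same order)
print(can_deadlock("ABCD", "EBAF"))  # → True  (A and B taken in opposite orders)
```

This mirrors the standard prevention rule: if all threads follow one global lock order, no circular wait, and hence no deadlock, can arise.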
81
In an x86 multiprocessor system with JOS, select all the correct options. Assume every Env has a single thread.
[ "extra wv2 x2x xalan-java xbill xbitmaps xcb-proto xclip xerces2-java xf86-input-acecad xf86-input-aiptek xf86-input-joystick xf86-input-keyboard xf86-input-mouse xf86-input-synaptics xf86-input-vmmouse xf86-input-void xf86-video-apm xf86-video-ark xf86-video-ast xf86-video-chips xf86-video-cirrus xf86-video-dummy xf86-video-fbdev xf86-video-glint xf86-video-i128 xf86-video-i740 xf86-video-mach64 xf86-video-mga xf86-video-neomagic xf86-video-nv xf86-video-r128 xf86-video-rendition xf86-video-s3 xf86-video-s3virge xf86-video-savage xf86-video-siliconmotion xf86-video-sis xf86-video-sisusb xf86-video-tdfx xf86-video-trident xf86-video-tseng xf86-video-unichrome xf86-video-v4l xf86-video-vesa xf86-video-vmware xf86-video-voodoo xf86-video-xgi xf86-video-xgixp xf86dgaproto xf86vidmodeproto xfce4-taskmanager xfwm4-themes xineramaproto xkeyboard-config xmahjongg xorg-apps xorg-bdftopcf xorg-setxkbmap xorg-xcalc xorg-xcmsdb xorg-xdriinfo xorg-xev xorg-xkbcomp xorg-xkbevd xorg-xlsatoms xorg-xlsclients xorg-xlsfonts xorg-xmodmap xorg-xrefresh xorg-xset xorg", "tion (move a boid, play a game, run web browser) ▶handle mouse and keyboard, draw on screen ▶read/write files, send/receive network packets How does OS do it? ▶preemptive multi-tasking (time-slicing): OS gives a slice of time to each process then interrupts it to give chance to others ▶when there are multiple cores: each core executes a different process at the same time →parallelism Java Virtual Machine (JVM) can start threads, which run on multiple OS processes. When OS schedules those processes on multiple cores, we get parallelism! Operating Systems (OS) Runs Threads on Cores OS sits between programs and machine. A key role: handle processes ▶run user computation (move a boid, play a game, run web browser) ▶handle mouse and keyboard, draw on screen ▶read/write files, send/receive network packets How does OS do it? 
▶preemptive multi-tasking (time-slicing): OS gives a slice of time to each process then interrupts it to give chance to others ▶when there are multiple cores: each core executes a different process at the same time →parallelism Java Virtual Machine (JVM) can start threads, which run on multiple OS processes. When OS schedules those processes on multiple cores, we get parallelism! Example with Threads class MyThread(val k: Int) extends Thread: override def run: Unit = var i", "components on ceramic substrates. x86 series In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software.
The major categories of instructions are: Memory instructions to set and access numbers and strings in random-access memory. Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers. Floating point ALU instructions to perform the primary arithmetic operations on real numbers. Call stack instructions to push and pop words needed to allocate memory and interface with functions. Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data. Changing programming environment VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language. Programming paradigms and languages Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should: express ideas directly in the code. express independent ideas independently. express relationships among ideas directly in the code. combine ideas freely. combine ideas only where combinations make sense. express simple ideas simply. The programming style of a programming language to provide these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate: procedural languages, functional languages, and logical languages. different levels of data abstraction. different levels", "processors u One thread on each vector lane o Force “thread vectors” to run in lock step o Turn lanes off when threads take different code paths u What type of multithreading most effective? o CGMT, FGMT, SMT? 
o Answer: FGMT § Since one instruction controls all ALUs, there’s no horizontal waste to eliminate § Just need to ensure we have enough threads to hide latency CS 307 – Fall 2018 Lec.09 - Slide 59 Multi-threading to the Extreme: Massively threaded vector processors GPUs u One thread on each vector lane o Force “thread vectors” warps to run in lock step o Turn lanes off when threads take different code paths u Use fine grained multithreading to hide stalls o Allow different “thread vectors” warps to share the pipeline CS 307 – Fall 2018 Lec.09 - Slide 60 GPUs u Make cores simple o In order pipelines o No branch prediction u Put many cores in the die o Simple cores are smaller u Run many warps in a core u Take throughput-latency trade-off to extreme o Trillions of integer operations per second o... but, huge single thread latency CS 307 – Fall 2018 Lec.09 - Slide 61 GPUs u Two levels of multithreading o Across pipeline depth and width u Warps: groups of threads that run in lockstep o Multithreading across pipeline width u Multiple warps can share the pipeline o Multithreading across pipeline depth CS 307 – Fall 2018 Lec.09 - Slide 62 Thread divergence u Threads in a warp might execute different paths o Due to branches o... but threads within a warp must execute in lockstep!! u Solution: o execute both paths and disable diverging threads CS 307 – Fall 2018 Lec.09 - Slide 63 Multithreading in GPUs Younger Instructions CS 307 – Fall 2018 Lec.09 - Slide 64 Multithreading in GPUs Warp: group of threads running in lockstep Younger Instructions CS 307 – Fall 2018 Lec.09 - Slide 65 Multithreading in GPUs Diverging threads", "57) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. 
The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips. Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor. The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates. x86 series In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors" ]
[ "Two Envs could run on the same processor simultaneously.", "Two Envs could run on two different processors simultaneously.", "One Env could run on two different processors simultaneously.", "One Env could run on two different processors at different times." ]
['Two Envs could run on two different processors simultaneously.', 'One Env could run on two different processors at different times.']
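The answer encodes two invariants: within one scheduling tick, an Env (having a single thread) never runs on two CPUs at once, yet across ticks it may migrate between CPUs. A toy Python model (not JOS code; all names are mine) of a round-robin multiprocessor scheduler that exhibits both properties:

```python
from collections import deque

def schedule(env_ids, num_cpus, ticks):
    """Each tick, assign distinct Envs to CPUs round-robin.
    Returns a list of per-tick CPU assignments: history[t][cpu] = env."""
    q = deque(env_ids)
    history = []
    for _ in range(ticks):
        # At most one CPU per Env in any given tick (distinct front-of-queue Envs)
        running = [q[i] for i in range(min(num_cpus, len(q)))]
        history.append(running)
        q.rotate(-len(running))   # requeue the Envs that just ran
    return history

h = schedule(["e1", "e2", "e3"], num_cpus=2, ticks=3)
print(h)  # → [['e1', 'e2'], ['e3', 'e1'], ['e2', 'e3']]
# Note e1 runs on CPU 0 at tick 0 and on CPU 1 at tick 1:
# different processors, but at different times — never simultaneously.
```

Checking each tick confirms the invariant: no tick ever lists the same Env on two CPUs, while across ticks every Env appears on both CPU indices.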
86
In JOS, suppose a value is passed between two Envs. What is the minimum number of executed system calls?
[ "-of-sums (POS) form it is opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic. If we have two functions F1 and F2: F 1 = A B + A C + A D, {\\displaystyle F_{1}=AB+AC+AD,\\,} F 2 = A ′ B + A ′ C + A ′ E. {\\displaystyle F_{2}=A'B+A'C+A'E.\\,} The above 2-level representation takes six product terms and 24 transistors in CMOS Rep. A functionally equivalent representation in multilevel can be: P = B + C. F1 = AP + AD. F2 = A'P + A'E. While the number of levels here is 3, the total number of product terms and literals reduce because of the sharing of the term B + C. Similarly, we distinguish between combinational circuits and sequential circuits. Combinational circuits produce their outputs based only on the current inputs. They can be represented by Boolean relations. Some examples are priority encoders, binary decoders, multiplexers, demultiplexers. Sequential circuits produce their output based on both current and past inputs, depending on a clock signal to distinguish the previous inputs from the current inputs. They can be represented by finite state machines. Some examples are flip-flops and counters. Example While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented. Consider the circuit used to represent ( A ∧ B ̄ ) ∨ ( A ̄ ∧ B ) {\\displaystyle (A\\wedge {\\bar {B}})\\vee ({\\bar {A}}\\wedge B)}. It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two", "whose intermediate vertices are in {1,2,.,k} Guessing: does shortest path use vertex k? 
c(k) uv = min(c(k-1) uv,c(k-1) uk +c(k-1) kv ) Initialization:c(0) uv = wuv How do we read off the answer? Runtime? 38 / 63 Floyd-Warshall algorithm A faster dynamic program. Subproblem: c(k) uv = weight of shortest path u →v whose intermediate vertices are in {1,2,.,k} Guessing: does shortest path use vertex k? c(k) uv = min(c(k-1) uv,c(k-1) uk +c(k-1) kv ) Initialization:c(0) uv = wuv How do we read off the answer? Runtime? Time: O(V 3) problems, 2 choices, so O(V 3) 39 / 63 Floyd-Warshall algorithm Initialize: C = (wuv) Main loop: For k from 1 to n: For u ∈V: For v ∈V: if cuv > cuk +ckv: cuv = cuk +ckv Runtime: O(V 3) Can we do better if graph is sparse, i.e. V ≪E? 40 / 63 All-pairs shortest paths Ï Dynamic programming Ï Matrix multiplication Ï Floyd-Warshall algorithm Ï Johnson’s algorithm Ï Difference constraints 41 / 63 Johnson’s algorithm Better than Floyd-Warshall for sparse graphs: O(VE +V 2logV) runtime 42 / 63 Single-source shortest paths Ï Given directed graph G = (V,E), vertex s ∈V, edge weights w : E →R Ï Find δ(s,v) =shortest-path weight s →v for all v ∈V (well-defined if no negative cycles) Situation Algorithm Time unweighted (w = 1) BFS O(V +E) nonneg. edge weights Dijkstra O(E +V logV) general Bellman-Ford O(VE) acyclic graph (DAG) topological sort O(V +E) +1 pass Bell", "\\;\\exp \\left[-2\\left(\\mid z_{1}\\mid ^{2}+\\mid z_{2}\\mid ^{2}\\right)\\right]\\;{\\mathcal {J}}_{0}\\left({\\sqrt {2}}\\;{k\\mid z_{1}-z_{2}\\mid }\\right)=} 1 ( 2 π ) 2 2 n n! ∫ d 2 u 12 d 2 v 12 ∣ u 12 ∣ 2 n exp ⁡ [ − 2 ( ∣ u 12 ∣ 2 + ∣ v 12 ∣ 2 ) ] J 0 ( 2 k ∣ u 12 ∣ ) = {\\displaystyle {1 \\over \\left(2\\pi \\right)^{2}\\;2^{n}\\;n!}\\int d^{2}u_{12}\\;d^{2}v_{12}\\;\\mid u_{12}\\mid ^{2n}\\;\\exp \\left[-2\\left(\\mid u_{12}\\mid ^{2}+\\mid v_{12}\\mid ^{2}\\right)\\right]\\;{\\mathcal {J}}_{0}\\left({2}k\\mid u_{12}\\mid \\right)=} M ( n + 1, 1, − k 2 2 ). 
{\\displaystyle M\\left(n+1,1,-{k^{2} \\over 2}\\right).} The interaction energy has minima for (Figure 1) l n = 1 3, 2 5, 3 7, etc., {\\displaystyle {{\\mathit {l}} \\over n}={1 \\over 3},{2 \\over 5},{3 \\over 7},{\\mbox{etc.,}}} and l n = 2 3, 3 5, 4 7, etc. {\\displaystyle {{\\mathit {l}} \\over n}={2 \\over 3},{3 \\over 5},{4 \\over 7},{\\mbox{", "~a_{i}-q_{i}~~{\\text{ and }}~~(a_{j}+1)-q_{j}~<~q_{j}-a_{j}}.Jefferson's method can be modified to satisfy both quotas, yielding the Quota-Jefferson method. Moreover, any divisor method can be modified to satisfy both quotas. This yields the Quota-Webster method, Quota-Hill method, etc. This family of methods is often called the quatatone methods, as they satisfy both quotas and house-monotonicity. Minimizing pairwise inequality One way to evaluate apportionment methods is by whether they minimize the amount of inequality between pairs of agents. Clearly, inequality should take into account the different entitlements: if a i / t i = a j / t j {\\displaystyle a_{i}/t_{i}=a_{j}/t_{j}} then the agents are treated \"equally\" (w.r.t. to their entitlements); otherwise, if a i / t i > a j / t j {\\displaystyle a_{i}/t_{i}>a_{j}/t_{j}} then agent i {\\displaystyle i} is favored, and if a i / t i < a j / t j {\\displaystyle a_{i}/t_{i}<a_{j}/t_{j}} then agent j {\\displaystyle j} is favored. However, since there are 16 ways to rearrange the equality a i / t i = a j / t j {\\displaystyle a_{i}/t_{i}=a_{j}/t_{j}}, there are correspondingly many ways by which inequality can be defined.: 100–102 | a i / t i − a j / t j | {\\displaystyle |a_{i}/t_{i}-a_{j}/t_{j}|}. Webster's method is the unique apportionment method in which, for each pair of agents i", "either concurrently or consecutively using a common hardware design, an embedded hypervisor can greatly simplify the task. Such drivers and system services can be implemented just once for the virtualized environment; these services are then available to any hosted OS. 
This level of abstraction also allows the embedded developer to implement or change a driver or service in either hardware or software at any point, without this being apparent to the hosted OS. 2. Support for multiple operating systems on a single processor Typically this is used to run a real-time operating system (RTOS) for low-level real-time functionality (such as the communication stack) while at the same time running a general-purpose OS (GPOS), like Linux or Windows, to support user applications, such as a web browser or calendar. The objective might be to upgrade an existing design without the added complexity of a second processor, or simply to minimize the bill of materials (BoM). 3. System security An embedded hypervisor is able to provide secure encapsulation for any subsystem defined by the developer, so that a compromised subsystem cannot interfere with other subsystems. For example, an encryption subsystem needs to be strongly shielded from attack to prevent leaking the information the encryption is supposed to protect. As the embedded hypervisor can encapsulate a subsystem in a VM, it can then enforce the required security policies for communication to and from that subsystem. 4. System reliability The encapsulation of subsystem components into a VM ensures that failure of any subsystem cannot impact other subsystems. This encapsulation keeps faults from propagating from a subsystem in one VM to a subsystem in another VM, improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection. This can be particularly important for embedded device drivers, as this is where the highest density of fault conditions is seen to occur, and is thus the most common cause of OS failure and system instability. It also allows the encapsulation of operating systems that were not necessarily built to the reliability standards demanded of the new system design. 5.
Dynamic update of system software Subsystem software or applications can be securely updated and tested for integrity, by downloading to a" ]
[ "1", "2", "3", "4" ]
['2']
87
What does the strace tool do?
[ "tool: ICSynth by InfoChem Spaya, Software freely available proposed by Iktos", "Examples) scoring function to provide authentic example sentences for specific target words. Results are drawn from a special corpus of high-quality texts covering everyday, standard, formal, and professional language and displayed as a concordance. SKELL also includes simplified versions of Sketch Engine's word sketch and thesaurus functions. It has been suggested that SKELL can be used, for instance, to help students understand the meaning and/or usage of a word or phrase; to help teachers wanting to use example sentences in a class; to discover and explore collocates; to create gap-fill exercises; to teach various kinds of homonyms and polysemous words. SKELL was first presented in 2014, when only English was supported. Later, support was added for Russian, Czech, German, Italian and Estonian. List of text corpora Sketch Engine provides access to more than 700 text corpora. There are monolingual as well as multilingual corpora of different sizes (from one thousand words up to 60 billion words) and various sources (e.g. web, books, subtitles, legal documents). The list of corpora includes British National Corpus, Brown Corpus, Cambridge Academic English Corpus and Cambridge Learner Corpus, CHILDES corpora of child language, OpenSubtitles (a set of 60 parallel corpora), 24 multilingual corpora of EUR-Lex documents, the TenTen Corpus Family (multi-billion web corpora), and Trends corpora (monitor corpora with daily updates). Architecture Sketch Engine consists of three main components: an underlying database management system called Manatee, a web interface search front-end called Bonito, and a web interface for corpus building and management called Corpus Architect. Manatee Manatee is a database management system specifically devised for effective indexing of large text corpora. 
It is based on the idea of inverted indexing (keeping an index of all positions of a given word in the text). It has been used to index text corpora comprising tens of billions of words. Searching corpora indexed by Manatee is performed by formulating queries in the Corpus Query Language (CQL). Manatee is written in C++ and offers an API for a number of other programming languages including Python, Java, Perl and Ruby. Recently, it was rewritten into Go for", "at a depth of 45 metres (148 ft) off Point Glyphadia on the Greek island of Antikythera. The team retrieved numerous large objects, including bronze and marble statues, pottery, unique glassware, jewellery, coins, and the mechanism. The mechanism was retrieved from the wreckage in 1901, probably July. It is unknown how the mechanism came to be on the cargo ship. All of the items retrieved from the wreckage were transferred to the National Museum of Archaeology in Athens for storage and analysis. The mechanism appeared to be a lump of corroded bronze and wood. The bronze had turned into atacamite which cracked and shrank when it was brought up from the shipwreck, changing the dimensions of the pieces. It went unnoticed for two years, while museum staff worked on piecing together more obvious treasures, such as the statues. Upon removal from seawater, the mechanism was not treated, resulting in deformational changes. On 17 May 1902, archaeologist Valerios Stais, together with his cousin, the Greek politician Spyridon Stais, found one of the pieces of rock had a gear wheel embedded in it. He initially believed that it was an astronomical clock, but most scholars considered the device to be prochronistic, too complex to have been constructed during the same period as the other pieces that had been discovered. The German philologist Albert Rehm became interested in the device and was the first to propose that it was an astronomical calculator. 
Investigations into the object lapsed until British science historian and Yale University professor Derek J. de Solla Price became interested in 1951. In 1971, Price and Greek nuclear physicist Charalampos Karakalos made X-ray and gamma-ray images of the 82 fragments. Price published a paper on their findings in 1974. Two other searches for items at the Antikythera wreck site in 2012 and 2015 yielded art objects and a second ship which may, or may not, be connected with the treasure ship on which the mechanism was found. Also found was a bronze disc, embellished with the image of a bull. The disc has four \"ears\" which have holes in them, and it was thought it may have been part", "TRACE is a high-precision orbit determination and orbit propagation program. It was developed by The Aerospace Corporation in El Segundo, California. An early version ran on the IBM 7090 computer in 1964. The Fortran source code can be compiled for any platform with a Fortran compiler. When Satellite Tool Kit's high-precision orbit propagator and parameter and coordinate frame transformations underwent an Independent Verification and Validation effort in 2000, TRACE v2.4.9 was the standard against which STK was compared. As of 2013, TRACE is still used by the U.S. Government and some of its technical contractors.", "95 or 99.995%. Typical areas of usage are: The reliability of computer systems, that is the ratio of uptime to the sum of uptime and downtime. \"Five nines\" reliability in a continuously operated system means an average downtime of no more than approximately five minutes per year (there is no relationship between the number of nines and minutes per year, it is pure coincidence that \"five nines\" relates to five minutes per year.) (See high availability for a chart.) The purity of materials, such as gases and metals. Pain The dol (from the Latin word for pain, dolor) is a unit of measurement for pain. James D. Hardy, Herbert G. 
Wolff, and Helen Goodell of Cornell University proposed the unit based on their studies of pain during the 1940s and 1950s. They defined one dol to equal a just-noticeable difference in pain. The unit never came into widespread use and other methods are now used to assess the level of pain experienced by patients. The Schmidt sting pain index and Starr sting pain index are pain scales rating the relative pain caused by different hymenoptera stings. Schmidt has refined his pain index (with a 1–4 scale) with extensive anecdotal experience, culminating in a paper published in 1990 which classifies the stings of 78 species and 41 genera of Hymenoptera. The Starr sting pain scale uses the same 1–4 scale. Pepper heat ASTA pungency unit The ASTA (American Spice Trade Association) pungency unit is based on a scientific method of measuring chili pepper \"heat\". The technique utilizes high-performance liquid chromatography to identify and measure the concentrations of the various compounds that produce a heat sensation. Scoville units are roughly 1⁄15 the size of pungency units while measuring capsaicin, so a rough conversion is to multiply pungency by 15 to obtain Scoville heat units. Scoville heat unit The Scoville scale is a measure of the hotness of a chili pepper. It is the degree of dilution in sugar water of a specific chili pepper extract when a panel of 5 tasters can no longer detect its \"heat\". Pure capsaicin (the chemical responsible for the \"heat\") has 16 million Scoville heat" ]
[ "It prints out system calls for a given program. These system calls are always called when executing the program.", "It prints out system calls for a given program. These system calls are called only for that particular instance of the program.", "To trace a symlink. I.e. to find where the symlink points to.", "To remove wildcards from the string." ]
['It prints out system calls for a given program. These system calls are called only for that particular instance of the program.']
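The distinction tested above (strace traces one particular run of a program, not every execution system-wide) can be demonstrated directly. This is an illustrative sketch, assuming the strace binary is installed and ptrace is permitted in the environment; the log path is made up.

```python
# strace records the system calls made by one particular run of a
# program -- tracing "ls /tmp" here says nothing about other ls instances.
import shutil
import subprocess

if shutil.which("strace"):
    proc = subprocess.run(
        ["strace", "-o", "/tmp/trace.log", "ls", "/tmp"],
        capture_output=True,
    )
    if proc.returncode == 0:
        log = open("/tmp/trace.log").read()
        print("execve" in log)  # the traced run begins with an execve call
```

Running the same command again produces a fresh trace of the new instance only.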
961
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : PrepHow many (syntactic and lexical) rules does the extended Chomsky Normal Form grammar equivalent to \(G\) contain, if produced as described in the parsing lecture?
[ "(CNF) grammar A CFG is in CNF if all its syntactic rules are of the form: X →X1 X2 where X ∈C\\T and X1, X2 ∈C A context free grammar is in extended Chomsky Normal Form (eCNF) if all its syntactic rules are of the form: X →X1 or X →X1 X2 where X ∈C\\T and X1, X2 ∈C Syntactic parsing: Introduction & CYK Algorithm – 31 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Chomsky normal form: example R1: S → NP VP R1: S → NP VP R2: NP → Det N R2: NP → Det N R3: NP → Det N PNP R3.1: NP → X1 PNP R3.2: X1 → Det N R4: PNP → Prep NP R4: PNP → Prep NP R5: VP → V R6: VP → V NP R6: VP → V NP R7: VP → V NP PNP R7.1: VP → X2 PNP R7.2: X2 → V NP L5: V → ate L5.1: V → ate L5.2: VP → ate increases the number of non-terminals and the number of rules Syntactic parsing: Introduction & CYK Algorithm – 32 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier CYK algorithm: basic principles (2) The algorithmically efficient organization of the computation is based on the following property: if the grammar is in CNF (or in eCNF) the computation of the syntactic interpretations of a sequence W of length l only requires the exploration of all the decompositions of W into exactly two sub-subsequences, each of them corresponding to a cell in a chart. The number of pairs of sub-sequences to explore to compute the interpretations of W is therefore n -1. Idea: put all the", "J.-C. 
Chappelier Context Free Grammars A Context Free Grammar (CFG) G is (in the NLP framework) defined by: ▶a set C of syntactic categories (called \"non-terminals\") ▶a set L of words (called \"terminals\") ▶an element S of C, called the top level category, corresponding to the category identifying complete sentences ▶a proper subset T of C, which defines the morpho-syntactic categories or “Part-of-Speech tags” ▶a set R of rewriting rules, called the syntactic rules, of the form: X →X1 X2.Xn where X ∈C\\T and X1.Xn ∈C ▶a set L of rewriting rules, called the lexical rules, of the form: X →w where X ∈T and w is a word of the language described by G. L is indeed the lexicon Syntactic parsing: Introduction & CYK Algorithm – 17 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier A simplified example of a Context Free Grammar terminals: a, cat, ate, mouse, the PoS tags: N, V, Det non-terminals: S, NP, VP, N, V, Det rules: R1: S→NP VP R2: VP →V R3: VP →V NP R4: NP →Det N lexicon: N →cat Det →the. Syntactic parsing: Introduction & CYK Algorithm – 18 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Syntactically Correct A word sequence is syntactically correct (according to G) ⇐⇒it can be derived from the upper symbol S of G in a finite number of rewriting steps corresponding to the application of rules in G. 
Notation: S ⇒* w1 ... wn. Any sequence of rules corresponding to a possible way of deriving a given sentence W = w1 ... wn is called a
Dependency grammars provide simpler structures (with fewer nodes, one per word, and less depth), but are less rich than phrase-structure grammars. Modern approach: combine both. Syntactic parsing: Introduction & CYK Algorithm – 13 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm © EPFL M. Rajman & J.-C. Chappelier Formal phrase-structure grammars A formal phrase-structure grammar G is defined by: ▶ a finite set C of “non-terminal” symbols (syntactic categories) ▶ a finite set L of “terminal” symbols (words) ▶ the upper-level symbol S ∈ C (the “sentence”) ▶ a finite set R of rewriting rules (syntactic rules), R ⊂ C+ × (C ∪ L)* In the NLP field, the following concepts are also introduced: ▶ lexical rules ▶ pre-terminal symbols or Part-of-Speech tags Syntactic parsing: Introduction & CYK Algorithm – 14 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm © EPFL M. Rajman & J.-C. Chappelier What kind of grammar for NLP? Reminder: Chomsky’s Hierarchy: complexity is related to the shape of the rules. Language class / grammar type / recognizer / complexity: regular (type 3): X → w or X → w Y; FSA; O(n). context-free (type 2, handles embeddings): X → Y1 ... Yn; PDA; O(n^3). context-dependent (type 1, handles crossings): α → β with |α| ≤ |β|; Turing machine; exponential.
recursively enumerable (type 0): α → β; undecidable. embedding: “The bear the dog belonging to the hunter my wife was a friend of bites howls” crossing: “Diamonds, emeralds, amethysts are respectively white, green and purple” Syntactic parsing: Introduction & CYK Algorit
Semantically correct Syntactic parsing Introduction C" ]
[ "the grammar \\(G\\) cannot be converted to extended Chomsky Normal Form", "the grammar \\(G\\) already is in extended Chomsky Normal Form", "11 rules", "31 rules", "48 rules" ]
['11 rules', '31 rules']
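The rule count behind the answer above can be checked mechanically. A hedged sketch (helper names are made up): since eCNF allows unit rules X → X1, only R09 (VP → V NP PNP) needs a new intermediate symbol, giving 11 syntactic rules; each word/tag pair in the lexicon is one lexical rule, giving 20 more, 31 in total.

```python
# Grammar G and lexicon L as listed in the question.
syntactic = [
    ("S", ["NP", "VP"]), ("NP", ["NP0"]), ("NP", ["Det", "NP0"]),
    ("NP0", ["N"]), ("NP0", ["Adj", "N"]), ("NP0", ["NP0", "PNP"]),
    ("VP", ["V"]), ("VP", ["V", "NP"]), ("VP", ["V", "NP", "PNP"]),
    ("PNP", ["Prep", "NP"]),
]
lexicon = {  # word -> PoS tags; one lexical rule per (tag, word) pair
    "a": ["Det"], "blue": ["Adj", "N"], "drink": ["N", "V"],
    "drinks": ["N", "V"], "friends": ["N"], "from": ["Prep"],
    "gave": ["V"], "letter": ["N"], "my": ["Det"], "neighbor": ["N"],
    "nice": ["Adj", "N"], "of": ["Prep"], "postman": ["N"],
    "ran": ["V"], "the": ["Det"], "to": ["Prep"],
}

def binarize(rules):
    # eCNF keeps rules with 1 or 2 RHS symbols; longer RHS get fresh symbols.
    out, fresh = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:  # X -> A B C...  becomes  X' -> A B, X -> X' C...
            fresh += 1
            new = f"X{fresh}"
            out.append((new, rhs[:2]))
            rhs = [new] + rhs[2:]
        out.append((lhs, rhs))
    return out

ecnf = binarize(syntactic)
n_lex = sum(len(tags) for tags in lexicon.values())
print(len(ecnf), n_lex, len(ecnf) + n_lex)  # -> 11 20 31
```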
966
Select the answer that correctly describes the differences between formal and natural languages. 
[ "ClearTalk is a controlled natural language—a kind of a formal language for expressing information that is designed to be both human-readable (being based on English) and easily processed by a computer. Anyone who can read English can immediately read ClearTalk, and the people who write ClearTalk learn to write it while using it. The ClearTalk system itself does most of the training through use: the restrictions are shown by menus and templates and are enforced by immediate syntactic checks. By consistently using ClearTalk for its output, a system reinforces the acceptable syntactic forms. It is used by CODE4, the experimental knowledge management software Ikarus, and by a knowledge base management system Fact Guru. ClearTalk is easily readable by most people who can read English, and requires very little training to write. Databases of information have been written using ClearTalk by a 9-year old human. More than 25,000 facts have been encoded in ClearTalk. ClearTalk allows varying degrees of formality or specificity, allowing the author to choose to leave or remove ambiguity. ClearTalk was created in 1988 and fell out of use about 2006. It is the oldest controlled natural language with a formal representation. Citations References", "In theoretical computer science and formal language theory, a formal language is empty if its set of valid sentences is the empty set. The emptiness problem is the question of determining whether a language is empty given some representation of it, such as a finite-state automaton. For an automaton having n {\\displaystyle n} states, this is a decision problem that can be solved in O ( n 2 ) {\\displaystyle O(n^{2})} time, or in time O ( n + m ) {\\displaystyle O(n+m)} if the automaton has n states and m transitions. However, variants of that question, such as the emptiness problem for non-erasing stack automata, are PSPACE-complete. 
The emptiness problem is undecidable for context-sensitive grammars, a fact that follows from the undecidability of the halting problem. It is, however, decidable for context-free grammars. See also Intersection non-emptiness problem", "✬ ✫ ✩ ✪ Introduction to Natural Language Processing Out of Vocabulary Forms Spelling Error correction Jean-C ́edric Chappelier Jean-Cedric.Chappelier@epfl.ch and Martin Rajman Martin.Rajman@epfl.ch Artificial Intelligence Laboratory LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 1/34 ✬ ✫ ✩ ✪ Contents ➥Out of Vocabulary Forms ➥Spelling Error Correction ✈Edit distance ✈Spelling error correction with FSA ✈Weighted edit distance LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 2/34 ✬ ✫ ✩ ✪ Out of Vocabulary forms • Out of Vocabulary (OoV) forms matter: they occur quite frequently (e.g. <unk>10% in newspapers) What do they consist of? – spelling errors: foget, summmary, usqge,. – neologisms: Internetization, Tacherism,. – borrowings: gestalt, rendez-vous,. – forms difficult to exhaustively lexicalize: (numbers,) proper names, abbreviations,. • identification based on patterns is not well-adapted for all OoV forms ☞We will focus here on spelling errors, neologisms and borrowings LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 3/34 ✬ ✫ ✩ ✪ Spelling errors and neologisms • for spelling errors (resp. neologisms), distortions (resp. derivations) are modelled by transformations, i.e. rewriting rules (sometimes weighted) Example: – Transposition (distortion): XY →YX [1.0] where X and Y stands for variables – tripling (distortion): XX →XXX [1.0] – name derivation: ize:INF →ization:N [1.0] • a given lexicon (regular language) and a set of", "variable is to mark the difference between variables of POJO classes and Rules. 
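The O(n + m) bound mentioned in the emptiness-problem passage above is plain graph reachability: the language of a finite automaton is empty iff no accepting state is reachable from the start state. A minimal sketch (the function name and state encoding are made up for illustration):

```python
from collections import deque

def is_empty(n_states, transitions, start, accepting):
    # transitions: list of (src, dst) pairs; input labels are irrelevant here.
    adj = [[] for _ in range(n_states)]
    for s, d in transitions:
        adj[s].append(d)
    seen, q = {start}, deque([start])
    while q:                      # BFS: O(n + m)
        s = q.popleft()
        if s in accepting:
            return False          # reachable accepting state: non-empty
        for d in adj[s]:
            if d not in seen:
                seen.add(d)
                q.append(d)
    return True

print(is_empty(3, [(0, 1)], 0, {2}))  # state 2 unreachable -> True
```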
See also List of JBoss software Semantic reasoner WildFly References External links Official website", "tterance is secondary vagueness. This utterance (transformation from intrapsychic languages to external communicative languages - it is called a formulation, see the semantic triangle) cannot reveal all the content of the personal intrapsychic cognitive model with all its inherent vagueness. The vagueness contained in the linguistic utterance (of external communication language) is called external vagueness. Linguistically, only external vagueness can be grasped (modeled). We cannot model internal vagueness; it is part of the intrapsychic model, and this vagueness is contained in (vague, emotional, subjective and variable during time) interpretation of constructs (words, sentences) of informal language. This vagueness is hidden for the other human, he can only guess the amount of it. Informal languages, such as natural language, do not make it possible to distinguish between internal and external vagueness strictly, but only with a vague boundary. Fortunately, however, informal languages use appropriate language constructs making meaning a little uncertain (e.g. indeterminate quantifiers POSSIBLY, SEVERAL, MAYBE, etc.). Such quantifiers allow natural language to use external vagueness more strongly and explicitly, thus allowing internal vagueness to be partially shifted up to external vagueness. It is a way to draw the addressee's attention to the vagueness of the message more explicitly and to quantify the vagueness, thus improving understanding in communication using natural language. But the main vagueness of informal languages is the internal vagueness, and the external vagueness serves only as an auxiliary tool. Formal languages, mathematics, formal logic, programming languages (in principle, they must have zero internal vagueness of interpretation of all language constructs, i.e. 
they have exact interpretation) can model external vagueness by tools of vagueness and uncertainty representation: fuzzy sets and fuzzy logic, or by stochastic quantities and stochastic functions, as the exact sciences do. Principle is: If we admit more vagueness (uncertainty), we can gain more information during cognition. See e.g" ]
[ "Formal languages are by construction explicit and non-ambiguous while natural languages are implicit and ambiguous", "Formal languages are by construction implicit and non-ambiguous while natural languages are explicit and ambiguous", "Formal languages are by construction explicit and ambiguous while natural languages are implicit and non-ambiguous" ]
['Formal languages are by construction explicit and non-ambiguous while natural languages are implicit and ambiguous']
972
Which of the following are parameters involved in the choice made by an order-1 HMM model for PoS tagging, knowing that its output is this/Pron is/V a/Det good/Adj question/N and that neither "is" nor "question" can be adjectives, and that "question" can also not be a determiner. (Penalty for wrong ticks.)
[ "s to a k-order Hidden Markov Model (HMM) Part of Speech Tagging – 14 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier (order 1) Hidden Markov Models (HMM) A order-1 HMM is: for PoS-tagging: K a set of states C = {C1,.,Cm} PoS tags T = t(1),.,t(m) K a transition probabilities matrix A: aij = P(Yt+1 = Cj|Yt = Ci), shorten P(Cj|Ci) P(Ti+1|Ti) K an initial probabilities vector I: Ii = P(Y1 = Ci) or P(Yt = Ci|“start”), shorten PI(Ci) P(T1) + a set of “observables” Σ (not necessarily discreate, in general) words L = a(1),.,a(L) + m probability densities on Σ, one for each state (emission probabilities): Bi(o) = P(Xt = o|Yt = Ci) (for o ∈Σ), shortenP(o|Ci) P(w|Ti) HMM will be presented in details in the next lecture Part of Speech Tagging – 15 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier Example: PoS tagging with HMM Sentence to tag: Time flies like an arrow Example of HMM model: K PoS tags: T = {Adj,Adv,Det,N,V,.} K Transition probabilities: P(N|Adj) = 0.1,P(V|N) = 0.3,P(Adv|N) = 0.01,P(Adv|V) = 0.005, P(Det|Adv) = 0.1,P(Det|V) = 0.3,P(N|Det) = 0.5 (plus all the others, such that stochastic constraints are fullfilled) K Initial probabilities: PI(Adj)", "like Adv P Det Adv P an Det P N Det P arrow N e e e e e e e e e e The aim is to choose the most probable tagging among the possible one e g a provided by the lexicon Part of Speech Tagging Introduction PoS tagging with HMMs Formalization order HMM de nition Learning Other model Conclusion c EPFL M Rajman J C Chappelier HMMs HMM advantage well formalized framework ef cient algorithm O Viterbi linear algorithm O n that computes the sequence T n maximizing P T n wn provided the former hypothesis O Baum Welch iterative algorithm for estimating parameter from unsupervised data word only not the corresponding tag 
sequence parameter P w Ti P Tj T j j k PI T Tk Part of Speech Tagging Introduction PoS tagging with HMMs Formalization order HMM de nition Learning Other model Conclusion c EPFL M Rajman J C Chappelier Parameter estimation supervised i e manually tagged text corpus Direct computation Problem of missing data unsupervised i e raw text only no tag Baum Welch Algorithm High initial condition sensitivity Good compromise hybrid method unsupervised learning initialized with parameter from a small supervised learning Part of Speech Tagging Introduction PoS tagging with HMMs Other model Conclusion c EPFL M Rajman J C Chappelier CRF versus HMM linear Conditional Random Fields CRF are a discriminative generalization of the HMMs where feature no longer need to be state conditionnal probability less constraint feature For instance order HMM P T n wn P T P w T n i P wi Ti P Ti Ti T T w w Tn wn CRF P T n wn n i P Ti Ti wn with P Ti Ti wn exp", "1 ) P(T n 1 ) = P(T1)·P(T2|T1)·.·P(Tn|T n-1 1 ) Part of Speech Tagging – 12 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier Probabilistic PoS tagging (3) Hypotheses: – limited lexical conditioning P(wi|w1,.,wi-1,T1,.,Ti,.,Tn) = P(wi|Ti) — limited scope for syntactic dependencies: k neighbors P(Ti|T1,.,Ti-1) = P(Ti|Ti-k,.,Ti-1) (Note: it’s a Markov assumption) Part of Speech Tagging – 13 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. 
Chappelier Probabilistic PoS tagging (4) Therefore: P(w_1^n | T_1^n) = P(w_1|T_1) · ... · P(w_n|T_n) and P(T_1^n) = P(T_1^k) · P(T_{k+1}|T_1,...,T_k) · ... · P(T_n|T_{n-k},...,T_{n-1}), and eventually: P(w_1^n | T_1^n) · P(T_1^n) = P(w_1^k | T_1^k) · P(T_1^k) · ∏_{i=k+1}^{n} P(w_i|T_i) · P(T_i|T_{i-k}^{i-1}). This model corresponds to a k-order Hidden Markov Model (HMM). Part of Speech Tagging – 14 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion © EPFL M. Rajman & J.-C. Chappelier (order-1) Hidden Markov Models (HMM) An order-1 HMM is, for PoS-tagging: a set of states C = {C_1,...,C_m} (PoS tags T = t(1),...,t(m)); a transition probabilities matri
arrow N The aim is to choose the most probable tagging among the possible ones e g as provided by the lexicon Part of Speech Tagging Introduction PoS tagging with HMMs Formalization order HMM definition Learning Other model Conclusion c EPFL M Rajman J C Chappelier HMMs HMM advantage well formalized framework efficient algorithm O Viterbi linear algorithm O n that computes the sequence T n maximizing P T n
order HMM definition Learning Other model Conclusion c EPFL M Rajman J C Chappelier Example PoS tagging with HMM Sent" ]
[ "P(N|question)", "P(question|N)", "P(question|Adj N)", "P(question|N Adj)", "P(this)", "P(this is)", "P(this V)", "P(Pron)", "P(Pron V)", "P(Pron is)", "P(Det|Adj)", "P(Adj|Det)", "P(Adj|V Det)", "P(Adj|Det V)", "P(Det|V Adj)", "P(Det|Pron V)", "P(Adj|a)", "P(question|Adj)" ]
['P(question|N)', 'P(Pron)', 'P(Adj|Det)']
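The selected parameters mix the three kinds of order-1 HMM quantities described in the lecture context: an emission probability P(question|N), an initial probability P(Pron), and a transition probability P(Adj|Det). A minimal sketch of how such parameters are estimated by counting on a supervised (hand-tagged) corpus; the toy corpus below is invented purely for illustration:

```python
from collections import Counter

# Hypothetical tagged corpus (illustration only).
corpus = [
    [("this", "Pron"), ("is", "V"), ("a", "Det"), ("good", "Adj"), ("question", "N")],
    [("a", "Det"), ("question", "N"), ("is", "V"), ("good", "Adj")],
]

emission = Counter()    # counts of (tag, word) pairs
transition = Counter()  # counts of (previous tag, tag) pairs
tag_count = Counter()   # counts of each tag

for sentence in corpus:
    prev = None
    for word, tag in sentence:
        emission[(tag, word)] += 1
        tag_count[tag] += 1
        if prev is not None:
            transition[(prev, tag)] += 1
        prev = tag

def p_word_given_tag(word, tag):
    """MLE estimate of the emission probability P(word | tag)."""
    return emission[(tag, word)] / tag_count[tag]

def p_tag_given_prev(tag, prev):
    """MLE estimate of the transition probability P(tag | prev)."""
    total = sum(c for (p, _), c in transition.items() if p == prev)
    return transition[(prev, tag)] / total

print(p_word_given_tag("question", "N"))  # P(question|N) → 1.0 on this toy corpus
print(p_tag_given_prev("Adj", "Det"))     # P(Adj|Det)    → 0.5 on this toy corpus
```

This is the "supervised, direct computation" estimation route mentioned in the context; unsupervised estimation (Baum-Welch) would be needed when only raw text is available.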
973
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. Some sentences is hard understand to.
[ "convince the Gatekeeper, purely through argumentation, to let him out of the box. Due to the rules of the experiment, he did not reveal the transcript or his successful AI coercion tactics. Yudkowsky subsequently said that he had tried it against three others and lost twice. Overall limitations Boxing an AI could be supplemented with other methods of shaping the AI's capabilities, providing incentives to the AI, stunting the AI's growth, or implementing \"tripwires\" that automatically shut the AI off if a transgression attempt is somehow detected. However, the more intelligent a system grows, the more likely the system would be able to escape even the best-designed capability control methods. In order to solve the overall \"control problem\" for a superintelligent AI and avoid existential risk, boxing would at best be an adjunct to \"motivation selection\" methods that seek to ensure the superintelligent AI's goals are compatible with human survival. All physical boxing proposals are naturally dependent on our understanding of the laws of physics; if a superintelligence could infer physical laws that we are currently unaware of, then those laws might allow for a means of escape that humans could not anticipate and thus could not block. More broadly, unlike with conventional computer security, attempting to box a superintelligent AI would be intrinsically risky as there could be no certainty that the boxing plan will work. Additionally, scientific progress on boxing would be fundamentally difficult because there would be no way to test boxing hypotheses against a dangerous superintelligence until such an entity exists, by which point the consequences of a test failure would be catastrophic. In fiction The 2014 movie Ex Machina features an AI with a female humanoid body engaged in a social experiment with a male human in a confined building acting as a physical \"AI box\". 
Despite being watched by the experiment's organizer, the AI manages to escape by manipulating its human partner to help it, leaving him stranded inside. See also References External links Eliezer Yudkowsky's description of his AI-box experiment, including experimental protocols and suggestions for replication \"Presentation titled 'Thinking inside the box: using and controlling an Oracle AI'\" on", ") ) {\\displaystyle V(y_{i},f(x))} and the l 0 {\\displaystyle \\ell _{0}} \"norm\" as the regularization penalty: min w ∈ R d 1 n ∑ i = 1 n V ( y i, ⟨ w, x i ⟩ ) + λ ‖ w ‖ 0, {\\displaystyle \\min _{w\\in \\mathbb {R} ^{d}}{\\frac {1}{n}}\\sum _{i=1}^{n}V(y_{i},\\langle w,x_{i}\\rangle )+\\lambda \\|w\\|_{0},} where x, w ∈ R d {\\displaystyle x,w\\in \\mathbb {R^{d}} }, and ‖ w ‖ 0 {\\displaystyle \\|w\\|_{0}} denotes the l 0 {\\displaystyle \\ell _{0}} \"norm\", defined as the number of nonzero entries of the vector w {\\displaystyle w}. f ( x ) = ⟨ w, x i ⟩ {\\displaystyle f(x)=\\langle w,x_{i}\\rangle } is said to be sparse if ‖ w ‖ 0 = s < d {\\displaystyle \\|w\\|_{0}=s<d}. Which means that the output Y {\\displaystyle Y} can be described by a small subset of input variables. More generally, assume a dictionary φ j : X → R {\\displaystyle \\phi _{j}:X\\rightarrow \\mathbb {R} } with j = 1,..., p {\\displaystyle j=1,...,p} is given, such that the target function f ( x ) {\\displaystyle f(x)} of a learning problem can be written as: f ( x ) = ∑ j = 1 p φ j ( x ) w j {\\displaystyle f(x)=\\sum _{j=1}^{p}\\phi _{j}(x)w_{j}}, ∀ x", ", it could choose to creatively malfunction in a way that increases the probability that its operators will become lulled into a false sense of security and choose to reboot and then de-isolate the system. 
However, for this eventually to occur, a system would require full understanding of the human mind and psyche contained in its world model for model-based reasoning, a way for empathizing, for instance, using affective computing in order to select the best option, as well as features which would give the system a desire to escape in the first place, in order to decide on such actions. AI-box experiment The AI-box experiment is an informal experiment devised by Eliezer Yudkowsky to attempt to demonstrate that a suitably advanced artificial intelligence can either convince, or perhaps even trick or coerce, a human being into voluntarily \"releasing\" it, using only text-based communication. This is one of the points in Yudkowsky's work aimed at creating a friendly artificial intelligence that when \"released\" would not destroy the human race intentionally or unintentionally. The AI box experiment involves simulating a communication between an AI and a human being to see if the AI can be \"released\". As an actual super-intelligent AI has not yet been developed, it is substituted by a human. The other person in the experiment plays the \"Gatekeeper\", the person with the ability to \"release\" the AI. They communicate through a text interface/computer terminal only, and the experiment ends when either the Gatekeeper releases the AI, or the allotted time of two hours ends. Yudkowsky says that, despite being of human rather than superhuman intelligence, he was on two occasions able to convince the Gatekeeper, purely through argumentation, to let him out of the box. Due to the rules of the experiment, he did not reveal the transcript or his successful AI coercion tactics. Yudkowsky subsequently said that he had tried it against three others and lost twice. 
Overall limitations Boxing an AI could be supplemented with other methods of shaping the AI's capabilities, providing incentives to the AI, stunting the AI's growth, or implementing \"tripwires\" that automatically shut the AI off if a transgression attempt is somehow detected", "a few days at the beach resort in the wildlife sector. Still, many people are dissatisfied, Tegmark writes. Humans have no freedom in shaping their collective destiny. Some want the freedom to have as many children as they want. Others resent surveillance by the AI, or chafe at bans on weaponry and on creating further superintelligence machines. Others may come to regret the choices they have made, or find their lives feel hollow and superficial. Bostrom argues that an AI's code of ethics should ideally improve in certain ways on current norms of moral behavior, in the same way that we regard current morality to be superior to the morality of earlier eras of slavery. In contrast, Ernest Davis of New York University this approach is too dangerous, stating \"I feel safer in the hands of a superintelligence who is guided by 2014 morality, or for that matter by 1700 morality, than in the hands of one that decides to consider the question for itself.\" Gatekeeper AI In \"Gatekeeper\" AI scenarios, the AI can act to prevent rival superintelligences from being created, but otherwise errs on the side of allowing humans to create their own destiny. Ben Goertzel of OpenCog has advocated a \"Nanny AI\" scenario where the AI additionally takes some responsibility for preventing humans from destroying themselves, for example by slowing down technological progress to give time for society to advance in a more thoughtful and deliberate manner. In a third scenario, a superintelligent \"Protector\" AI gives humans the illusion of control, by hiding or erasing all knowledge of its existence, but works behind the scenes to guarantee positive outcomes. 
In all three scenarios, while humanity gains more control (or at least the illusion of control), humanity ends up progressing more slowly than it would if the AI were unrestricted in its willingness to rain down all the benefits and unintended consequences of its advanced technology on the human race. Boxed AI People ask what is the relationship between humans and machines, and my answer is that it's very obvious: Machines are our slaves. The AI box scenario postulates that a superintelligent AI can be \"confined to a box\" and its actions can be restricted by human gatekeeper", "}}r)}. Setting M = I − E † E T {\\displaystyle M=I-E^{\\dagger }E^{T}}, and U = 1 T 11 <unk> {\\displaystyle U={\\frac {1}{T}}\\mathbf {11} ^{\\top }}, the task matrix A † {\\displaystyle A^{\\dagger }} can be parameterized as a function of M {\\displaystyle M} A † ( M ) = ε M U + ε B ( M − U ) + ε ( I − M ) {\\displaystyle A^{\\dagger }(M)=\\epsilon _{M}U+\\epsilon _{B}(M-U)+\\epsilon (I-M)}, with terms that penalize the average, between clusters variance and within clusters variance respectively of the task predictions. M is not convex, but there is a convex relaxation S c = { M ∈ S + T : I − M ∈ S + T ∧ t r ( M ) = r } {\\displaystyle {\\mathcal {S}}_{c}=\\{M\\in S_{+}^{T}:I-M\\in S_{+}^{T}\\land tr(M)=r\\}}. In this formulation, F ( A ) = I ( A ( M ) ∈ { A : M ∈ S C } ) {\\displaystyle F(A)=\\mathbb {I} (A(M)\\in \\{A:M\\in {\\mathcal {S}}_{C}\\})}. Generalizations Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases. Non-separable kernels - Separable kernels are limited, in particular they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these" ]
[ "lexical", "syntactic", "semantic", "pragmatic", "none of the above is correct" ]
['lexical']
975
Select the morpho-syntactic categories that do not carry much semantic content and are thus usually filtered out from indexing.
[ "entre sémantique et syntaxe reste relativement floue11. Ainsi, une description syntaxique est souvent porteuse de sens (voir l'exemple de \"les invités entendaient le bruit de leur fenêtre\"). D'une façon générale, toute analyse syntaxique basée sur un nombre important de classes d'éléments du discours possède inévitablement un caractère sémantique. L'étude de grammaires et d'analyseurs sémantiques est un sujet de recherche actuel en linguistique informatique. Seules des solutions partielles ont pu être obtenues jusqu'à présent (analyse dans un domaine sémantique restreint). 1.2.7 Le niveau pragmatique (ou niveau du discours) Au contraire du sens sémantique, que l’on qualifie souvent d’indépendant du contexte, le sens pragmatique est défini comme dépendant du contexte. Tout ce qui se réfère au contexte, souvent implicite, dans lequel une phrase s’inscrit et à la relation entre le locuteur et de son auditoire, a quelque chose à voir avec la pragmatique12. Son étendue couvre l’étude de sujets tels que les présuppositions, les implications de dialogue, les actes de parole indirects, etc. Elle est malheureusement bien moins développée encore que la sémantique. 11 Stricto sensu, cette distinction apparaît plus clairement lorsque la sémantique est définie comme l’étude des conditions de vérité de propositions logiques. Ce n'est pas notre propos ici. 12 Dans un large mesure, la pragmatique est utilisée pour \"balayer tous les aspects complexes liés à la signification, dont les chercheurs veulent remettre l’examen à plus tard\". Ceci contribue sans aucun doute à la difficulté que l’on éprouve lorsqu’on veut établir la frontière entre sémantique et pragmatique. 27 CHAPITRE 2 MODELISATION LPC ET CODAGE DE LA PAROLE 2.1 Information - Redondance - Variabilité Le signal vocal est caractérisé par une très grande redondance, condition nécessaire pour résister aux perturbations du milieu ambiant. 
Pour aborder la notion de redondance, il faut examiner la parole en tant que vecteur d'information. On peut établir grossièrement une classification de l'information vocale en trois catégories : le sens du message délivré, tel qu'", "In computer science, syntactic noise is syntax within a programming language that makes the programming language more difficult to read and understand for humans and it is considered a code smell. It fills the language with excessive clutter that makes it a hassle to write code. Syntactic noise is considered to be the opposite of syntactic sugar, which is syntax that makes a programming language more readable and enjoyable for the programmer.", "of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams also are prescribed to constrain the semantics. In the social sciences people sometimes use the term semantic network to refer to co-occurrence networks. The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large text and big data. Software tools There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro or the MultiNet paradigm of Hermann Helbig, especially suited for the semantic representation of natural language expressions and used in several NLP applications. Semantic networks are used in specialized information retrieval tasks, such as plagiarism detection. 
They provide information on hierarchical relations in order to employ semantic compression to reduce language diversity and enable the system to match word meanings, independently from sets of words used. The Knowledge Graph proposed by Google in 2012 is actually an application of semantic network in search engine. Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as extracting relations from mediums like text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE (NeurIPS 2013). Applications of embedding knowledge base data include Social network analysis and Relationship extraction. See also Other examples Cognition Network Technology Lexipedia OpenCog Open Mind Common Sense (OMCS) Schema.org Semantic computing SNOMED CT Universal Networking Language (UN", "Semantic audio is the extraction of meaning from audio signals. The field of semantic audio is primarily based around the analysis of audio to create some meaningful metadata, which can then be used in a variety of different ways. Semantic analysis Semantic analysis of audio is performed to reveal some deeper understanding of an audio signal. This typically results in high-level metadata descriptors such as musical chords and tempo, or the identification of the individual speaking, to facilitate content-based management of audio recordings. 
In recent years, the growth of automatic data analysis techniques has grown considerably, Music Information Retrieval Sound recognition Speech segmentation Automatic music transcription Blind source separation Musical similarity Audio indexing, hashing, searching Broadcast Monitoring Musical performance analysis Applications With the development of applications that use this semantic information to support the user in identifying, organising, and exploring audio signals, and interacting with them. These applications include music information retrieval, semantic web technologies, audio production, sound reproduction, education, and gaming. Semantic technology involves some kind of understanding of the meaning of the information it deals with and to this end may incorporate machine learning, digital signal processing, speech processing, source separation, perceptual models of hearing, musicological knowledge, metadata, and ontologies. Aside from audio retrieval and recommendation technologies, the semantics of audio signals are also becoming increasingly important, for instance, in object-based audio coding, as well as intelligent audio editing, and processing. Recent product releases already demonstrate this to a great extent, however, more innovative functionalities relying on semantic audio analysis and management are imminent. These functionalities may utilise, for instance, (informed) audio source separation, speaker segmentation and identification, structural music segmentation, or social and Semantic Web technologies, including ontologies and linked open data. Speech recognition is an important semantic audio application. But for speech, other semantic operations include language identification, speaker identification or gender identification. For more general audio or music, it includes identifying a piece of music (e.g. Shazam (music app)) or a movie soundtrack. 
Areas of research in semantic audio include the ability to label an audio waveform with where the harmonies change and what", "In computer science, semantic knowledge management is a set of practices that seeks to classify content so that the knowledge it contains may be immediately accessed and transformed for delivery to the desired audience, in the required format. This classification of content is semantic in its nature – identifying content by its type or meaning within the content itself and via external, descriptive metadata – and is achieved by employing XML technologies. The specific outcomes of these practices are: Maintain content for multiple audiences together in a single document Transform content into various delivery formats without re-authoring Search for content more effectively Involve more subject-matter experts in the creation of content without reducing quality Reduce production costs for delivery formats Reduce the manual administration of getting the right knowledge to the right people Reduce the cost and time to localize content Notable semantic knowledge management systems Learn eXact Thinking Cap LCMS Thinking Cap LMS Xyleme LCMS iMapping References John Davies; Marko Grobelnik; Dunja Mladenic (2008). Semantic Knowledge Management: Integrating Ontology Management, Knowledge Discovery, and Human Language Technologies. Springer. ISBN 978-3-540-89164-2." ]
[ "Determiners", "Conjunctions", "Nouns", "Adjectives", "Verbs" ]
['Determiners', 'Conjunctions']
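The filtering asked about here can be sketched as a PoS-based stoplist applied at indexing time: determiners and conjunctions are dropped, content words are kept. The mini-lexicon below is hypothetical and hard-coded for illustration:

```python
# Hypothetical mini-lexicon mapping words to PoS tags (illustration only).
LEXICON = {
    "the": "Det", "a": "Det", "and": "Conj", "or": "Conj",
    "cat": "N", "sat": "V", "nice": "Adj", "couch": "N",
}
# Determiners and conjunctions carry little semantic content: filter them out.
FILTERED_TAGS = {"Det", "Conj"}

def index_terms(tokens):
    """Keep only tokens whose tag is not in the filtered categories."""
    return [t for t in tokens if LEXICON.get(t) not in FILTERED_TAGS]

print(index_terms("the nice cat sat and the couch".split()))
# → ['nice', 'cat', 'sat', 'couch']
```

Nouns, adjectives and verbs survive the filter, which is why they are not correct answers here.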
981
Consider the following lexicon \(L\): boy : Adj, N; boys : N; blue : Adj, N; drink : N, V; drinks : N, V; Nice : Adj, N. When using an order-1 HMM model (using \(L\)) to tag the word sequence "Nice boys drink blue drinks", does the tag of drink depend on the tag of Nice?
[ "C. Chappelier Tag sets (1/2) Complexity/Grain of tag set can vary a lot (even for the same language). Original Brown Corpus tagset contains 87 PoS tags (!) For instance, it contains 4 kinds of adjectives: JJ adjective recent, over-all, possible, hard-fought [.] JJR comparative adjective greater, older, further, earlier [.] JJS semantically superlative adjective top, chief, principal, northernmost [.] JJT morphologically superlative adjective best, largest, coolest, calmest [.] Part of Speech Tagging – 8 / 23 Introduction PoS tagging with HMMs Other models Conclusion © EPFL M. Rajman & J.-C. Chappelier Tag sets (2/2) NLTK “universal” tagset is much shorter : 12 tags (from NLTK documentation): Tag Meaning Examples ADJ adjective new, good, high, special, big, local ADP adposition on, of, at, with, by, into, under ADV adverb really, already, still, early, now CONJ conjunction and, or, but, if, while, although DET determiner, article the, a, some, most, every, no, which NOUN noun year, home, costs, time, Africa NUM numeral twenty-four, fourth, 1991, 14:24 PRT particle at, on, out, over per, that, up, with PRON pronoun he, their, her, its, my, I, us VERB verb is, say, told, given, playing, would. punctuation marks., ;! X other ersatz, esprit, dunno, gr8, univeristy Part of Speech Tagging – 9 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion © EPFL M. Rajman & J.-C.
Chappelier Contents ̄ Part-of-Speech Tagging Probabilistic: HMM tagging Part of Speech Tagging – 10 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EP", "Other model Conclusion c EPFL M Rajman J C Chappelier Tag set Complexity Grain of tag set can vary a lot even for the same language Original Brown Corpus tagset contains PoS tag For instance it contains kind of adjective JJ adjective recent over all possible hard fought JJR comparative adjective greater older further earlier JJS semantically superlative adjective top chief principal northernmost JJT morphologically superla tive adjective best largest coolest calmest Part of Speech Tagging Introduction PoS tagging with HMMs Other model Conclusion c EPFL M Rajman J C Chappelier Tag set NLTK universal tagset is much shorter tag from NLTK documentation Tag Meaning Examples ADJ adjective new good high special big local ADP adposition on of at with by into under ADV adverb really already still early now CONJ conjunction and or but if while although DET determiner article the a some most every no which NOUN noun year home cost time Africa NUM numeral twenty four fourth PRT particle at on out over per that up with PRON pronoun he their her it my I u VERB verb is say told given playing would punctuation mark X other ersatz esprit dunno gr univeristy Part of Speech Tagging Introduction PoS tagging with HMMs Formalization order HMM de nition Learning Other model Conclusion c EPFL M Rajman J C Chappelier Contents Part of Speech Tagging Probabilistic HMM tagging Part of Speech Tagging Introduction PoS tagging with HMMs Formalization order HMM de nition Learning Other model Conclusion c EPFL M Rajman J C Chappelier Probabilistic PoS tagging Let wn w wn be a sequence of n word Tagging wn consists in looking a corresponding sequence of Part of Speech PoS tag T n T Tn such that the conditionnal probability P T Tn w wn is maximal Example Sentence to tag Time y 
like an arrow Set of possible PoS tag T Adj Adv Det N V WRB Probabilities to be compared find the maximum P Adj Adj Adj Adj Adj time flies like an arrow P Adj Adj Adj Adj Adv time flies like an arrow", "an/Det arrow/N) = 1.13·10^-11 P(time/Adj flies/N like/V an/Det arrow/N) = 6.75·10^-10 Details of one such computation: P(time/N flies/V like/Adv an/Det arrow/N) = PI(N)·P(time|N)·P(V|N)·P(flies|V)·P(Adv|V)·P(like|Adv)·P(Det|Adv)·P(an|Det)·P(N|Det)·P(arrow|N) = 2e-1·1e-1·3e-1·1e-2·5e-3·5e-3·1e-1·3e-1·5e-1·5e-1 = 1.13·10^-11 The aim is to choose the most probable tagging among the possible ones (e.g. as provided by the lexicon) Part of Speech Tagging – 17 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion © EPFL M. Rajman & J.-C. Chappelier HMMs HMM advantage: well formalized framework, efficient algorithms O Viterbi: linear algorithm (O(n)) that computes the sequence T n 1 maximizing P(T n 1 |wn 1 ) (provided the former hypotheses) O Baum-Welch : iterative algorithm for estimating parameters from unsupervised data (words only, not the corresponding tag sequences) (parameters = P(w|Ti), P(Tj|T j-1 j-k ), PI(T1.Tk)) Part of Speech Tagging – 18 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion © EPFL M. Rajman & J.-C. Chappelier Parameter estimation » supervised (i.e. manually tagged text corpus) Direct computation Problem of missing data » unsupervised (i.e.
raw text only, no tag) Baum-Welch Algorithm High initial conditions sensitivity Good compromise: hybrid methods: unsupervised", "ch ▶the nice neighbor he sat with talked of the cat on the lovely couch ▶the neighbor he sat with talked lovely of the cat on the nice couch ▶the neighbor he sat on talked with the nice couch of the lovely cat Syntactic parsing: Introduction & CYK Algorithm – 6 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier What is acceptable and what is not? A sequence of words can be rejected for several different reasons: ▶the words are not in the “right” order: cat the on sat the couch nice the rules defining what are the acceptable word orders in a given language are called “positional constraints” ▶related word pairs are not matching “right”: cats eats mice the rules defining what are the acceptable word pairs in a given language are called “selectional constraints” (e.g. “agreement rules”) Syntactic parsing: Introduction & CYK Algorithm – 7 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier What is acceptable and what is not? (2) It is not enough for a sequence of words to satisfy all positional and selectional constraints to be acceptable, see Chomsky’s famous example: Colorless green ideas sleep furiously. but the reason is different: the sequence is rejected because it is meaningless; indeed, how can something colorless be green? or a sleep to be furious? As this type of problem is related to meaning, it will not be considered here; we will consider any sequence satisfying all positional and selectional constraints as acceptable; to avoid potential confusion, we will refer to such sequences as “syntactically acceptable”. 
Syntactic parsing: Introduction & CYK Algorithm – 8 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm", "passer d’un état à un autre en N obser- vations28 4.2.4 Problème 4: Probabilité du meilleur chemin de longueur N entre deux états28 4.3 Du modèle de Markov discret au modèle de Markov caché29 5 Modèles de Markov Cachés (HMM) 30 5.1 Définition30 5.2 HMM pour la génération de séquences31 5.3 Estimation de la séquence d’états32 5.4 Modèles HMM autorégressifs33 Reconnaissance de la parole et du locuteur ii 5.5 Modèles HMM pour la classification de séquences34 5.6 Estimation des paramètres HMM34 6 Reconnaissance de la Parole: Schéma-Bloc 36 7 Caractérisation du Signal de Parole 40 8 Reconnaissance de la Parole par DTW 42 8.1 Introduction42 8.2 Reconnaissance de mots isolés43 8.2.1 Déformation temporelle linéaire44 8.2.2 Déformation temporelle dynamique (DTW)45 8.2.3 Normalisation49 8.3 Distances49 8.4 Détection de début et fin de mot50 8.5 Reconnaissance de mots enchaînés51 8.6 Discussion54 9 Reconnaissance de la Parole par HMM 54 9.1 Introduction54 9.2 Approche générale56 9.3 Modèle acoustique HMM58 9.4 Paramétrisation et estimation des probabilités59 9.4.1 Estimation de la vraisemblance “totale”60 9.4.2 Approximation Viterbi: estimation du meilleur chemin63 9.5 Reconnaissance HMM64 10 Entraînement des Modèles HMM 66 10.1 Introduction66 10.2 Entraînement “Avant-Arrière” (Baum-Welch)68 10.2.1 Critère et fonction auxiliaire68 10.2.2 Etape d’estimation: Probabilités a posteriori des variables manquantes 69 10.2.3 Etape de maximisation: Mise à jour des paramètres72 10.2.4 Procédure d’entraînement73 10.3 Entraînement Viterbi74 10.3.1 Algorithme général74 10.3.2 Etape d’estimation74 10.3.3 Etape de maximisation75 10.3.4 Itération76 10.4 Estimateurs de probabilités acoustiques locales77 10.4.1 Distributions discrètes77" ]
[ "yes, because the HMM approach relies on a global maximum.", "no, the hypotheses make the two tags independent from each other." ]
['no, the hypotheses make the two tags independent from each other.']
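A brute-force sketch of why the answer is "no". All probabilities below are arbitrary seeded random values, not taken from any real model: the point is that the conclusion holds for any parameter values, because "boys" can only be tagged N, so the best tags to the right of "boys" are chosen independently of the tag given to "Nice".

```python
import itertools
import random

# Arbitrary, seeded HMM parameters (illustration only).
random.seed(0)
TAGS = ["Adj", "N", "V"]
LEXICON = {"Nice": ["Adj", "N"], "boys": ["N"], "drink": ["N", "V"],
           "blue": ["Adj", "N"], "drinks": ["N", "V"]}
WORDS = ["Nice", "boys", "drink", "blue", "drinks"]

PI = {t: random.random() for t in TAGS}                           # P(T1)
A = {(p, t): random.random() for p in TAGS for t in TAGS}         # P(Ti | Ti-1)
B = {(w, t): random.random() for w in WORDS for t in LEXICON[w]}  # P(w | T)

def best_tags(first_tag):
    """Best tag sequence under the order-1 HMM, with the tag of 'Nice' fixed."""
    best, best_score = None, -1.0
    for tags in itertools.product(*(LEXICON[w] for w in WORDS)):
        if tags[0] != first_tag:
            continue
        score = PI[tags[0]] * B[(WORDS[0], tags[0])]
        for i in range(1, len(WORDS)):
            score *= A[(tags[i - 1], tags[i])] * B[(WORDS[i], tags[i])]
        if score > best_score:
            best, best_score = tags, score
    return best

# The best tag for "drink" (position 2) is the same whichever tag "Nice" gets:
# since "boys" is forced to N, the chain of dependencies is broken at position 1.
print(best_tags("Adj")[2], best_tags("N")[2])
```

Changing the seed (i.e. the parameters) never changes this outcome, only which tag wins.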
987
Select the statements that are true. A penalty will be applied to any incorrect answers selected.
[ "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. See also Artificial intelligence and law Computational law Lawbot Legal expert system Legal informatics Legal technology References External links Official website", "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 73 Orientation penalties End-to-End orientation (along axis of insertion) Rotational Orientation (about axis of insertion) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 74 Handling Sensitivity Index (Pg) Component handling sensitivity Index (Pg) Fragile 0.4 Flexible 0.6 Adherent 0.5 Tangle/severely tangle 0.8/1.5 Severely nest 0.7 Sharp / abrasive 0.3 Hot/Contaminated 0.5 Thin (gripping problem) 0.2 None of the above 0 © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 75 Component Fitting analysis • Definition i i n n f f a i 1 i 1 F A P P = = <unk> <unk> = + + <unk> <unk> <unk> <unk> <unk> <unk> ∑ ∑ Basic fitting index for an ideal design using a given assembly process Insertion penalty Penalty for additional processes on parts in place © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 76 Basic Component Fitting index (AF) Assembly process Index (Af) Insertion only 1 Snap fit 1.3 Screw fastener 4 Rivet fastener 2.5 Clip fastener (plastic bending) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 77 Insertion direction penalty (source: Swift, Booker, ‘Manufacturing process selection handbook’, Butterworth-Heinemann) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 78 Inser", "i}))^{2}+\\lambda \\left\\|f\\right\\|_{\\mathcal {H}}^{2}} and the RKHS allows for expressing this RLS estimator as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} where ( K + n λ I ) c = Y {\\displaystyle (K+n\\lambda I)c=Y} with c = ( c 1,..., c n ) {\\displaystyle c=(c_{1},\\dots,c_{n})}. The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\\displaystyle \\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} can be written as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} such that K c = Y {\\displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y }. 
{\\displaystyle \\left\\{\\min _{f\\in {", ", x ′ ) ) + n log ⁡ 2 π ) {\\displaystyle \\log p(f(x')\\mid \\theta,x)=-{\\frac {1}{2}}\\left(f(x)^{\\mathsf {T}}K(\\theta,x,x')^{-1}f(x')+\\log \\det(K(\\theta,x,x'))+n\\log 2\\pi \\right)} and maximizing this marginal likelihood towards θ provides the complete specification of the Gaussian process f. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. Having specified θ, making predictions about unobserved values ⁠ f ( x ∗ ) {\\displaystyle f(x^{*})} ⁠ at coordinates x* is then only a matter of drawing samples from the predictive distribution p ( y ∗ ∣ x ∗, f ( x ), x ) = N ( y ∗ ∣ A, B ) {\\displaystyle p(y^{*}\\mid x^{*},f(x),x)=N(y^{*}\\mid A,B)} where the posterior mean estimate A is defined as A = K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 f ( x ) {\\displaystyle A=K(\\theta,x^{*},x)K(\\theta,x,x')^{-1}f(x)} and the posterior variance estimate B is defined as: B = K ( θ, x ∗, x ∗ ) − K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 K ( θ, x ∗, x ) T {\\displaystyle B=K(\\theta,x^{*},x^{*})-K(\\theta,x^{" ]
[ "Information retrieval is the selection of documents relevant to a query from an unstructured collection of documents.", "Different IR systems can differ in the way they represent documents, represent queries, and define the relevance measure between documents and queries.", "The vector space model represents documents as vectors derived from the distribution of indexing terms in the document.", "The dimensionality of the vector space does not depend on the size of the indexing vocabulary.", "Use of filters during indexing results in less informative indexes." ]
['Information retrieval is the selection of documents relevant to a query from an unstructured collection of documents.', 'Different IR systems can differ in the way they represent documents, represent queries, and define the relevance measure between documents and queries.', 'The vector space model represents documents as vectors derived from the distribution of indexing terms in the document.']
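The vector space model referenced in the statements above can be sanity-checked with a minimal sketch (the indexing vocabulary and texts below are made up for illustration): documents and queries become term-frequency vectors over a fixed indexing vocabulary, and cosine similarity serves as one common relevance measure. Note that the vector's dimensionality equals the size of the indexing vocabulary.

```python
import math
from collections import Counter

def tf_vector(tokens, vocabulary):
    """Term-frequency vector over a fixed indexing vocabulary.

    One dimension per vocabulary term, so the dimensionality
    of the vector space depends directly on the vocabulary size.
    """
    counts = Counter(tokens)
    return [counts[term] for term in vocabulary]

def cosine(u, v):
    """Cosine similarity, a common relevance measure between
    a query vector and a document vector."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocabulary = ["information", "retrieval", "document", "query", "index"]
doc = tf_vector("information retrieval selects document for a query".split(), vocabulary)
query = tf_vector("document retrieval".split(), vocabulary)

print(len(doc))                      # dimensionality == |vocabulary| == 5
print(round(cosine(doc, query), 3))  # 0.707
```

Other IR systems differ in exactly these choices — document representation, query representation, and relevance measure — which is what the second statement says.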
989
Your aim is to evaluate a movie review analysis system, the purpose of which is to determine whether a review is globally positive or negative. For each movie review, such a system outputs one of the following classes: positive and negative. To perform your evaluation, you collect a large set of reviews and have it annotated by two human annotators. This corpus contains 95% of negative reviews (this 95% ratio is for this first question only and may change in the next questions). What metrics do you think are appropriate to evaluate the system on this corpus? You will get a penalty for wrong ticks.
[ "of product reviews and movie reviews respectively. This work is at the document level. One can also classify a document's polarity on a multi-way scale, which was attempted by Pang and Snyder among others: Pang and Lee expanded the basic task of classifying a movie review as either positive or negative to predict star ratings on either a 3- or a 4-star scale, while Snyder performed an in-depth analysis of restaurant reviews, predicting ratings for various aspects of the given restaurant, such as the food and atmosphere (on a five-star scale). First steps to bringing together various approaches—learning, lexical, knowledge-based, etc.—were taken in the 2004 AAAI Spring Symposium where linguists, computer scientists, and other interested researchers first aligned interests and proposed shared tasks and benchmark data sets for the systematic computational research on affect, appeal, subjectivity, and sentiment in text. Even though in most statistical classification methods, the neutral class is ignored under the assumption that neutral texts lie near the boundary of the binary classifier, several researchers suggest that, as in every polarity problem, three categories must be identified. Moreover, it can be proven that specific classifiers such as the Max Entropy and SVMs can benefit from the introduction of a neutral class and improve the overall accuracy of the classification. There are in principle two ways for operating with a neutral class. Either, the algorithm proceeds by first identifying the neutral language, filtering it out and then assessing the rest in terms of positive and negative sentiments, or it builds a three-way classification in one step. This second approach often involves estimating a probability distribution over all categories (e.g. naive Bayes classifiers as implemented by the NLTK). 
Whether and how to use a neutral class depends on the nature of the data: if the data is clearly clustered into neutral, negative and positive language, it makes sense to filter the neutral language out and focus on the polarity between positive and negative sentiments. If, in contrast, the data are mostly neutral with small deviations towards positive and negative affect, this strategy would make it harder to clearly distinguish between the two poles. A different method for determining sentiment is the use of a", "distinguished three basic approaches to evaluation: ° Mathematical - such as the Matthews Correlation Coefficient, in which both kinds of error are axiomatically treated as equally problematic; ° Cost-benefit - in which a currency is adopted (e.g. money or Quality Adjusted Life Years) and values assigned to errors and successes on the basis of empirical measurement; ° Judgemental - in which a human judgement is made about the relative importance of the two kinds of error; typically this starts by adopting a pair of indicators such as sensitivity and specificity, precision and recall or positive predictive value and negative predictive value. In the judgemental case, he has provided a flow chart for determining which pair of indicators should be used when, and consequently how to choose between the Receiver Operating Characteristic and the Precision-Recall Curve. Evaluation of underlying technologies Often, we want to evaluate not a specific classifier working in a specific way but an underlying technology. Typically, the technology can be adjusted through altering the threshold of a score function, the threshold determining whether the result is a positive or negative. For such evaluations a useful single measure is \"area under the ROC curve\", AUC. Accuracy aside Apart from accuracy, binary classifiers can be assessed in many other ways, for example in terms of their speed or cost. 
Evaluation of probabilistic classifiers Probabilistic classification models go beyond providing binary outputs and instead produce probability scores for each class. These models are designed to assess the likelihood or probability of an instance belonging to different classes. In the context of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. These metrics take into account the probabilistic nature of the classifier's output and provide a more comprehensive assessment of its effectiveness in assigning accurate probabilities to different classes. These evaluation metrics aim to capture the degree of calibration, discrimination, and overall accuracy of the probabilistic classifier's predictions. In information systems Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are", "/ Critical analysis General observations Functional analysis Nomenclature / Taxonomy of parts Identifying the part function Relation between parts Detailed parts manufacturing analysis: • What materials were used? • What manufacturing process was used? • Why these choices? Fabrication cost of the individual parts Assembly cost What are the weaknesses? What other principles could be used? Sustainability? In class March, 9 June, 1 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 9 Criteria for proposing an object for the ‘reverse engineering study’ © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 10 Practical / Grading • 50% of the course final mark • Groups are randomly formed (announced on Friday the 25th of Feb) • Same mark for all group members • 2 hours once every two weeks • Evaluation based on the report • Finalized report is due, June 10th 2022,17:00. © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 11 Object selection constraints & recommendations 1. Motivation / You should be interested in learning how it was made. 2. Complexity / It should require more than one single process to manufacture it, otherwise, you will not have much to say about it. Ideal objects are objects that use multiple types of materials (for instance plastics, metals, etc.), have interesting shapes, require some assembly, etc. 3. Practical aspects to consider / 1. You (or the job-shop) should be able to disassemble it easily (with reasonable tools). 2. Portable so that it can be carried easily: you keep the object with you and bring it in class. 3. Unfortunately, there is no fund for buying the object, so you have to bring your own. 4. Objects do not have to be new (or even functional, as long as we can still explain and analyze how it works). © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 12 Example of objects to avoid. • Printed circuit board, electronics. • Too simple (ex. clothespin) • Single process (ex
Subjective and objective identification, emerging subtasks of sentiment analysis to use syntactic, semantic features, and machine learning knowledge to identify if a sentence or document contains facts or opinions. Awareness of recognizing factual and opinions is not recent, having possibly first presented by Carbonell at Yale University in 1979. The term objective refers to the incident carrying factual information. Example of an objective sentence: 'To be elected president of the United States, a candidate must be at least thirty-five years of age.' The term subjective describes the incident contains non-factual information in various forms, such as personal opinions, judgment, and predictions, also known as 'private states'. In the example down below, it reflects a private states 'We Americans'. Moreover, the target entity commented by the opinions can take several forms from tangible product to intangible topic matters stated in Liu (2010). Furthermore, three types of attitudes were observed by Liu (2010), 1) positive opinions, 2) neutral opinions, and 3) negative opinions. Example of a subjective sentence: 'We Americans need to elect a president who is mature and who is able to make wise decisions.' This analysis is a classification problem. Each class's collections of words or phrase indicators are defined for to locate desirable patterns on unannotated text. For subjective expression, a different word list has been created. Lists of subjective indicators in words or phrases have been developed by multiple researchers in the linguist", "way, in which case spot checks could suffice. The sample sizes are designed to have a high chance of catching even a brief period when a scratch or fleck of paper blocks one sensor of one scanner, or a bug or hack switches votes in one precinct or one contest, if these problems affect enough ballots to change the result. Comparisons can be done ballot-by-ballot or precinct-by-precinct, though the latter is more expensive. 
Categories of audits There are three general types of risk-limiting audits. Depending on the circumstances of the election and the auditing method, different numbers of ballots need to be hand-checked. For example, in a jurisdiction with 64,000 ballots tabulated in batches of 500 ballots each, an 8% margin of victory, and allowing no more than 10% of any mistaken outcomes to go undetected, method 1, ballot comparison, on average, needs 80 ballots, method 2, ballot polling, needs 700 ballots, and method 3, batch comparison, needs 13,000 ballots (in 26 batches). The methods are usually used to check computer counts, but methods 2 and 3 can also be used to check accuracy when the original results were hand-counted. The steps in each type of risk-limiting audit are: Ballot comparison. Election computers provide their interpretation of each ballot (\"cast vote record\"); humans check computers' cast vote records against stored physical ballots in a random sample of ballots; an independent computer tabulates all cast vote records independently of earlier tabulations to get new totals; humans report any differences in interpretations and total tallies. Ballot polling. Humans count a random sample of ballots; humans report any difference between manual percentage for the sample and computer percentage for the election. Batch comparison. Election results provide total for each batch of ballots (e.g. precinct); in a random sample of batches humans hand-count all ballots; for 100% of batches humans check by manual addition or independent computer if the election's initial summation of batches was correct; humans report any difference between original tallies and audit tallies. All methods require: Procedure to re-count all paper ballots more accurately if errors are detected. This is usually planned" ]
[ "Cohen's kappa", "accuracy", "precision", "recall", "standard deviation", "F1-score" ]
['precision', 'recall', 'F1-score']
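The answer's preference for precision, recall and F1 over accuracy can be illustrated numerically: on a corpus that is 95% negative, a system that always answers "negative" reaches 95% accuracy while never identifying a single positive review. A minimal sketch (the toy corpus of 20 reviews is made up for illustration):

```python
def precision_recall_f1(gold, predicted, positive="pos"):
    """Precision, recall and F1 for the positive class."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 95% negative corpus: 19 negative reviews, 1 positive review.
gold = ["neg"] * 19 + ["pos"]

# A trivial system that always answers "negative".
majority = ["neg"] * 20
accuracy = sum(g == p for g, p in zip(gold, majority)) / len(gold)
print(accuracy)                             # 0.95, yet the system is useless
print(precision_recall_f1(gold, majority))  # (0.0, 0.0, 0.0)
```

Precision, recall and F1 on the minority (positive) class expose the failure that plain accuracy hides on such an imbalanced corpus.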
995
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : Prep. If the notation \(T(w)\) is used to refer to the rule \(T \rightarrow w\), which of the following correspond to valid derivations according to the grammar \(G\)? (Penalty for wrong ticks.)
[ "NP Det the N cat VP V PNP F(T) = {the, cat, V, PNP} and lmnt(T) = V ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational Linguistics Course (EPFL-MsCS) M. Rajman J.-C. Chappelier 12/29 ✬ ✫ ✩ ✪ Notations (2) Furthermore, the same notation R will be used for both the rule and the corresponding elementary tree: NP →Det N NP Det N The symbol ◦denotes the internal composition rule on A(G) that returns the tree resulting from the substitution of the left-most non-terminal leave of the left tree by the right tree when it is possible, and ε if not. S NP Det the N VP ◦ N cat = S NP Det the N cat VP For a rule R of R(G), left(R) denotes the left-hand side of R ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational Linguistics Course (EPFL-MsCS) M. Rajman J.-C. Chappelier 13/29 ✬ ✫ ✩ ✪ SCFG Desambiguation: Let G be a Stochastic CFG and W = wn 1 a sentence with several interpretations T1,., Tk according to G. The goal is to choose among the Tis In a standard approach, such a choice is made on semantic/pragmatic criteria In the probabilistic approach, the choice is made according to the probabilities of the Ti trees. 
In other terms, we are looking for: T = Argmax Ti⊃W P(Ti|W) But P(Ti|W) = P (Ti,W ) P (W ) = P (Ti) P (W ) since Ti precisely is a tree that analyses W We are therefore looking for T = Argmax Ti⊃W P(Ti) ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational", "M Rajman J C Chappelier A simpli ed example of a Context Free Grammar terminal a cat ate mouse the PoS tag N V Det non terminal S NP VP N V Det rule R S NP VP R VP V R VP V NP R NP Det N lexicon N cat Det the Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Syntactically Correct A word sequence is syntactically correct according to G it can be derived from the upper symbol S of G in a nite number of rewriting step corresponding to the application of rule in G Notation S w wn Any sequence of rule corresponding to a possible way of deriving a given sentence W w wn is called a derivation of W The set not necessary nite of syntactically correct sequence according to G is by de nition the language recognized by G A elementary rewriting step is noted several consecutive rewriting step with and C L Example if a rule we have X a Y b and Z c then for instance X Y Z aYZ and X Y Z abc Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Example The sequence the cat ate a mouse is syntactically correct according to the former example grammar S R NP VP R Det N VP L the N VP L the cat VP R the cat V NP L the cat ate NP R the cat ate Det N L the cat ate a N L the cat ate a mouse Its derivation is R R L L R L R L L Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Example The sequence ate a mouse the cat is syntactically wrong according to the former example grammar S R NP VP R Det N VP X ate Det N VP Exercise Some colorless green idea sleep furiously 
Syntactically correct Semantically correct Syntactic parsing Introduction C", "J.-C. Chappelier Context Free Grammars A Context Free Grammar (CFG) G is (in the NLP framework) defined by: ▶a set C of syntactic categories (called \"non-terminals\") ▶a set L of words (called \"terminals\") ▶an element S of C, called the top level category, corresponding to the category identifying complete sentences ▶a proper subset T of C, which defines the morpho-syntactic categories or “Part-of-Speech tags” ▶a set R of rewriting rules, called the syntactic rules, of the form: X →X1 X2.Xn where X ∈C\\T and X1.Xn ∈C ▶a set L of rewriting rules, called the lexical rules, of the form: X →w where X ∈T and w is a word of the language described by G. L is indeed the lexicon Syntactic parsing: Introduction & CYK Algorithm – 17 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier A simplified example of a Context Free Grammar terminals: a, cat, ate, mouse, the PoS tags: N, V, Det non-terminals: S, NP, VP, N, V, Det rules: R1: S→NP VP R2: VP →V R3: VP →V NP R4: NP →Det N lexicon: N →cat Det →the. Syntactic parsing: Introduction & CYK Algorithm – 18 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Syntactically Correct A word sequence is syntactically correct (according to G) ⇐⇒it can be derived from the upper symbol S of G in a finite number of rewriting steps corresponding to the application of rules in G. Notation: S ⇒∗w1.wn Any sequence of rules corresponding to a possible way of deriving a given sentence W = w1.wn is called a", "s NPr5 NP0r7 Detr9 a Nr9 barrel PNPr3 Prepr15 with NPr5 NP0r7 Detr9 a Nr12 truck ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational Linguistics Course (EPFL-MsCS) M. Rajman J.-C. 
Chappelier 23/29 ✬ ✫ ✩ ✪ T2: Sr1 NPr5 NP0r7 Detr8 the Nr10 boy VPr4 Vr14 delivers NPr6 NP0r7 Detr9 a Nr11 barrel PNPr3 Prepr15 with NPr5 NP0r7 Detr9 a Nr13 cap ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational Linguistics Course (EPFL-MsCS) M. Rajman J.-C. Chappelier 24/29 ✬ ✫ ✩ ✪ Grammar extraction (2) From the trees present in the corpus, we can extract the context-free grammar G, made of the following 15 rules: rule pi r1: S -> NP VP p1 r2: S -> NP NP PNP p2 r3: PNP -> Prep NP p3 r4: VP -> V NP p4 r5: NP -> NP0 p5 r6: NP -> NP0 PNP p6 r7: NP0 -> Det N p7 rule pi r8: Det -> the p8 r9: Det -> a p9 r10: N -> boy p10 r11: N -> barrel p11 r12: N -> truck p12 r13: N -> cap p13 r14: V -> delivers p14 r15: Prep -> with p15 where the pi denote the probabilities associated with each of the rules ☞How can we estimate them? ÉC OLE PO L Y TEC H NIQ U E FÉ DÉR A LE D E LA USAN NE LIA I&C Computational Linguistics Course (EPFL-MsCS) M. Rajman J.", "N N.. Det. Det The hate dog the cat cat the hate The dog V Det Det N V Det N V Det VP Completion: k k α •··· αX •··· X Y Syntactic parsing: Introduction & CYK Algorithm – 43 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Bottom-up Chart Parsing: Example N N V. Det. Det. cat the The crocodile ate V Det Det.. S S NP NP NP NP NP VP Syntactic parsing: Introduction & CYK Algorithm – 44 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Dealing with compounds Example on how to deal with compouds during initialization phase: N N credit card N V Syntactic parsing: Introduction & CYK Algorithm – 45 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. 
Chappelier Keypoints a Role of syntactic analysis is to recognize a sentence and to produce its structure a Different types of formal grammars, relation between description power and time constraints a CYK algorithm, its principles and complexity Syntactic parsing: Introduction & CYK Algorithm – 46 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier References [1] D. Jurafsky & J. H. Martin, Speech and Language Processing, chap. 12, 13, and 16, Prentice Hall, 2008 (2nd ed.). [2] C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, chap. 3, MIT Press, 2000 [3] N. Indurkhya and F. J. Damerau editors, Handbook of Natural Language Processing, chap. 4, CRC Press, 2010 (2nd edition) Syntactic parsing: Introduction & CYK Algorithm – 47 / 47\", \"lex\":" ]
[ "\\(R_{01}, R_{08}, R_{02}, R_{04}, \\text{N}(\\text{letter}), \\text{V}(\\text{ran}), R_{03}, \\text{Det}(\\text{the}), R_{04}, \\text{N}(\\text{drinks})\\)", "\\(R_{01}, R_{03}, \\text{Det}(\\text{a}), R_{05}, \\text{Adj}(\\text{blue}), \\text{N}(\\text{drink}), R_{07}, \\text{V}(\\text{ran})\\)", "\\(R_{01}, R_{02}, R_{04}, \\text{N}(\\text{friends}), R_{09}, \\text{V}(\\text{gave}), R_{02}, \\text{N}(\\text{postman})\\)" ]
['\\(R_{01}, R_{03}, \\text{Det}(\\text{a}), R_{05}, \\text{Adj}(\\text{blue}), \\text{N}(\\text{drink}), R_{07}, \\text{V}(\\text{ran})\\)']
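A derivation such as the one selected above can be checked mechanically: starting from S, each step rewrites the leftmost non-terminal of the sentential form with the named rule (or lexicon entry). A minimal sketch, assuming the leftmost-rewriting convention used in the course's derivation examples and encoding only the rules of \(G\) that the options need:

```python
# Rules of G needed by the options (left-hand side, right-hand side).
RULES = {
    "R01": ("S", ["NP", "VP"]),
    "R02": ("NP", ["NP0"]),
    "R03": ("NP", ["Det", "NP0"]),
    "R04": ("NP0", ["N"]),
    "R05": ("NP0", ["Adj", "N"]),
    "R07": ("VP", ["V"]),
    "R09": ("VP", ["V", "NP", "PNP"]),
}
NONTERMINALS = {"S", "NP", "NP0", "VP", "PNP", "Det", "Adj", "N", "V", "Prep"}

def apply_step(form, lhs, rhs):
    """Rewrite the leftmost non-terminal of the sentential form;
    return None if it does not match the rule's left-hand side."""
    for i, symbol in enumerate(form):
        if symbol in NONTERMINALS:
            return form[:i] + rhs + form[i + 1:] if symbol == lhs else None
    return None

def derive(steps):
    """Run a derivation from S; each step is a rule name like 'R03'
    or a lexicon application like ('Det', 'a') for Det(a)."""
    form = ["S"]
    for step in steps:
        lhs, rhs = RULES[step] if isinstance(step, str) else (step[0], [step[1]])
        form = apply_step(form, lhs, rhs)
        if form is None:
            return None
    return form

# R01, R03, Det(a), R05, Adj(blue), N(drink), R07, V(ran)
valid = ["R01", "R03", ("Det", "a"), "R05", ("Adj", "blue"),
         ("N", "drink"), "R07", ("V", "ran")]
print(derive(valid))  # ['a', 'blue', 'drink', 'ran']
```

The selected option yields the sentence "a blue drink ran"; the other options fail because some step's left-hand side does not match the leftmost non-terminal of the current form.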
996
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. See also Artificial intelligence and law Computational law Lawbot Legal expert system Legal informatics Legal technology References External links Official website", "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 73 Orientation penalties End-to-End orientation (along axis of insertion) Rotational Orientation (about axis of insertion) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 74 Handling Sensitivity Index (Pg) Component handling sensitivity Index (Pg) Fragile 0.4 Flexible 0.6 Adherent 0.5 Tangle/severely tangle 0.8/1.5 Severely nest 0.7 Sharp / abrasive 0.3 Hot/Contaminated 0.5 Thin (gripping problem) 0.2 None of the above 0 © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 75 Component Fitting analysis • Definition: F = Σ_{i=1}^{n} A_{f_i} + Σ_{i=1}^{n} P_{f_i} + P_a, i.e. the basic fitting index for an ideal design using a given assembly process, plus the insertion penalty, plus the penalty for additional processes on parts in place © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 76 Basic Component Fitting index (AF) Assembly process Index (Af) Insertion only 1 Snap fit 1.3 Screw fastener 4 Rivet fastener 2.5 Clip fastener (plastic bending) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 77 Insertion direction penalty (source: Swift, Booker, ‘Manufacturing process selection handbook’, Butterworth-Heinemann) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 78 Inser", ", x ′ ) ) + n log ⁡ 2 π ) {\\displaystyle \\log p(f(x')\\mid \\theta,x)=-{\\frac {1}{2}}\\left(f(x)^{\\mathsf {T}}K(\\theta,x,x')^{-1}f(x')+\\log \\det(K(\\theta,x,x'))+n\\log 2\\pi \\right)} and maximizing this marginal likelihood towards θ provides the complete specification of the Gaussian process f. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. 
Having specified θ, making predictions about unobserved values ⁠ f ( x ∗ ) {\\displaystyle f(x^{*})} ⁠ at coordinates x* is then only a matter of drawing samples from the predictive distribution p ( y ∗ ∣ x ∗, f ( x ), x ) = N ( y ∗ ∣ A, B ) {\\displaystyle p(y^{*}\\mid x^{*},f(x),x)=N(y^{*}\\mid A,B)} where the posterior mean estimate A is defined as A = K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 f ( x ) {\\displaystyle A=K(\\theta,x^{*},x)K(\\theta,x,x')^{-1}f(x)} and the posterior variance estimate B is defined as: B = K ( θ, x ∗, x ∗ ) − K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 K ( θ, x ∗, x ) T {\\displaystyle B=K(\\theta,x^{*},x^{*})-K(\\theta,x^{", "i}))^{2}+\\lambda \\left\\|f\\right\\|_{\\mathcal {H}}^{2}} and the RKHS allows for expressing this RLS estimator as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} where ( K + n λ I ) c = Y {\\displaystyle (K+n\\lambda I)c=Y} with c = ( c 1,..., c n ) {\\displaystyle c=(c_{1},\\dots,c_{n})}. The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\\displaystyle \\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} can be written as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} such that K c = Y {\\displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y }. {\\displaystyle \\left\\{\\min _{f\\in {" ]
[ "The analyzer functionality of a parser determines the set of all possible associated syntactic structures for any syntactically correct sentence.", "The recognizer functionality of a parser decides if a given sequence of words is syntactically correct or not.", "For a sentence to be acceptable in general, it is sufficient to satisfy the positional and selectional constraints of a given language.", "Determining whether a sentence has a pragmatic meaning depends on the context that is available.", "Syntactic ambiguity has no effect on the algorithmic complexity of parsers." ]
['The analyzer functionality of a parser determines the set of all possible associated syntactic structures for any syntactically correct sentence.', 'The recognizer functionality of a parser decides if a given sequence of words is syntactically correct or not.', 'Determining whether a sentence has a pragmatic meaning depends on the context that is available.']
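The recognizer functionality named in the second statement is exactly what the CYK algorithm from the context passages implements: it decides whether a word sequence is syntactically correct without building the parse trees (the analyzer's job). A minimal sketch, assuming a grammar in Chomsky normal form — here the lecture's toy grammar restricted to its binary rules (the unit rule VP → V is dropped for simplicity):

```python
def cyk_recognize(words, binary_rules, lexicon):
    """CYK recognizer: True iff the word sequence derives from 'S'.

    binary_rules: {(B, C): {A, ...}} for CNF rules A -> B C
    lexicon:      {word: {PoS tags}}
    """
    n = len(words)
    # chart[i][l] = categories spanning words[i : i + l + 1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][0] = set(lexicon.get(w, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for B in chart[i][split - 1]:
                    for C in chart[i + split][length - split - 1]:
                        chart[i][length - 1] |= binary_rules.get((B, C), set())
    return "S" in chart[0][n - 1]

# Toy grammar from the lecture notes (binary rules only, already in CNF).
binary_rules = {
    ("NP", "VP"): {"S"},
    ("V", "NP"): {"VP"},
    ("Det", "N"): {"NP"},
}
lexicon = {"the": {"Det"}, "a": {"Det"}, "cat": {"N"}, "mouse": {"N"}, "ate": {"V"}}

print(cyk_recognize("the cat ate a mouse".split(), binary_rules, lexicon))  # True
print(cyk_recognize("ate a mouse the cat".split(), binary_rules, lexicon))  # False
```

The chart also explains the last statement being absent from the answer: the number of entries per cell grows with ambiguity, so syntactic ambiguity does affect parsing cost.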
1000
The edit distance between “piece” and “peace” is (Penalty for wrong ticks)
[ "limiting pedestrian density during events.", "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "The legal consequences for revenge porn vary from state to state and country to country. For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison, whereas in Malta it is a fine of up to €5,000. The \"Deepfake Accountability Act\" was introduced to the United States Congress in 2019 but died in 2020. It aimed to make the production and distribution of digitally altered visual media that was not disclosed to be such, a criminal offense. The title specifies that making any sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants, may be fined, imprisoned for up to 5 years or both. A newer version of bill was introduced in 2021 which would have required any \"advanced technological false personation records\" to contain a watermark and an audiovisual disclosure to identify and explain any altered audio and visual elements. The bill also includes that failure to disclose this information with intent to harass or humiliate a person with an \"advanced technological false personation record\" containing sexual content \"shall be fined under this title, imprisoned for not more than 5 years, or both.\" However this bill has since died in 2023. In the United Kingdom, the Law Commission for England and Wales recommended reform to criminalise sharing of deepfake pornography in 2022. 
In 2023, the government announced amendments to the Online Safety Bill to that end. The Online Safety Act 2023 amends the Sexual Offences Act 2003 to criminalise sharing intimate images that shows or \"appears to show\" another (thus including deepfake images) without consent. In 2024, the Government announced that an offence criminalising the production of deepfake pornographic images would be included in the Criminal Justice Bill of 2024. The Bill did not pass before Parliament was dissolved before the general election. In South Korea, the creation, distribution, or possession of deepfake pornography is classified as a sex crime, with a mandatory prison sentence between three to seven years as part of the country's Special Act on Sexual Violence Crimes. Controlling the distribution While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content, including securing removal through", "yle \\Theta } is the conditional expectation consensus (CEC) penalty on unlabeled data. The CEC penalty is defined as follows. Let the marginal kernel density for all the data be g m π ( x ) = ⟨ φ m π, ψ m ( x ) ⟩ {\\displaystyle g_{m}^{\\pi }(x)=\\langle \\phi _{m}^{\\pi },\\psi _{m}(x)\\rangle } where ψ m ( x ) = [ K m ( x 1, x ),..., K m ( x L, x ) ] T {\\displaystyle \\psi _{m}(x)=[K_{m}(x_{1},x),\\ldots,K_{m}(x_{L},x)]^{T}} (the kernel distance between the labeled data and all of the labeled and unlabeled data) and φ m π {\\displaystyle \\phi _{m}^{\\pi }} is a non-negative random vector with a 2-norm of 1. The value of Π {\\displaystyle \\Pi } is the number of times each kernel is projected. Expectation regularization is then performed on the MKD, resulting in a reference expectation q m p i ( y | g m π ( x ) ) {\\displaystyle q_{m}^{pi}(y|g_{m}^{\\pi }(x))} and model expectation p m π ( f ( x ) | g m π ( x ) ) {\\displaystyle p_{m}^{\\pi }(f(x)|g_{m}^{\\pi }(x))}. 
Then, we define Θ = 1 Π ∑ π = 1 Π ∑ m = 1 M D ( q m p i ( y | g m π ( x ) ) | | p m π ( f ( x ) | g m π ( x ) ) ) {\\displaystyle \\Theta ={\\frac {1}{\\Pi }}\\sum _{\\pi =1}^{\\Pi }\\sum _{m=1}^", "A peniche (or stand-off) is material inserted between a half-model, often of an airplane, and the wall of a wind tunnel. Péniche is a French nautical term meaning barge. The purpose of the peniche is to remove or reduce the influence of the boundary layer on the half-model. The effect of the peniche itself in fluid dynamics is not fully understood. Half-models are used in wind-tunnel testing in aerodynamics, as larger scale half-models in constant pressure tunnels operate at increased Reynolds numbers closer to those of real aircraft. One trade-off is the interaction between the central part of the half-model and the wall boundary layer. Inserting a peniche between the centre line of the half-model and the wall of the wind tunnel attempts to eliminate or reduce that boundary layer effect by creating distance between the model and the wall. Varying widths and shapes of peniches have been used; a peniche that follows the longitudinal cross section contour of the half-model is the simplest. The peniche itself affects the fluid dynamics around the half-model. It increases the local angle of attack on an inboard wing, while having no influence on an outboard wing. The blocking of the peniche in the flow field leads to further displacement of the flow, which in turn leads to higher flow speeds and local angles of attack. How strong of an effect the peniche has is a function of the angle of attack, with the effect present at all angles." ]
[ "5", "3", "1, if considering insertion and deletion only", "2, if considering insertion and deletion only", "3, if considering insertion and deletion only", "1, if considering insertion, deletion and substitution", "2, if considering insertion, deletion and substitution", "3, if considering insertion, deletion and substitution", "1, if considering insertion, deletion, transposition and substitution", "2, if considering insertion, deletion, transposition and substitution", "3, if considering insertion, deletion, transposition and substitution" ]
['2, if considering insertion and deletion only', '2, if considering insertion, deletion and substitution', '2, if considering insertion, deletion, transposition and substitution']
1010
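The record above (id 1010) asks for an edit distance under different operation sets; all three listed answers reduce to the classic dynamic-programming recurrence. A minimal sketch of Levenshtein distance with insertion, deletion and substitution at unit cost — the word pair used below is a hypothetical illustration, not the pair from the exam record:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance allowing insertion, deletion and substitution (unit costs)."""
    m, n = len(a), len(b)
    # prev[j] holds the distance between a[:i-1] and b[:j]; rolled row by row
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution (or match)
        prev = curr
    return prev[n]

# Hypothetical example pair: delete 'f', insert 'n'
print(levenshtein("flaw", "lawn"))  # 2
```

Restricting to insertion and deletion only amounts to dropping the substitution branch (or pricing it at 2), which is why the same pair can score differently under each operation set.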
Select which statements are true about the CYK algorithm. A penalty will be applied for any incorrect answers.
[ "yle \\Theta } is the conditional expectation consensus (CEC) penalty on unlabeled data. The CEC penalty is defined as follows. Let the marginal kernel density for all the data be g m π ( x ) = ⟨ φ m π, ψ m ( x ) ⟩ {\\displaystyle g_{m}^{\\pi }(x)=\\langle \\phi _{m}^{\\pi },\\psi _{m}(x)\\rangle } where ψ m ( x ) = [ K m ( x 1, x ),..., K m ( x L, x ) ] T {\\displaystyle \\psi _{m}(x)=[K_{m}(x_{1},x),\\ldots,K_{m}(x_{L},x)]^{T}} (the kernel distance between the labeled data and all of the labeled and unlabeled data) and φ m π {\\displaystyle \\phi _{m}^{\\pi }} is a non-negative random vector with a 2-norm of 1. The value of Π {\\displaystyle \\Pi } is the number of times each kernel is projected. Expectation regularization is then performed on the MKD, resulting in a reference expectation q m p i ( y | g m π ( x ) ) {\\displaystyle q_{m}^{pi}(y|g_{m}^{\\pi }(x))} and model expectation p m π ( f ( x ) | g m π ( x ) ) {\\displaystyle p_{m}^{\\pi }(f(x)|g_{m}^{\\pi }(x))}. Then, we define Θ = 1 Π ∑ π = 1 Π ∑ m = 1 M D ( q m p i ( y | g m π ( x ) ) | | p m π ( f ( x ) | g m π ( x ) ) ) {\\displaystyle \\Theta ={\\frac {1}{\\Pi }}\\sum _{\\pi =1}^{\\Pi }\\sum _{m=1}^", "the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. See also Artificial intelligence and law Computational law Lawbot Legal expert system Legal informatics Legal technology References External links Official website", "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. 
In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 73 Orientation penalties End-to-End orientation (along axis of insertion) Rotational Orientation (about axis of insertion) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 74 Handling Sensitivity Index (Pg) Component handling sensitivity Index (Pg) Fragile 0.4 Flexible 0.6 Adherent 0.5 Tangle/severely tangle 0.8/1.5 Severely nest 0.7 Sharp / abrasive 0.3 Hot/Contaminated 0.5 Thin (gripping problem) 0.2 None of the above 0 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 75 Component Fitting analysis • Definition i i n n f f a i 1 i 1 F A P P = = <unk> <unk> = + + <unk> <unk> <unk> <unk> <unk> <unk> ∑ ∑ Basic fitting index for an ideal design using a given assembly process Insertion penalty Penalty for additional processes on parts in place © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 76 Basic Component Fitting index (AF) Assembly process Index (Af) Insertion only 1 Snap fit 1.3 Screw fastener 4 Rivet fastener 2.5 Clip fastener (plastic bending) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 77 Insertion direction penalty (source: Swift, Booker, ‘Manufacturing process selection handbook’, Butterworth-Heinemann) © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 78 Inser", "displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y }. {\\displaystyle \\left\\{\\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}\\left(y_{i}-f(x_{i})\\right)^{2}\\rightarrow \\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}\\left(y_{i}-f(x_{i})\\right)^{2}+\\lambda \\left\\|f\\right\\|_{\\mathcal {H}}^{2}\\right\\}\\equiv {\\biggl \\{}Kc=Y\\rightarrow \\left(K+n\\lambda I\\right)c=Y{\\biggr \\}}.} In this learning setting, the kernel matrix can be decomposed as K = Q Σ Q T {\\displaystyle K=Q\\Sigma Q^{T}}, with σ = diag ⁡ ( σ 1,..., σ n ), σ 1 ≥ σ 2 ≥ ⋯ ≥ σ n ≥ 0 {\\displaystyle \\sigma =\\operatorname {diag} (\\sigma _{1},\\dots,\\sigma _{n}),~\\sigma _{1}\\geq \\sigma _{2}\\geq \\cdots \\geq \\sigma _{n}\\geq 0} and q 1,..., q n {\\displaystyle q_{1},\\dots,q_{n}}" ]
[ "It is a top-down chart parsing algorithm.", "Its time complexity is \\( O(n^3) \\), where \\( n \\) is the length of sequence of words to be parsed.", "Its time complexity decreases when the grammar is regular.", "The Context-Free Grammar used with the CYK algorithm has to be converted into extended Chomsky normal form.", "It not only generates the syntactic interpretations of the sequence to be analyzed but also generates the syntactic interpretations of all the sub-sequences of the sequence to be analyzed." ]
['Its time complexity is \( O(n^3) \), where \( n \) is the length of sequence of words to be parsed.', 'The Context-Free Grammar used with the CYK algorithm has to be converted into extended Chomsky normal form.', 'It not only generates the syntactic interpretations of the sequence to be analyzed but also generates the syntactic interpretations of all the sub-sequences of the sequence to be analyzed.']
1011
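The correct answers for record 1011 note that CYK runs in \( O(n^3) \), requires a grammar in Chomsky normal form, and fills in analyses for every sub-sequence, not just the full sentence. A minimal membership-test sketch over a toy CNF grammar (the grammar itself is hypothetical, chosen only to make the chart small):

```python
from itertools import product

# Toy grammar in Chomsky normal form (hypothetical, for illustration):
# `unary` maps a terminal to the non-terminals that produce it,
# `binary` maps a pair of non-terminals to the non-terminals rewriting to it.
unary = {"bears": {"N", "V"}, "fish": {"N", "V"}}
binary = {("N", "V"): {"S"}}

def cyk(words):
    n = len(words)
    # chart[i][l-1] = set of non-terminals deriving the sub-sequence words[i:i+l]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][0] = set(unary.get(w, ()))
    for span in range(2, n + 1):           # length of the sub-sequence
        for i in range(n - span + 1):      # its start position
            for k in range(1, span):       # split point
                for A, B in product(chart[i][k - 1], chart[i + k][span - k - 1]):
                    chart[i][span - 1] |= binary.get((A, B), set())
    return "S" in chart[0][n - 1]

print(cyk(["bears", "fish"]))  # True
```

The three nested loops over span, start and split point give the \( O(n^3) \) bound, and every cell `chart[i][l-1]` is exactly the set of syntactic interpretations of one sub-sequence.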
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. See also Artificial intelligence and law Computational law Lawbot Legal expert system Legal informatics Legal technology References External links Official website", "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 73 Orientation penalties End-to-End orientation (along axis of insertion) Rotational Orientation (about axis of insertion) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 74 Handling Sensitivity Index (Pg) Component handling sensitivity Index (Pg) Fragile 0.4 Flexible 0.6 Adherent 0.5 Tangle/severely tangle 0.8/1.5 Severely nest 0.7 Sharp / abrasive 0.3 Hot/Contaminated 0.5 Thin (gripping problem) 0.2 None of the above 0 © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 75 Component Fitting analysis • Definition i i n n f f a i 1 i 1 F A P P = = <unk> <unk> = + + <unk> <unk> <unk> <unk> <unk> <unk> ∑ ∑ Basic fitting index for an ideal design using a given assembly process Insertion penalty Penalty for additional processes on parts in place © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 76 Basic Component Fitting index (AF) Assembly process Index (Af) Insertion only 1 Snap fit 1.3 Screw fastener 4 Rivet fastener 2.5 Clip fastener (plastic bending) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 77 Insertion direction penalty (source: Swift, Booker, ‘Manufacturing process selection handbook’, Butterworth-Heinemann) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 78 Inser", ", x ′ ) ) + n log ⁡ 2 π ) {\\displaystyle \\log p(f(x')\\mid \\theta,x)=-{\\frac {1}{2}}\\left(f(x)^{\\mathsf {T}}K(\\theta,x,x')^{-1}f(x')+\\log \\det(K(\\theta,x,x'))+n\\log 2\\pi \\right)} and maximizing this marginal likelihood towards θ provides the complete specification of the Gaussian process f. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. 
Having specified θ, making predictions about unobserved values ⁠ f ( x ∗ ) {\\displaystyle f(x^{*})} ⁠ at coordinates x* is then only a matter of drawing samples from the predictive distribution p ( y ∗ ∣ x ∗, f ( x ), x ) = N ( y ∗ ∣ A, B ) {\\displaystyle p(y^{*}\\mid x^{*},f(x),x)=N(y^{*}\\mid A,B)} where the posterior mean estimate A is defined as A = K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 f ( x ) {\\displaystyle A=K(\\theta,x^{*},x)K(\\theta,x,x')^{-1}f(x)} and the posterior variance estimate B is defined as: B = K ( θ, x ∗, x ∗ ) − K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 K ( θ, x ∗, x ) T {\\displaystyle B=K(\\theta,x^{*},x^{*})-K(\\theta,x^{", "i}))^{2}+\\lambda \\left\\|f\\right\\|_{\\mathcal {H}}^{2}} and the RKHS allows for expressing this RLS estimator as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} where ( K + n λ I ) c = Y {\\displaystyle (K+n\\lambda I)c=Y} with c = ( c 1,..., c n ) {\\displaystyle c=(c_{1},\\dots,c_{n})}. The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\\displaystyle \\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} can be written as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} such that K c = Y {\\displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y }. {\\displaystyle \\left\\{\\min _{f\\in {" ]
[ "Phrase-structure grammars are relatively better suited for fixed-order languages than free-order languages.", "Dependency grammars describe functional dependencies between words in a sequence.", "Phrase-structure grammars better describe selectional constraints.", "The expressive power of context-free grammars are higher than that of context-dependent grammars.", "Any context-free grammar can be transformed into Chomsky-Normal form.", "Dependency grammars better describe positional constraints." ]
['Phrase-structure grammars are relatively better suited for fixed-order languages than free-order languages.', 'Dependency grammars describe functional dependencies between words in a sequence.', 'Any context-free grammar can be transformed into Chomsky-Normal form.']
1016
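One correct answer in record 1016 states that any context-free grammar can be transformed into Chomsky normal form. A sketch of just the binarization step of that transformation — long right-hand sides split into chains of fresh non-terminals — ignoring the unit-rule and ε-rule steps of the full conversion; the rule fragment is hypothetical:

```python
def binarize(rules):
    """Split rules whose RHS is longer than 2 into binary rules via fresh symbols.
    `rules` is a list of (lhs, rhs_tuple) pairs; terminals vs. non-terminals
    are not distinguished here (a full CNF conversion also handles those)."""
    out, fresh = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            fresh += 1
            new = f"X{fresh}"
            out.append((lhs, (rhs[0], new)))  # A -> B X_k
            lhs, rhs = new, rhs[1:]           # X_k rewrites the remainder
        out.append((lhs, tuple(rhs)))
    return out

# Hypothetical rule S -> NP V NP PP becomes a chain of binary rules:
print(binarize([("S", ("NP", "V", "NP", "PP"))]))
# [('S', ('NP', 'X1')), ('X1', ('V', 'X2')), ('X2', ('NP', 'PP'))]
```

Because each fresh symbol covers exactly the tail of one original rule, the transformed grammar generates the same language, which is what makes the CNF requirement of CYK harmless.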
What is a good distance metric to be used when you want to compute the similarity between documents independent of their length? A penalty will be applied for any incorrect answers.
[ "file comparison tools Computer-assisted reviewing – Text-comparison software Data differencing – Method for compressing changes over time Delta encoding – Type of data transmission method Document comparison Edit distance – Computer science metric of string similarity References", "as a move. Text movements are reported such that the number of individual edits to transform the original text into the modified text are at a minimum. The vast majority of text comparison software based on the longest common subsequence problem algorithm incorrectly report moved text as unlinked additions and deletions. The algorithm only reports the longest in-order run of text between two documents. Text moved out of the longest run of similarities is missed. Heuristics are not used. Any similarity between the two documents above the specified minimum will be reported (if detecting moves is selected). This is the main difference between Diff-Text and most other text comparison algorithms. Diff-Text will always match up significant similarities even if contained within non-identical or moved lines. It never resorts to guessing or the first match that happens to be found, which may result in non-optimal matches elsewhere. Diff-Text can spot sentence re-ordering within a paragraph. To indicate this, the background color of the text changes to light blue and yellow. If the user specifies text movements should not be detected, its algorithm runs in (m log n) time, which is an improvement from the standard quadratic time often seen in software of this type. m and n refer to the sizes of the original and modified texts. References External links Official website On Download.com", "to the average (or median) squared distance. This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time, it makes the similarity search in high dimensions difficult and even useless (curse). Zimek et al. 
noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, having data that is separated in each attribute becomes easier even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular for unsupervised data analysis this effect is known as swamping. See also", "C {\\displaystyle C} is equidistant since all codewords have the same weight as A {\\displaystyle A}. Since all codewords have the same weight, and by the previous theorem we know the total weight of all codewords, the distance of the code is found by dividing the total weight by the number of codewords (excluding 0). See also Error detection and correction Forward Error Correction", "Editing documents, program code, or any data always risks introducing errors. Displaying the differences between two or more sets of data, file comparison tools can make computing simpler, and more efficient by focusing on new data and ignoring what did not change. Generically known as a diff after the Unix diff utility, there are a range of ways to compare data sources and display the results. Some widely used file comparison programs are diff, cmp, FileMerge, WinMerge, Beyond Compare, and File Compare. Because understanding changes is important to writers of code or documents, many text editors and word processors include the functionality necessary to see the changes between different versions of a file or document. Method types The most efficient method of finding differences depends on the source data, and the nature of the changes. One approach is to find the longest common subsequence between two files, then regard the non-common data as an insertion, or a deletion. In 1978, Paul Heckel published an algorithm that identifies most moved blocks of text. This is used in the IBM History Flow tool. 
Other file comparison programs find block moves. Some specialized file comparison tools find the longest increasing subsequence between two files. The rsync protocol uses a rolling hash function to compare two files on two distant computers with low communication overhead. File comparison in word processors is typically at the word level, while comparison in most programming tools is at the line level. Byte or character-level comparison is useful in some specialized applications. Display The optimal way to display the results of a file comparison depends on many factors, including the type of source data. The fixed lines of programming code provide a clear unit of comparison. This does not work with documents, where adding a single word may cause the following lines to wrap differently, but still not change the content. The most popular ways to display changes are either side-by-side, or a consolidating view that highlights data inserts, and deletes. In either side-by-side viewing, code folding or text folding, for the sake of efficiency, the interface may hide portions of the file that did not change and show only the changes. Reasoning There are various reasons to use comparison tools, and tools themselves use different approaches. To compare binary files" ]
[ "Cosine similarity", "Euclidean distance", "Manhattan distance", "Chi-squared distance" ]
['Cosine similarity']
1025
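Record 1025's answer is cosine similarity, and the reason is easy to demonstrate: scaling a term-count vector (e.g., concatenating a document with itself) leaves its direction, and hence its cosine similarity, unchanged, while Euclidean distance grows. A minimal sketch with hand-picked illustrative count vectors:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

doc = [3, 0, 1, 2]        # term-count vector of a short document (illustrative)
longer = [6, 0, 2, 4]     # the same document repeated twice: every count doubled

print(round(cosine(doc, longer), 6))   # 1.0 — same direction, length ignored
print(euclidean(doc, longer) > 0)      # True — Euclidean distance is length-sensitive
```

This is why cosine similarity is the standard choice for comparing documents of very different lengths, whereas Euclidean or Manhattan distance would penalize the longer document.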
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Which of the following associations can be considered as illustrative examples for inflectional morphology (with here the simplifying assumption that canonical forms are restricted to the roots only)?
[ "it into morphemes and produce additional syntactic and semantic information (on the current word) processable → process- -able ☞2 morphemes meaning: process possible role: root suffix semantic information: main less The importance and complexity of morphology vary from language to language Some information represented at the morphological level in English may be represented differently in other languages (and vice-versa). The paradigmatic/syntagmatic repartition changes from one language to another Example in Chinese: ate -→expressed as ”eat yesterday” LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 5/24 ✬ ✫ ✩ ✪ Stems – Affixes Words are decomposed into morphemes: roots (or stems) and affixes. There are several kinds of affixes: ➊prefixes: in- -credible ➋suffixes: incred- -ible <unk>infixes: Example in Tagalog ( Philippines): hingi (to borrow) →humingi (agent of the action) In slang English! →”fucking” in the middle of a word Man-fucking-hattan <unk>circumfixes: Example in German: sagen (to say) →gesagt (said) LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 6/24 ✬ ✫ ✩ ✪ Stems – Affixes (2) several affixes may be combined: examples in Turkish where you can have up to 10 (!) affixes. uygarlas ̧tıramadıklarimizdanmıs ̧sınızcasına uygar las ̧ tır ama dık lar imiz dan mıs ̧ sınız casına civilized +BEC +CAUS +NEGABLE +PPART +PL +P1PL +ABL +PAST +2PL +ASIF as if you are among those whom we could not cause to become civilized When only prefixes and suffixes are involved: concatenative morphology Some languages are not concatenative: • infixes • pattern-based morphology LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-", "i.e. formal representations of the morphological analysis of these words • Examples of surface forms: cats, book, flies,. • Example of canonical representations: cat+N+p, book+N+s, fly+N+p,. 
Canonical representations The typical format of a canonical representations is: Lemma+GrammaticalCategory+MorphoSyntacticFeature1+MorphoSyntacticFeature2+. where: • Lemma (or Root) is the canonical form of an inflected word; i.e. the form usually found in dictionaries, e.g. the singular form for nouns, or the infinitive for verbs; • GrammaticalCategory (or Part-of-Speech) is the tag used to represent the grammatical category of the word, e.g. N for a noun, Adj for an adjective, or V for a verb; • MorphoSyntacticFeaturek (k=1, 2, 3,.) are the tags used to represent the morphosyntactic features (e.g. the number, the gender, the tense, the person, etc.) that are relevant to identify a specific inflection of a word; and • \"+\" is a (conventional) separating character. Examples of canonical representations • (cat+N+p, cats): associating the canonical representation \"cat+N+p\" to the surface form \"cats\" expresses in a formal way that \"cats\" is the flection of the noun \"cat\" corresponding to its plural form (\"p\" being the tag for the value \"plural\" of the morphosyntactic feature \"number\"); • (turn+V+Ind+Pres+3+s, turns): associating the canonical representation \"turn+V+Ind+Pres+3+s\" to the surface form \"turns\" expresses in a formal way that the surface form corresponding to the flection of the verb (\"V\") \"to turn\" at the 3rd person (\"3\") singular (\"s\") of the present (\"Pres\") indicative (\"Ind\") is \"turns\". In other words. Implementing some Computational", "fast” are both correct (but do not mean the same) Other irregular plurals • Case 1: For most nouns ending in “f” or “fe”, change the ending “f” or “fe” to “ves” (half, halves) (knife, knives). but (belief, beliefs) (if, ifs) “There are so many ifs and buts in this policy\" Other irregular plurals (2) • Case 2: For most nouns ending in “is”, change the ending “is” to “es” (crisis, crises) (hypothesis, hypotheses). 
but (vis, vires) where “vis” is a Latin word meaning “power” that has been imported in English, while preserving its Latin plural (“vires”) “An example of vis is the influence of the leader\" Other irregular plurals (3) • Case 3: For many nouns ending in “o”, change the ending “o” to “oes” (tomato, tomatoes) (mosquito, mosquitoes) (volcano, volcanoes). but (photo, photos) (video, videos) (piano, pianos) Fully irregular plurals • For some (often very frequent) words, the plural corresponds to a much more complicated modification (man, men) (mouse, mice) (foot, feet) (tooth, teeth). Computational morphology for English nouns Computational Linguistics Martin Rajman Artificial Intelligence Laboratory Fundamentals • Goal: use transducers to represent associations between strings representing: • surface forms, i.e. words as they appear in texts; and • canonical representations, i.e. formal representations of the morphological analysis of these words • Examples of surface forms: cats, book, flies,. • Example of canonical representations: cat+N+p, book+N+s, fly+N+p,. Canonical representations The typical format of a canonical representations is: Lemma+GrammaticalCategory+MorphoSyntacticFeature1+MorphoSyntacticFeature2+. 
where: • Lemma (or Root) is the canonical form of an inflected", "morphisme se factorise par le morphisme canonique et on le note avec le diagramme suivant A B A I I I Preuve Notons que A I est reduit a un seul element ssi I A Alors le resultat est evident Si A I n est pa reduit a un element ie si I A on a necessairement I A mod I A I I A mod I A I ce qui montre qu on doit avoir Le fait que I doive etre un morphisme d anneaux implique en e et on doit avoir a b mod I I a b I a I I a a mod I I b mod I et a b mod I I a b I a I I a a mod I I b mod I Ainsi la structure d anneau si elle existe est unique l application I est evidemment surjective tout element x de A I s ecrivant a I est l image de a par I Pour montrer l existence on voudrait poser a mod I I b mod I a b mod I a mod I I b mod I a b mod I Le probleme est que un classe a mod I peut aussi s ecrire a mod I pour tout a a mod I On veut que le resultat ne depende par du choix de l element a Il su t donc de montrer que si a mod I a mod I et b mod I b mod I alors a b mod I a b mod I et a b mod I a b mod I On doit donc montrer que a b a b I a b a b I On a a a I b b I et donc a b a b c d I I I car I est un sou groupe de A On a a b a b a b a b a b a b a b b a a b a I I b I I I car I est un ideal bilatere de A et donc stable par addition et multiplication a gauche et a droite par de element quelconques de A ici a et b ANNEAUX Le fait que le lois I et I soient associatives et distributives et que A mod I et A mod I en soit le element neutre provient de de nitions de ce lois et de proprietes correspondantes pour l anneau A A A Soit A B un morphisme tel que I ker On veut montrer l existence de I A I B veri ant En particulier comme I est surjectif un tel morphisme si il existe est unique Pour montrer l existence", "a given object s in S to its canonical form s*? Canonical forms are generally used to make operating with equivalence classes more effective. 
For example, in modular arithmetic, the canonical form for a residue class is usually taken as the least non-negative integer in it. Operations on classes are carried out by combining these representatives, and then reducing the result to its least non-negative residue. The uniqueness requirement is sometimes relaxed, allowing the forms to be unique up to some finer equivalence relation, such as allowing for reordering of terms (if there is no natural ordering on terms). A canonical form may simply be a convention, or a deep theorem. For example, polynomials are conventionally written with the terms in descending powers: it is more usual to write x2 + x + 30 than x + 30 + x2, although the two forms define the same polynomial. By contrast, the existence of Jordan canonical form for a matrix is a deep theorem. History According to OED and LSJ, the term canonical stems from the Ancient Greek word kanonikós (κανονικός, \"regular, according to rule\") from kan<unk>n (κ<unk>νών, \"rod, rule\"). The sense of norm, standard, or archetype has been used in many disciplines. Mathematical usage is attested in a 1738 letter from Logan. The German term kanonische Form is attested in a 1846 paper by Eisenstein, later the same year Richelot uses the term Normalform in a paper, and in 1851 Sylvester writes: \"I now proceed to [...] the mode of reducing Algebraical Functions to their simplest and most symmetrical, or as my admirable friend M. Hermite well proposes to call them, their Canonical forms.\" In the same period, usage is attested by Hesse (\"Normalform\"), Hermite (\"forme canonique\"), Borchardt (\"forme canonique\"), and Cayley (\"canonical form\"). In 1865, the Dictionary of Science, Literature and Art defines canonical form as: \"In Mathematics, denotes a form, usually the simplest or most symmetrical, to which, without loss of generality," ]
[ "(activate, action)", "(hypothesis, hypotheses)", "(to go, went)", "(speaking, talking)" ]
['(hypothesis, hypotheses)', '(to go, went)']
1026
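The context passages for record 1026 describe canonical representations of the form Lemma+Category+Features (e.g., "cat+N+p"), and the accepted answers — (hypothesis, hypotheses) and (to go, went) — are both inflectional pairs under that scheme. A toy lookup sketch in that format; the entries are a hand-picked illustrative sample, not a real lexicon or transducer:

```python
# Illustrative surface-form -> canonical-representation mapping, following the
# Lemma+GrammaticalCategory+Features format quoted in the lecture excerpt.
analyses = {
    "cats": "cat+N+p",               # regular inflection: add -s
    "hypotheses": "hypothesis+N+p",  # irregular inflection: -is -> -es
    "went": "go+V+Past",             # fully irregular (suppletive) inflection
    "turns": "turn+V+Ind+Pres+3+s",
}

def analyze(surface: str) -> str:
    # Unknown forms are flagged rather than guessed.
    return analyses.get(surface, surface + "+?")

print(analyze("hypotheses"))  # hypothesis+N+p
print(analyze("went"))        # go+V+Past
```

By contrast, a pair like (activate, action) is derivational — the lemma changes — so it has no entry of this shape, which is why it is not among the accepted answers.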
Consider the following lexicon \(L\): bear : V, N bears : V, N blue : Adj, N drink : N, V drinks : N, V Nice : Adj, N. When using an order-1 HMM model (using \(L\)) to tag the word sequence "Nice bears drink blue drinks", does the tag of drink depend on the tag of Nice?
[ "C. Chappelier Tag sets (1/2) Complexity/Grain of tag set can vary a lot (even for the same language). Original Brown Corpus tagset contains 87 PoS tags (!) For instance, it contains 4 kind of adjectives: JJ adjective recent, over-all, possible, hard-fought [.] JJR comparative adjective greater, older, further, earlier [.] JJS semantically superlative adjective top, chief, principal, northernmost [.] JJT morphologically superla- tive adjective best, largest, coolest, calmest [.] Part of Speech Tagging – 8 / 23 Introduction PoS tagging with HMMs Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier Tag sets (2/2) NLTK “universal” tagset is much shorter : 12 tags (from NLTK documentation): Tag Meaning Examples ADJ adjective new, good, high, special, big, local ADP adposition on, of, at, with, by, into, under ADV adverb really, already, still, early, now CONJ conjunction and, or, but, if, while, although DET determiner, article the, a, some, most, every, no, which NOUN noun year, home, costs, time, Africa NUM numeral twenty-four, fourth, 1991, 14:24 PRT particle at, on, out, over per, that, up, with PRON pronoun he, their, her, its, my, I, us VERB verb is, say, told, given, playing, would. punctuation marks., ;! X other ersatz, esprit, dunno, gr8, univeristy Part of Speech Tagging – 9 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. 
Chappelier Contents ̄ Part-of-Speech Tagging Probabilistic: HMM tagging Part of Speech Tagging – 10 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EP", "an/Det arrow/N) = 1.13·10-11 P(time/Adj flies/N like/V an/Det arrow/N) = 6.75·10-10 Details of one of such computation: P(time/N flies/V like/Adv an/Det arrow/N) = PI(N)·P(time|N)·P(V|N)·P(flies|V)·P(Adv|V)·P(like|Adv) ·P(Det|Adv)·P(an/Det)·P(N|Det)·P(arrow|N) = 2e-1·1e-1·3e-1·1e-2·5e-3·5e-3·1e-1·3e-1·5e-1·5e-1 = 1.13·10-11 The aim is to choose the most probable tagging among the possible ones (e.g. as provided by the lexicon) Part of Speech Tagging – 17 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier HMMs HMM advantage: well formalized framework, efficient algorithms O Viterbi: linear algorithm (O(n)) that computes the sequence T n 1 maximizing P(T n 1 |wn 1 ) (provided the former hypotheses) O Baum-Welch : iterative algorithm for estimating parameters from unsupervised data (words only, not the corresponding tag sequences) (parameters = P(w|Ti), P(Tj|T j-1 j-k ), PI(T1.Tk)) Part of Speech Tagging – 18 / 23 Introduction PoS tagging with HMMs Formalization order-1 HMM definition Learning Other models Conclusion c <unk>EPFL M. Rajman & J.-C. Chappelier Parameter estimation » supervised (i.e. manually tagged text corpus) Direct computation Problem of missing data » unsupervised (i.e. 
raw text only, no tag) Baum-Welch Algorithm High initial conditions sensitivity Good compromise: hybrid methods: unsupervised", "ch ▶the nice neighbor he sat with talked of the cat on the lovely couch ▶the neighbor he sat with talked lovely of the cat on the nice couch ▶the neighbor he sat on talked with the nice couch of the lovely cat Syntactic parsing: Introduction & CYK Algorithm – 6 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier What is acceptable and what is not? A sequence of words can be rejected for several different reasons: ▶the words are not in the “right” order: cat the on sat the couch nice the rules defining what are the acceptable word orders in a given language are called “positional constraints” ▶related word pairs are not matching “right”: cats eats mice the rules defining what are the acceptable word pairs in a given language are called “selectional constraints” (e.g. “agreement rules”) Syntactic parsing: Introduction & CYK Algorithm – 7 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier What is acceptable and what is not? (2) It is not enough for a sequence of words to satisfy all positional and selectional constraints to be acceptable, see Chomsky’s famous example: Colorless green ideas sleep furiously. but the reason is different: the sequence is rejected because it is meaningless; indeed, how can something colorless be green? or a sleep to be furious? As this type of problem is related to meaning, it will not be considered here; we will consider any sequence satisfying all positional and selectional constraints as acceptable; to avoid potential confusion, we will refer to such sequences as “syntactically acceptable”. 
Syntactic parsing: Introduction & CYK Algorithm – 8 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm", "7, 311, 331, 347, 359, 367, 379, 383, 419, 431, 439, 443, 463, 467, 479, 487, 491, 499, 503 (OEIS: A002145) Good primes Primes pn for which pn2 > pn−i pn+i for all 1 ≤ i ≤ n−1, where pn is the nth prime. 5, 11, 17, 29, 37, 41, 53, 59, 67, 71, 97, 101, 127, 149, 179, 191, 223, 227, 251, 257, 269, 307 (OEIS: A028388) Happy primes Happy numbers that are prime. 7, 13, 19, 23, 31, 79, 97, 103, 109, 139, 167, 193, 239, 263, 293, 313, 331, 367, 379, 383, 397, 409, 487, 563, 617, 653, 673, 683, 709, 739, 761, 863, 881, 907, 937, 1009, 1033, 1039, 1093 (OEIS: A035497) Harmonic primes Primes p for which there are no solutions to Hk ≡ 0 (mod p) and Hk ≡ −ωp (mod p) for 1 ≤ k ≤ p−2, where Hk denotes the k-th harmonic number and ωp denotes the Wolstenholme quotient. 5, 13, 17, 23, 41, 67, 73, 79, 107, 113, 139, 149, 157, 179, 191, 193, 223, 239, 241, 251, 263, 277, 281, 293, 307, 311, 317, 331, 337, 349 (OEIS: A092101) Higgs primes for squares Primes p for which p − 1 divides the square of the product of all earlier terms. 
2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47, 53, 59, 61, 67, 71, 79, 101, 107, 127, 131, 139, 149, 151, 157, 173, 181, 191, 197,", "}{\\hat {f}}(x){\\big ]}+\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}-{\\hat {f}}(x){\\big )}^{2}{\\Big ]}\\\\&={\\color {Blue}\\mathbb {E} {\\Big [}{\\big (}f(x)-\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}{\\big )}^{2}{\\Big ]}}\\,+\\,2\\ {\\color {PineGreen}\\mathbb {E} {\\Big [}{\\big (}f(x)-\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}{\\big )}{\\big (}\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}-{\\hat {f}}(x){\\big )}{\\Big ]}}\\,+\\,\\mathbb {E} {\\Big [}{\\big (}\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}-{\\hat {f}}(x){\\big )}^{2}{\\Big ]}\\end{aligned}}} We show that: E [ ( f ( x ) − E [ f ^ ( x ) ] ) 2 ] = E [ f ( x ) 2 ] − 2 E [ f ( x ) E [ f ^ ( x ) ] ] + E [ E [ f ^ ( x ) ] 2 ] = f ( x ) 2 − 2 f ( x ) E [ f ^ ( x ) ] + E [ f ^ ( x ) ] 2 = ( f ( x ) − E [ f ^ ( x ) ] ) 2 {\\displaystyle {\\begin{aligned}{\\color {Blue}\\mathbb {E} {\\Big [}{\\big (}f(x)-\\mathbb {E} {\\big [}{\\hat {f}}(x){\\big ]}{\\big )}^{2}{\\Big" ]
[ "yes, because the HMM approach relies on a global maximum.", "no, the hypotheses make the two tags independent from each other." ]
['yes, because the HMM approach relies on a global maximum.']
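The worked computation in the HMM passage above multiplies an initial probability with transition and emission probabilities. A minimal sketch checking that arithmetic (the factor values are copied from the excerpt; the helper name is ours):

```python
from functools import reduce
from operator import mul

# Factors for P(time/N flies/V like/Adv an/Det arrow/N), in the order listed
# in the excerpt: PI(N), P(time|N), P(V|N), P(flies|V), P(Adv|V), P(like|Adv),
# P(Det|Adv), P(an|Det), P(N|Det), P(arrow|N).
factors = [2e-1, 1e-1, 3e-1, 1e-2, 5e-3, 5e-3, 1e-1, 3e-1, 5e-1, 5e-1]

def joint_probability(fs):
    """Order-1 HMM score of a tagging: the product of all its factors."""
    return reduce(mul, fs, 1.0)

p = joint_probability(factors)
```

The product comes out at 1.125e-11, which the slide rounds to 1.13·10^-11.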
1031
What could Out of Vocabulary (OoV) forms consist of? Select all that apply. A penalty will be applied for wrong answers.
[ "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "A bipolar violation, bipolarity violation, or BPV, is a violation of the bipolar encoding rules where two pulses of the same polarity occur without an intervening pulse of the opposite polarity. This indicates an error in the transmission of the signal. T-carrier and E-carrier signals are transmitted using a scheme called bipolar encoding, a.k.a. Alternate Mark Inversion (AMI), where ONE is represented by a pulse, and a ZERO is represented by no pulse. Pulses (which represent ones) always alternate in polarity, so that if, for example two positive pulses are received in succession, the receiver knows that an error occurred (a violation) in that one or more bits were either added or deleted from the original signal. Reliable transmission of data using this scheme requires a regular stream of pulses; too many zero bits in succession can cause a loss of synchronization between transmitter and receiver. 
To ensure that this is always present, there exist a number of modified AMI codes which use judiciously placed bipolar violations to encode long strings of consecutive zeroes.", ") ) {\\displaystyle V(y_{i},f(x))} and the l 0 {\\displaystyle \\ell _{0}} \"norm\" as the regularization penalty: min w ∈ R d 1 n ∑ i = 1 n V ( y i, ⟨ w, x i ⟩ ) + λ ‖ w ‖ 0, {\\displaystyle \\min _{w\\in \\mathbb {R} ^{d}}{\\frac {1}{n}}\\sum _{i=1}^{n}V(y_{i},\\langle w,x_{i}\\rangle )+\\lambda \\|w\\|_{0},} where x, w ∈ R d {\\displaystyle x,w\\in \\mathbb {R^{d}} }, and ‖ w ‖ 0 {\\displaystyle \\|w\\|_{0}} denotes the l 0 {\\displaystyle \\ell _{0}} \"norm\", defined as the number of nonzero entries of the vector w {\\displaystyle w}. f ( x ) = ⟨ w, x i ⟩ {\\displaystyle f(x)=\\langle w,x_{i}\\rangle } is said to be sparse if ‖ w ‖ 0 = s < d {\\displaystyle \\|w\\|_{0}=s<d}. Which means that the output Y {\\displaystyle Y} can be described by a small subset of input variables. More generally, assume a dictionary φ j : X → R {\\displaystyle \\phi _{j}:X\\rightarrow \\mathbb {R} } with j = 1,..., p {\\displaystyle j=1,...,p} is given, such that the target function f ( x ) {\\displaystyle f(x)} of a learning problem can be written as: f ( x ) = ∑ j = 1 p φ j ( x ) w j {\\displaystyle f(x)=\\sum _{j=1}^{p}\\phi _{j}(x)w_{j}}, ∀ x", "receive a substantial prison sentence and fine. The ship operator was fined US$1.65 million and ordered to \"implement a comprehensive Environmental Compliance Plan.\" On older OWS systems bypass pipes were fitted with regulatory approval. These approved pipes are no longer fitted on newer vessels. In some serious emergencies ship's crews are allowed to discharge untreated bilge water overboard, but they need to declare these emergencies in the ship's records and oil record book. Unregistered discharges violate the MARPOL 73/78 international pollution control treaty. 
Motivation and responsibility The problem is worsened by a lack of facilities in developing countries; some port reception facilities do not allow for oily water to be discharged easily and cost effectively. Crew members, engineers, and ship owners can receive huge fines and even imprisonment if they continue to use a magic pipe to pollute the environment. Conclusively, some engineers use the magic pipe manipulation technique because of: Lack of training Lack of shore side assistance with regard to bilge water treatment Simple disregard of the ocean environment. Proper process The oily bilge waste comes from a ship's engines and fuel systems. The waste is required to be offloaded when a ship is in port and either burned in an incinerator or taken to a waste management facility. In rare occasions, bilge water can be discharged into the ocean but only after almost all oil is separated out. See also International Maritime Organization – Regulatory agency Marpol Annex I – Detailed implementation of Marpol 73/78 Oil–water separator (general) Oil content meter Oil discharge monitoring equipment", "✬ ✫ ✩ ✪ Introduction to Natural Language Processing Out of Vocabulary Forms Spelling Error correction Jean-C ́edric Chappelier Jean-Cedric.Chappelier@epfl.ch and Martin Rajman Martin.Rajman@epfl.ch Artificial Intelligence Laboratory LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 1/34 ✬ ✫ ✩ ✪ Contents ➥Out of Vocabulary Forms ➥Spelling Error Correction ✈Edit distance ✈Spelling error correction with FSA ✈Weighted edit distance LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 2/34 ✬ ✫ ✩ ✪ Out of Vocabulary forms • Out of Vocabulary (OoV) forms matter: they occur quite frequently (e.g. <unk>10% in newspapers) What do they consist of? – spelling errors: foget, summmary, usqge,. – neologisms: Internetization, Tacherism,. – borrowings: gestalt, rendez-vous,. 
– forms difficult to exhaustively lexicalize: (numbers,) proper names, abbreviations,. • identification based on patterns is not well-adapted for all OoV forms ☞We will focus here on spelling errors, neologisms and borrowings LIA I&C Introduction to Natural Language Processing (CS-431) M. Rajman J.-C. Chappelier 3/34 ✬ ✫ ✩ ✪ Spelling errors and neologisms • for spelling errors (resp. neologisms), distortions (resp. derivations) are modelled by transformations, i.e. rewriting rules (sometimes weighted) Example: – Transposition (distortion): XY →YX [1.0] where X and Y stands for variables – tripling (distortion): XX →XXX [1.0] – name derivation: ize:INF →ization:N [1.0] • a given lexicon (regular language) and a set of" ]
[ "Words from the lexicon", "Words borrowed from other languages", "Words with spelling errors", "Neologisms", "Abbreviations" ]
['Words borrowed from other languages', 'Words with spelling errors', 'Neologisms', 'Abbreviations']
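The OoV passage above handles spelling errors via (possibly weighted) edit distance. A minimal unweighted Levenshtein sketch, tried on the misspellings listed in the excerpt (the weighted variant from the slides would attach a cost to each rewriting rule):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance with unit costs for insert/delete/substitute."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = cur
    return prev[-1]
```

Each misspelling from the excerpt (foget, summmary, usqge) is a single edit away from its correction.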
1034
Select all the statements that are true. A penalty will be applied for any incorrect answers selected.
[ "waste facility. Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders.", "the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. See also Artificial intelligence and law Computational law Lawbot Legal expert system Legal informatics Legal technology References External links Official website", "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 73 Orientation penalties End-to-End orientation (along axis of insertion) Rotational Orientation (about axis of insertion) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 74 Handling Sensitivity Index (Pg) Component handling sensitivity Index (Pg) Fragile 0.4 Flexible 0.6 Adherent 0.5 Tangle/severely tangle 0.8/1.5 Severely nest 0.7 Sharp / abrasive 0.3 Hot/Contaminated 0.5 Thin (gripping problem) 0.2 None of the above 0 © Y. Bellouard, EPFL. 
(2022) / Cours ‘Manufacturing Technologies’ / Micro-301 75 Component Fitting analysis • Definition i i n n f f a i 1 i 1 F A P P = = <unk> <unk> = + + <unk> <unk> <unk> <unk> <unk> <unk> ∑ ∑ Basic fitting index for an ideal design using a given assembly process Insertion penalty Penalty for additional processes on parts in place © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 76 Basic Component Fitting index (AF) Assembly process Index (Af) Insertion only 1 Snap fit 1.3 Screw fastener 4 Rivet fastener 2.5 Clip fastener (plastic bending) 3 © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 77 Insertion direction penalty (source: Swift, Booker, ‘Manufacturing process selection handbook’, Butterworth-Heinemann) © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 78 Inser", "i}))^{2}+\\lambda \\left\\|f\\right\\|_{\\mathcal {H}}^{2}} and the RKHS allows for expressing this RLS estimator as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} where ( K + n λ I ) c = Y {\\displaystyle (K+n\\lambda I)c=Y} with c = ( c 1,..., c n ) {\\displaystyle c=(c_{1},\\dots,c_{n})}. The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\\displaystyle \\min _{f\\in {\\mathcal {H}}}{\\frac {1}{n}}\\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} can be written as f S λ ( X ) = ∑ i = 1 n c i k ( x, x i ) {\\displaystyle f_{S}^{\\lambda }(X)=\\sum _{i=1}^{n}c_{i}k(x,x_{i})} such that K c = Y {\\displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y }. 
{\\displaystyle \\left\\{\\min _{f\\in {", ", x ′ ) ) + n log ⁡ 2 π ) {\\displaystyle \\log p(f(x')\\mid \\theta,x)=-{\\frac {1}{2}}\\left(f(x)^{\\mathsf {T}}K(\\theta,x,x')^{-1}f(x')+\\log \\det(K(\\theta,x,x'))+n\\log 2\\pi \\right)} and maximizing this marginal likelihood towards θ provides the complete specification of the Gaussian process f. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. Having specified θ, making predictions about unobserved values ⁠ f ( x ∗ ) {\\displaystyle f(x^{*})} ⁠ at coordinates x* is then only a matter of drawing samples from the predictive distribution p ( y ∗ ∣ x ∗, f ( x ), x ) = N ( y ∗ ∣ A, B ) {\\displaystyle p(y^{*}\\mid x^{*},f(x),x)=N(y^{*}\\mid A,B)} where the posterior mean estimate A is defined as A = K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 f ( x ) {\\displaystyle A=K(\\theta,x^{*},x)K(\\theta,x,x')^{-1}f(x)} and the posterior variance estimate B is defined as: B = K ( θ, x ∗, x ∗ ) − K ( θ, x ∗, x ) K ( θ, x, x ′ ) − 1 K ( θ, x ∗, x ) T {\\displaystyle B=K(\\theta,x^{*},x^{*})-K(\\theta,x^{" ]
[ "The Luhn law states that if a set of words are ranked by the decreasing order of their frequencies, the high-ranked words are the best features for identifying the topics that occur in the document collection.", "The order of words are ignored in the bag-of-words model.", "High values of document frequency means that the word is not very discriminative.", "Documents that are orthogonal to each other gives a cosine similarity measure of 1.", "Cosine similarity is independent of the length of the documents." ]
['The order of words are ignored in the bag-of-words model.', 'High values of document frequency means that the word is not very discriminative.', 'Cosine similarity is independent of the length of the documents.']
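Two of the selected statements (bag-of-words ignores word order; cosine similarity is independent of document length) and one rejected statement (orthogonal documents score 0, not 1) can be checked numerically. A small sketch with plain count vectors; the documents are made up for illustration:

```python
import math
from collections import Counter

def _norm(d):
    return math.sqrt(sum(c * c for c in d.values()))

def cosine(u, v):
    """Cosine similarity between two bag-of-words count vectors (dicts)."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    return dot / (_norm(u) * _norm(v))

d1 = Counter("the cat sat on the mat".split())
d2 = Counter(("the cat sat on the mat " * 10).split())  # same text, 10x longer
d3 = Counter("mat the on sat cat the".split())          # same words, reordered
d4 = Counter("entirely unrelated vocabulary".split())   # no shared terms

len_invariant = cosine(d1, d2)  # scaling a document leaves cosine unchanged
order_ignored = cosine(d1, d3)  # bag-of-words discards word order
orthogonal = cosine(d1, d4)     # orthogonal documents score 0, not 1
```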
1035
Consider: Non-terminals: S (top-level), NP (for "noun phrase"), VP (for "verbal phrase"), N (for "Noun"), V (for "Verb"), Det (for "Determiner"). PoS tags: N, V, Det. Terminals: I, yesterday, in, rain, went, home, the, cat, go. Out of the following, select the ones which are possible valid "syntactic rules" as defined in a context-free grammar for processing (a tiny part of) English. A penalty will be applied for any incorrect answers.
[ "b\\right\\}} and non-terminal S symbols and a blank ε {\\displaystyle \\epsilon } may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal whereas the right side can be any string of terminal or nonterminals. In PCFG nulls are excluded. An example of a grammar: S → a S, S → b S, S → ε {\\displaystyle S\\to aS,S\\to bS,S\\to \\epsilon } This grammar can be shortened using the '|' ('or') character into: S → a S | b S | ε {\\displaystyle S\\to aS|bS|\\epsilon } Terminals in a grammar are words and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as \"beginning from a non-terminal S the emission can generate either a or b or ε {\\displaystyle \\epsilon } \". Its derivation is: S ⇒ a S ⇒ a b S ⇒ a b b S ⇒ a b b {\\displaystyle S\\Rightarrow aS\\Rightarrow abS\\Rightarrow abbS\\Rightarrow abb} Ambiguous grammar may result in ambiguous parsing if applied on homographs since the same word sequence can have more than one interpretation. Pun sentences such as the newspaper headline \"Iraqi Head Seeks Arms\" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early as Pāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a \"most likely\" (winner-take-all) interpretation. As usage patterns are altered in dia", "J.-C. 
Chappelier Context Free Grammars A Context Free Grammar (CFG) G is (in the NLP framework) defined by: ▶a set C of syntactic categories (called \"non-terminals\") ▶a set L of words (called \"terminals\") ▶an element S of C, called the top level category, corresponding to the category identifying complete sentences ▶a proper subset T of C, which defines the morpho-syntactic categories or “Part-of-Speech tags” ▶a set R of rewriting rules, called the syntactic rules, of the form: X →X1 X2.Xn where X ∈C\\T and X1.Xn ∈C ▶a set L of rewriting rules, called the lexical rules, of the form: X →w where X ∈T and w is a word of the language described by G. L is indeed the lexicon Syntactic parsing: Introduction & CYK Algorithm – 17 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier A simplified example of a Context Free Grammar terminals: a, cat, ate, mouse, the PoS tags: N, V, Det non-terminals: S, NP, VP, N, V, Det rules: R1: S→NP VP R2: VP →V R3: VP →V NP R4: NP →Det N lexicon: N →cat Det →the. Syntactic parsing: Introduction & CYK Algorithm – 18 / 47 Introduction Syntax Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier Syntactically Correct A word sequence is syntactically correct (according to G) ⇐⇒it can be derived from the upper symbol S of G in a finite number of rewriting steps corresponding to the application of rules in G. 
Notation: S ⇒∗w1.wn Any sequence of rules corresponding to a possible way of deriving a given sentence W = w1.wn is called a", "M Rajman J C Chappelier A simpli ed example of a Context Free Grammar terminal a cat ate mouse the PoS tag N V Det non terminal S NP VP N V Det rule R S NP VP R VP V R VP V NP R NP Det N lexicon N cat Det the Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Syntactically Correct A word sequence is syntactically correct according to G it can be derived from the upper symbol S of G in a nite number of rewriting step corresponding to the application of rule in G Notation S w wn Any sequence of rule corresponding to a possible way of deriving a given sentence W w wn is called a derivation of W The set not necessary nite of syntactically correct sequence according to G is by de nition the language recognized by G A elementary rewriting step is noted several consecutive rewriting step with and C L Example if a rule we have X a Y b and Z c then for instance X Y Z aYZ and X Y Z abc Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Example The sequence the cat ate a mouse is syntactically correct according to the former example grammar S R NP VP R Det N VP L the N VP L the cat VP R the cat V NP L the cat ate NP R the cat ate Det N L the cat ate a N L the cat ate a mouse Its derivation is R R L L R L R L L Syntactic parsing Introduction CYK Algorithm Introduction Syntax Context Free Grammars CYK Algorithm c EPFL M Rajman J C Chappelier Example The sequence ate a mouse the cat is syntactically wrong according to the former example grammar S R NP VP R Det N VP X ate Det N VP Exercise Some colorless green idea sleep furiously Syntactically correct Semantically correct Syntactic parsing Introduction C", "How to deal with selectional constraints? 
As already mentioned, selectional constraints are taking into account constraints such as agreement rules that are further restricting the word sequences to be considered as (syntactically) acceptable For example, in English “cats eat mice” is acceptable, while “cats eats mice” is not, because the number agreement between “cats” (plural) and “eats” (singular) is violated. Agreement rules can be taken into account by preserving the required morpho-syntactic features in the PoS tags assigned to words (e.g. a number agreement will require to use PoS tags such as NOUNs (noun singular), NOUNp (noun plural), VERBs (verb singular), and VERBp (verb plural). Syntactic parsing: Introduction & CYK Algorithm – 12 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier What formalism? ▶symbolic grammars / statistical grammars ▶symbolic grammars: ▶phrase-structure grammars (a.k.a constituency grammars, syntagmatic grammars) recursively decompose sentences into constituants, the atomic parts of which are words (\"terminals\"). Well suited for ordered languages, not adapted to free-order languages. Better expresses structural dependencies. ▶dependency grammars focus on words and their relations (not necessarly in sequence): functional role of words (rather than categories, e.g. \"agent\"/\"actor\" rather than \"noun\"). More lexicaly oriented. Dependency grammars provide simpler structures (with less nodes, 1 for each word, and less deep), but are less rich than phrase-structure grammars Modern approach: combine both Syntactic parsing: Introduction & CYK Algorithm – 13 / 47 Introduction Syntax Syntactic level and Parsing Syntactic acceptability Formalisms Context-Free Grammars CYK Algorithm c <unk>EPFL M. Rajman & J.-C. Chappelier", "waste facility. 
Australia The new section of the POEO Act (The Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with NSW Illegal Dumping Strategy 2014–16, hefty fines and a maximum jail sentence of 2 years can be handed down to repeat offenders." ]
[ "S → NP VP", "NP → Det N", "V → VP N", "NP → N", "VP → VP NP", "VP NP → V N", "VP → the cat", "Det → went", "Det N → NP", "S → VP" ]
['S → NP VP', 'NP → Det N', 'NP → N', 'VP → VP NP', 'S → VP']
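The toy grammar in the excerpt (R1: S → NP VP, R2: VP → V, R3: VP → V NP, R4: NP → Det N) can be exercised with a compact CYK-style recognizer, as in the excerpt's derivation of "the cat ate a mouse". A sketch assuming the lexicon covers the five terminals of that example sentence; the unary-closure helper handles the non-CNF unit rule R2:

```python
# Grammar from the excerpt; lexicon entries are assumed from its examples.
LEXICON = {"the": {"Det"}, "a": {"Det"}, "cat": {"N"}, "mouse": {"N"}, "ate": {"V"}}
BINARY = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}, ("Det", "N"): {"NP"}}
UNARY = {"V": {"VP"}}  # the unit rule R2: VP -> V

def unary_closure(cats):
    """Add every category reachable through unit rules."""
    cats = set(cats)
    frontier = list(cats)
    while frontier:
        for parent in UNARY.get(frontier.pop(), ()):
            if parent not in cats:
                cats.add(parent)
                frontier.append(parent)
    return cats

def recognize(words):
    """CYK-style chart recognition: is `words` derivable from S?"""
    n = len(words)
    chart = {(i, i + 1): unary_closure(LEXICON.get(w, set()))
             for i, w in enumerate(words)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            cell = set()
            for j in range(i + 1, k):
                for left in chart[(i, j)]:
                    for right in chart[(j, k)]:
                        cell |= BINARY.get((left, right), set())
            chart[(i, k)] = unary_closure(cell)
    return "S" in chart[(0, n)] if n else False
```

As in the excerpt, "the cat ate a mouse" is accepted while "ate a mouse the cat" is rejected.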
1157
Which of the following statements are true?
[ "(figure residue: unrecoverable scatter-plot point markers)", "(figure residue)", "(figure residue)", "(figure residue)", "(figure residue)" ]
[ "The more training examples, the more accurate the prediction of a $k$-nearest-neighbor classifier.", "k-nearest-neighbors cannot be used for regression.", "A $k$-nearest-neighbor classifier is sensitive to outliers.", "Training a $k$-nearest-neighbor classifier takes more computational time than applying it / using it for prediction." ]
['The more training examples, the more accurate the prediction of a $k$-nearest-neighbor classifier.', 'A $k$-nearest-neighbor classifier is sensitive to outliers.']
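The selected statements say k-NN benefits from more training data and is sensitive to outliers. A tiny 1-NN sketch where a single mislabeled point flips a prediction, and a larger k outvotes it; all coordinates and labels are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, x, k):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.9), "B")]
outlier = ((1.0, 1.0), "B")  # mislabeled point sitting inside class A territory
query = (1.0, 0.9)

clean = knn_predict(train, query, 1)               # nearest clean point is "A"
noisy = knn_predict(train + [outlier], query, 1)   # one outlier flips the vote
robust = knn_predict(train + [outlier], query, 3)  # larger k outvotes it
```

Note there is no training step at all: the whole cost of k-NN is paid at prediction time, which is why the "training takes more time than prediction" statement is not among the answers.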
1158
Let $n$ be an integer such that $n\geq 2$, let $A \in \R^{n \times n}$ and $xv \in \R^n$, and consider the function $f(xv) = xv^\top A xv$ defined over $\R^n$. Which of the following is the gradient of the function $f$?
[ "∇f : E →E defined by the following property: ∀x, v ∈E, ⟨∇f(x), v⟩= Df(x)[v]. Exercise 2.19. Show that Definition 2.18 is valid, that is, show that ∇f(x) exists under the stated conditions, and is unique. Exercise 2.20. Let E = Rn with the usual inner product. Show that (in this important particular case) ∇f(x) ∈Rn is the vector whose entries are the n partial derivatives of f with respect to x1,, xn: ∇f(x) = [∂f(x) ∂x1,, ∂f(x) ∂xn ]<unk>. 16 CHAPTER 2. THE SETUP, THE RULES, THE GOAL Exercise 2.21. Consider the following function f : Sym(n) →R, where Sym(n) is equipped with the trace inner product: f(X) = 1 2∥X -M∥2, where M ∈Rn×n is not necessarily symmetric. Give an expression for ∇f(X). Be mindful that, by Definition 2.18, ∇f(X) must belong to Sym(n). We say f : E →R is twice differentiable if ∇f : E →E is differentiable. When such is the case, the differential of ∇f at x ∈E, namely, D(∇f)(x) is a linear operator from E to E, called the Hessian of f at x. For convenience, we denote it by ∇2f(x). Definition 2.22. The Hessian at x of a twice differentiable function f : E → R on a Euclidean space E is the linear map ∇2f(x): E →E defined by the following property: ∀v ∈E, ∇2f(x)[v] = D(∇f)(x)[v] = lim t→0 ∇f(x + tv) -∇f(x) t. Exercise 2.23. Let E = Rn with the usual inner product. 
Show that (in this important particular case) ∇2f(x) can be represented as a matrix", "( v ) + f 2 ( v ) {\\displaystyle f(\\mathbf {v} )=f_{1}(\\mathbf {v} )+f_{2}(\\mathbf {v} )} then ∂ f ∂ v ⋅ u = ( ∂ f 1 ∂ v + ∂ f 2 ∂ v ) ⋅ u {\\displaystyle {\\frac {\\partial f}{\\partial \\mathbf {v} }}\\cdot \\mathbf {u} =\\left({\\frac {\\partial f_{1}}{\\partial \\mathbf {v} }}+{\\frac {\\partial f_{2}}{\\partial \\mathbf {v} }}\\right)\\cdot \\mathbf {u} } If f ( v ) = f 1 ( v ) f 2 ( v ) {\\displaystyle f(\\mathbf {v} )=f_{1}(\\mathbf {v} )~f_{2}(\\mathbf {v} )} then ∂ f ∂ v ⋅ u = ( ∂ f 1 ∂ v ⋅ u ) f 2 ( v ) + f 1 ( v ) ( ∂ f 2 ∂ v ⋅ u ) {\\displaystyle {\\frac {\\partial f}{\\partial \\mathbf {v} }}\\cdot \\mathbf {u} =\\left({\\frac {\\partial f_{1}}{\\partial \\mathbf {v} }}\\cdot \\mathbf {u} \\right)~f_{2}(\\mathbf {v} )+f_{1}(\\mathbf {v} )~\\left({\\frac {\\partial f_{2}}{\\partial \\mathbf {v} }}\\cdot \\mathbf {u} \\right)} If f ( v ) = f 1 ( f 2 ( v ) ) {\\displaystyle f(\\mathbf {v} )=f_{1}(f_{2}(\\mathbf {v} ))} then ∂ f ∂ v ⋅ u = ∂ f 1 ∂ f", "\\varphi }}_{r}}}\\right)={\\frac {d}{dt}}\\left(Y{\\dot {\\varphi }}_{r}\\right)={\\frac {1}{2}}F{\\frac {\\partial Y}{\\partial \\varphi _{r}}}-{\\frac {\\partial V}{\\partial \\varphi _{r}}}.} Multiplying both sides by 2 Y φ ̇ r {\\displaystyle 2Y{\\dot {\\varphi }}_{r}}, re-arranging, and exploiting the relation 2T = YF yields the equation 2 Y φ ̇ r d d t ( Y φ ̇ r ) = 2 T φ ̇ r ∂ Y ∂ φ r − 2 Y φ ̇ r ∂ V ∂ φ r = 2 φ ̇ r ∂ ∂ φ r [ ( E − V ) Y ], {\\displaystyle 2Y{\\dot {\\varphi }}_{r}{\\frac {d}{dt}}\\left(Y{\\dot {\\varphi }}_{r}\\right)=2T{\\dot {\\varphi }}_{r}{\\frac {\\partial Y}{\\partial \\varphi _{r}}}-2Y{\\dot {\\varphi }}_{r}{\\frac {\\partial V}{\\partial \\varphi _{r}}}=2{\\dot {\\varphi }}_{r}{\\frac {\\partial }{\\partial \\varphi _{r}}}\\left[(E-V)Y\\right],} which may be written as d d t ( Y 2 φ ̇ r 2 ) = 2 E φ ̇ r ∂ Y ∂ φ r − 2 φ ̇ r ∂ W ∂ φ r = 2 E φ ̇ r d χ r d φ r − 2 φ ̇ r d ω r d φ r, 
{\\displaystyle {\\frac {d}{dt}}\\left(Y^{2}{\\dot {\\varphi }}_{r}^{2}\\right)=2", "Xn) are given by The n2 functions gij[f] form the entries of an n × n symmetric matrix, G[f]. If v = ∑ i = 1 n v i X i, w = ∑ i = 1 n w i X i {\\displaystyle v=\\sum _{i=1}^{n}v^{i}X_{i}\\,,\\quad w=\\sum _{i=1}^{n}w^{i}X_{i}} are two vectors at p ∈ U, then the value of the metric applied to v and w is determined by the coefficients (4) by bilinearity: g ( v, w ) = ∑ i, j = 1 n v i w j g ( X i, X j ) = ∑ i, j = 1 n v i w j g i j [ f ] {\\displaystyle g(v,w)=\\sum _{i,j=1}^{n}v^{i}w^{j}g\\left(X_{i},X_{j}\\right)=\\sum _{i,j=1}^{n}v^{i}w^{j}g_{ij}[\\mathbf {f} ]} Denoting the matrix (gij[f]) by G[f] and arranging the components of the vectors v and w into column vectors v[f] and w[f], g ( v, w ) = v [ f ] T G [ f ] w [ f ] = w [ f ] T G [ f ] v [ f ] {\\displaystyle g(v,w)=\\mathbf {v} [\\mathbf {f} ]^{\\mathsf {T}}G[\\mathbf {f} ]\\mathbf {w} [\\mathbf {f} ]=\\mathbf {w} [\\mathbf {f} ]^{\\mathsf {T}}G[\\mathbf {f} ]\\mathbf {v} [\\mathbf {f} ]} where v[f]T and w[f]T denote the transpose of the vectors v[f] and w[f", "existe et Df a v L a v c En particulier toutes le d eriv ee partielles existent et f xk a L a ek d Le graident de f existe en a et f a L a e L a e L a en e Pour tout v v Rn on a Df a v f a v f Pour tout v Rn v on a Df a v f a Le gradient donne la direction de la plus grande pente de f en a Plan tangent Soit E R x y E et f E R une fonction d erivable en x y Alors l equation du plan tangent au graphique de f en point x y f x y est z f x y f x y x x y y Soit f E R telle que la d eriv ee partielle f xk x existe en tout x E Si la fonction f xk admet a son tour une d eriv ee partielle par rapport a xi sur E on obtient la d eriv ee partielle d ordre xi f xk f xi xk On peut d e nir ainsi le d eriv ee partielles d ordre p Soit E Rn sou ensemble ouvert et p un nombre naturel Une fonction f E R est dite de classe Cp dans E si toutes le d eriv ee partielles de f d ordre p 
existent et sont continues dans E Soit E Rn sou ensemble ouvert et p un nombre naturel Alors f Cp E implique f Ck E pour tout k p Condition su sante pour que la fonction soit d erivalbe a un point Soit E Rn a E et f E R une fonction Supposons qu il existe tel que toutes le d eriv ee partielles de f existent dans une boule ouverte de centre a et de rayon et qu elles sont continues en a Alors f est d erivalbe en a En particulier f C E implique la d erivabilit e de f dans E Th eor eme de Schwarz Soit E Rn a E et f E R telle que f xi xk et f xk xi existent et sont continues au point a Alors f xi xk a f xk xi a En particulier f C E implique l egalit e de d eriv ee partielles secondes mixtes de f dans E Soit f E R telle que toutes le d eriv ee partielle d ordre f xk xi a existent en a E Alors la matrice hessienne" ]
[ "$2 xv^\top A$", "$2Axv$", "$A^\top xv + Axv$", "$2A^\top xv$" ]
$A^\top xv + Axv$ $\nabla f(xv) = A^\top xv + Axv$. Here the matrix $A$ is not assumed to be symmetric.
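The identity $\nabla f(x) = A^\top x + Ax$ for $f(x) = x^\top A x$ (which reduces to $2Ax$ only when $A$ is symmetric) can be checked numerically. A minimal sketch, comparing the analytic gradient against central finite differences for a deliberately non-symmetric $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # deliberately non-symmetric
x = rng.standard_normal(n)

f = lambda v: v @ A @ v           # f(x) = x^T A x

# Analytic gradient from the answer: (A^T + A) x
grad_analytic = A.T @ x + A @ x

# Central finite differences as an independent check
eps = 1e-6
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(n)])

assert np.allclose(grad_analytic, grad_fd, atol=1e-5)
```

For a quadratic, central differences are exact up to floating-point rounding, so the agreement confirms the $A^\top x + Ax$ form rather than $2Ax$.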
1160
Consider a classification problem using either SVMs or logistic regression and separable data. For logistic regression we use a small regularization term (penalty on the weights) in order to make the optimum well-defined. Consider a point that is correctly classified and distant from the decision boundary. Assume that we move this point slightly. What will happen to the decision boundary?
[ "s, including structured prediction problems. It is not clear that SVMs have better predictive performance than other linear models, such as logistic regression and linear regression. Motivation Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a p {\\displaystyle p} -dimensional vector (a list of p {\\displaystyle p} numbers), and we want to know whether we can separate such points with a ( p − 1 ) {\\displaystyle (p-1)} -dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier; or equivalently, the perceptron of optimal stability. More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high or infinite-dimensional space, which can be used for classification, regression, or other tasks like outliers detection. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. A lower generalization error means that the implementer is less likely to experience overfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. 
For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of", "case of separable data. This is shown in the following two figures (left with a large margin and therefore small ∥w∥, and right with small margin that corresponds to a large ∥w∥). Assume that λ is small so that the cost function is dominated by the sum over the hinge losses on the left. What will the optimal w look like in this case? It is clear that we want the following: 1. A separating hyperplane. 2. A scaling of w so that no point of the data is in the margin. 3. That separating hyperplane and scaling for which the margin is the largest. Conditions (1) and (2) ensure that there is no cost incurred in the first expression (the sum over the terms 1-ynx<unk> n w +). Since by assumption that λ is small this is the dominant term, we cannot hope to do better than having this term to be 0. The condition (3) ensures that we have the mini- mum possible cost associated for the regularization term, i.e. the minimal possible normed squared ∥w∥2. Geometrically this corresponds to a hyperplane with maximal “spacing” to the left and right, i.e., a hyperplane with maximal margin (corresponding to the figure on the left). NOTE: We have introduced our formulation of SVMs for the general case where the data is not necessarily linearly sepa- rable. This is sometimes called the soft-margin formulation. The hard-margin formulation concerns the case where the data is linearly separable and where we insist that the de- cision region is given in terms of a separating hyperplane. In this case the formulation would require us to find a sepa- rating hyperplane with minimal ∥w∥2. In other words, the optimal solution is a separating hyperplane with maximal margins. 
And in this case there must be some feature vec- tors which lie exactly on the boundaries (otherwise we could enlarge the margin, contradicting optimality). These feature vectors “support” the boundaries and hence the name “sup- port vector.” For the soft-margin case this interpretation is only approximately true as we have explained in the previous paragraphs. Optimization Now where we have established what function we are opti- mizing, let us look at", "In machine learning, the margin of a single data point is defined to be the distance from the data point to a decision boundary. Note that there are many distances and decision boundaries that may be appropriate for certain datasets and goals. A margin classifier is a classification model that utilizes the margin of each example to learn such classification. There are theoretical justifications (based on the VC dimension) as to why maximizing the margin (under some suitable constraints) may be beneficial for machine learning and statistical inference algorithms. For a given dataset, there may be many hyperplanes that could classify it. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the classes. Hence, one should choose the hyperplane such that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum margin classifier (or, equivalently, the perceptron of optimal stability).", "In a statistical-classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class. 
A decision boundary is the region of a problem space in which the output label of a classifier is ambiguous. If the decision surface is a hyperplane, then the classification problem is linear, and the classes are linearly separable. Decision boundaries are not always clear cut. That is, the transition from one class in the feature space to another is not discontinuous, but gradual. This effect is common in fuzzy logic based classification algorithms, where membership in one class or another is ambiguous. Decision boundaries can be approximations of optimal stopping boundaries. The decision boundary is the set of points of that hyperplane that pass through zero. For example, the angle between a vector and points in a set must be zero for points that are on or close to the decision boundary. Decision boundary instability can be incorporated with generalization error as a standard for selecting the most accurate and stable classifier. In neural networks and support vector models In the case of backpropagation based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of Rn as shown by the universal approximation theorem, thus it can have an arbitrary decision boundary. In particular, support vector machines find a hyperplane that separates the feature space into two classes with the maximum margin. If the problem is not originally linearly separable, the kernel trick can be used to turn it into a linearly separable one, by increasing the number of dimensions. Thus a general hypersurface in a small dimension space is turned into a hyperplane in a space with much larger dimensions. Neural networks try to learn", "look at schemes where the boundary between the two classes is a hyperplane. 
We can ask how to pick this boundary. To keep things simple, assume that there exists a “separating hyperplane” i.e., a hyperplane so that no point in the training set is misclassified. In general, there might be many hyperplanes that do the trick (assuming there is at least one). So which one should we pick? One idea is to pick a hyperplane so that the decision has as much “robustness/margin” with respect to the training set as is possible. I.e., if we slightly change the training set by “wiggling” the inputs we would like that the number of misclassifications stays low. In the figure below (taken from Wikipedia) we see that H1 does not separate the data, but H2 and H3 do. Between H2 and H3, H3 is preferable, since it has a larger “margin.” This idea will lead us to support vector machines (SVM) as well as logistic regression. Non-linear decision boundaries In many cases linear decision boundaries will not allow us to separate the data and non-linearities are needed. One option is to augment the feature vector with some non-linear functions. (The kernel trick is a method of doing this in an efficient way. We will learn about this at a later stage.) Another way is to find an appropriate non-linear transform of the input so that the transformed input is then linearly separable. This is what is done when we are using neural networks. Optimal classification for known generating model It is instructive to think about how one could classify in an optimal fashion, i.e., how one could minimize the probabil- ity of misclassification, if the distribution of the generating model was known. To be concrete, assume that we know the joint distribution p(x, y) and that y takes on elements in a discrete set Y. Given the “observation” (input x), let ˆy(x) be our estimate of the class label. What is the optimum choice for this func- tion? Note that our estimate is only a function of the input x. Further, for a" ]
[ "Small change for SVMs and small change for logistic regression.", "No change for SVMs and large change for logistic regression.", "No change for SVMs and no change for logistic regression.", "No change for SVMs and a small change for logistic regression.", "Large change for SVMs and large change for logistic regression.", "Large change for SVMs and no change for logistic regression.", "Small change for SVMs and no change for logistic regression.", "Small change for SVMs and large change for logistic regression.", "Large change for SVMs and small change for logistic regression." ]
No change for SVMs and a small change for logistic regression.
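The reason behind this answer can be seen directly from the per-example losses. For the hinge loss, a correctly classified point beyond the margin ($y\,w^\top x > 1$) contributes exactly zero (sub)gradient, so nudging it does not move the SVM solution at all (it is not a support vector); the logistic loss has a nonzero gradient everywhere, so the same nudge moves the logistic-regression boundary, but only by a tiny amount. A minimal numeric sketch (the margin value 10 is an arbitrary stand-in for "distant from the boundary"):

```python
import numpy as np

# Per-example losses as a function of the signed margin m = y * w^T x.
hinge = lambda m: max(0.0, 1.0 - m)
logistic = lambda m: np.log1p(np.exp(-m))

# (Sub)gradients of each loss w.r.t. the margin.
hinge_grad = lambda m: 0.0 if m > 1 else -1.0
logistic_grad = lambda m: -1.0 / (1.0 + np.exp(m))   # = -sigmoid(-m), never exactly 0

m_far = 10.0                   # correctly classified, far from the boundary
g_svm = hinge_grad(m_far)      # exactly zero: the point is not a support vector
g_lr = logistic_grad(m_far)    # nonzero but tiny (on the order of e^{-10})
```

So the SVM boundary sees no change, while the logistic-regression boundary shifts by a small but nonzero amount, matching the selected answer.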
1161
You are given a distribution on $X, Y$, and $Z$ and you know that the joint distribution can be written in the form $p(x, y, z)=p(x) p(y \mid x) p(z \mid y)$. What conclusion can you draw? [Recall that $\perp$ means independent and $\mid \cdots$ means conditioned on $\cdots$.]
[ "_{n},C_{k})=p(x_{i}\\mid C_{k})\\,.} Thus, the joint model can be expressed as p ( C k ∣ x 1,..., x n ) <unk> p ( C k, x 1,..., x n ) = p ( C k ) p ( x 1 ∣ C k ) p ( x 2 ∣ C k ) p ( x 3 ∣ C k ) ⋯ = p ( C k ) ∏ i = 1 n p ( x i ∣ C k ), {\\displaystyle {\\begin{aligned}p(C_{k}\\mid x_{1},\\ldots,x_{n})\\varpropto \\ &p(C_{k},x_{1},\\ldots,x_{n})\\\\&=p(C_{k})\\ p(x_{1}\\mid C_{k})\\ p(x_{2}\\mid C_{k})\\ p(x_{3}\\mid C_{k})\\ \\cdots \\\\&=p(C_{k})\\prod _{i=1}^{n}p(x_{i}\\mid C_{k})\\,,\\end{aligned}}} where <unk> {\\displaystyle \\varpropto } denotes proportionality since the denominator p ( x ) {\\displaystyle p(\\mathbf {x} )} is omitted. This means that under the above independence assumptions, the conditional distribution over the class variable C {\\displaystyle C} is: p ( C k ∣ x 1,..., x n ) = 1 Z p ( C k ) ∏ i = 1 n p ( x i ∣ C k ) {\\displaystyle p(C_{k}\\mid x_{1},\\ldots,x_{n})={\\frac {1}{Z}}\\ p(C_{k})\\prod _{i=1}^{n}p(x_{i}\\mid C_{k})} where the evidence Z = p ( x", "yle P(X\\mid Y=y)} is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values P ( Y ) {\\displaystyle P(Y)}, together with the distribution of observations given a label, P ( X ∣ Y ) {\\displaystyle P(X\\mid Y)} symbolically, P ( X, Y ) = P ( X ∣ Y ) P ( Y ). {\\displaystyle P(X,Y)=P(X\\mid Y)P(Y).} Thus, while a model of the joint probability distribution is more informative than a model of the distribution of label (but without their relative frequencies), it is a relatively small step, hence these are not always distinguished. 
Given a model of the joint distribution, P ( X, Y ) {\\displaystyle P(X,Y)}, the distribution of the individual variables can be computed as the marginal distributions P ( X ) = ∑ y P ( X, Y = y ) {\\displaystyle P(X)=\\sum _{y}P(X,Y=y)} and P ( Y ) = ∫ x P ( Y, X = x ) {\\displaystyle P(Y)=\\int _{x}P(Y,X=x)} (considering X as continuous, hence integrating over it, and Y as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: P ( X ∣ Y ) = P ( X, Y ) / P ( Y ) {\\displaystyle P(X\\mid Y)=P(X,Y)/P(Y)} and P ( Y ∣ X ) = P ( X, Y ) / P ( X ) {\\displaystyle P(Y\\mid X)=P(X,Y)/P(X)}. Given a model of one conditional probability, and estimated probability distributions for the variables X and Y, denoted P ( X ) {\\displaystyle P(X)} and P ( Y ) {\\displaystyle P(Y)}, one can estimate the opposite condition", "distribution F of density f is a set of n independent random variables which all have a distribution F. Equivalently we say that X1,, Xn are independent and identically distributed (iid) with distribution F, or with density f, and write X1,, Xn iid ∼F or X1,, Xn iid ∼f. By independence, the joint density of X1,, Xn iid ∼f is fX1,.,Xn(x1,, xn) = n 3 j=1 fX(xj). Example 161. If X1, X2, X3 iid ∼exp(λ), give their joint density. Probability and Statistics for SIC slide 187 111 Note to Example 159 (a) Since fX(0)fY (2) = 2 3 × 1 6 <unk>= fX,Y (0, 2) = 0, X and Y are dependent. This is obvious, because if I have the wrong hat (i.e., X = 0), then it is impossible that both other persons have the correct hats (i.e., Y = 2 is impossible). Finding a single pair (x, y) giving fX,Y (x, y) <unk>= fX(x)fY (y) is enough to show dependence, while to show independence it must be true that fX,Y (x, y) = fX(x)fY (y) for every possible (x, y). (b) In this case fX,Y (x, y) = + 2e-x-y, y > x > 0, 0, otherwise. 
and we previously saw that fX(x) = 2 exp(-2x)I(x > 0), fY (y) = 2 exp(-y){1 -exp(-y)}I(y > 0), so obviously the joint density is not the product of the marginals. This is equally obvious on looking at the conditional densities. In this case, the dependence is clear without any computations, as the support of (X, Y ) cannot be the product of sets IA(x)IB(y), but it would have to be if they were independent. (c) The density factorizes and", "displaystyle Q} 1. J joint distribution Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together. joint probability The probability of two events occurring together. The joint probability of A and B is written P ( A ∩ B ) {\\displaystyle P(A\\cap B)} or P ( A, B ) {\\displaystyle P(A,\\ B)}. K Kalman filter kernel kernel density estimation kurtosis A measure of the \"tailedness\" of the probability distribution of a real-valued random variable. There are different ways of quantifying, estimating, and interpreting kurtosis, but a common interpretation is that kurtosis represents the degree to which the shape of the distribution is influenced by infrequent extreme observations (outliers); in this case, higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations. L L-moment law of large numbers (LLN) A theorem according to which the average of the results obtained from performing the same experiment a large number of times should be close to the experiment's expected value, and tends to become closer to the expected value as more trials are performed. The law suggests that a sufficiently large number of trials is necessary for the results of any experiment to be considered reliable, and by extension that performing only a small number of trials may produce an incomplete or misleading interpretation of the experiment's outcomes. 
likelihood function A conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with a number k from a bag of n balls, numbered 1 to n; a likelihood function for the random variable N could be described as the probability of pulling k given that there are n balls: the likelihood will be 1/n for n greater than or equal to k, and 0 for n smaller than k. Unlike a probability distribution function, this likelihood function will not sum up to 1 on the sample space. loss function likelihood-ratio test M M-estimator marginal distribution Given two jointly distributed random variables X", "requires computing the joint distribution of Y1 and Y2 (which might be difficult depending on the functions f1 and f2), but the following proposition [given here without proof, but connected to the fact that the knowledge of a distribution μX is equivalent to that of its cdf FX] allows for a much cleaner answer. 17 Proposition 3.4. X1, X2 are independent if and only if P({X1 ∈B1, X2 ∈B2}) = P({X1 ∈B1}) P({X2 ∈B2}), ∀B1, B2 ∈B(R) From this, we deduce the following: Proposition 3.5. Let f1, f2 : R →R be two Borel-measurable functions. If X1, X2 are independent random variables, then Y1 = f1(X1) and Y2 = f2(X2) are also independent random variables. Proof. From the assumption made, we have for every B1, B2 ∈B(R): P({Y1 ∈B1, Y2 ∈B2}) = P({f1(X1) ∈B1, f2(X2) ∈B2}) = P({X1 ∈f -1 1 (B1), X2 ∈f -1 2 (B2)}) = P({X1 ∈f -1 1 (B1)}) P({X2 ∈f -1 2 (B2)}) = P({f1(X1) ∈B1}) P({f2(X2) ∈B2}) = P({Y1 ∈B1}) P({Y2 ∈B2}) Note that f1, f2 need not be invertible for the above equalities to hold: f -1 i (Bi) is just a notation for the pre-image of Bi via the function fi. Further simplifications of Definition 3.3 occur in the two following situations [again, without proof]: - Assume X1, X2 are two discrete random variables, taking values in a common discrete set D 3. 
Then X1, X2 are independent if and only if P({X1 = x1, X2 = x2}) = P({" ]
[ "$Y \\perp Z$", "$X \\perp Y \\mid Z$", "$Y \\perp Z \\quad X$", "$X \\perp Z$", "$X \\perp Y$", "$X \\perp Z \\quad \\mid Y$" ]
$X \perp Z \quad \mid Y$
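The Markov-chain factorization $p(x,y,z)=p(x)\,p(y\mid x)\,p(z\mid y)$ implies $X \perp Z \mid Y$ but not, in general, marginal independence of $X$ and $Z$. A minimal sketch (random binary factors, assumed for illustration) that builds such a joint and verifies $p(x,z\mid y)=p(x\mid y)\,p(z\mid y)$ for every $y$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random discrete factors for p(x, y, z) = p(x) p(y|x) p(z|y), each variable binary.
px = rng.dirichlet(np.ones(2))              # p(x)
py_x = rng.dirichlet(np.ones(2), size=2)    # py_x[x, y] = p(y | x)
pz_y = rng.dirichlet(np.ones(2), size=2)    # pz_y[y, z] = p(z | y)

# Full joint p[x, y, z] = p(x) p(y|x) p(z|y)
p = np.einsum('x,xy,yz->xyz', px, py_x, pz_y)
assert np.isclose(p.sum(), 1.0)

# Check X ⟂ Z | Y: p(x, z | y) factors as p(x | y) p(z | y) for each y.
for y_val in range(2):
    pxz_given_y = p[:, y_val, :] / p[:, y_val, :].sum()   # p(x, z | y)
    px_given_y = pxz_given_y.sum(axis=1)                  # p(x | y)
    pz_given_y = pxz_given_y.sum(axis=0)                  # p(z | y)
    assert np.allclose(pxz_given_y, np.outer(px_given_y, pz_given_y))

# Marginally, X and Z are generically dependent under this factorization.
pxz = p.sum(axis=1)
marginally_indep = np.allclose(pxz, np.outer(pxz.sum(axis=1), pxz.sum(axis=0)))
```

For random factor draws `marginally_indep` is generically `False`, which is why only the conditional statement $X \perp Z \mid Y$ can be concluded.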
1164
(Weight initialization) The choice of weight initialization will not impact the optimization behavior of the neural network.
[ "In deep learning, weight initialization or parameter initialization describes the initial step in creating a neural network. A neural network contains trainable parameters that are modified during training: weight initialization is the pre-training step of assigning initial values to these parameters. The choice of weight initialization method affects the speed of convergence, the scale of neural activation within the network, the scale of gradient signals during backpropagation, and the quality of the final model. Proper initialization is necessary for avoiding issues such as vanishing and exploding gradients and activation function saturation. Note that even though this article is titled \"weight initialization\", both weights and biases are used in a neural network as trainable parameters, so this article describes how both of these are initialized. Similarly, trainable parameters in convolutional neural networks (CNNs) are called kernels and biases, and this article also describes these. Constant initialization We discuss the main methods of initialization in the context of a multilayer perceptron (MLP). Specific strategies for initializing other network architectures are discussed in later sections. For an MLP, there are only two kinds of trainable parameters, called weights and biases. Each layer l {\\displaystyle l} contains a weight matrix W ( l ) ∈ R n l − 1 × n l {\\displaystyle W^{(l)}\\in \\mathbb {R} ^{n_{l-1}\\times n_{l}}} and a bias vector b ( l ) ∈ R n l {\\displaystyle b^{(l)}\\in \\mathbb {R} ^{n_{l}}}, where n l {\\displaystyle n_{l}} is the number of neurons in that layer. A weight initialization method is an algorithm for setting the initial values for W ( l ), b ( l ) {\\displaystyle W^{(l)},b^{(l)}} for each layer l {\\displaystyle l}. The simplest form is zero initialization: W ( l ) = 0, b ( l ) = 0 {\\displaystyle W^{(l)}=0,b^", "Newton method to directly train deep networks. 
The work generated considerable excitement that initializing networks without pre-training phase was possible. However, a 2013 paper demonstrated that with well-chosen hyperparameters, momentum gradient descent with weight initialization was sufficient for training neural networks, without needing either quasi-Newton method or generative pre-training, a combination that is still in use as of 2024. Since then, the impact of initialization on tuning the variance has become less important, with methods developed to automatically tune variance, like batch normalization tuning the variance of the forward pass, and momentum-based optimizers tuning the variance of the backward pass. There is a tension between using careful weight initialization to decrease the need for normalization, and using normalization to decrease the need for careful weight initialization, with each approach having its tradeoffs. For example, batch normalization causes training examples in the minibatch to become dependent, an undesirable trait, while weight initialization is architecture-dependent. See also Backpropagation Gradient descent Vanishing gradient problem References Further reading Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). \"8.4 Parameter Initialization Strategies\". Deep learning. Adaptive computation and machine learning. Cambridge, Mass: The MIT press. ISBN 978-0-262-03561-3. Narkhede, Meenal V.; Bartakke, Prashant P.; Sutaone, Mukul S. (June 28, 2021). \"A review on weight initialization strategies for neural networks\". Artificial Intelligence Review. 55 (1). Springer Science and Business Media LLC: 291–322. doi:10.1007/s10462-021-10033-z. ISSN 0269-2821.", "{R} ^{n_{l}}}, where n l {\\displaystyle n_{l}} is the number of neurons in that layer. A weight initialization method is an algorithm for setting the initial values for W ( l ), b ( l ) {\\displaystyle W^{(l)},b^{(l)}} for each layer l {\\displaystyle l}. 
The simplest form is zero initialization: W ( l ) = 0, b ( l ) = 0 {\\displaystyle W^{(l)}=0,b^{(l)}=0} Zero initialization is usually used for initializing biases, but it is not used for initializing weights, as it leads to symmetry in the network, causing all neurons to learn the same features. In this page, we assume b = 0 {\\displaystyle b=0} unless otherwise stated. Recurrent neural networks typically use activation functions with bounded range, such as sigmoid and tanh, since unbounded activation may cause exploding values. (Le, Jaitly, Hinton, 2015) suggested initializing weights in the recurrent parts of the network to identity and zero bias, similar to the idea of residual connections and LSTM with no forget gate. In most cases, the biases are initialized to zero, though some situations can use a nonzero initialization. For example, in multiplicative units, such as the forget gate of LSTM, the bias can be initialized to 1 to allow good gradient signal through the gate. For neurons with ReLU activation, one can initialize the bias to a small positive value like 0.1, so that the gradient is likely nonzero at initialization, avoiding the dying ReLU problem.: 305 Random initialization Random initialization means sampling the weights from a normal distribution or a uniform distribution, usually independently. LeCun initialization LeCun initialization, popularized in (LeCun et al., 1998), is designed to preserve the variance of neural activations during the forward pass. 
It samples each entry in W ( l ) {\\displaystyle W^{(l)}} independently from a distribution with mean 0 and variance 1 / n l − 1 {\\displayst", "Training Neural Networks Loss Optimization Finding network weight that achieve the lowest loss arg min arg min Given n training pair input and label Loss Optimization Finding network weight that achieve the lowest loss arg min arg min Given training pair input and label Loss Optimization Finding network weight that achieve the lowest loss arg min arg min Randomly pick a point Compute gradient Take a small step in opposite direction of gradient Given training pair input and label Loss Optimization Finding network weight that achieve the lowest loss arg min arg min Randomly pick a point Compute gradient Take a small step in opposite direction of gradient Given training pair input and label Loss Optimization Finding network weight that achieve the lowest loss arg min arg min Randomly pick a point Compute gradient Take a small step in the opposite direction of the gradient Repeat this process until convergence Given training pair input and label Gradient Descent Computationally intensive Algorithm Initialize weight randomly Loop until convergence Compute gradient Update weight Return weight Stochastic Gradient Descent Easy to compute but very noisy Algorithm Initialize weight randomly Loop until convergence Pick random single sample Compute gradient $ + )+ ○ Update weights, ←- )* + )+ ○ Return weights!, \" &'75 Mini-Batch Gradient Descent ● Algorithm: ○ Initialize weights randomly ~0, & ○ Loop until convergence: ○ Pick a mini-batch of data samples ○ Compute gradient, )* + )+ = %, ∑-.%, )*% + )+ ○ Update weights, ←- )* + )+ ○ Return weights Better estimation of true gradient and fast to compute, smoother convergence!, \" &'76 Backpropagation Using Chain Rule % F % & & = F ∗F & Let’s apply chain rule! 
77 Backpropagation Using Chain Rule % F % & % = F ∗F % Apply chain rule 78 Backpropagation Using Chain Rule % = F ∗F % ∗% % % F % & Apply chain rule 79 Training Deep Networks % = h/ ∗h/ h/$ $% h/$ In most", "', ⋯ Given n training pairs ';'(input and labels) 69 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < '.! 4 L' ;,'∗= arg min 3 = &, ', ⋯ &'&,'Given training pairs ';'(input and labels) 70 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < '.! 4 L' ;,'∗= arg min 3 = &, ', ⋯ ○ Randomly pick a point (, % ○ Compute gradient, )* + )+ ○ Take a small step in opposite direction of gradient &'&,'Given training pairs ';'(input and labels) 71 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < '.! 4 L' ;,'∗= arg min 3 = &, ', ⋯ ○ Randomly pick a point (, % ○ Compute gradient, )* + )+ ○ Take a small step in opposite direction of gradient &'&,'Given training pairs ';'(input and labels) 72 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < '.! 4 L' ;,'∗= arg min 3 = &, ', ⋯ ○ Randomly pick a point (, % ○ Compute gradient, )* + )+ ○ Take a small step in the opposite direction of the gradient ○ Repeat this process until convergence &'&,'Given training pairs ';'(input and labels) 73 Gradient Descent Computationally intensive ● Algorithm: ○ Initialize weights randomly ~0, & ○ Loop until convergence: ○ Compute gradient, )* + )+ ○ Update weights, ←- )* + )+ ○ Return weights!, \" &'74 Stochastic Gradient Descent Easy to compute but very noisy! ● Algorithm: ○ Initialize weights randomly ~0, & ○ Loop until convergence:" ]
[ "True", "False" ]
False
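The gradient-descent variants described in the context passages above (full-batch, stochastic, and mini-batch) can be sketched concretely. A minimal Python/NumPy sketch, assuming a plain linear model with squared loss; the toy data, learning rate, and batch size are illustrative choices, not from the source:

```python
import numpy as np

def minibatch_gd(X, y, lr=0.05, batch_size=8, epochs=200, seed=0):
    """Mini-batch gradient descent for least-squares linear regression.

    Loss: L(w) = (1/N) * sum_n (y_n - x_n^T w)^2
    Batch gradient: (2/|B|) * X_B^T (X_B w - y_B)
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w = rng.normal(0.0, 0.1, size=D)              # initialize weights randomly
    for _ in range(epochs):
        idx = rng.permutation(N)                  # reshuffle each epoch
        for start in range(0, N, batch_size):
            b = idx[start:start + batch_size]
            grad = 2.0 / len(b) * X[b].T @ (X[b] @ w - y[b])
            w -= lr * grad                        # step against the gradient
    return w

# Toy data: y = 3*x1 - 2*x2 exactly, so GD should recover w close to [3, -2].
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -2.0])
w = minibatch_gd(X, y)
print(np.round(w, 3))
```

Averaging the gradient over a batch reduces the noise of single-sample SGD while staying much cheaper than a full pass over the data, which is exactly the trade-off the slides describe.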
1165
Under certain conditions, maximizing the log-likelihood is equivalent to minimizing mean-squared error for linear regression. The mean-squared error can be defined as $\mathcal{L}_{m s e}(\mathbf{w}):=$ $\frac{1}{2 N} \sum_{n=1}^{N}\left(y_{n}-\widetilde{\mathbf{x}}_{n}^{\top} \mathbf{w}\right)^{2}$ and $y_{n}=\widetilde{\mathbf{x}}_{n}^{\top} \mathbf{w}+\varepsilon_{n}$ is assumed for the probabilistic model. Which of the following conditions is necessary for the equivalence?
[ "| w) = p(X | w)p(y | X, w) = p(X)p(y | X, w), where in the second step we have made the natural assumption that the X data does not depend on the parameter we choose in our model. Note that this is an assumption and part of our model. But now note that the factor p(X) is a constant wrt to the choice of w, and hence plays no role when we apply the maximum likelihood criterion. Maximum likelihood criterion Recall what we did so far. Under the assumption that the samples are independent we have written down the likelihood of the data given a particular choice of weights w. We then choose the weights w that maximize this likelihood. Equivalently, we choose the weights that maximize the log-likelihood. This is called the maximum- likelihood criterion. In a final reformulation, we added a negative sign to bring the cost function to our standard form and called it L(w). In this form, we are looking for the weights w that minimize L(w). In formulae, we choose the weight w⋆, so that w⋆= argminw L(w). As we discussed in that context of the probabilis- tic interpretation of the least squares problem, one justification of the maximum-likelihood criterion is that, under some mild technical conditions, it is consistent. I.e., if we assume that the data was generated according to a model in this class and we have i.i.d. samples and we use this procedure to estimate the underlying parameter, then our esti- mate will converge to the true parameter if we get more and more data. Of course, in practice the data is unlikely being generated in this way and there might not be any probabilistic model under- lying it. But nevertheless, this gives our method a theoretical justification. Conditions of optimality As we want to minimize L(w), let us look at the stationary points of this function by computing the gradient, setting it to zero, and solving for w. Note that ∂ln[1 + exp(x)] ∂x = σ(x). 
Therefore ∇L(w) = 1 N N X n=1 x", "Machine Learning Course - CS-433 Maximum Likelihood Oct 5, 2022 Martin Jaggi Last updated on: October 3, 2022 credits to Mohammad Emtiyaz Khan & R ̈udiger Urbanke Motivation In the previous lecture 3a we arrived at the least-squares problem in the following way: we postulated a particular cost function (square loss) and then, given data, found that model that minimizes this cost function. In the current lec- ture we will take an alternative route. The final answer will be the same, but our starting point will be probabilistic. In this way we find a second interpretation of the least-squares problem. 1.2 1.4 1.6 1.8 2 0 20 40 60 80 100 120 140 x y -20 -10 0 10 20 0 200 400 600 800 1000 1200 Error in prediction Gaussian distribution and independence Recall the definition of a Gaussian random variable in R with mean μ and variance σ2. It has a density of p(y | μ, σ2) = N(y | μ, σ2) = 1 √ 2πσ2 exp -(y -μ)2 2σ2. In a similar manner, the density of a Gaussian random vector with mean μ and covariance Σ (which must be a positive semi-definite matrix) is N(y | μ, Σ) = 1 p (2π)D det(Σ) exp -1 2(y -μ)<unk>Σ-1(y -μ). Also recall that two random vari- ables X and Y are called indepen- dent when p(x, y) = p(x)p(y). A probabilistic model for least-squares We assume that our data is gener- ated by the model, yn = x<unk> n w + εn, where the εn (the noise) is a zero- mean Gaussian random variable with variance σ2 and the noise that is added to the various samples is independent of each other, and in- dependent of the input. Note that the model w is unknown. 
Therefore, given $N$ samples, the likelihood of the data vector $\mathbf{y} = (y_1, \cdots, y_N)$ given the input $\mathbf{X}$ (each row is one input) and the model $\mathbf{w}$ is equal to $p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}) = \prod_{n}$", "Machine Learning Course - CS-433, Least Squares, Oct 4, 2022, Martin Jaggi. Last updated on: October 3, 2022. Credits to Mohammad Emtiyaz Khan & Rüdiger Urbanke. Motivation: In rare cases, one can compute the optimum of the cost function analytically. Linear regression using a mean-squared error cost function is one such case. Here the solution can be obtained explicitly, by solving a linear system of equations. These equations are sometimes called the normal equations. This method is one of the most popular methods for data fitting. It is called least squares. To derive the normal equations, we first show that the problem is convex. We then use the optimality conditions for convex functions (see the previous lecture notes on optimization). I.e., at the optimum parameter, call it $\mathbf{w}^{\star}$, it must be true that the gradient of the cost function is 0, i.e., $\nabla \mathcal{L}(\mathbf{w}^{\star}) = 0$. This is a system of $D$ equations. Normal Equations: Recall that the cost function for linear regression with mean-squared error is given by $\mathcal{L}(\mathbf{w}) = \frac{1}{2N} \sum_{n=1}^{N}$", ") (e) $= \operatorname{argmin}_{\mathbf{w}} -\log \prod_{n=1}^{N} p(y_n \mid \mathbf{x}_n, \mathbf{w})$ (f) $= \operatorname{argmin}_{\mathbf{w}} -\log \prod_{n=1}^{N} \mathcal{N}(y_n \mid \mathbf{x}_n^{\top} \mathbf{w}, \sigma^2) = \operatorname{argmin}_{\mathbf{w}} -\log \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(y_n - \mathbf{x}_n^{\top} \mathbf{w})^2} = \operatorname{argmin}_{\mathbf{w}} -N \log \frac{1}{\sqrt{2\pi\sigma^2}} + \sum_{n=1}^{N} \frac{1}{2\sigma^2} (y_n - \mathbf{x}_n^{\top} \mathbf{w})^2 = \operatorname{argmin}_{\mathbf{w}} \frac{1}{2\sigma^2} \sum_{n=1}^{N} (y_n - \mathbf{x}_n^{\top} \mathbf{w})^2$. In step (a) on the right we wrote down the negative of the log of the likelihood. The maximum likelihood criterion chooses the parameter $\mathbf{w}$ that minimizes this quantity (i.e., maximizes the likelihood). In step (b) we factored the likelihood. The usual assumption is that the choice of the input samples $\mathbf{x}_n$ does not depend on the model parameter (which only influences the output given the input). Hence, in step (c) we removed the conditioning.
Since the factor $p(\mathbf{X})$ does not depend on $\mathbf{w}$ (i.e., it is a constant w.r.t. $\mathbf{w}$), we can remove it. This is done in step (d). In step (e) we used the assumption that the samples are i.i.d. In step (f) we then used our assumption that the samples have the form $y_n = \mathbf{x}_n^{\top} \mathbf{w} + Z_n$, where $Z_n$ is Gaussian noise with mean zero and variance $\sigma^2$. The rest is calculus. Ridge regression has a very similar interpretation. Now we start with the posterior $p(\mathbf{w} \mid \mathbf{X}, \mathbf{y})$ and choose the parameter $\mathbf{w}$ that maximizes this posterior. Hence this is called the maximum-a-posteriori (MAP) estimate. As before, we take the log, add a minus sign, and minimize instead. In order to compute the posterior we use Bayes' law, and we assume that the components of the weight vector are i.i.d. Gaussians with mean zero and variance $\frac{1}{\lambda}$. $\mathbf{w}_{\text{ridge}} = \operatorname{argmin}_{\mathbf{w}} -\log p(\mathbf{w} \mid \mathbf{X}, \mathbf{y})$ (a) $= \operatorname{argmin}_{\mathbf{w}} -\log \frac{p(\mathbf{y}, \mathbf{X} \mid \mathbf{w}) p(\mathbf{w})}{p(\mathbf{y}, \mathbf{X})}$ (b", "$\mathbf{x}_n^{\top} \mathbf{w} + \varepsilon_n$, where $\varepsilon_n$ (the noise) is a zero-mean Gaussian random variable with variance $\sigma^2$, and the noise that is added to the various samples is independent of each other, and independent of the input. Note that the model $\mathbf{w}$ is unknown. Therefore, given $N$ samples, the likelihood of the data vector $\mathbf{y} = (y_1, \cdots, y_N)$ given the input $\mathbf{X}$ (each row is one input) and the model $\mathbf{w}$ is equal to $p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}) = \prod_{n=1}^{N} p(y_n \mid \mathbf{x}_n, \mathbf{w}) = \prod_{n=1}^{N} \mathcal{N}(y_n \mid \mathbf{x}_n^{\top} \mathbf{w}, \sigma^2)$. The probabilistic viewpoint is that we should maximize this likelihood over the choice of model $\mathbf{w}$: the 'best' model is the one that maximizes this likelihood. Defining cost with log-likelihood: Instead of maximizing the likelihood, we can take the logarithm of the likelihood and maximize it instead. This expression is called the log-likelihood (LL): $\mathcal{L}_{LL}(\mathbf{w}) := \log p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}) = -\frac{1}{2\sigma^2} \sum_{n=1}^{N} (y_n - \mathbf{x}_n^{\top} \mathbf{w})^2 + \text{cnst}$.
Compare the LL to the MSE (mean squared error): $\mathcal{L}_{LL}(\mathbf{w}) = -\frac{1}{2\sigma^2} \sum_{n=1}^{N} (y_n - \mathbf{x}_n^{\top} \mathbf{w})^2 + \text{cnst}$ and $\mathcal{L}_{MSE}(\mathbf{w}) = \frac{1}{2N} \sum_{n=1}^{N} (y_n - \mathbf{x}_n^{\top} \mathbf{w})^2$. Maximum-likelihood estimator (MLE): It is clear that maximizing the LL is equivalent to minimizing the MSE: $\operatorname{argmin}_{\mathbf{w}} \mathcal{L}_{MSE}(\mathbf{w}) = \operatorname{argmax}_{\mathbf{w}} \mathcal{L}_{LL}(\mathbf{w})$. This gives us another way to design cost functions. MLE can also be interpreted as finding the model under which the observed data is most likely to have been generated (probabilistically). This interpretation has some advantages that we discuss now. Properties of MLE: MLE is a sample approximation to the expected log-likelihood: $\mathcal{L}_{LL}(\mathbf{w}) \approx \mathbb{E}_{p(y,\mathbf{x})} \log p(y \mid \mathbf{x},$" ]
[ "The noise parameter $\\varepsilon_{n}$ should have a normal distribution.", "The target variable $y_{n}$ should have a normal distribution.", "The i.i.d. assumption on the variable $\\mathbf{w}$.", "The conditional probability $p\\left(y_{n} \\mid \\widetilde{\\mathbf{x}}_{n}, \\mathbf{w}\\right)$ should follow a Laplacian distribution.", "The noise parameter $\\varepsilon_{n}$ should have non-zero mean." ]
The noise parameter $\varepsilon_{n}$ should have a normal distribution.
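The correct option above hinges on the Gaussian-noise assumption: with $\varepsilon_n \sim \mathcal{N}(0, \sigma^2)$, the log-likelihood is a negatively scaled affine function of the MSE, so both are optimized by the same weights. A small numerical sketch under that assumption (toy data, $\sigma = 1$, and a grid search for illustration; none of these specifics come from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
N, w_true, sigma = 200, 1.5, 1.0
x = rng.normal(size=N)
y = w_true * x + rng.normal(scale=sigma, size=N)   # y_n = x_n * w + eps_n, eps_n ~ N(0, sigma^2)

def mse(w):
    # L_mse(w) = (1/2N) * sum_n (y_n - x_n * w)^2
    return 0.5 / N * np.sum((y - x * w) ** 2)

def log_likelihood(w):
    # sum_n log N(y_n | x_n * w, sigma^2)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - (y - x * w) ** 2 / (2 * sigma ** 2))

grid = np.linspace(0.0, 3.0, 601)
w_mse = grid[np.argmin([mse(w) for w in grid])]
w_mle = grid[np.argmax([log_likelihood(w) for w in grid])]
print(w_mse, w_mle)   # the two grid optima coincide
```

Since $\mathcal{L}_{LL}(w) = \text{cnst} - (N/\sigma^2)\,\mathcal{L}_{MSE}(w)$, the argmax of the LL and the argmin of the MSE land on the same grid point by construction.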
1167
Consider our standard least-squares problem $$ \operatorname{argmin}_{\mathbf{w}} \mathcal{L}(\mathbf{w})=\operatorname{argmin}_{\mathbf{w}} \frac{1}{2} \sum_{n=1}^{N}\left(y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right)^{2}+\frac{\lambda}{2} \sum_{d=1}^{D} w_{d}^{2} $$ Here, $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ is the data. The $N$-length vector of outputs is denoted by $\mathbf{y}$. The $N \times D$ data matrix is called $\mathbf{X}$. Its rows contain the tuples $\mathbf{x}_{n}$. Finally, the parameter vector of length $D$ is called $\mathbf{w}$. (All just like we defined in the course). Mark any of the following formulas that represent an equivalent way of solving this problem.
[ ") − y j ) 2 = ( ⟨ w, x j ⟩ − y j ) 2 {\\displaystyle V(f(x_{j}),y_{j})=(f(x_{j})-y_{j})^{2}=(\\langle w,x_{j}\\rangle -y_{j})^{2}} is used to compute the vector w {\\displaystyle w} that minimizes the empirical loss I n [ w ] = ∑ j = 1 n V ( ⟨ w, x j ⟩, y j ) = ∑ j = 1 n ( x j T w − y j ) 2 {\\displaystyle I_{n}[w]=\\sum _{j=1}^{n}V(\\langle w,x_{j}\\rangle,y_{j})=\\sum _{j=1}^{n}(x_{j}^{\\mathsf {T}}w-y_{j})^{2}} where y j ∈ R. {\\displaystyle y_{j}\\in \\mathbb {R}.} Let X {\\displaystyle X} be the i × d {\\displaystyle i\\times d} data matrix and y ∈ R i {\\displaystyle y\\in \\mathbb {R} ^{i}} is the column vector of target values after the arrival of the first i {\\displaystyle i} data points. Assuming that the covariance matrix Σ i = X T X {\\displaystyle \\Sigma _{i}=X^{\\mathsf {T}}X} is invertible (otherwise it is preferential to proceed in a similar fashion with Tikhonov regularization), the best solution f ∗ ( x ) = ⟨ w ∗, x ⟩ {\\displaystyle f^{*}(x)=\\langle w^{*},x\\rangle } to the linear least squares problem is given by w ∗ = ( X T X ) − 1 X T y = Σ i − 1 ∑ j = 1 i x j y j. {\\displaystyle w^{*}=(", ". This is the task of estimating w ∗ {\\displaystyle {\\mathbf {w}}^{*}} given N {\\displaystyle N} samples ( x i, y i ) {\\displaystyle ({\\mathbf {x}}_{i},y_{i})} generated from y ∗ ( x ) = w ∗ ⋅ x {\\displaystyle y^{*}({\\mathbf {x}})={\\mathbf {w}}^{*}\\cdot {\\mathbf {x}}}, where each x i {\\displaystyle \\mathbf {x} _{i}} is drawn according to some input data distribution. In this setup, w ∗ {\\displaystyle {\\mathbf {w}}^{*}} is the weight vector which defines the true function y ∗ {\\displaystyle y^{*}} we wish to use the training samples to develop a model w ^ {\\displaystyle \\mathbf {\\hat {w}} } which approximates w ∗ {\\displaystyle {\\mathbf {w}}^{*}}. 
We do this by minimizing the mean-square error between our model and the training samples: w ^ = arg ⁡ min w 1 N ∑ i = 0 N | | y ∗ ( x i ) − w ⋅ x i | | 2 {\\displaystyle {\\mathbf {\\hat {w}} }=\\arg \\min _{\\mathbf {w} }{\\frac {1}{N}}\\sum _{i=0}^{N}||y^{*}({\\mathbf {x} }_{i})-{\\mathbf {w} }\\cdot {\\mathbf {x} }_{i}||^{2}} There exists an explicit solution for w ^ {\\displaystyle \\mathbf {\\hat {w}} } which minimizes the squared error: w ^ = ( X X T ) − 1 X y {\\displaystyle {\\mathbf {\\hat {w}}}=({\\mathbf {X}}{\\mathbf {X}}^{T})^{-1}{\\mathbf {X}}{\\mathbf {y}", "CS-448 Sublinear Algorithms for Big Data Analysis October 27, 2021 Lecture 6 Lecturer: Michael Kapralov Scribes: Michael Kapralov 1 Least squares regression The exact least squares regression is the following problem: given A ∈Rn×d and b ∈Rn, find x∗= argminx∈Rd||Ax -b||2. The least squares problem often comes up in the following setting. We are observing samples ai ∈Rd, i = 1,, n (rows of A) together with a value of some unknown function f on the samples, perhaps corrupted by noise. The value of the function on the i-th sample is denoted by bi. Then if the function f is linear in the attributes of the sample, i.e. coordinates of ai, the least squares problem is asking to recover the coefficients x that allow one to predict bi from ai. In fact, in fairly general settings (e.g. when the vector b equals the value of the unknown linear function plus i.i.d. noise), a least squares fit is the best (unbiased) estimate of the linear function that one can obtain from the samples – see the Gauss-Markov theorem. How do we solve least squares in general? The solution is (AT A)+AT b, where (AT A)+ is the Moore-Penrose pseudoinverse of AT A, and can be computed via an SVD computation, taking O(nd2) time. The approximate least squares problem is the following. We are given A ∈Rn×d, b ∈Rn, ε ∈(0, 1). Let x∗:= argminx∈Rd||Ax -b||2. We would like to find x′ ∈Rd such that ||Ax′ -b||2 ≤(1 + ε)||Ax∗-b||2. 
(1) We will solve least squares approximately using subspace embeddings: Definition 1 A random matrix Π ∈Rm×n is a (d, ε, δ)-subspace embedding if for every d-dimensional subspace P", "Λ is the diagonal matrix of eigenvalues λ(k) of XTX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk2(i) = Σi (x(i) ⋅ w(k))2. Dimensionality reduction The transformation P = X W maps a data vector x(i) from an original space of x variables to a new space of p variables which are uncorrelated over the dataset. To non-dimensionalize the centered data, let Xc represent the characteristic values of data vectors Xi, given by: ‖ X ‖ ∞ {\\displaystyle \\|X\\|_{\\infty }} (maximum norm), 1 n ‖ X ‖ 1 {\\displaystyle {\\frac {1}{n}}\\|X\\|_{1}} (mean absolute value), or 1 n ‖ X ‖ 2 {\\displaystyle {\\frac {1}{\\sqrt {n}}}\\|X\\|_{2}} (normalized Euclidean norm), for a dataset of size n. These norms are used to transform the original space of variables x, y to a new space of uncorrelated variables p, q (given Yc with same meaning), such that p i = X i X c, q i = Y i Y c {\\displaystyle p_{i}={\\frac {X_{i}}{X_{c}}},\\quad q_{i}={\\frac {Y_{i}}{Y_{c}}}} and the new variables are linearly related as: q = α p {\\displaystyle q=\\alpha p}. To find the optimal linear relationship, we minimize the total squared reconstruction error: E ( α ) = 1 1 − α 2 ∑ i = 1 n ( α p i − q i ) 2 {\\displaystyle E(\\alpha )={\\frac {1}{1-\\alpha ^{2}}}\\sum _{i=1}^{n}(\\alpha p_{i}-q_{i})^{2}} such that setting the derivative of the error function", "}}_{1},{\\hat {y}}_{2},\\ldots,{\\hat {y}}_{n})} using least squares. The objective function to be minimized is Q ( w ) = ∑ i = 1 n Q i ( w ) = ∑ i = 1 n ( y ^ i − y i ) 2 = ∑ i = 1 n ( w 1 + w 2 x i − y i ) 2. 
The last line in the above pseudocode for this specific problem will become: $\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \leftarrow \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} \frac{\partial}{\partial w_1}(w_1 + w_2 x_i - y_i)^2 \\ \frac{\partial}{\partial w_2}(w_1 + w_2 x_i - y_i)^2 \end{bmatrix} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} 2(w_1 + w_2 x_i - y_i) \\ 2 x_i (w_1 + w_2 x_i - y_i) \end{bmatrix}.$
[ "$\\operatorname{argmin}_{\\boldsymbol{\\alpha}} \\frac{1}{2} \\boldsymbol{\\alpha}^{\\top}\\left(\\mathbf{X X}^{\\top}+\\lambda \\mathbf{I}_{N}\\right) \\boldsymbol{\\alpha}-\\boldsymbol{\\alpha}^{\\top} \\mathbf{y}$", "$\\operatorname{argmin}_{\\mathbf{w}} \\sum_{n=1}^{N}\\left[1-y_{n} \\mathbf{x}_{n}^{\\top} \\mathbf{w}\\right]_{+}+\\frac{\\lambda}{2}\\|\\mathbf{w}\\|^{2}$. Recall: $[z]_{+}=\\max \\{0, z\\}$", "$\\operatorname{argmin}_{\\mathbf{w}}-\\log p(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{w}) p(\\mathbf{w})$, where $p(\\mathbf{w})$ correspond to the density of a $D$-length vector of iid zero-mean Gaussians with variance $1 / \\lambda$ and $p(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{w})$ corresponds to the density of a vector of length $N$ of independent Gaussians of mean $\\mathbf{x}_{n}^{\\top} \\mathbf{w}$, variance 1 and observation $\\mathbf{y}_{n}$ for component $n$.", "$\\square \\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2} \\sum_{n=1}^{N} \\ln \\left(1+e^{\\mathbf{x}_{n}^{\\top} \\mathbf{w}}\\right)-y_{n} \\mathbf{x}_{n}^{\\top} \\mathbf{w}$", "$\\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2}\\|\\mathbf{y}-\\mathbf{X} \\mathbf{w}\\|^{2}+\\frac{\\lambda}{2}\\|\\mathbf{w}\\|^{2}$" ]
['$\\operatorname{argmin}_{\\boldsymbol{\\alpha}} \\frac{1}{2} \\boldsymbol{\\alpha}^{\\top}\\left(\\mathbf{X X}^{\\top}+\\lambda \\mathbf{I}_{N}\\right) \\boldsymbol{\\alpha}-\\boldsymbol{\\alpha}^{\\top} \\mathbf{y}$', '$\\operatorname{argmin}_{\\mathbf{w}}-\\log p(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{w}) p(\\mathbf{w})$, where $p(\\mathbf{w})$ correspond to the density of a $D$-length vector of iid zero-mean Gaussians with variance $1 / \\lambda$ and $p(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{w})$ corresponds to the density of a vector of length $N$ of independent Gaussians of mean $\\mathbf{x}_{n}^{\\top} \\mathbf{w}$, variance 1 and observation $\\mathbf{y}_{n}$ for component $n$.', '$\\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2}\\|\\mathbf{y}-\\mathbf{X} \\mathbf{w}\\|^{2}+\\frac{\\lambda}{2}\\|\\mathbf{w}\\|^{2}$']
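The first and last marked options are tied by a classical identity: the dual minimizer $\boldsymbol{\alpha} = (\mathbf{X}\mathbf{X}^{\top} + \lambda \mathbf{I}_N)^{-1}\mathbf{y}$ satisfies $\mathbf{X}^{\top}\boldsymbol{\alpha} = (\mathbf{X}^{\top}\mathbf{X} + \lambda \mathbf{I}_D)^{-1}\mathbf{X}^{\top}\mathbf{y}$, the primal ridge solution. A minimal sketch checking this numerically on random data (the dimensions and $\lambda$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
N, D, lam = 30, 5, 0.7
X = rng.normal(size=(N, D))
y = rng.normal(size=N)

# Primal: argmin_w 0.5*||y - X w||^2 + 0.5*lam*||w||^2
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

# Dual: argmin_a 0.5*a^T (X X^T + lam*I_N) a - a^T y  =>  a = (X X^T + lam*I_N)^{-1} y
alpha = np.linalg.solve(X @ X.T + lam * np.eye(N), y)
w_dual = X.T @ alpha

print(np.allclose(w_primal, w_dual))
```

The dual form solves an $N \times N$ system instead of a $D \times D$ one, which is why it is the preferred route when $D \gg N$ (and is the starting point for kernelization).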
1173
In Text Representation learning, which of the following statements is correct?
[ "representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, or a neural network able to convert each new data point (e.g. image) into a set of lower dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or freezing the representations and training an additional model which takes them as an input. Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types. Text Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text. The model has two possible training schemes to produce word vector representations, one generative and one contrastive. The first is word prediction given each of the neighboring words as an input. The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. 
GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context. Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context", "Machine Learning Course - CS-433, Text Representation Learning, Dec 20, 2022, Martin Jaggi. Last updated on: December 20, 2022. Motivation: Finding numerical representations for words is fundamental for all machine learning methods dealing with text data. Goal: for each word, find a mapping (embedding) $w_i \mapsto \mathbf{w}_i \in \mathbb{R}^K$. The representation should capture the semantics of the word. Constructing good feature representations (= representation learning) benefits all ML applications. The Co-Occurrence Matrix: A big corpus of unlabeled text can be represented as the co-occurrence counts $n_{ij} :=$ the number of contexts where word $w_i$ occurs together with word $w_j$. This needs a definition of context (e.g. document, paragraph, sentence, window) and vocabulary $V := \{w_1, \ldots, w_D\}$. For words $w_d$, $d = 1, 2, \ldots, D$, and context words $w_n$, $n = 1, 2, \ldots, N$, the co-occurrence counts $n_{ij}$ form a very sparse $D \times N$ matrix. Learning Word Representations (Using Matrix Factorization): Find a factorization of the co-occurrence matrix! Typically one uses the log of the actual counts, i.e. $x_{dn} := \log(n_{dn})$. We will aim to find $\mathbf{W}, \mathbf{Z}$ s.t. $\mathbf{X} \approx \mathbf{W}\mathbf{Z}^{\top}$. So for each pair of words $(w_d, w_n)$, we try to 'explain' their co-occurrence count by a numerical representation of the two words - in fact by the inner product of the two feature vectors $\mathbf{W}_{d:}$, $\mathbf{Z}_{n:}$: $\min_{\mathbf{W},\mathbf{Z}} \mathcal{L}(\mathbf{W}, \mathbf{Z}) := \frac{1}{2} \sum_{(d,n)\in\Omega} f_{dn} \left( x_{dn} - (\mathbf{W}\mathbf{Z}^{\top})_{dn} \right)^2$, where $\mathbf{W} \in \mathbb{R}^{D \times K}$ and $\mathbf{Z} \in \mathbb{R}^{N \times K}$ are tall matrices, having only $K \ll D, N$ columns.
The set $\Omega \subseteq [D] \times [N]$ collects the indices of the non-zeros of the count matrix $\mathbf{X}$. Each row of those matrices forms a representation of a word ($\mathbf{W}$) or a context word ($\mathbf{Z}$) respectively. GloVe: This model is called GloVe, and is", "Representation, proc. EMNLP; T. Mikolov et al., Distributed Representations of Words and Phrases and their Compositionality, proc. NIPS; R. Lebret and R. Collobert, Word Embeddings through Hellinger PCA, proc. EACL; more about this topic in two weeks. Textual Data Analysis: Introduction, Classification, Framework, Methods, Evaluation, Visualization, Conclusion. (c) EPFL, J.-C. Chappelier. Classification evaluation: for classification (supervised), evaluation is easy - a test corpus of known samples is kept for testing only; for clustering (unsupervised), objective evaluation is more difficult - what are the criteria? Supervised classification (reminder, see the Evaluation lecture): check inter-annotator agreement (IAA) if possible; measure the misclassification error on the test corpus, really separated from the learning set (and also from the validation set, if any); criteria: confusion matrix, error rate. Is the difference in the result statistically significant? Clustering (unsupervised learning) evaluation: there is no absolute scheme with which to evaluate clustering, but a variety of ad hoc measures from diverse areas and points of view. For $K$ non-overlapping clusters of objects with probabilities $p(x)$, standard measures include the intra-cluster variance (to be minimized) and the inter-cluster variance (to be maximized). The best way is to think about how you want to assess the quality of a clustering w.r.t. your needs - usually high intra-cluster similarity and low inter-cluster similarity, but what does 'similar' mean? One way also is to have a manual evaluation of the clustering. Note: if you already have a gold standard with classes, why not use supervised classification in the first place, rather than using a
supervised corpus to assess unsupervised method Textual Data Analysis Introduction Classi cation Visualization Framework Linear projection Non linear projection Mappings Conclusion c EPFL J C Chappelier Visualization Visualize project map data in D or D More generaly technique presented in this section are to lower the dimension of data go form N D to n D with n N or even n N usualy mean go from sparse to dense representation visualization projection in a low dimension space class", "ive: relative similarities of representations correlate with syntactic/semantic similarity of words/phrases. ▶two key ideas: 1. representation(composition of words) = vectorial-composition(representations(word)) for instance: representation(document) = ∑ word∈document representation(word) 2. remove sparsness, compactify representation: dimension reduction ▶have been aroud for a long time (renewal these days with the “deep learning buzz”) Harris, Z. (1954), \"Distributional structure\", Word 10(23):146–162. Firth, J.R. (1957), \"A synopsis of linguistic theory 1930-1955\", Studies in Linguistic Analysis. pp 1–32. Textual Data Analysis – 28 / 48 Introduction Classification Framework Methods Evaluation Visualization Conclusion c <unk>EPFL J.-C. Chappelier Word Embedings: different techniques “Many recent publications (and talks) on word embeddings are surprisingly oblivious of the large body of previous work [.]” (from https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/) Main techniques: ▶co-occurence matrix; often reduced (LSI, Hellinger-PCA) ▶probabilistic/distribution (DSIR, LDA) ▶shallow (Mikolov) or deep-learning Neural Networks There are theoretical and empirical correspondences between these different models [see e.g. Levy, Goldberg and Dagan (2015), Pennington et al. (2014), Österlund et al. (2015)]. Textual Data Analysis – 29 / 48 Introduction Classification Framework Methods Evaluation Visualization Conclusion c <unk>EPFL J.-C. 
Chappelier about Deep Learning ▶there is NO need of deep learning for good word-embedding ▶not all Neural Network models (NN) are deep learners ▶models: convolutional NN (CNN) or recurrrent NN (RNN, incl. LSTM) ▶still suffer the same old problems: overfitting and computational power a final word, from Michel Jordan (IEEE Spectrum, 2014): “deep learning is largely a rebranding", "based representation learning approaches attempt to solve this with word prediction tasks. GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context. Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context. Image The domain of image representation learning has employed many different self-supervised training techniques, including transformation, inpainting, patch discrimination and clustering. Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input, and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution. Many other self-supervised methods use siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation. SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN. 
Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training. Graph The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology. node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association. Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph. An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a “patch” around each node, and a summary representation of the entire" ]
[ "Learning GloVe vectors can be done using SGD in a streaming fashion, by streaming through the input text only once.", "Every recommender systems algorithm for learning a matrix factorization $\\boldsymbol{W} \\boldsymbol{Z}^{\\top}$ approximating the observed entries in least square sense does also apply to learn GloVe word vectors.", "FastText performs unsupervised learning of word vectors.", "If you fix all word vectors, and only train the remaining parameters, then FastText in the two-class case reduces to being just a linear classifier." ]
['Every recommender systems algorithm for learning a matrix factorization $\\boldsymbol{W} \\boldsymbol{Z}^{\\top}$ approximating the observed entries in least square sense does also apply to learn GloVe word vectors.', 'If you fix all word vectors, and only train the remaining parameters, then FastText in the two-class case reduces to being just a linear classifier.']
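The first correct option above says that any least-squares matrix-factorization routine over observed entries (as used in recommender systems) also fits GloVe-style word vectors. A minimal sketch of such a routine, using plain SGD over the observed entries; the synthetic low-rank "log co-occurrence" matrix, rank, and learning rate are illustrative assumptions, not from the source:

```python
import numpy as np

def factorize(X, mask, K, lr=0.05, epochs=500, seed=0):
    """Least-squares matrix factorization X ~ W Z^T, fitted by SGD over
    the observed entries only: min 0.5 * sum_{(d,n) in Omega} (x_dn - W_d . Z_n)^2."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    W = rng.normal(0.0, 0.1, (D, K))
    Z = rng.normal(0.0, 0.1, (N, K))
    obs = np.argwhere(mask)                  # Omega: indices of observed entries
    for _ in range(epochs):
        rng.shuffle(obs)
        for d, n in obs:
            err = X[d, n] - W[d] @ Z[n]      # residual on one observed entry
            # simultaneous SGD step on both factors
            W[d], Z[n] = W[d] + lr * err * Z[n], Z[n] + lr * err * W[d]
    return W, Z

# Hypothetical rank-2 "log co-occurrence" matrix with ~80% of entries observed.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2)) @ rng.normal(size=(8, 2)).T
mask = rng.random(X.shape) < 0.8
W, Z = factorize(X, mask, K=2)
mse = np.mean((W @ Z.T - X)[mask] ** 2)
print(mse)
```

Only the observed set $\Omega$ enters the loss, which is exactly what makes recommender-style solvers applicable to the sparse GloVe objective.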
1174
Consider the following joint distribution on $X$ and $Y$, where both random variables take on the values $\{0,1\}$: $p(X=0, Y=0)=0.1$, $p(X=0, Y=1)=0.2$, $p(X=1, Y=0)=0.3$, $p(X=1, Y=1)=0.4$. You receive $X=1$. What is the largest probability of being correct you can achieve when predicting $Y$ in this case?
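The question above reduces to computing the posterior $p(Y \mid X=1)$ and predicting its mode: $p(Y=1 \mid X=1) = 0.4/0.7 = 4/7 \approx 0.571$ is then the best achievable probability of being correct. A minimal sketch using exact fractions:

```python
from fractions import Fraction

# Joint distribution p(X, Y) from the question.
p = {(0, 0): Fraction(1, 10), (0, 1): Fraction(2, 10),
     (1, 0): Fraction(3, 10), (1, 1): Fraction(4, 10)}

x = 1
p_x = sum(v for (xi, _), v in p.items() if xi == x)   # p(X=1) = 7/10
post = {y: p[(x, y)] / p_x for y in (0, 1)}           # p(Y=y | X=1)
best = max(post.values())                             # predict the posterior mode
print(best)
```

Predicting the posterior mode is the Bayes-optimal rule for 0-1 loss, so no predictor can beat this probability.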
[ "yle P(X\\mid Y=y)} is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values P ( Y ) {\\displaystyle P(Y)}, together with the distribution of observations given a label, P ( X ∣ Y ) {\\displaystyle P(X\\mid Y)} symbolically, P ( X, Y ) = P ( X ∣ Y ) P ( Y ). {\\displaystyle P(X,Y)=P(X\\mid Y)P(Y).} Thus, while a model of the joint probability distribution is more informative than a model of the distribution of label (but without their relative frequencies), it is a relatively small step, hence these are not always distinguished. Given a model of the joint distribution, P ( X, Y ) {\\displaystyle P(X,Y)}, the distribution of the individual variables can be computed as the marginal distributions P ( X ) = ∑ y P ( X, Y = y ) {\\displaystyle P(X)=\\sum _{y}P(X,Y=y)} and P ( Y ) = ∫ x P ( Y, X = x ) {\\displaystyle P(Y)=\\int _{x}P(Y,X=x)} (considering X as continuous, hence integrating over it, and Y as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: P ( X ∣ Y ) = P ( X, Y ) / P ( Y ) {\\displaystyle P(X\\mid Y)=P(X,Y)/P(Y)} and P ( Y ∣ X ) = P ( X, Y ) / P ( X ) {\\displaystyle P(Y\\mid X)=P(X,Y)/P(X)}. Given a model of one conditional probability, and estimated probability distributions for the variables X and Y, denoted P ( X ) {\\displaystyle P(X)} and P ( Y ) {\\displaystyle P(Y)}, one can estimate the opposite condition", "(y | x) with x = 0 (blue), 1 (red), 2 (black), 3 (green), 4 (cyan) (in order of decreasing maximal density). Probability and Statistics for SIC slide 189 5.2 Dependence slide 190 Joint moments Definition 163. Let X, Y be random variables of density fX,Y (x, y). Then if E{|g(X, Y )|} < \\infty, we can define the expectation of g(X, Y ) to be E{g(X, Y )} = +% x,y g(x, y)fX,Y (x, y), discrete case, CC g(x, y)fX,Y (x, y) dxdy, continuous case. 
In particular we define the joint moments and the joint central moments by $E(X^r Y^s)$ and $E[\{X - E(X)\}^r \{Y - E(Y)\}^s]$, $r, s \in \mathbb{N}$. The most important of these is the covariance of $X$ and $Y$, $\operatorname{cov}(X, Y) = E[\{X - E(X)\}\{Y - E(Y)\}] = E(XY) - E(X)E(Y)$. Probability and Statistics for SIC, slide 191. Properties of covariance. Theorem 164. Let $X, Y, Z$ be random variables and $a, b, c, d \in \mathbb{R}$ constants. The covariance satisfies: $\operatorname{cov}(X, X) = \operatorname{var}(X)$; $\operatorname{cov}(a, X) = 0$; $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)$ (symmetry); $\operatorname{cov}(a + bX + cY, Z) = b \operatorname{cov}(X, Z) + c \operatorname{cov}(Y, Z)$ (bilinearity); $\operatorname{cov}(a + bX, c + dY) = bd \operatorname{cov}(X, Y)$; $\operatorname{var}(a + bX + cY) = b^2 \operatorname{var}(X) + 2bc \operatorname{cov}(X, Y) + c^2 \operatorname{var}(Y)$; $\operatorname{cov}(X, Y)^2 \leq \operatorname{var}(X) \operatorname{var}(Y)$ (Cauchy–Schwarz inequality). Probability and Statistics for", "clients are more likely to make a claim than others. Find the joint distribution of $(X, Y)$, the marginal distribution of $X$, and the conditional distribution of $Y$ given $X = x$. Probability and Statistics for SIC, slide 188. Insurance and learning. (Figure: four panels, described below.) The graph shows how the knowledge of the number of accidents changes the distribution of the rate of accidents $y$ for an insured party. Top left: the original density $f_Y(y)$. Top right: the conditional mass function $f_{X|Y}(x \mid y = 0.1)$ for a good driver. Bottom left: the conditional mass function $f_{X|Y}(x \mid y = 2)$ for a bad driver. Bottom right: the conditional densities $f_{Y|X}(y \mid x)$ with $x = 0$ (blue), 1 (red), 2 (black), 3 (green), 4 (cyan) (in order of decreasing maximal density). Probability and Statistics for SIC, slide 189. 5.2 Dependence. Joint moments. Definition 163. Let $X, Y$ be random variables with density $f_{X,Y}(x, y)$.
Then if E{|g(X, Y )|} < \\infty, we can define the expectation of g(X, Y ) to be E{g(X, Y )} = ∑_{x,y} g(x, y) fX,Y (x, y) in the discrete case, and ∬ g(x, y) fX,Y (x, y) dx dy in the continuous case. In particular we define the joint moments and the joint central moments by E(X^r Y^s), E[{X − E(X)}^r {Y − E(Y)}^s], r, s ∈ N. The most important of these is the covariance of X and Y, cov(X,", "displaystyle Q} 1. J joint distribution Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together. joint probability The probability of two events occurring together. The joint probability of A and B is written P ( A ∩ B ) {\\displaystyle P(A\\cap B)} or P ( A, B ) {\\displaystyle P(A,\\ B)}. K Kalman filter kernel kernel density estimation kurtosis A measure of the \"tailedness\" of the probability distribution of a real-valued random variable. There are different ways of quantifying, estimating, and interpreting kurtosis, but a common interpretation is that kurtosis represents the degree to which the shape of the distribution is influenced by infrequent extreme observations (outliers); in this case, higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations. L L-moment law of large numbers (LLN) A theorem according to which the average of the results obtained from performing the same experiment a large number of times should be close to the experiment's expected value, and tends to become closer to the expected value as more trials are performed. The law suggests that a sufficiently large number of trials is necessary for the results of any experiment to be considered reliable, and by extension that performing only a small number of trials may produce an incomplete or misleading interpretation of the experiment's outcomes. likelihood function A conditional probability function considered a function of its second argument with its first argument held fixed.
For example, imagine pulling a numbered ball with a number k from a bag of n balls, numbered 1 to n; a likelihood function for the random variable N could be described as the probability of pulling k given that there are n balls: the likelihood will be 1/n for n greater than or equal to k, and 0 for n smaller than k. Unlike a probability distribution function, this likelihood function will not sum up to 1 on the sample space. loss function likelihood-ratio test M M-estimator marginal distribution Given two jointly distributed random variables X", "| x 0 ) p ( x 0 ) {\\displaystyle \\alpha (x_{0})=p(y_{0}|x_{0})p(x_{0})}. Once the joint probability α ( x t ) = p ( x t, y 1 : t ) {\\displaystyle \\alpha (x_{t})=p(x_{t},y_{1:t})} has been computed using the forward algorithm, we can easily obtain the related joint probability p ( y 1 : t ) {\\displaystyle p(y_{1:t})} as p ( y 1 : t ) = ∑ x t p ( x t, y 1 : t ) = ∑ x t α ( x t ) {\\displaystyle p(y_{1:t})=\\sum _{x_{t}}p(x_{t},y_{1:t})=\\sum _{x_{t}}\\alpha (x_{t})} and the required conditional probability p ( x t | y 1 : t ) {\\displaystyle p(x_{t}|y_{1:t})} as p ( x t | y 1 : t ) = p ( x t, y 1 : t ) p ( y 1 : t ) = α ( x t ) ∑ x t α ( x t ). {\\displaystyle p(x_{t}|y_{1:t})={\\frac {p(x_{t},y_{1:t})}{p(y_{1:t})}}={\\frac {\\alpha (x_{t})}{\\sum _{x_{t}}\\alpha (x_{t})}}.} Once the conditional probability has been calculated, we can also find the point estimate of x t {\\displaystyle x_{t}}. For instance, the MAP estimate of x t {\\displaystyle x_{t}} is given by x ^ t M A P = arg ⁡ max x t p ( x t | y 1 : t ) = arg ⁡ max x t α ( x t ), {\\displaystyle" ]
[ "$\\frac{1}{3}$", "$\\frac{3}{4}$", "$\\frac{1}{7}$", "$0$", "$1$", "$\\frac{2}{3}$", "$\\frac{6}{7}$", "$\\frac{4}{7}$", "$\\frac{3}{7}$", "$\\frac{1}{4}$", "$\\frac{2}{4}$" ]
$\frac{4}{7}$
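The marginalization and conditioning identities quoted in the context above — P(X) = ∑_y P(X, Y = y) and P(Y ∣ X) = P(X, Y)/P(X) — can be checked numerically on a small joint table. A minimal pure-Python sketch; the table values below are made up for illustration and are not taken from this question:

```python
# Joint distribution P(X, Y) over X in {0, 1} and Y in {0, 1}.
# The probabilities are illustrative values that sum to 1.
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

# Marginals: P(X = x) = sum_y P(X = x, Y = y), and symmetrically for Y.
p_x = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in (0, 1)}

# Conditional: P(Y = y | X = x) = P(X = x, Y = y) / P(X = x).
def cond_y_given_x(y, x):
    return joint[(x, y)] / p_x[x]

# Each conditional distribution sums to 1 over y, as it must.
for x in (0, 1):
    assert abs(sum(cond_y_given_x(y, x) for y in (0, 1)) - 1.0) < 1e-12

print(p_x, p_y, cond_y_given_x(1, 0))
```

The same three lines of arithmetic (sum a row, sum a column, divide by a marginal) are all that is needed to answer probability-table questions of this kind.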
1177
Which of the following statements are correct?
[ "(figure-extraction residue: scatterplot point cloud, no recoverable text)", "(figure-extraction residue: scatterplot point cloud)", "(figure-extraction residue: scatterplot point cloud)", "(figure-extraction residue: scatterplot point cloud)", "(figure-extraction residue: scatterplot point cloud)" ]
[ "One iteration of standard SGD for SVM costs roughly $\\Theta(D)$, where $D$ is the dimension.", "Unions of convex sets are convex.", "Hinge loss (as in SVMs) is typically preferred over L2 loss (least squares loss) in classification tasks.", "In PCA, the first principal direction is the eigenvector of the data matrix $\\boldsymbol{X}$ with largest associated eigenvalue.", "MSE (mean squared error) is typically more sensitive to outliers than MAE (mean absolute error).", "One iteration of standard SGD for logistic regression costs roughly $\\Theta(N D)$, where $N$ is the number of samples and $D$ is the dimension." ]
['Hinge loss (as in SVMs) is typically preferred over L2 loss (least squares loss) in classification tasks.', 'MSE (mean squared error) is typically more sensitive to outliers than MAE (mean absolute error)', 'One iteration of standard SGD for SVM costs roughly $\\Theta(D)$, where $D$ is the dimension']
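Two of the correct choices can be made concrete in code. A single SGD step for a linear SVM with hinge loss touches each of the D weight coordinates a constant number of times, so it costs Θ(D); and squaring the residuals makes MSE far more sensitive to one outlier than MAE. A pure-Python sketch with made-up numbers (the sample, label, regularizer, and step size are illustrative only):

```python
# --- One SGD step for a linear SVM (hinge loss + L2 regularizer). ---
# Cost is Theta(D): one dot product and one coordinate-wise update.
D = 5
w = [0.0] * D
x_n, y_n = [1.0, -2.0, 0.5, 0.0, 3.0], 1   # a single (sample, label) pair
lam, gamma = 0.1, 0.01                      # regularization, step size

margin = y_n * sum(wd * xd for wd, xd in zip(w, x_n))         # O(D)
if margin < 1:   # hinge active: subgradient is lam*w - y_n*x_n
    grad = [lam * wd - y_n * xd for wd, xd in zip(w, x_n)]
else:            # hinge inactive: only the regularizer contributes
    grad = [lam * wd for wd in w]
w = [wd - gamma * gd for wd, gd in zip(w, grad)]              # O(D)

# --- MSE vs MAE with one gross outlier among the residuals. ---
errors = [1.0, -1.0, 1.0, -100.0]
mse = sum(e * e for e in errors) / len(errors)   # 2500.75
mae = sum(abs(e) for e in errors) / len(errors)  # 25.75
print(w, mse, mae)
```

The outlier multiplies MSE by roughly 2500 but MAE only by roughly 25, which is why MAE is the more robust criterion.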
1179
(Backpropagation) Training via the backpropagation algorithm always learns a globally optimal neural network if there is only one hidden layer and we run an infinite number of iterations and decrease the step size appropriately over time.
[ ", backpropagation can be understood in terms of automatic differentiation, where backpropagation is a special case of reverse accumulation (or \"reverse mode\"). Intuition Motivation The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output. Learning as an optimization problem To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuition about the relationship between the actual output of a neuron and the correct output for a particular training example. Consider a simple neural network with two input units, one output unit and no hidden units, and in which each neuron uses a linear output (unlike most work on neural networks, in which mapping from inputs to outputs is non-linear) that is the weighted sum of its input. Initially, before training, the weights will be set randomly. Then the neuron learns from training examples, which in this case consist of a set of tuples ( x 1, x 2, t ) {\\displaystyle (x_{1},x_{2},t)} where x 1 {\\displaystyle x_{1}} and x 2 {\\displaystyle x_{2}} are the inputs to the network and t is the correct output (the output the network should produce given those inputs, when it has been trained). The initial network, given x 1 {\\displaystyle x_{1}} and x 2 {\\displaystyle x_{2}}, will compute an output y that likely differs from t (given random weights). A loss function L ( t, y ) {\\displaystyle L(t,y)} is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function, for classification the categorical cross-entropy can be used. 
As an example consider a regression problem using the square error as a loss: L ( t, y ) = ( t − y ) 2 =", "In machine learning, backpropagation is a gradient computation method commonly used for training a neural network to compute its parameter updates. It is an efficient application of the chain rule to neural networks. Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived through dynamic programming. Strictly speaking, the term backpropagation refers only to an algorithm for efficiently computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm – including how the gradient is used, such as by stochastic gradient descent, or as an intermediate step in a more complicated optimizer, such as Adaptive Moment Estimation. The local minimum convergence, exploding gradient, vanishing gradient, and weak control of learning rate are main disadvantages of these optimization algorithms. The Hessian and quasi-Hessian optimizers solve only local minimum convergence problem, and the backpropagation works longer. These problems caused researchers to develop hybrid and fractional optimization algorithms. Backpropagation had multiple discoveries and partial discoveries, with a tangled history and terminology. See the history section for details. Some other names for the technique include \"reverse mode of automatic differentiation\" or \"reverse accumulation\". Overview Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function. 
Denote: x {\\displaystyle x} input (vector of features) y {\\displaystyle y} target output For classification, output will be a vector of class probabilities (e.g., ( 0.1, 0.7, 0.2 ) {\\displaystyle (0.1,0.7,0.2)}), and target output is a specific class, encoded by the one-hot/dummy variable (e.g., ( 0, 1, 0 ) {\\displaystyle (0,1,0)} ). C {\\displaystyle", "Training Neural Networks Loss Optimization Finding the network weights w that achieve the lowest loss, argmin_w L(w), given n training pairs of input and label. Randomly pick a starting point, compute the gradient, take a small step in the opposite direction of the gradient, and repeat this process until convergence. Gradient Descent (computationally intensive) Algorithm: initialize weights randomly; loop until convergence: compute the gradient ∂L(w)/∂w and update the weights w ← w − γ ∂L(w)/∂w; return the weights. Stochastic Gradient Descent (easy to compute but very noisy) Algorithm: initialize weights randomly; loop until convergence: pick a random single sample i, compute the gradient ∂L_i(w)/∂w, and update the weights w ← w − γ ∂L_i(w)/∂w; return the weights. Mini-Batch Gradient Descent Algorithm: initialize weights randomly; loop until convergence: pick a mini-batch B of data samples, compute the gradient ∂L(w)/∂w = (1/|B|) ∑_{i∈B} ∂L_i(w)/∂w, and update the weights w ← w − γ ∂L(w)/∂w; return the weights. Better estimation of the true gradient and fast to compute, smoother
convergence. Backpropagation Using Chain Rule (slides 77–79): by the chain rule, the derivative of the loss with respect to a weight factors through the intermediate quantities, e.g. ∂L/∂w = (∂L/∂ŷ)(∂ŷ/∂w); let's apply the chain rule! Applying it repeatedly propagates the derivative backward through each layer. Training Deep Networks: ∂L/∂w = (∂L/∂h_L)(∂h_L/∂h_{L−1})⋯ In most", "neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not. Backpropagation learning does not require normalization of input vectors; however, normalization could improve performance. Backpropagation requires the derivatives of activation functions to be known at network design time. History Precursors Backpropagation had been derived repeatedly, as it is essentially an efficient application of the chain rule (first written down by Gottfried Wilhelm Leibniz in 1676) to neural networks. The terminology \"back-propagating error correction\" was introduced in 1962 by Frank Rosenblatt, but he did not know how to implement this. In any case, he only studied neurons whose outputs were discrete levels, which only had zero derivatives, making backpropagation impossible. Precursors to backpropagation appeared in optimal control theory since the 1950s. Yann LeCun et al. credit 1950s work by Pontryagin and others in optimal control theory, especially the adjoint state method, for being a continuous-time version of backpropagation. Hecht-Nielsen credits the Robbins–Monro algorithm (1951) and Arthur Bryson and Yu-Chi Ho's Applied Optimal Control (1969) as presages of backpropagation. Other precursors were Henry J. Kelley 1960, and Arthur E. Bryson (1961). In 1962, Stuart Dreyfus published a simpler derivation based only on the chain rule. In 1973, he adapted parameters of controllers in proportion to error gradients.
Unlike modern backpropagation, these precursors used standard Jacobian matrix calculations from one stage to the previous one, neither addressing direct links across several stages nor potential additional efficiency gains due to network sparsity. The ADALINE (1960) learning algorithm was gradient descent with a squared error loss for a single layer. The first multilayer perceptron (MLP) with more than one layer trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. The MLP had 5 layers, with 2 learnable layers, and it learned to classify patterns not linearly separable. Modern backpropagation Modern backpropagation was", "t+1],..., a[t+k−1] p := forward-propagate the inputs over the whole unfolded network e := y[t+k] − p; // error = target − prediction Back-propagate the error, e, back across the whole unfolded network Sum the weight changes in the k instances of f together. Update all the weights in f and g. x := f(x, a[t]); // compute the context for the next time-step Advantages BPTT tends to be significantly faster for training recurrent neural networks than general-purpose optimization techniques such as evolutionary optimization. Disadvantages BPTT has difficulty with local optima. With recurrent neural networks, local optima are a much more significant problem than with feed-forward neural networks. The recurrent feedback in such networks tends to create chaotic responses in the error surface which cause local optima to occur frequently, and in poor locations on the error surface. See also Backpropagation through structure" ]
[ "True", "False" ]
False
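The answer is False because the training objective of a neural network, even with a single hidden layer, is non-convex, so gradient-based training only guarantees convergence to a stationary point. The phenomenon is already visible on a toy one-dimensional non-convex function: gradient descent started in different places ends in different valleys. A sketch for illustration only (the function below is made up and is not a neural-network loss):

```python
# A non-convex 1-D "loss" with two valleys of different depth.
def f(w):
    return w**4 - 2 * w**2 + 0.5 * w

def grad(w):
    return 4 * w**3 - 4 * w + 0.5

def gradient_descent(w, step=0.01, iters=2000):
    for _ in range(iters):
        w -= step * grad(w)
    return w

w_right = gradient_descent(0.8)    # settles near +0.93 (shallower minimum)
w_left = gradient_descent(-0.8)    # settles near -1.06 (deeper minimum)
print(w_right, f(w_right))
print(w_left, f(w_left))
```

Both runs end with grad(w) ≈ 0, yet only one reaches the lower loss — exactly the local-vs-global gap that makes the statement false, no matter how the step size is decayed.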
1183
Which of the following statements about the $\mathrm{SVD}$ of an $N \times D$ matrix $\mathbf{X}$ are correct?
[ "is 0 thereafter. We claim that U_K U_Kᵀ X = U_K U_Kᵀ U S Vᵀ = U S^(K) Vᵀ. (2) With this interpretation, the lemma states that the best rank-K approximation to a matrix X is obtained by computing the SVD and by setting all the singular values sj, j ≥ K + 1 to zero. The claim (2) is easily seen by checking that U_Kᵀ U = (I_{K×K}; 0) ∈ R^{K×D} is a K × D matrix whose first K columns form the K × K identity and whose remaining columns are 0. Example Application. Let us now discuss the implications of the SVD. One way to visualize the usefulness of this statement is to consider a particular compression problem. For a set of images, we take the vector of D pixels that represent each image. We can then compress an image by running SVD and compress the picture with the scheme above, projecting the image onto the first K columns of U. (Figure 2: Compression via PCA. The original image is 50 × 50. The large image on the right is reconstructed from the top K = 10 principal components.) To see how well this works we can then reconstruct this image back to the original image space R^D and visualize it next to its original. This is shown in Figure 2 above (footnote 3: taken from the book Understanding Machine Learning by Shalev-Shwartz and Ben-David). Note that this is a slightly different application of what we had in mind when we started – as here we care about the compression, not so much about the lower-dimensional representations in R^D. But it gives a good intuition why this is a useful method. The compression aspect can also be visualized nicely, as shown in Figure 3 here. (Figure 3: Compression via PCA. The images after dimensionality reduction to R^2 (K = 2). The different marks indicate different individuals.) SVD and Matrix Factorization In previous lectures we have seen already several applications of matrix factorizations. Let us now discuss how the SVD relates to this problem. Assume that we are given the data", "~.
{\\displaystyle U^{T}AU=U^{T}V_{2}^{N}W\\Sigma ^{-1}\\equiv {\\tilde {S}}.} Because A {\\displaystyle A} and S ~ {\\displaystyle {\\tilde {S}}} are related via similarity transform, the eigenvalues of S ~ {\\displaystyle {\\tilde {S}}} are the eigenvalues of A {\\displaystyle A}, and if y {\\displaystyle y} is an eigenvector of S ~ {\\displaystyle {\\tilde {S}}}, then U y {\\displaystyle Uy} is an eigenvector of A {\\displaystyle A}. In summary, the SVD-based approach is as follows: Split the time series of data in V 1 N {\\displaystyle V_{1}^{N}} into the two matrices V 1 N − 1 {\\displaystyle V_{1}^{N-1}} and V 2 N {\\displaystyle V_{2}^{N}}. Compute the SVD of V 1 N − 1 = U Σ W T {\\displaystyle V_{1}^{N-1}=U\\Sigma W^{T}}. Form the matrix S ~ = U T V 2 N W Σ − 1 {\\displaystyle {\\tilde {S}}=U^{T}V_{2}^{N}W\\Sigma ^{-1}}, and compute its eigenvalues λ i {\\displaystyle \\lambda _{i}} and eigenvectors y i {\\displaystyle y_{i}}. The i {\\displaystyle i} -th DMD eigenvalue is λ i {\\displaystyle \\lambda _{i}} and i {\\displaystyle i} -th DMD mode is the U y i {\\displaystyle Uy_{i}}. The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of V 1 N − 1 {\\displaystyle V_{1}^{N-1}}. 
As noted in", "SVD Recall that any D × N matrix X can be written in the form X = U S Vᵀ. This decomposition is depicted graphically in the figure (graphical depiction of the SVD). For simplicity in the following we assume that D ≤ N. This is an arbitrary choice, but by consistently sticking with this convention it will make it easier to tell the dimensions apart. Here U is of size D × D and V is of size N × N, and both matrices are unitary, i.e., U Uᵀ = Uᵀ U = I_{D×D} and V Vᵀ = Vᵀ V = I_{N×N}. Recall that the condition U Uᵀ = I_{D×D} means that the matrix U has orthonormal (i.e., orthogonal and norm-1) rows and that Uᵀ = U^{−1}. But if U Uᵀ = I_{D×D} then also Uᵀ U = I_{D×D}, so that also the columns of U are orthonormal. Therefore requiring that a square matrix is unitary is the same as requiring that it has orthonormal rows, or requiring that it has orthonormal columns. Our notation assumes that the matrices are real-valued. In this case all the matrices in the SVD are also real-valued, and U and V are said to be orthogonal matrices. In the more general case of complex-valued matrices one says that the matrices are unitary; in this case the transpose operator is supposed to be interpreted as the usual transpose together with complex conjugation. We will refer to U and V as unitary even though we assume that they are real-valued. One useful property of a unitary matrix is that the linear transform it represents can be interpreted as a rotation, i.e., it does not change the length of the vector that is being transformed: ‖Ux‖² = xᵀ Uᵀ U x = xᵀ x = ‖x‖². The matrix S is a diagonal matrix of size D × N with non-negative entries along the diagonal. These diagonal entries are called the singular values. The columns of U and V are called the left and right singular vectors, respectively. By convention the singular values appear in descending order in S, i.e., we have s₁ ≥ s₂ ≥ ⋯ ≥ s_D, where s_j is the j-th singular value. We will see that this transform plays a key role in our discussion. We will take this representation for granted and not give a proof of the SVD. But we will show how to perform an optimal dimensionality reduction given this
representation. SVD and Dimensionality Reduction We want to compress", "standing Machine Learning by Shalev-Shwartz and Ben-David. Figure 3: Compression via PCA. The images after dimensionality reduction to R^2 (K = 2). The different marks indicate different individuals. visualized nicely, as shown in Figure 3 here. SVD and Matrix Factorization In previous lectures we have seen already several applications of matrix factorizations. Let us now discuss how the SVD relates to this problem. Assume that we are given the data matrix X. Use the SVD to write it as X = U S Vᵀ. X = U S Vᵀ = W Zᵀ, with W = U and Zᵀ = S Vᵀ. So we have achieved a perfect factorization of our data matrix. There are two differences compared to matrix factorization problems. First, in the matrix factorization problem we typically restrict W and Z to have few columns only, let's say K, where in SVD we can control the rank at any time later, and can let it range up to min{D, N}. Of course, in the low-rank case we cannot hope for a perfect factorization but we are looking for the best possible approximation. This difference can be easily addressed as we have already seen. Let 1 ≤ K ≤ min{D, N}. Let S^(K) be the matrix that is equal to S except that all singular values sj for j ≥ K + 1 are set to zero. We have seen this matrix already in our discussion of the SVD. This gives us the rank-K approximation XK := U S^(K) Vᵀ, and indeed, as we have discussed, it is the best rank-K approximation that we can find in the sense that the Frobenius norm of the difference is the smallest possible and is equal to ∑_{i ≥ K+1} s_i², where the sj are again the singular values of X. We can again write the above approximation in a factorized form. Again, let UK be the matrix consisting of the first K columns of U.
Similar to before we can now write XK = UK S^(K) Vᵀ = UK (= W) S", "addressed as we have already seen. Let 1 ≤ K ≤ min{D, N}. Let S^(K) be the matrix that is equal to S except that all singular values sj for j ≥ K + 1 are set to zero. We have seen this matrix already in our discussion of the SVD. This gives us the rank-K approximation XK = U S^(K) Vᵀ, and indeed, as we have discussed, it is the best rank-K approximation that we can find in the sense that the Frobenius norm of the difference is the smallest possible and is equal to ∑_{i ≥ K+1} s_i², where the sj are again the singular values of X. We can again write the above approximation in a factorized form. Again, let UK be the matrix consisting of the first K columns of U. Similar to before we can now write XK = UK S^(K) Vᵀ = W Zᵀ (with W = UK and Zᵀ = S^(K) Vᵀ), where W is a D × K matrix and Zᵀ is a K × N matrix. The second difference is that in general matrix factorization problems we can have a data matrix X that can have many missing entries. Indeed, one can construct a low-rank factorization that was close in the known values in order to predict the missing values, as we will see in the next lecture. The method using the SVD, on the other hand, starts with a complete data matrix. There does not seem to be an easy fix to adapt the SVD to the case of missing values. And so we see that despite some similarity between these problems there are also some significant differences. PCA and Decorrelation There is another probabilistic viewpoint that gives insight why the PCA is a good idea. Assume that the D-dimensional data points are generated in an i.i.d. fashion according to some unknown distribution Dx. These N data points form the columns of our D × N matrix X. Let us compute the empirical sample mean and covariance. We have x̄ = (1/N) ∑_n x_n and K = (1/N) ∑_n (x_n − x̄)(x_n − x̄)ᵀ. If indeed the data comes from i.i.d. samples then the sample mean will converge to the true mean and the sample covariance matrix will converge to the true covariance matrix as N → ∞. Assume that we have pre-processed the data matrix X by subtracting the mean from each
row. Using the SVD, the empirical covariance matrix can be written as N K = ∑_n x_n x_nᵀ = X Xᵀ = U S Vᵀ V Sᵀ Uᵀ = U S Sᵀ Uᵀ" ]
[ "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X X}^{\\top}$. This has complexity $O\\left(N^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X X}^{\\top}$. This has complexity $O\\left(D^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X}^{\\top} \\mathbf{X}$. This has complexity $O\\left(N^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X}^{\\top} \\mathbf{X}$. This has complexity $O\\left(D^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X} \\mathbf{X}^{\\top}$ if only if $\\mathbf{X}$ is a square matrix. This has complexity $O\\left(D^{3}\\right)=O\\left(N^{3}\\right)$." ]
['We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X}^{\\top} \\mathbf{X}$. This has complexity $O\\left(D^{3}\\right)$.', 'We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X X}^{\\top}$. This has complexity $O\\left(N^{3}\\right)$.']
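The two correct choices can be sanity-checked numerically: the eigenvalues of XᵀX (a D × D matrix, eigendecomposition cost O(D³)) agree with the nonzero eigenvalues of XXᵀ (N × N, cost O(N³)), and their square roots are the singular values of X. A pure-Python sketch on a small made-up 3 × 2 matrix (N = 3 samples, D = 2 features):

```python
import math

X = [[1.0, 2.0],
     [0.0, 1.0],
     [2.0, 0.0]]          # N = 3, D = 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

XtX = matmul(transpose(X), X)          # 2 x 2 (D x D)
XXt = matmul(X, transpose(X))          # 3 x 3 (N x N)

# Eigenvalues of the 2 x 2 matrix XtX via its characteristic polynomial.
tr = XtX[0][0] + XtX[1][1]
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]
sing_vals = [math.sqrt(e) for e in eigs]   # singular values of X

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Each eigenvalue of XtX is also an eigenvalue of XXt (det(XXt - lam*I) = 0).
for lam in eigs:
    shifted = [[XXt[i][j] - (lam if i == j else 0.0) for j in range(3)]
               for i in range(3)]
    assert abs(det3(shifted)) < 1e-9

print(sing_vals)   # square roots of the eigenvalues 7 and 3
```

Since D < N here, the D × D route is the cheaper of the two, which is the practical reason for preferring XᵀX when D ≪ N (and XXᵀ when N ≪ D).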
1184
Consider a linear regression problem with $N$ samples where the input is in $D$-dimensional space, and all output values are $y_{i} \in\{-1,+1\}$. Which of the following statements is correct?
,\\Theta _{1},\\ldots,\\Theta _{M})={\\frac {1}{M}}\\sum _{j=1}^{M}m_{n}(\\mathbf {x},\\Theta _{j})}. For regression trees, we have m n = ∑ i = 1 n Y i 1 X i ∈ A n ( x, Θ j ) N n ( x, Θ j ) {\\displaystyle m_{n}=\\sum _{i=1}^{n}{\\frac {Y_{i}\\mathbf {1} _{\\mathbf {X} _{i}\\in A_{n}(\\mathbf {x},\\Theta _{j})}}{N_{n}(\\mathbf {x},\\Theta _{j})}}}, where A n ( x, Θ j ) {\\displaystyle A_{n}(\\mathbf {x},\\Theta _{j})} is the cell containing x {\\displaystyle \\mathbf {x} }, designed with randomness Θ j {\\displaystyle \\Theta _{j}} and dataset D n {\\displaystyle {\\mathcal {D}}_{n}}, and N n ( x, Θ j ) = ∑ i = 1 n 1 X i ∈ A n ( x, Θ j ) {\\displaystyle N_{n}(\\mathbf {x},\\Theta _{j})=\\sum _{i=1}^{n}\\mathbf {1} _{\\mathbf {X} _{i}\\in A_{n}(\\mathbf {x},\\Theta _{j})}}. Thus random forest estimates satisfy, for all x ∈ [ 0, 1 ] d {\\displaystyle \\mathbf {x} \\in [0,1]^{d}}, m M, n ( x, Θ 1,..., Θ M ) = 1 M ∑ j = 1 M ( ∑ i = 1 n Y i 1 X i ∈ A n (", "Machine Learning Course - CS-433 Exponential Families and Generalized Linear Models Oct 25th, 2022 minor changes by Nicolas Flammarion 2022, 2021, 2020; changes by Rüdiger Urbanke 2019, 2018, 2017, 2016; © Mohammad Emtiyaz Khan 2015 Last updated on: October 24, 2022 (Figure 1: Motivation — linear-regression scatterplot; axis ticks omitted) Let us go back to regression. Consider the very simple one-dimensional example in Fig. 1. The horizontal axis represents the input x and the vertical axis the output y. Our aim is to find a model for this data. It is very natural in this case that we try a linear model: y = x w₁ + w₀ + Z. I.e., we model the data as a line plus noise. Perhaps the most natural choice for the noise is a zero-mean Gaussian with some variance σ². As we discussed, this leads to least squares, assuming that we think of the data samples as independent and that we maximize the likelihood.
This is what is typically meant when people talk about linear models (of course the data could be higher dimensional). Now consider the data given in Fig. 2. In this case a linear model would not be a good fit. [Figure 2: Non-linear Regression.] We have seen how we can get around this problem. Just add some additional features, e.g., $x^{2}$ and $x^{3}$. If we now use again a linear model, but in the extended feature space, then we should be able to model the data well. So the idea was to augment or transform the feature space. But this is not the only option we have. Note that in the example above the linear model predicts the mean of a distribution from which we then assume the data was sampled. Explicitly, we had $y = x w_{1} + w_{0} + Z$, where $x w_{1} + w_{0}$ is the prediction of the linear model and represents the mean (i.e., the putatively “true” value for this data point) and then we get a noisy version as a sample. Here is now the extra
The plot of y {\\displaystyle y} versus β ^ T x {\\displaystyle {\\hat {\\beta }}^{T}{\\textbf {x}}} is a sufficient summary plot for this regression. See also Dimension reduction Sliced inverse regression Principal component analysis Linear discriminant analysis Curse of dimensionality Multilinear subspace learning Notes References External links Sufficient Dimension Reduction", "In statistics, sufficient dimension reduction (SDR) is a paradigm for analyzing data that combines the ideas of dimension reduction with the concept of sufficiency. Dimension reduction has long been a primary goal of regression analysis. Given a response variable y and a p-dimensional predictor vector x {\\displaystyle {\\textbf {x}}}, regression analysis aims to study the distribution of y ∣ x {\\displaystyle y\\mid {\\textbf {x}}}, the conditional distribution of y {\\displaystyle y} given x {\\displaystyle {\\textbf {x}}}. A dimension reduction is a function R ( x ) {\\displaystyle R({\\textbf {x}})} that maps x {\\displaystyle {\\textbf {x}}} to a subset of R k {\\displaystyle \\mathbb {R} ^{k}}, k < p, thereby reducing the dimension of x {\\displaystyle {\\textbf {x}}}. For example, R ( x ) {\\displaystyle R({\\textbf {x}})} may be one or more linear combinations of x {\\displaystyle {\\textbf {x}}}. A dimension reduction R ( x ) {\\displaystyle R({\\textbf {x}})} is said to be sufficient if the distribution of y ∣ R ( x ) {\\displaystyle y\\mid R({\\textbf {x}})} is the same as that of y ∣ x {\\displaystyle y\\mid {\\textbf {x}}}. In other words, no information about the regression is lost in reducing the dimension of x {\\displaystyle {\\textbf {x}}} if the reduction is sufficient. Graphical motivation In a regression setting, it is often useful to summarize the distribution of y ∣ x {\\displaystyle y\\mid {\\textbf {x}}} graphically. For instance, one may consider a scatterplot of y {\\displaystyle y} versus one or more of the predictors or a linear combination of the predictors. 
A scatterplot that contains all available regression information is called a sufficient summary plot. When x {\\", "Y}y\\,d\\rho (y\\mid x),\\,x\\in X,} where ρ ( y ∣ x ) {\\displaystyle \\rho (y\\mid x)} is the conditional distribution at x {\\displaystyle x} induced by ρ {\\displaystyle \\rho }. One common choice for approximating the regression function is to use functions from a reproducing kernel Hilbert space. These spaces can be infinite dimensional, in which they can supply solutions that overfit training sets of arbitrary size. Regularization is, therefore, especially important for these methods. One way to regularize non-parametric regression problems is to apply an early stopping rule to an iterative procedure such as gradient descent. The early stopping rules proposed for these problems are based on analysis of upper bounds on the generalization error as a function of the iteration number. They yield prescriptions for the number of iterations to run that can be computed prior to starting the solution process. Example: Least-squares loss (Adapted from Yao, Rosasco and Caponnetto, 2007) Let X ⊆ R n {\\displaystyle X\\subseteq \\mathbb {R} ^{n}} and Y = R. {\\displaystyle Y=\\mathbb {R}.} Given a set of samples z = { ( x i, y i ) ∈ X × Y : i = 1,..., m } ∈ Z m, {\\displaystyle \\mathbf {z} =\\left\\{(x_{i},y_{i})\\in X\\times Y:i=1,\\dots,m\\right\\}\\in Z^{m},} drawn independently from ρ {\\displaystyle \\rho }, minimize the functional E ( f ) = ∫ X × Y ( f ( x ) − y ) 2 d ρ {\\displaystyle {\\mathcal {E}}(f)=\\int _{X\\times Y}(f(x)-y)^{2}\\,d\\rho } where, f {\\displaystyle f} is a member of the reproducing kernel Hil" ]
[ "(a) linear regression cannot \"work\" if $N \\gg D$", "(b) linear regression cannot \"work\" if $N \\ll D$", "(c) linear regression can be made to work perfectly if the data is linearly separable" ]
(c)
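A minimal NumPy sketch of the regimes this question is about (the sizes $N=5$, $D=20$ and the seed are arbitrary choices for illustration): with $N \ll D$ and rows of $\mathbf{X}$ in general position, the minimum-norm least-squares solution interpolates $\pm 1$ labels exactly, so the thresholded linear predictor makes zero training errors.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 20                       # N << D: more features than samples
X = rng.standard_normal((N, D))
y = rng.choice([-1.0, 1.0], size=N)

# Minimum-norm least-squares solution; with N <= D and full-row-rank X
# the system X w = y is solvable, so the fit interpolates the labels.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(X @ w, y)        # zero residual on the training set
assert (np.sign(X @ w) == y).all()  # every sign matches: perfect classification
```

Note that this perfect fit on the training data says nothing about generalization; the question is only about whether a perfect linear fit is attainable.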
1185
Consider a matrix factorization problem of the form $\mathbf{X}=\mathbf{W Z}^{\top}$ to obtain an item-user recommender system, where $x_{i j}$ denotes the rating given by the $j^{\text {th }}$ user to the $i^{\text {th }}$ item. We use the root mean square error (RMSE) to gauge the quality of the factorization obtained. Select the correct option.
[ "Matrix Factorization (ScalableNMF), Distributed Stochastic Singular Value Decomposition. Online: how to update the factorization when new data comes in without recomputing from scratch, e.g., see online CNSC Collective (joint) factorization: factorizing multiple interrelated matrices for multiple-view learning, e.g. multi-view clustering, see CoNMF and MultiNMF Cohen and Rothblum 1993 problem: whether a rational matrix always has an NMF of minimal inner dimension whose factors are also rational. Recently, this problem has been answered negatively. See also Multilinear algebra Multilinear subspace learning Tensor Tensor decomposition Tensor software Sources and external links Notes =", "standing Machine Learning by Shalev-Shwartz and Ben-David. x x x x x x x o o o o o o o + + + + + + + Figure 3: Compression via PCA. The images after dimensionality reduction to R2 (K = 2). The different marks indicate different individuals. alized nicely, as shown in Figure 3 here. SVD and Matrix Factorization In previous lectures we have seen already several applications of matrix factorizations. Let us now discuss how the SVD relates to this problem. Assume that we are given the data matrix X. Use the SVD to write it as X = USV<unk>. X = USV<unk>= U |{z} W SV<unk> | {z } Z<unk> = WZ<unk>. So we have achieved a perfect factorization of our data ma- trix. There are two differences compared to matrix factorization problems. First, in the matrix factorization problem we typically re- strict W and Z to have few columns only, lets say K, where in SVD we can control the rank at any time later, and can let it range up to min{D, N}. Of course, in the low-rank case we cannot hope for a perfect factorization but we are looking for the best possible approximation. This difference can be easily addressed as we have already seen. Let 1 ≤K ≤min{D, N}. Let S(K) be the matrix that is equal to S except that all singular values sj for j ≥K +1 are set to zero. 
We have seen this matrix already in our discussion of the SVD. This gives us the rank-$K$ approximation $X_K := U S^{(K)} V^{\top}$, and indeed, as we have discussed, it is the best rank-$K$ approximation that we can find in the sense that the Frobenius norm of the difference is the smallest possible and is equal to $\sum_{i \geq K+1} s_i^2$, where the $s_i$ are again the singular values of $X$. We can again write the above approximation in a factorized form. Again, let $U_K$ be the matrix consisting of the first $K$ columns of $U$. Similar to before we can now write $X_K = U_K S^{(K)} V^{\top} = \underbrace{U_K}_{W} S
When L1 regularization (akin to Lasso) is added to NMF with the mean squared error cost function, the resulting problem may be called non-negative sparse coding due to the similarity to the sparse coding problem, although it may also still be referred to as NMF. Online NMF Many standard NMF algorithms analyze all the data together; i.e., the whole matrix is available from the start.", "). By definition, r = rank(A) and each of the columns ai of A = a1, a2,, an can be expressed as a linear combination 1 2 Version September 14, 2020 Chapter 1. Basics of these basis vectors: ai = b1ci1 + b2ci2 + · · · + brcir = b1,, br <unk> <unk> ci1. cir <unk> <unk>, for some coefficients cij ∈R with i = 1,, n, j = 1,, r. Stacking these relations column by column yields a1,, an = b1,, br <unk> <unk> c11 · · · cn1.. c1r · · · cnr <unk> <unk> Letting B denote the first factor and CT the second factor, we have thus arrived at the factorization A = BCT, B ∈Rm×r, C ∈Rn×r. (1.1) We say that A has low rank if rank(A) ≪m, n. For such matrices, storing the factors B and C instead of the original matrix significantly reduces memory requirements. Lemma 1.2. A matrix A has rank r if and only if it admits an factorization of the form (1.1) with factors B, C having linearly independent columns. Proof. The result follows directly from combining the factorization (1.1) with the property rank(A) ≤min{rank(B), rank(C)} from Lemma 1.1.3. A factorization of the form (1.1) with r = rank(A) is sometimes called full rank factorization. Note that such a factorization is highly non-unique. In fact, for any invertible matrix Q ∈Rr×r, we have A = BCT = ̃B ̃CT, with ̃B = BQ, ̃C = CQ-T, (1.2) with Q-T := (QT )-1. Problems Problem 1.1.1. 1. Find 2 × 2 matrices A, B such that rank(AB) <unk>= rank(BA). 2. Let n be fixed but arbitrary. Determine the smallest value of m such that there are m × n matrices A, B with the following property: rank(A)", "Low-Rank Approximation Lecture Notes Prof. 
Daniel Kressner EPFL / MATH / ANCHP Fall 2018 September 14, 2020 ii Version September 14, 2020 Chapter 1 Basics The purpose of this chapter is to collect the theoretical foundations of low-rank matrix approximation. 1.1 Matrix rank and factorization In this lecture, we only consider matrices with real entries. The extension of results to matrices with complex entries is often trivial and will sometimes be sketched. Also, when considering a rectangular m × n matrix A we will usually assume that m ≥n. The case m < n can usually be obtained by considering AT instead of A. The rank of a matrix A ∈Rm×n is denoted by rank(A) and commonly defined as the dimension of range(A), the space spanned by the columns of A. The following lemma collects basic properties of the rank. Lemma 1.1. Let A ∈Rm×n. Then 1. rank(AT ) = rank(A); 2. rank(PAQ) = rank(A) for invertible matrices P ∈Rm×m, Q ∈Rn×n; 3. rank(AB) ≤min{rank(A), rank(B)} for any matrix B ∈Rn×p. 4. rank A11 A12 0 A22 = rank(A11) + rank(A22) for A11 ∈Rm1×n1, A12 ∈ Rm1×n2, A22 ∈Rm2×n2. We now explain the connection between matrix rank and matrix factorizations. Let {b1,, br} ⊂Rm be a basis of range(A). By definition, r = rank(A) and each of the columns ai of A = a1, a2,, an can be expressed as a linear combination 1 2 Version September 14, 2020 Chapter 1. Basics of these basis vectors: ai = b1ci1 + b2ci2 + · · · + brcir = b1,, br <unk> <unk> ci1. cir <unk> <unk>, for some coefficients cij ∈R with i = 1,, n, j = 1,, r. Stacking these relations column by colum" ]
[ "Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from scratch to generate robust ratings for the user-item pairs containing this item.", "Regularization terms for $\mathbf{W}$ and $\mathbf{Z}$ in the form of their respective Frobenius norms are added to the RMSE so that the resulting objective function becomes convex.", "For obtaining a robust factorization of a matrix $\mathbf{X}$ with $D$ rows and $N$ columns where $N \ll D$, the latent dimension $\mathrm{K}$ should lie somewhere between $D$ and $N$.", "None of the other options are correct." ]
None of the other options are correct.
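The regularization claim among the options can be probed numerically: adding Frobenius-norm penalties does not make the factorization objective jointly convex in $(\mathbf{W}, \mathbf{Z})$. A scalar sketch (the values $\lambda = 0.1$ and $x = 1$ are arbitrary illustrative choices):

```python
lam = 0.1   # illustrative regularization weight (arbitrary)
x = 1.0     # a single observed "rating" (arbitrary)

def f(w, z):
    # Scalar analogue of ||X - W Z^T||_F^2 + lam * (||W||_F^2 + ||Z||_F^2)
    return (x - w * z) ** 2 + lam * (w ** 2 + z ** 2)

a = (1.0, 1.0)
b = (-1.0, -1.0)
mid = (0.0, 0.0)  # midpoint of a and b

# Convexity would require f(mid) <= (f(a) + f(b)) / 2.
# Here f(a) = f(b) = 0.2 but f(mid) = 1.0, so the inequality fails.
assert f(*mid) > 0.5 * f(*a) + 0.5 * f(*b)
```

The midpoint lies strictly above the chord, so the regularized objective is not convex; the bilinear term $w z$ is the culprit, and no amount of Frobenius regularization removes it.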
1191
Consider the composite function $f(x)=g(h(x))$, where all functions are $\mathbb{R}$ to $\mathbb{R}$. Which of the following is the weakest condition that guarantees that $f(x)$ is convex?
[ "…If the convex function $f$ is defined on the whole line and is everywhere differentiable, then $f^{*}(p)=\sup_{x\in I}(px-f(x))=\left.(px-f(x))\right|_{x=(f')^{-1}(p)}$ can be interpreted as the negative of the $y$-intercept of the tangent line to the graph of $f$ that has slope $p$. Definition in n-dimensional real space: The generalization to convex functions $f:X\to\mathbb{R}$ on a convex set $X\subset\mathbb{R}^{n}$ is straightforward: $f^{*}:X^{*}\to\mathbb{R}$ has domain $X^{*}=\left\{x^{*}\in\mathbb{R}^{n}:\sup_{x\in X}(\langle x^{*},x\rangle-f(x))<\infty\right\}$ and is defined by $f^{*}(x^{*})=\sup_{x\in X}(\langle x^{*},x\rangle-f(x)),\quad x^{*}\in X^{*}$
For example, f(x) = x2 is strictly convex. Lemma 1.19 ([BV04, 3.1.4]). Suppose that dom(f) is open and that f is twice continuously differentiable. If the Hessian ∇2f(x) <unk>0 for every x ∈dom(f) (i.e., z<unk>∇2f(x)z > 0 for any z <unk>= 0), then f is strictly convex. The converse is false, though: f(x) = x4 is strictly convex but has van- ishing second derivative at x = 0. Lemma 1.20. Let f : dom(f) →R be strictly convex. Then f has at most one global minimum. Proof. Suppose x⋆", "VEX FUNCTIONS 3. f(x) = |x|a with a ≥1 Show that the following functions from a Euclidean space E to R are convex. 4. f(x) = ⟨w, x⟩+ b with w ∈E, b ∈R 5. f(x) = 1 2 ⟨x, Ax⟩+ ⟨b, x⟩+ c with A: E →E a symmetric positive semidefinite linear map, b ∈E, c ∈R. Among all of these functions, which are strictly convex? Which are μ-strongly convex and with which constant μ? (You might find this exercise easier to solve after reading the section about convexity and derivatives.) Exercise 4.13. Show that if f1, f2 are two convex functions on E and a1, a2 ≥ 0 are two nonnegative real numbers then f = a1f1 + a2f2 defined by f(x) = a1f1(x) + a2f2(x) is a convex function on E. Extend the claim to f = a1f1 + · · · + akfk. What can you say about strict or strong convexity of f depending on properties of f1,, fk? Exercise 4.14. Show that if f1, f2 are two convex functions on E, then the max of those two functions, f = max(f1, f2) defined by f(x) = max(f1(x), f2(x)), is convex. Extend your reasoning to f = max(f1,, fk). Deduce that the function f(x) = |x| is convex. Exercise 4.15. Using some of the exercises above, show that the function f : Rn →R which to each vector x associates the sum of the k largest entries of x is convex. Exercise 4.16. Based on some of the exercises above, show that the log- sum-exp function is convex (t > 0 is a fixed, real parameter): f(x) = t log k X i=1 exi/t!. (4.1) (You might", "convex functions, λ1, λ2,, λm ∈R+. Then f := Pm i=1 λifi is convex on dom(f) := Tm i=1 dom(fi). 
(ii) Let f be a convex function with dom(f) ⊆Rd, g : Rm →Rd an affine function, meaning that g(x) = Ax + b, for some matrix A ∈Rd×m and some vector b ∈Rd. Then the function f ◦g (that maps x to f(Ax + b)) is convex on dom(f ◦g) := {x ∈Rm : g(x) ∈dom(f)}. 1.5 Minimizing convex functions The main feature that makes convex functions attractive in optimization is that every local minimum is a global one, so we cannot “get stuck” in 15 local optima. This is quite intuitive if we think of the graph of a convex function as being bowl-shaped. Definition 1.14. A local minimum of f : dom(f) →R is a point x such that there exists ε > 0 with f(x) ≤f(y) ∀y ∈dom(f) satisfying ∥y -x∥< ε. Lemma 1.15. Let x⋆be a local minimum of a convex function f : dom(f) →R. Then x⋆is a global minimum, meaning that f(x⋆) ≤f(y) ∀y ∈dom(f). Proof. Suppose there exists y ∈dom(f) such that f(y) < f(x⋆) and define y′ := λx⋆+ (1 -λ)y for λ ∈(0, 1). From convexity (1.1), we get that that f(y′) < f(x⋆). Choosing λ so close to 1 that ∥y′ -x⋆∥< ε yields a contradiction to x⋆being a local minimum. This does not mean that a convex function always has a global mini- mum. Think of f(x) = x as a trivial example. But also if f is bounded from below over dom(f), it", "I ∗ = { x ∗ ∈ R : sup x ∈ I ( x ∗ x − f ( x ) ) < ∞ } {\\displaystyle f^{*}(x^{*})=\\sup _{x\\in I}(x^{*}x-f(x)),\\ \\ \\ \\ I^{*}=\\left\\{x^{*}\\in \\mathbb {R} :\\sup _{x\\in I}(x^{*}x-f(x))<\\infty \\right\\}} where sup {\\textstyle \\sup } denotes the supremum over I {\\displaystyle I}, e.g., x {\\textstyle x} in I {\\textstyle I} is chosen such that x ∗ x − f ( x ) {\\textstyle x^{*}x-f(x)} is maximized at each x ∗ {\\textstyle x^{*}}, or x ∗ {\\textstyle x^{*}} is such that x ∗ x − f ( x ) {\\displaystyle x^{*}x-f(x)} has a bounded value throughout I {\\textstyle I} (e.g., when f ( x ) {\\displaystyle f(x)} is a linear function). The function f ∗ {\\displaystyle f^{*}} is called the convex conjugate function of f {\\displaystyle f}. 
For historical reasons (rooted in analytic mechanics), the conjugate variable is often denoted $p$ instead of $x^{*}$. If the convex function $f$ is defined on the whole line and is everywhere differentiable, then $f^{*}(p)=\sup_{x\in I}(px-f(x))=\left.(px-f(x))\right|_{x" ]
[ "$g(x)$ and $h(x)$ are convex and $g(x)$ and $h(x)$ are increasing", "$g(x)$ is convex and $g(x)$ is increasing", "$g(x)$ and $h(x)$ are convex and $h(x)$ is increasing", "$g(x)$ and $h(x)$ are convex and $g(x)$ is increasing", "$g(x)$ is convex and $g(x)$ and $h(x)$ are increasing", "$h(x)$ is convex and $g(x)$ and $h(x)$ are increasing", "$g(x)$ is convex and $h(x)$ is increasing" ]
$g(x)$ and $h(x)$ are convex and $g(x)$ is increasing
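The chosen composition rule can be spot-checked with a midpoint-convexity test on a grid (a heuristic numerical check, not a proof; the interval, grid size, and tolerance are arbitrary choices):

```python
import math

def is_midpoint_convex_on_grid(f, lo=-2.0, hi=2.0, n=81):
    # Checks f((x+y)/2) <= (f(x)+f(y))/2 over all grid pairs.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return all(f((x + y) / 2) <= 0.5 * f(x) + 0.5 * f(y) + 1e-12
               for x in xs for y in xs)

# g(u) = exp(u) convex and increasing, h(x) = x^2 convex  ->  g(h(x)) convex.
assert is_midpoint_convex_on_grid(lambda x: math.exp(x ** 2))

# g(u) = -u is convex (affine) but NOT increasing, h(x) = x^2 convex,
# yet g(h(x)) = -x^2 is concave: dropping "g increasing" breaks the rule.
assert not is_midpoint_convex_on_grid(lambda x: -(x ** 2))
```

The second case illustrates why convexity of $g$ and $h$ alone is not enough, while the rule does not need $h$ to be increasing (here $h(x)=x^2$ is not monotone).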
1193
Matrix Factorizations: The function $f(\mathbf{v}):=g\left(\mathbf{v} \mathbf{v}^{\top}\right)$ is convex over the vectors $\mathbf{v} \in \mathbb{R}^{2}$, when $g: \mathbb{R}^{2 \times 2} \rightarrow \mathbb{R}$ is defined as
[ "function f : E →R is convex if f((1 -t)x + ty) ≤(1 -t)f(x) + tf(y) for all x, y ∈E and all t ∈[0, 1]. There are several reasons why convexity is a delightful property to have in optimization. One of them is that convex functions do not have non-global local minima. Theorem 4.2. [Proof for exam 2025] If f is convex and x∗is a local minimum for f, then x∗is a global minimum for f. Proof. For contradiction, assume x∗is not a global minimum of f. Then, there exists a point y ∈E such that f(y) < f(x∗). Convexity implies that the value of f is strictly decreasing along the line segment going from x∗to y. Indeed, for all t ∈(0, 1]: f((1 -t)x∗+ ty) ≤(1 -t)f(x∗) + tf(y) < (1 -t)f(x∗) + tf(x∗) = f(x∗). This contradicts the fact that x∗is a local minimum. 31 32 CHAPTER 4. CONVEX FUNCTIONS Thus, we need not worry about optimization algorithms getting trapped in local minima of a convex f. This is already great as such, but it doesn’t do much for us unless (a) convex functions come up in applications, and (b) convex functions are easy to recognize. As it turns out, we can check both of these boxes. In this chapter, we review some basic examples and properties of convex functions: this allows us to “spot” convexity in applications. Then, we study the behavior of algorithms on convex functions. Convex analysis is a broad field which is of interest to mathematicians in both fundamental and applied research also beyond optimization. Refer- ences for this chapter include [BV04, Roc70] and lecture notes by Stephen J. 
Wright.1 4.1 Basic definitions In Definition 4.1, we stated what it means for a real-valued function f on a linear space E to be con", "ity is a delightful property to have in optimization One of them is that convex function do not have non global local minimum Theorem Proof for exam If f is convex and x is a local minimum for f then x is a global minimum for f Proof For contradiction assume x is not a global minimum of f Then there exists a point y E such that f y f x Convexity implies that the value of f is strictly decreasing along the line segment going from x to y Indeed for all t f t x ty t f x tf y t f x tf x f x This contradicts the fact that x is a local minimum CHAPTER CONVEX FUNCTIONS Thus we need not worry about optimization algorithm getting trapped in local minimum of a convex f This is already great a such but it doesn t do much for u unless a convex function come up in application and b convex function are easy to recognize As it turn out we can check both of these box In this chapter we review some basic example and property of convex function this allows u to spot convexity in application Then we study the behavior of algorithm on convex function Convex analysis is a broad field which is of interest to mathematician in both fundamental and applied research also beyond optimization Refer ences for this chapter include BV Roc and lecture note by Stephen J Wright Basic definition In Definition we stated what it mean for a real valued function f on a linear space E to be convex Let u add a few additional related term to our lexicon Definition A function f E R is strictly convex if f t x ty t f x tf y for all x y E distinct and all t Exercise It is clear that if f is strictly convex then it is convex On the other hand give an example of a function which is convex yet is not strictly convex Definition Assume E is a Euclidean space with norm A function f E R is strongly convex if x f x x is convex with Exercise Show that if f is strongly convex then f is 
strictly convex On the other hand check that f x x from R to R is strictly convex yet not strongly convex Definition A function f E R is concave if", "review some basic examples and properties of convex functions: this allows us to “spot” convexity in applications. Then, we study the behavior of algorithms on convex functions. Convex analysis is a broad field which is of interest to mathematicians in both fundamental and applied research also beyond optimization. Refer- ences for this chapter include [BV04, Roc70] and lecture notes by Stephen J. Wright.1 4.1 Basic definitions In Definition 4.1, we stated what it means for a real-valued function f on a linear space E to be convex. Let us add a few additional related terms to our lexicon. Definition 4.3. A function f : E →R is strictly convex if f((1 -t)x + ty) < (1 -t)f(x) + tf(y) for all x, y ∈E distinct and all t ∈(0, 1). Exercise 4.4. It is clear that if f is strictly convex then it is convex. On the other hand, give an example of a function which is convex yet is not strictly convex. Definition 4.5. Assume E is a Euclidean space with norm ∥· ∥. A function f : E →R is μ-strongly convex if x 7→f(x) -μ 2∥x∥2 is convex with μ > 0. Exercise 4.6. Show that if f is μ-strongly convex then f is strictly convex. On the other hand, check that f(x) = x4 from R to R is strictly convex yet not strongly convex. Definition 4.7. A function f : E →R is concave if -f is convex. Likewise, f is strictly or μ-strongly concave if -f is strictly or μ-strongly convex, respectively. Exercise 4.8. Show that f is simultaneously convex and concave if and only if f is an affine function, that is, f(x) = ⟨w, x⟩+ b for some w ∈E and b ∈R. 1http://www.optimization-online.org/DB_FILE/2016/12/5748.pdf 4.2. 
RECOGNIZING", "…$\operatorname{sconv}\mathcal{F}$ being the collection of functions of the form $\sum_{i=1}^{m}\alpha_{i}f_{i}$ with $\sum_{i=1}^{m}|\alpha_{i}|\leq 1$. Then if $N(\varepsilon\|F\|_{Q,2},\mathcal{F},L_{2}(Q))\leq C\varepsilon^{-V}$, the following is valid for the convex hull of $\mathcal{F}$: $\log N(\varepsilon\|F\|_{Q,2},\operatorname{sconv}\mathcal{F},L_{2}(Q))\leq K\varepsilon^{-\frac{2V}{V+2}}$. The important consequence of this fact is that $\frac{2V}{V+2}<2$, which is just enough so that the entropy integral is going to converge, and therefore the class $\operatorname{sconv}\mathcal{F}$ is going to be P-Donsker. Finally an example of a VC-subgraph class is considered. Any finite-dimensional vector space $\mathcal{F}$ of measurable functions $f:\mathcal{X}\to\mathbb{R}$", "…Function composition is generally noncommutative. For example, if $f(x)=2x+1$ and $g(x)=3x+7$, then $(f\circ g)(x)=f(g(x))=2(3x+7)+1=6x+15$ and $(g\circ f)(x)=g(f(x))=3(2x+1)+7=6x+10$. Matrix multiplication of square matrices of a given dimension is a noncommutative operation, except for $1\times 1$ matrices. 
For example: $\begin{bmatrix}0&2\\0&1\end{bmatrix}=\begin{bmatrix}1&1\\0&1\end{bmatrix}\begin{bmatrix}0&1\\0&1\end{bmatrix}\neq\begin{bmatrix}0&1\\0&1\end{bmatrix}\begin{bmatrix}1&1\\0&1\end{bmatrix}=\begin{bmatrix}0&1\\0&1\end{bmatrix}$. The vector product (or cross product) of two vectors in three dimensions is anti-commutative; i.e., $\mathbf{b}\times\mathbf{a}=-(\mathbf{a}\times\mathbf{b})$" ]
[ "(a) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}$.", "(b) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}+X_{22}$." ]
['(a)', '(b)']
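Both choices can be verified directly: with $g(\mathbf{X})=X_{11}$ we get $f(\mathbf{v})=v_{1}^{2}$, and with $g(\mathbf{X})=X_{11}+X_{22}$ we get $f(\mathbf{v})=v_{1}^{2}+v_{2}^{2}$, both convex quadratics. A small randomized Jensen-inequality check (sample count and seed are arbitrary):

```python
import numpy as np

def f_a(v):
    # g(X) = X_11 applied to X = v v^T, i.e. f(v) = v_1^2
    return np.outer(v, v)[0, 0]

def f_b(v):
    # g(X) = X_11 + X_22 applied to X = v v^T, i.e. f(v) = v_1^2 + v_2^2
    V = np.outer(v, v)
    return V[0, 0] + V[1, 1]

rng = np.random.default_rng(1)
for f in (f_a, f_b):
    for _ in range(1000):
        u = rng.standard_normal(2)
        w = rng.standard_normal(2)
        t = rng.uniform()
        # Convexity: f on the segment lies below the chord
        assert f(t * u + (1 - t) * w) <= t * f(u) + (1 - t) * f(w) + 1e-9
```

Note the check only probes random pairs; here it merely confirms what the closed forms $v_1^2$ and $v_1^2 + v_2^2$ already make obvious.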
1194
(Neural networks) Training only the first layer of a deep neural network using the logistic loss is equivalent to training a logistic regression over a transformed feature space.
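What this statement turns on: a logistic regression's logit is affine in its trained weights, whereas with every layer after the first frozen (and a nonlinearity in between), the network's logit is generally a nonlinear function of the first-layer weights. A single-hidden-unit sketch (the tanh activation, frozen output weight 1, and probe points are arbitrary illustrative choices):

```python
import math

def logit(w, x=1.0):
    # One hidden unit with frozen output weight 1: logit(w) = tanh(w * x),
    # viewed as a function of the single trainable first-layer weight w.
    return math.tanh(w * x)

w0, d = 0.5, 1.0
# Second difference along d; it is zero for every affine function.
second_diff = logit(w0 + d) - 2 * logit(w0) + logit(w0 - d)
assert abs(second_diff) > 0.1   # the logit is NOT affine in the trained weight

# By contrast, a logistic-regression logit w * phi(x) is affine in w:
lin = (w0 + d) * 1.0 - 2 * (w0 * 1.0) + (w0 - d) * 1.0
assert lin == 0.0
```

So training the first layer is not a logistic regression in those weights; only a layer whose logit is affine in the trained parameters (i.e., the last layer, over the features produced by the frozen layers before it) has that form.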
[ "data in the same way as we learn the weights of the linear classifier? This is what neural networks allow us to do. There is currently a lot of excitement about neural networks and their many applications. At the end of this short tutorial it is unlikely that you will be able to program a neural net to play Go like a grandmaster. Many small tricks and lots of patience and computing power are needed to train neural nets for complicated tasks. But you will be able to write small scripts to solve standard handwriting recognition challenges. We will focus on basic questions. We highly recommend the web tutorial by Michael Nielsen, neuralnetworksanddeeplearning.com, and we will follow it in many aspects. The Basic Structure Let us look at the structure of a neural network. It is shown in Figure 1. This is a neural net with one input layer of size D, L hidden layers of size K, and one output layer. It is a feedforward network: the computation performed by the network starts with the input from the left and flows to the right. There is no feedback loop. As always, we assume that our input is a D-dimensional vector. We see that there is a node drawn in Figure 1 for each of the D components of x. We denote these nodes by $x^{(0)}_{i}$, where the superscript (0) specifies that this is the input layer. The same network can be used for regression as well as classification. The only difference will be in the output layer. Let us discuss the exact computation that is performed by this network. We already described the input layer. Let us now look at the hidden layers. [Figure 1: A neural network with one input layer, L hidden layers, and one output layer.] Let us assume that there are K nodes in each hidden layer, where K is a hyper-parameter that has to be chosen by the user and can/should be optimized via validation. There is no reason that all hidden layers should have the same size but we will stick to this simple model. How many layers are there typically? 
Not long ago, typical networks might have just had one or a few hidden layers. Modern applications have “deep” nets with sometimes hundreds of layers. Training such deep nets poses new and challenging problems and we", "In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, and training cost. Introduction In general, a deep learning model can be characterized by four parameters: model size, training dataset size, training cost, and the post-training error rate (e.g., the test set error rate). Each of these variables can be defined as a real number, usually written as N, D, C, L {\\displaystyle N,D,C,L} (respectively: parameter count, dataset size, computing cost, and loss). A neural scaling law is a theoretical or empirical statistical law between these parameters. There are also other parameters with other scaling laws. Size of the model In most cases, the model's size is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models. With sparse models, during inference, only a fraction of their parameters are used. In comparison, most other kinds of neural networks, such as transformer models, always use all their parameters during inference. Size of the training dataset The size of the training dataset is usually quantified by the number of data points within it. Larger training datasets are typically preferred, as they provide a richer and more diverse source of information from which the model can learn. This can lead to improved generalization performance when the model is applied to new, unseen data. However, increasing the size of the training dataset also increases the computational resources and time required for model training. 
With the \"pretrain, then finetune\" method used for most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of the pretraining dataset. In some cases, a small amount of high quality data suffices for finetuning, and more data does not necessarily improve performance. Cost of training Training cost is typically measured in terms of time (how long it takes to train the model", "Training Neural Networks. Loss optimization: find the network weights that achieve the lowest loss, $w^* = \arg\min_w \frac{1}{n} \sum_{i=1}^{n} L(f(x_i; w), y_i)$, given $n$ training pairs of input $x_i$ and label $y_i$. Randomly pick a starting point, compute the gradient, take a small step in the opposite direction of the gradient, and repeat this process until convergence. Gradient Descent (computationally intensive). Algorithm: initialize the weights randomly; loop until convergence: compute the gradient $\frac{\partial L}{\partial w}$ and update the weights $w \leftarrow w - \eta \frac{\partial L}{\partial w}$; return the weights. Stochastic Gradient Descent (easy to compute but very noisy). Algorithm: initialize the weights randomly; loop until convergence: pick a random single sample $i$, compute its gradient $\frac{\partial L_i}{\partial w}$, and update the weights $w \leftarrow w - \eta \frac{\partial L_i}{\partial w}$; return the weights. Mini-Batch Gradient Descent. Algorithm: initialize the weights randomly, $w \sim \mathcal{N}(0, \sigma^2)$; loop until convergence: pick a
mini-batch of $B$ data samples, compute the gradient $\frac{\partial L}{\partial w} = \frac{1}{B} \sum_{i=1}^{B} \frac{\partial L_i}{\partial w}$, and update the weights $w \leftarrow w - \eta \frac{\partial L}{\partial w}$; return the weights. This gives a better estimate of the true gradient and is fast to compute, with smoother convergence. Backpropagation Using the Chain Rule: for a composition $\hat{y} = f(z)$ with $z = g(w)$, the chain rule gives $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial w}$. Training Deep Networks. In most
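The mini-batch gradient descent loop described in the passage above can be sketched in a few lines of NumPy; the least-squares loss, learning rate, and batch size below are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def minibatch_gd(X, y, batch_size=32, lr=0.1, epochs=200, seed=0):
    """Mini-batch gradient descent for least squares, L(w) = 1/(2n) ||Xw - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(0.0, 0.1, size=d)           # random initialization, w ~ N(0, 0.1^2)
    for _ in range(epochs):
        order = rng.permutation(n)             # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            # average gradient over the mini-batch: (1/B) X_b^T (X_b w - y_b)
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad                     # step opposite the gradient
    return w
```

On noise-free data the iterates recover the generating weights, since the mini-batch gradient vanishes at the least-squares solution.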
Convolutional neural networks strengthen the connection between neurons that are \"close\" to each other—this is especially important in image processing, where a local set of neurons must identify an \"edge\" before the network can identify an object. Deep learning Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces. Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, and others. The reason that deep learning performs so well in so many applications is not known as of 2021.", "In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks. A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The \"signal\" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process. Typically, neurons are aggregated into layers. 
Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers. Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. Training Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data. History Early work Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feed" ]
[ "True", "False" ]
False
1196
Our task is to classify whether an animal is a dog (class 0) or a cat (class 1) based on the following features: \begin{itemize} \item $x_1$: height \item $x_2$: length of whiskers \item $x_3$: thickness of fur \end{itemize} We perform standard normal scaling on the training features so that they have a mean of zero and standard deviation of 1. We have trained a Logistic Regression model to determine the probability that the animal is a cat, $p(1 | \mathbf{x,w})$. Our classifier learns that cats have a lower height and longer whiskers than dogs, while the thickness of fur is not relevant to the classification outcome. Which of the following is true about the weights~$\wv$ learned by the classifier?
[ "testing data is a critical one for pattern classification. In some simple examples of function learning, a relationship that could be explicitly stated for the complete set of possible feature vectors is learned. However, in classification problems we often have a huge or even infinite number of potential feature vectors, and the system must be trained using a very limited subset that has been labeled with class identity. Thus, even though the relationship between feature vector and class might be learned perfectly for some training set of data, likelihoods or posteriors may not be well-estimated for the general population of possible samples. In general, classification error on the training set patterns should be viewed as a lower bound. A better estimate of the classifier error is obtained using an independent test set. The larger this test set, the better the representation of the general population of possible test patterns. Conventional significance tests (e.g., a normal approximation to the binomial distribution for correctness on the test set) should be done as a sanity check. For instance, a 49% error on a million-pattern test set is significantly different from chance (50% error) on a two-class problem; the difference represents 10,000 patterns. On the other hand, the same error percentage on a 100-pattern test set is indistinguishable from chance performance. (2) One way to effectively increase the size of a test set is to use a “jackknife” procedure, in which each split of the data (e.g., fifths) is used in turn for test after using the remaining part for training. Thus, all of the available data is ultimately used for the test set. (2) For the normal approximation to a binomial distribution, the equivalent standard deviation is √npq, where n is the number of patterns, p is the probability of getting the class correct by chance, and q is the probability of getting the class wrong. 
Training set size is also a major concern for real problems. The larger the training set, the better the classifier will do on representing the underlying distributions. Also, the more complex the recognizer (e.g., the larger the number of independent parameters), the greater the risk of over-specializing to the", "if w = (1, 0) and w0 = 0 then the transition between the two levels happens at the x1 = 0 plane. By scaling w we can make the transition faster or slower and by changing w0 we can shift the decision region along the w vector. At this point it is hopefully clear how we use logistic regression to do classification. To repeat, given the weight vector w we predict the probability of the class label 1 to be p(1 | x, w) = σ(x^T w + w0) and then quantize. What we need to discuss next is how we learn the model, i.e., how we find a good weight vector w given some training set Strain. A word about notation In the beginning of this course we started with an arbitrary feature vector x. We then discussed that often it is useful to add the constant 1 to this feature vector and we called the resulting vector x̃. We also discussed that often it is useful to add further features and we then called the resulting vector φ(x). Note that in particular for the logistic regression it is crucial that we have the constant term contained in x since this allows us to “shift” the decision region. We will assume from now on that the vector x always contains the constant term as well as any further features we care to add. This will save us from a flood of notation. Hence, from now on we no longer need the extra term w0 but the term x^T w suffices since it already contains the constant. Training As always we assume that we have our training set Strain, consisting of iid samples {(x_n, y_n)}_{n=1}^{N}, sampled according to a fixed but unknown distribution D.
Exploiting that the samples (x_n, y_n) are independent, the probability of y (vector of all labels) given X (matrix of all inputs) and w (weight vector) has a simple product form: $p(y \mid X, w) = \prod_{n=1}^{N} p(y_n \mid x_n) = \prod_{n: y_n = 1} p(y_n = 1 \mid x_n) \prod_{n: y_n = 0} p(y_n = 0 \mid x_n) = \prod_{n=1}^{N} \sigma(x_n^T w)^{y_n} (1 - \sigma(x_n^T w))^{1 - y_n}$", "only 1% of dogs are classified correctly. The image dataset consists of 100000 images, 90% of which are pictures of cats and 10% are pictures of dogs. In such a situation, the probability that the picture containing a dog will be classified correctly is pretty low: P ( C − | − ) = 0.01 {\displaystyle P(C-|-)=0.01} Not all the metrics are noticing this low probability: P 4 = 0.0388 {\displaystyle \mathrm {P} _{4}=0.0388} F 1 = 0.9478 {\displaystyle \mathrm {F} _{1}=\mathbf {0.9478} } J = 0.0099 {\displaystyle \mathrm {J} =0.0099} (Informedness / Youden index) M K = 0.8183 {\displaystyle \mathrm {MK} =\mathbf {0.8183} } (Markedness) See also F-score Informedness Markedness Matthews correlation coefficient Precision and Recall Sensitivity and Specificity NPV Confusion matrix", "In machine learning, a linear classifier makes a classification decision for each object based on a linear combination of its features. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use. Definition If the input feature vector to the classifier is a real vector x → {\displaystyle {\vec {x}}}, then the output score is y = f ( w → ⋅ x → ) = f ( ∑ j w j x j ), {\displaystyle y=f({\vec {w}}\cdot {\vec {x}})=f\left(\sum _{j}w_{j}x_{j}\right),} where w → {\displaystyle {\vec {w}}} is a real vector of weights and f is a function that converts the dot product of the two vectors into the desired output. 
(In other words, w → {\\displaystyle {\\vec {w}}} is a one-form or linear functional mapping x → {\\displaystyle {\\vec {x}}} onto R.) The weight vector w → {\\displaystyle {\\vec {w}}} is learned from a set of labeled training samples. Often f is a threshold function, which maps all values of w → ⋅ x → {\\displaystyle {\\vec {w}}\\cdot {\\vec {x}}} above a certain threshold to the first class and all other values to the second class; e.g., f ( x ) = { 1 if w T ⋅ x > θ, 0 otherwise {\\displaystyle f(\\mathbf {x} )={\\begin{cases}1&{\\text{if }}\\ \\mathbf {w} ^{T}\\cdot \\mathbf {x} >\\theta,\\\\0&{\\text{otherwise}}\\end{cases}}} The superscript T indicates the transpose and θ {\\displaystyle \\theta } is a scalar threshold. A more complex f might give the probability that an item belongs to a", "1: Rare disease detection test Let us consider the medical test aimed to detect kind of rare disease. Population size is 100 000, while 0.05% population is infected. Test performance: 95% of all positive individuals are classified correctly (TPR=0.95) and 95% of all negative individuals are classified correctly (TNR=0.95). In such a case, due to high population imbalance, in spite of having high test accuracy (0.95), the probability that an individual who has been classified as positive is in fact positive is very low: P ( + ∣ C + ) = 0.0095 {\\displaystyle P(+\\mid C{+})=0.0095} And now we can observe how this low probability is reflected in some of the metrics: P 4 = 0.0370 {\\displaystyle \\mathrm {P} _{4}=0.0370} F 1 = 0.0188 {\\displaystyle \\mathrm {F} _{1}=0.0188} J = 0.9100 {\\displaystyle \\mathrm {J} =\\mathbf {0.9100} } (Informedness / Youden index) M K = 0.0095 {\\displaystyle \\mathrm {MK} =0.0095} (Markedness) Example 2: Image recognition - cats vs dogs We are training neural network based image classifier. We are considering only two types of images: containing dogs (labeled as 0) and containing cats (labeled as 1). 
Thus, our goal is to distinguish between the cats and dogs. The classifier overpredicts in favor of cats (\"positive\" samples): 99.99% of cats are classified correctly and only 1% of dogs are classified correctly. The image dataset consists of 100000 images, 90% of which are pictures of cats and 10% are pictures of dogs. In such a situation, the probability that the picture containing dog will be classified correctly is pretty low: P ( C − | − ) = 0.01 {\\displaystyle P(C-|-)=0.01} Not all the metrics are noticing this low probability: P 4 = 0.0388 {\\displaystyle \\mathrm {P} _{4}=0.0388} F 1" ]
[ "$w_1 < w_2 < w_3$", "$w_1 < w_3 < w_2$", "$w_2 < w_1 < w_3$", "$w_2 < w_3 < w_1$", "$w_3 < w_1 < w_2$", "$w_3 < w_2 < w_1$" ]
$w_1 < w_3 < w_2$. When the features are standardized, a below-average height $x_1$ becomes negative. Negative heights increase the probability that the animal is a cat, so height and cat probability are inversely correlated, and therefore $w_1 < 0$. Conversely, an above-average whisker length $x_2$ (positive after scaling) increases the cat probability, so $w_2 > 0$. Since $x_3$ is not taken into account by the classifier, $w_3$ should be close to or equal to zero.
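The sign pattern in the answer above can be checked empirically. The synthetic animals below (height, whisker length, and an irrelevant fur-thickness feature), along with the learning rate and iteration count, are made-up illustration choices, not part of the question.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up animals: cats (label 1) are shorter and have longer whiskers; fur is noise.
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, size=n)                     # 0 = dog, 1 = cat
height = np.where(y == 1, 30.0, 60.0) + rng.normal(0.0, 5.0, n)
whiskers = np.where(y == 1, 8.0, 3.0) + rng.normal(0.0, 1.0, n)
fur = rng.normal(2.0, 0.5, n)                      # independent of the label
X = np.column_stack([height, whiskers, fur])
X = (X - X.mean(axis=0)) / X.std(axis=0)           # standard normal scaling

# Plain gradient descent on the mean logistic loss (intercept omitted for brevity).
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)                             # p(1 | x, w)
    w -= 0.1 * X.T @ (p - y) / n
```

After training, `w[0]` comes out negative, `w[1]` positive, and `w[2]` near zero, matching $w_1 < w_3 < w_2$.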
1201
Consider two fully connected networks, A and B, with a constant width for all layers, inputs and outputs. Network A has depth $3L$ and width $H$, network B has depth $L$ and width $2H$. Everything else is identical for the two networks and both $L$ and $H$ are large. In this case, performing a single iteration of backpropagation requires fewer scalar multiplications for network A than for network B.
[ ": Write down the network pretending that all the parameters are independent. Run the backpropagation algorithm. The gradient for a particular parameter for the model where some weights are equal is now just the sum of the gradients (of the model where weights are independent) of all the edges that share the same weight.", "Δ w i {\displaystyle \Delta w_{i}} for all weights from input layer to hidden layer // backward pass continued update network weights // input layer not modified by error estimate until error rate becomes acceptably low return the network The lines labeled \"backward pass\" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network regarding the network's modifiable weights.", "immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication. Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer from right to left – \"backwards\" – with the gradient of the weights between each layer being a simple modification of the partial products (the \"backwards propagated error\"). Given an input–output pair ( x, y ) {\displaystyle (x,y)}, the loss is: C ( y, f L ( W L f L − 1 ( W L − 1 ⋯ f 2 ( W 2 f 1 ( W 1 x ) ) ⋯ ) ) ) {\displaystyle C(y,f^{L}(W^{L}f^{L-1}(W^{L-1}\cdots f^{2}(W^{2}f^{1}(W^{1}x))\cdots )))} To compute this, one starts with the input x {\displaystyle x} and works forward; denote the weighted input of each hidden layer as z l {\displaystyle z^{l}} and the output of hidden layer l {\displaystyle l} as the activation a l {\displaystyle a^{l}}. 
For backpropagation, the activation a l {\displaystyle a^{l}} as well as the derivatives ( f l ) ′ {\displaystyle (f^{l})'} (evaluated at z l {\displaystyle z^{l}} ) must be cached for use during the backwards pass. The derivative of the loss in terms of the inputs is given by the chain rule; note that each term is a total derivative, evaluated at the value of the network (at each node) on the input x {\displaystyle x}: $\frac{dC}{da^{L}} \cdot \frac{da^{L}}{dz^{L}} \cdot \frac{dz^{L}}{da^{L-1}} \cdot \frac{da^{L-1}}{dz^{L-1}} \cdot \frac{dz^{L-1}}{da^{L-2}} \cdots \frac{da^{1}}{dz^{1}} \cdot
For each input–output pair ( x i, y i ) {\\displaystyle (x_{i},y_{i})} in the training set, the loss of the model on that pair is the cost of the difference between the predicted output g ( x i ) {\\displaystyle g(x_{i})} and the target output y i {\\displaystyle y_{i}} C ( y i, g ( x i ) ) {\\displaystyle C(y_{i},g(x_{i}))} Note the distinction: during model evaluation the weights are fixed while the inputs vary (and the target output may be unknown), and the network ends with the output layer (", "point is that since the only way a weight in W l {\\displaystyle W^{l}} affects the loss is through its effect on the next layer, and it does so linearly, δ l {\\displaystyle \\delta ^{l}} are the only data you need to compute the gradients of the weights at layer l {\\displaystyle l}, and then the gradients of weights of previous layer can be computed by δ l − 1 {\\displaystyle \\delta ^{l-1}} and repeated recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layer l {\\displaystyle l}, it is unnecessary to recompute all derivatives on later layers l + 1, l + 2,... {\\displaystyle l+1,l+2,\\ldots } each time. Second, it avoids unnecessary intermediate calculations, because at each stage it directly computes the gradient of the weights with respect to the ultimate output (the loss), rather than unnecessarily computing the derivatives of the values of hidden layers with respect to changes in weights ∂ a j ′ l ′ / ∂ w j k l {\\displaystyle \\partial a_{j'}^{l'}/\\partial w_{jk}^{l}}. Backpropagation can be expressed for simple feedforward networks in terms of matrix multiplication, or more generally in terms of the adjoint graph. 
Matrix multiplication For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication. Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer from right to left – \"backwards\" – with the gradient of the weights between each layer being a simple modification of the partial products (the \"backwards propagated error\"). Given an input–output pair ( x, y )" ]
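The right-to-left product described in the passages above can be implemented directly for a tiny two-layer network. The shapes, the sigmoid hidden layer, the linear output layer, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                       # input
y = np.array([0.5, -0.2])                    # target output
W1 = rng.normal(size=(4, 3))                 # hidden layer weights
W2 = rng.normal(size=(2, 4))                 # output layer weights

# Forward pass: cache the weighted inputs z^l and activations a^l.
z1 = W1 @ x
a1 = sigmoid(z1)
z2 = W2 @ a1
a2 = z2                                      # linear output layer
C = 0.5 * np.sum((a2 - y) ** 2)              # squared-error loss

# Backward pass, right to left: delta^l = dC/dz^l ("backwards propagated error").
delta2 = a2 - y                              # output layer is linear
dW2 = np.outer(delta2, a1)                   # gradient for W2
delta1 = (W2.T @ delta2) * a1 * (1.0 - a1)   # chain rule through the sigmoid
dW1 = np.outer(delta1, x)                    # gradient for W1
```

A finite-difference check on `dW1` confirms the chain-rule product matches the numerical gradient.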
[ "True", "False" ]
True. The number of multiplications required for backpropagation is linear in the depth and quadratic in the width, so network A costs on the order of $3LH^2$ scalar multiplications while network B costs $L(2H)^2 = 4LH^2$, and $3LH^2 < 4LH^2$.
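The count in the explanation above can be made concrete under the usual assumption that the multiplication cost of a pass is proportional to the number of weights; the particular values of L and H below are arbitrary illustrations.

```python
def weight_multiplies(depth, width):
    """Scalar multiplications for one pass through a fully connected net with
    `depth` hidden layers of size `width` (input and output also of size `width`):
    each of the depth + 1 weight matrices is width x width."""
    return (depth + 1) * width * width

L, H = 100, 1000
a = weight_multiplies(3 * L, H)      # network A: depth 3L, width H
b = weight_multiplies(L, 2 * H)      # network B: depth L, width 2H
# For large L the ratio a / b approaches 3LH^2 / (4LH^2) = 3/4, so A is cheaper.
```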
1207
Consider the loss function $L: \R^d \to \R$, $L(\wv) = \frac{\beta}{2}\|\wv\|^2$, where $\beta > 0$ is a constant. We run gradient descent on $L$ with a stepsize $\gamma > 0$ starting from some $\wv_0 \neq 0$. Which of the statements below is true?
[ "{\\displaystyle m\\geq 1}, where h m ∈ H {\\displaystyle h_{m}\\in {\\mathcal {H}}} is a base learner function. Unfortunately, choosing the best function h m {\\displaystyle h_{m}} at each step for an arbitrary loss function L is a computationally infeasible optimization problem in general. Therefore, we restrict our approach to a simplified version of the problem. The idea is to apply a steepest descent step to this minimization problem (functional gradient descent). The basic idea is to find a local minimum of the loss function by iterating on F m − 1 ( x ) {\\displaystyle F_{m-1}(x)}. In fact, the local maximum-descent direction of the loss function is the negative gradient. Hence, moving a small amount γ {\\displaystyle \\gamma } such that the linear approximation remains valid: F m ( x ) = F m − 1 ( x ) − γ ∑ i = 1 n ∇ F m − 1 L ( y i, F m − 1 ( x i ) ) {\\displaystyle F_{m}(x)=F_{m-1}(x)-\\gamma \\sum _{i=1}^{n}{\\nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))}} where γ > 0 {\\displaystyle \\gamma >0}. For small γ {\\displaystyle \\gamma }, this implies that L ( y i, F m ( x i ) ) ≤ L ( y i, F m − 1 ( x i ) ) {\\displaystyle L(y_{i},F_{m}(x_{i}))\\leq L(y_{i},F_{m-1}(x_{i}))}. Furthermore, we can optimize γ {\\displaystyle \\gamma } by finding the γ {\\displaystyle \\gamma } value for which the loss function has a minimum: γ m = arg ⁡ min γ ∑ i = 1 n L ( y i, F m", "invertible link are called proper loss functions. The sole minimizer of the expected risk, f φ ∗ {\\displaystyle f_{\\phi }^{*}}, associated with the above generated loss functions can be directly found from equation (1) and shown to be equal to the corresponding f ( η ) {\\displaystyle f(\\eta )}. This holds even for the nonconvex loss functions, which means that gradient descent based algorithms such as gradient boosting can be used to construct the minimizer. 
Proper loss functions, loss margin and regularization For proper loss functions, the loss margin can be defined as μ φ = − φ ′ ( 0 ) φ ′′ ( 0 ) {\\displaystyle \\mu _{\\phi }=-{\\frac {\\phi '(0)}{\\phi ''(0)}}} and shown to be directly related to the regularization properties of the classifier. Specifically a loss function of larger margin increases regularization and produces better estimates of the posterior probability. For example, the loss margin can be increased for the logistic loss by introducing a γ {\\displaystyle \\gamma } parameter and writing the logistic loss as 1 γ log ⁡ ( 1 + e − γ v ) {\\displaystyle {\\frac {1}{\\gamma }}\\log(1+e^{-\\gamma v})} where smaller 0 < γ < 1 {\\displaystyle 0<\\gamma <1} increases the margin of the loss. It is shown that this is directly equivalent to decreasing the learning rate in gradient boosting F m ( x ) = F m − 1 ( x ) + γ h m ( x ), {\\displaystyle F_{m}(x)=F_{m-1}(x)+\\gamma h_{m}(x),} where decreasing γ {\\displaystyle \\gamma } improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate of γ {\\displaystyle \\gamma } is used, the correct formula for retrieving the posterior probability is now η = f − 1 ( γ F ( x ) ) {\\displaystyle \\eta =f^{-1}(\\gamma F(x)", "i}x_{i}\\left(x_{i}^{\\mathsf {T}}w_{i-1}-y_{i}\\right)} is replaced by w i = w i − 1 − γ i x i ( x i T w i − 1 − y i ) = w i − 1 − γ i ∇ V ( ⟨ w i − 1, x i ⟩, y i ) {\\displaystyle w_{i}=w_{i-1}-\\gamma _{i}x_{i}\\left(x_{i}^{\\mathsf {T}}w_{i-1}-y_{i}\\right)=w_{i-1}-\\gamma _{i}\\nabla V(\\langle w_{i-1},x_{i}\\rangle,y_{i})} or Γ i ∈ R d × d {\\displaystyle \\Gamma _{i}\\in \\mathbb {R} ^{d\\times d}} by γ i ∈ R {\\displaystyle \\gamma _{i}\\in \\mathbb {R} }, this becomes the stochastic gradient descent algorithm. In this case, the complexity for n {\\displaystyle n} steps of this algorithm reduces to O ( n d ) {\\displaystyle O(nd)}. 
The storage requirements at every step i {\\displaystyle i} are constant at O ( d ) {\\displaystyle O(d)}. However, the stepsize γ i {\\displaystyle \\gamma _{i}} needs to be chosen carefully to solve the expected risk minimization problem, as detailed above. By choosing a decaying step size γ i ≈ 1 i, {\\displaystyle \\gamma _{i}\\approx {\\frac {1}{\\sqrt {i}}},} one can prove the convergence of the average iterate w ̄ n = 1 n ∑ i = 1 n w i {\\textstyle {\\overline {w}}_{n}={\\frac {1}{n}}\\sum _{i=1}^{n}w_{i}} ", "< ε A vector w⋆is a global minimum of L if it is no worse than all oth- ers, L(w⋆) ≤L(w), ∀w ∈RD A local or global minimum is said to be strict if the corresponding inequality is strict for w <unk>= w⋆. Smooth Optimization Follow the Gradient A gradient (at a point) is the slope of the tangent to the function (at that point). It points to the direction of largest increase of the function. For a 2-parameter model, MSE(w) and MAE(w) are shown below. (We used yn ≈w0 + w1xn1 with y<unk>= [2, -1, 1.5] and x<unk>= [-1, 1, -1]). -10 -5 0 5 10 -10 -5 0 5 10 0 20 40 60 80 100 120 140 160 -10 -5 0 5 10 -10 -5 0 5 10 0 5 10 15 Definition of the gradient: ∇L(w) := ∂L(w) ∂w1,, ∂L(w) ∂wD <unk> This is a vector, ∇L(w) ∈RD. Gradient Descent To minimize the function, we itera- tively take a step in the (opposite) direction of the gradient w(t+1) := w(t) -γ∇L(w(t)) where γ > 0 is the step-size (or learning rate). Then repeat with the next t. Example: Gradient descent for 1- parameter model to minimize MSE: w(t+1) 0 = (1 -γ)w(t) 0 + γ ̄y where ̄y := P n yn/N. When is this sequence guaranteed to converge? Gradient Descent for Linear MSE For linear regression y = <unk> <unk> y1 y2. yN <unk> <unk>, X = <unk> <unk> x11 x12x1D x21 x22x2DxN1 xN2xND <unk> <unk> We define the error vector e: e = y -Xw and MSE as follows: L(w) := 1 2N N X n=1", "ln q(x_{1:T}|x_{0})]} and now the goal is to minimize the loss by stochastic gradient descent. 
The expression may be simplified to L ( θ ) = ∑ t = 1 T E x t − 1, x t ∼ q [ − ln ⁡ p θ ( x t − 1 | x t ) ] + E x 0 ∼ q [ D K L ( q ( x T | x 0 ) ‖ p θ ( x T ) ) ] + C {\\displaystyle L(\\theta )=\\sum _{t=1}^{T}E_{x_{t-1},x_{t}\\sim q}[-\\ln p_{\\theta }(x_{t-1}|x_{t})]+E_{x_{0}\\sim q}[D_{KL}(q(x_{T}|x_{0})\\|p_{\\theta }(x_{T}))]+C} where C {\\displaystyle C} does not depend on the parameter, and thus can be ignored. Since p θ ( x T ) = N ( x T | 0, I ) {\\displaystyle p_{\\theta }(x_{T})={\\mathcal {N}}(x_{T}|0,I)} also does not depend on the parameter, the term E x 0 ∼ q [ D K L ( q ( x T | x 0 ) ‖ p θ ( x T ) ) ] {\\displaystyle E_{x_{0}\\sim q}[D_{KL}(q(x_{T}|x_{0})\\|p_{\\theta }(x_{T}))]} can also be ignored. This leaves just L ( θ ) = ∑ t = 1 T L t {\\displaystyle L(\\theta )=\\sum _{t=1}^{T}L_{t}} with L t = E x t − 1, x t ∼ q [ − ln ⁡ p θ ( x t − 1 | x t ) ] {\\displaystyle L_{" ]
[ "Gradient descent converges to the global minimum for any stepsize $\\gamma > 0$.", "Gradient descent with stepsize $\\gamma = \frac{2}{\beta}$ produces iterates that diverge to infinity ($\\|\\wv_t\\| \to \\infty$ as $t\to \\infty$).", "Gradient descent converges in two steps for $\\gamma = \frac{1}{\beta}$ (i.e., $\\wv_2$ is the \textbf{first} iterate attaining the global minimum of $L$).", "Gradient descent converges to the global minimum for any stepsize in the interval $\\gamma \\in \big( 0, \frac{2}{\beta}\big)$." ]
Gradient descent converges to the global minimum for any stepsize in the interval $\gamma \in \big( 0, \frac{2}{\beta}\big)$. The update rule is $\wv_{t+1} = \wv_t - \gamma\beta \wv_t = (1 - \gamma\beta) \wv_t$. Therefore the sequence $\{\|\wv_{t}\|\}_t$ satisfies $\|\wv_{t+1}\| = \lvert 1 - \gamma\beta \rvert \|\wv_t\|$, so $\|\wv_{t}\| = \lvert 1 - \gamma\beta \rvert^t \|\wv_0\|$. We can see that for $\gamma = \frac{2}{\beta}$ the elements of this sequence never move from $\|\wv_0\|$ (so the algorithm does not diverge to infinity for this stepsize, but it does not converge either). For $\gamma = \frac{1}{\beta}$ the algorithm converges in one step, not two. And finally, for any $\gamma \in \big( 0, \frac{2}{\beta}\big)$ the algorithm converges to the global minimum since $\lvert 1 - \gamma\beta \rvert < 1$.
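The three regimes in the explanation above (convergence in one step at $\gamma = 1/\beta$, geometric convergence for $\gamma \in (0, 2/\beta)$, constant norm at $\gamma = 2/\beta$) can be verified numerically; the values of $\beta$, $\wv_0$, and the step counts are arbitrary illustrations.

```python
import numpy as np

def gd_norms(gamma, beta=2.0, steps=10):
    """Iterate w_{t+1} = w_t - gamma * grad L(w_t) = (1 - gamma*beta) * w_t
    for L(w) = beta/2 * ||w||^2 and return the norms ||w_t||."""
    w = np.array([3.0, -4.0])                  # w_0 != 0, with ||w_0|| = 5
    norms = [np.linalg.norm(w)]
    for _ in range(steps):
        w = (1.0 - gamma * beta) * w           # one gradient step
        norms.append(np.linalg.norm(w))
    return norms

beta = 2.0
one_step = gd_norms(gamma=1.0 / beta)          # gamma = 1/beta: minimum after one step
inside = gd_norms(gamma=0.9 * 2.0 / beta)      # gamma in (0, 2/beta): ||w_t|| -> 0
edge = gd_norms(gamma=2.0 / beta)              # gamma = 2/beta: ||w_t|| stays at ||w_0||
```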
1216
You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and the activation function $f(x)=3 x$. You have a powerful laptop but not a supercomputer. You bet your neighbor a beer at Satellite on who will get a substantially better score. However, in the end it is essentially a tie, so you decide to have two beers and both pay. What is the reason for the outcome of this bet?
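The tie in the question above follows because composing linear maps yields a linear map: with $f(x) = 3x$ the 10-layer net represents exactly the same function class as linear regression, so under the square loss it cannot beat least squares. A quick sketch (the dimensions and random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Ten layers, each followed by the "activation" f(x) = 3x: h -> 3 * (W @ h).
Ws = [rng.normal(size=(d, d)) for _ in range(10)]

def deep_linear(v):
    h = v
    for W in Ws:
        h = 3.0 * (W @ h)        # a linear activation keeps every layer linear
    return h

# The whole network collapses to one matrix M = 3^10 * W_10 ... W_1, so its
# hypothesis class is exactly the linear maps that least squares searches over.
M = np.eye(d)
for W in Ws:
    M = 3.0 * (W @ M)
```

Since `deep_linear(v)` equals `M @ v` for every input, the best the deep net can do under a square loss is the least-squares fit your neighbor computes directly.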
[ "\" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns (\"everything is a recommendation\") and they changed their recommendation engine accordingly. In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis. In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behaviour of travellers. Recently, machine learning technology was also applied to optimise smartphone's performance and thermal behaviour based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS. 
Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes. Machine Learning is becoming a useful tool to investigate and predict evacuation decision making in large scale and small scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes. Other applications have been", "thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in literature when the number of variables is huge. Another method consists of focusing on the sub-class of decomposable models, for which the MLE have a closed form. It is then possible to discover a consistent structure for hundreds of variables. Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-tree for effective learning. 
Statistical introduction Given data x {\\displaystyle x\\,\\!} and parameter θ {\\displaystyle \\theta }, a simple Bayesian analysis starts with a prior probability (prior) p ( θ ) {\\displaystyle p(\\theta )} and likelihood p ( x ∣ θ ) {\\displaystyle p(x\\mid \\theta )} to compute a posterior probability p ( θ ∣ x ) ∝ p ( x ∣ θ ) p ( θ ) {\\displaystyle p(\\theta \\mid x)\\propto p(x\\mid \\theta )p(\\theta )}. Often the prior on θ {\\displaystyle \\theta } depends in turn on other parameters φ {\\displaystyle \\varphi } that are not mentioned in the likelihood. So, the prior p ( θ ) {\\displaystyle p(\\theta )} must be replaced by a likelihood p ( θ ∣ φ ) {\\displaystyle p(\\theta \\mid \\varphi )}, and a prior p ( φ ) {\\displaystyle p(\\varphi )} on the newly introduced parameters φ {\\displaystyle \\
Maximizing accuracy In order to ensure maximum accuracy for a predictive learning model, the predicted values y ^ = F ( x ) {\\displaystyle {\\hat {y}}=F(x)} must not exceed a certain error threshold when compared to actual values y {\\displaystyle y} by the risk formula: R ( F ) = E x y L ( y, F ( x ) ) {\\displaystyle R(F)=E_{xy}L(y,F(x))}, where L {\\displaystyle L} is the loss function, y {\\displaystyle y} is the ground truth, and F ( x ) {\\displaystyle F(x)} is the predicted data. This error function is used to make incremental adjustments to the model's weights to eventually reach a well-trained prediction of: F ∗ ( x ) = argmin F ( x ) E x y {\\displaystyle F^{*}(x)={\\underset {F(x)}{\\operatorname {argmin} }}\\,E_{xy}} L ( y, F ( x ) ) {\\displaystyle L(y,F(x))} Once the error is negligible or considered small enough after training, the model is said to have converged. Ensemble learning In some cases, using a singular machine learning approach is not enough to create an accurate estimate for certain data. Ensemble", "Large margin nearest neighbor (LMNN) classification is a statistical machine learning algorithm for metric learning. It learns a pseudometric designed for k-nearest neighbor classification. The algorithm is based on semidefinite programming, a sub-class of convex optimization. The goal of supervised learning (more specifically classification) is to learn a decision rule that can categorize data instances into pre-defined classes. The k-nearest neighbor rule assumes a training data set of labeled instances (i.e. the classes are known). It classifies a new data instance with the class obtained from the majority vote of the k closest (labeled) training instances. Closeness is measured with a pre-defined metric. Large margin nearest neighbors is an algorithm that learns this global (pseudo-)metric in a supervised fashion to improve the classification accuracy of the k-nearest neighbor rule. 
Setup The main intuition behind LMNN is to learn a pseudometric under which all data instances in the training set are surrounded by at least k instances that share the same class label. If this is achieved, the leave-one-out error (a special case of cross validation) is minimized. Let the training data consist of a data set D = { ( x → 1, y 1 ),..., ( x → n, y n ) } ⊂ R d × C {\\displaystyle D=\\{({\\vec {x}}_{1},y_{1}),\\dots,({\\vec {x}}_{n},y_{n})\\}\\subset R^{d}\\times C}, where the set of possible class categories is C = { 1,..., c } {\\displaystyle C=\\{1,\\dots,c\\}}. The algorithm learns a pseudometric of the type d ( x → i, x → j ) = ( x → i − x → j ) ⊤ M ( x → i − x → j ) {\\displaystyle d({\\vec {x}}_{i},{\\vec {x}}_{j})=({\\vec {x}}_{i
Madry, Aleksander and Makelov, Aleksandar and Schmidt, Ludwig and Tsipras, Dimitris and Vladu, Adrian. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR 2018. 2. Raghunathan, A., Steinhardt, J., and Liang, P. S. Semidefinite relaxations for certifying robustness to adversarial examples. Neurips 2018. 3. Wong, E. and Kolter, Z. (2018). Provable defenses against adversarial examples via the convex outer adversarial polytope. ICML 2018. 4. Huang, X., Kwiatkowska, M., Wang, S., and Wu, M" ]
[ "Because we use exactly the same scheme.", "Because it is almost impossible to train a network with 10 layers without a supercomputer.", "Because I should have used more layers.", "Because I should have used only one layer." ]
Because we use exactly the same scheme.
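The tie has a one-line explanation: composing linear maps yields a linear map, so the 10-layer net with activation $f(x)=3x$ can only represent linear functions of the input, exactly the model class of the neighbor's least squares. A minimal numpy sketch (layer sizes and the random seed are illustrative, not taken from the question) collapsing such a net into a single matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# With the activation f(x) = 3x every layer is linear, so the whole
# 10-layer net collapses to one linear map. Sizes are illustrative.
D, M, L = 5, 8, 10
weights = [rng.standard_normal((M, D))] + \
          [rng.standard_normal((M, M)) for _ in range(L - 1)]

def deep_linear(x):
    """Forward pass of the 'deep' net: h -> 3 * (W @ h) per layer."""
    h = x
    for W in weights:
        h = 3 * (W @ h)  # entry-wise activation f(x) = 3x
    return h

# The same network collapsed into one matrix: 3^L * W_L @ ... @ W_1.
W_total = (3 ** L) * np.linalg.multi_dot(list(reversed(weights)))

x = rng.standard_normal(D)
assert np.allclose(deep_linear(x), W_total @ x)
```

Since both contestants fit a linear model under the same square loss, the extra layers buy nothing, hence the tie.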
1217
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. \vspace{3mm} Assume $\sigma_{L+1}$ is the element-wise \textbf{sigmoid} function and $C_{f,\frac{1}{2}}$ is able to obtain a high accuracy on a given binary classification task $T$. Let $g$ be the MLP obtained by multiplying the parameters \textbf{in the last layer} of $f$, i.e. $\wv$, by 2. Moreover, let $h$ be the MLP obtained by replacing $\sigma_{L+1}$ with the element-wise \textbf{ReLU}. Finally, let $q$ be the MLP obtained by doing both of these actions. Which of the following is true? ReLU(x) = max\{x, 0\} \\ Sigmoid(x) = \frac{1}{1 + e^{-x}}
[ ". The two historically common activation functions are both sigmoids, and are described by y ( v i ) = tanh ⁡ ( v i ) and y ( v i ) = ( 1 + e − v i ) − 1 {\\displaystyle y(v_{i})=\\tanh(v_{i})~~{\\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}}. The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here y i {\\displaystyle y_{i}} is the output of the i {\\displaystyle i} th node (neuron) and v i {\\displaystyle v_{i}} is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids. Layers The MLP consists of three or more layers (an input and an output layer with one or more hidden layers) of nonlinearly-activating nodes. Since MLPs are fully connected, each node in one layer connects with a certain weight w i j {\\displaystyle w_{ij}} to every node in the following layer. Learning Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron. 
We can represent the degree of error in an output node j {\\displaystyle j} in the n {\\displaystyle n} th data point (training example) by e j ( n ) = d j ( n ) − y j ( n ) {\\displaystyle e_", "non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013) who extend previous results and give new and more favorable L1 bounds. The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons. The solution spaces of decision boundaries for all binary functions and learning behaviors are studied in. Definition In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input x {\\displaystyle \\mathbf {x} } (a real-valued vector) to an output value f ( x ) {\\displaystyle f(\\mathbf {x} )} (a single binary value): f ( x ) = h ( w ⋅ x + b ) {\\displaystyle f(\\mathbf {x} )=h(\\mathbf {w} \\cdot \\mathbf {x} +b)} where h {\\displaystyle h} is the Heaviside step-function (where an input of > 0 {\\textstyle >0} outputs 1; otherwise 0 is the output ), w {\\displaystyle \\mathbf {w} } is a vector of real-valued weights, w ⋅ x {\\displaystyle \\mathbf {w} \\cdot \\mathbf {x} } is the dot product ∑ i = 1 m w i x i {\\textstyle \\sum _{i=1}^{m}w_{i}x_{i}}, where m is the number of inputs to the perceptron, and b is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value. 
Equivalently, since w ⋅ x + b = ( w, b ) ⋅ ( x, 1 ) {\\displaystyle \\mathbf {w} \\cdot \\mathbf {x} +b=(\\mathbf {w},b)\\cdot (\\mathbf {x},1)},", "In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable. Modern neural networks are trained using backpropagation and are colloquially referred to as \"vanilla\" networks. MLPs grew out of an effort to improve single-layer perceptrons, which could only be applied to linearly separable data. A perceptron traditionally used a Heaviside step function as its nonlinear activation function. However, the backpropagation algorithm requires that modern MLPs use continuous activation functions such as sigmoid or ReLU. Multilayer perceptrons form the basis of deep learning, and are applicable across a vast set of diverse domains. Timeline In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks. In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. In 1962, Rosenblatt published many variants and experiments on perceptrons in his book Principles of Neurodynamics, including up to 2 trainable layers by \"back-propagating errors\". However, it was not the backpropagation algorithm, and he did not have a general method for training multiple layers. In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published Group Method of Data Handling. It was one of the first deep learning methods, used to train an eight-layer neural net in 1971. 
In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, was able to classify non-linearily separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers. Backpropagation was independently developed multiple times in early 1970s. The earliest published instance was Seppo Linnainmaa's master thesis (1970). Paul Werbos developed it independent", "\\frac {2^{-T_{s}}\\zeta |b_{t}^{(0)}-a_{t}^{(0)}|}{\\mu ^{2}}}}, such that the algorithm is guaranteed to converge linearly. Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint. Neural networks Consider a multilayer perceptron (MLP) with one hidden layer and m {\\displaystyle m} hidden units with mapping from input x ∈ R d {\\displaystyle x\\in R^{d}} to a scalar output described as F x ( W ~, Θ ) = ∑ i = 1 m θ i φ ( x T w ~ ( i ) ) {\\displaystyle F_{x}({\\tilde {W}},\\Theta )=\\sum _{i=1}^{m}\\theta _{i}\\phi (x^{T}{\\tilde {w}}^{(i)})}, where w ~ ( i ) {\\displaystyle {\\tilde {w}}^{(i)}} and θ i {\\displaystyle \\theta _{i}} are the input and output weights of unit i {\\displaystyle i} correspondingly, and φ {\\displaystyle \\phi } is the activation function and is assumed to be a tanh function. 
The input and output weights could then be optimized with m i n W ~, Θ ( f N N ( W ~, Θ ) = E y, x [ l ( − y F x ( W ~, Θ ) ) ] ) {\\displaystyle min_{{\\tilde {W}},\\Theta }(f_{NN}({\\tilde {W}},\\Theta )=E_{y,x}[l(-yF_{x}({\\tilde {W}},\\Theta ))])}, where l {\\displaystyle l} is a loss function, W ~ = { w ~ ( 1 ),..., w ~ ( m ) } {\\displaystyle {\\tilde {W}}=\\", "The following example illustrates one reason why an ML algorithm might learn a classification rule that has low standard risk but high adversarial risk. This is a toy example, but the basic idea is sound: the ML algorithm might rely on a large set of non-robust features that can be easily tricked by perturbations. Figure: To stop or not to stop. Consider a binary classification task with labels y ∈ {−1, +1} and input vector x. Assume that after applying a suitable transform we get a new input vector x with the following simple structure: x = (x_1, ..., x_D), where x_i = a_i y + Z_i for i = 1, ..., D, and the Z_i are zero-mean, unit-variance Gaussian noise terms, independent across components. Further, a_1 = 1, and for i = 2, ..., D we have a_i = √((log D)/D). The exact values of the a_i are not so important and are chosen simply for convenience. What is important is that the first component contains a strong signal component, whereas the other features have only a very weak one. Strong versus weak is with respect to the strength of the added noise (zero-mean Gaussian of unit variance). We will say that the first feature is robust, whereas the other features are not robust. To summarize: each of the D components is a scaled and noisy version of the label, and all components represent conditionally independent observations. The first component contains a strong signal component; the remaining D − 1 features contain extremely weak signal components, but there are many of them (we assume that D is large). Finally, assume that we are given the prior on the label p(y) and that it is uniform. Assume at first that we are interested in the best classifier without adversarial perturbation, i.e. the classifier with the smallest possible risk (error probability). This is the Bayes classifier: we should compute the posterior probability and then choose the label that maximizes it, argmax_y p(y | x) = argmax_y p(x | y) p(y) / p(x) = argmax_y ∏_{i=1}^D p(x_i | y). In the last step we used the fact that under our model the observations are conditionally independent, so we get a product of probabilities, and that the prior is uniform. This can be further simplified to argmax_y p(y | x) = argmax_y ∏_{i=1}^D p(x_i | y) = argmax_y log ∏_{i=1}^D p(x_i | y) = argmax_y ∑_{i=1}^D log p(x_i | y) = argmax" ]
[ "$C_{g, \frac{1}{2}}$ may have an accuracy significantly lower than $C_{f, \frac{1}{2}}$ on $T$", "$C_{h, 0}$ may have an accuracy significantly lower than $C_{f, \frac{1}{2}}$ on $T$", "$C_{q, 0}$ may have an accuracy significantly lower than $C_{f, \frac{1}{2}}$ on $T$", "$C_{g, \frac{1}{2}}$, $C_{h, 0}$, and $C_{q, 0}$ have the same accuracy as $C_{f, \frac{1}{2}}$ on $T$" ]
$C_{g, \frac{1}{2}}$, $C_{h, 0}$, and $C_{q, 0}$ have the same accuracy as $C_{f, \frac{1}{2}}$ on $T$. Since thresholding the sigmoid at $\frac{1}{2}$ is equivalent to thresholding the input of the last activation function at $0$, $C_{h, 0}$ makes the same decisions as $C_{f, \frac{1}{2}}$. Moreover, multiplying the last-layer weights by 2 does not change the sign of that input. Therefore $C_{g, \frac{1}{2}}$ and $C_{q, 0}$ make the same decisions as well.
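The equivalences claimed in this answer can be checked numerically. In the sketch below (written for illustration), `z` stands in for the input of the last activation function, i.e. the scalar the final layer produces before the sigmoid or ReLU is applied; all four classifiers answer YES on exactly the same inputs, namely those with z ≤ 0:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

# z stands in for the pre-activation: the scalar fed to the last
# activation function of the MLP.
z = np.linspace(-5.0, 5.0, 101)

yes_f = sigmoid(z) <= 0.5      # C_{f, 1/2}: original network
yes_g = sigmoid(2 * z) <= 0.5  # C_{g, 1/2}: last-layer weights doubled
yes_h = relu(z) <= 0.0         # C_{h, 0}: sigmoid replaced by ReLU
yes_q = relu(2 * z) <= 0.0     # C_{q, 0}: both modifications

# All four classifiers agree everywhere.
assert np.array_equal(yes_f, yes_g)
assert np.array_equal(yes_f, yes_h)
assert np.array_equal(yes_f, yes_q)
```

Each comparison reduces to the same condition z ≤ 0, which is why the accuracies on $T$ coincide.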
1224
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \{-1, 1\}$. We want to classify the dataset using the exponential loss $L(\ww) = \frac{1}{N} \sum_{i=1}^N \exp (-y_i \xx_i^\top \ww )$ for $\ww \in \R^d$. Which of the following statements is \textbf{true}:
[ "is assumed that the training set consists of a sample of independent and identically distributed pairs, ( x i, y i ) {\\displaystyle (x_{i},\\;y_{i})}. In order to measure how well a function fits the training data, a loss function L : Y × Y → R ≥ 0 {\\displaystyle L:Y\\times Y\\to \\mathbb {R} ^{\\geq 0}} is defined. For training example ( x i, y i ) {\\displaystyle (x_{i},\\;y_{i})}, the loss of predicting the value y ^ {\\displaystyle {\\hat {y}}} is L ( y i, y ^ ) {\\displaystyle L(y_{i},{\\hat {y}})}. The risk R ( g ) {\\displaystyle R(g)} of function g {\\displaystyle g} is defined as the expected loss of g {\\displaystyle g}. This can be estimated from the training data as R e m p ( g ) = 1 N ∑ i L ( y i, g ( x i ) ) {\\displaystyle R_{emp}(g)={\\frac {1}{N}}\\sum _{i}L(y_{i},g(x_{i}))}. Empirical risk minimization In empirical risk minimization, the supervised learning algorithm seeks the function g {\\displaystyle g} that minimizes R ( g ) {\\displaystyle R(g)}. Hence, a supervised learning algorithm can be constructed by applying an optimization algorithm to find g {\\displaystyle g}. When g {\\displaystyle g} is a conditional probability distribution P ( y | x ) {\\displaystyle P(y|x)} and the loss function is the negative log likelihood: L ( y, y ^ ) = − log ⁡ P ( y | x ) {\\displaystyle L(y,{\\hat {y}})=-\\log P(y|x)}, then empirical risk minimization is equivalent to maximum likelihood estimation. When G {\\displaystyle", "where X {\\displaystyle X} and Y {\\displaystyle Y} are in the same space of the training examples. The functions f {\\displaystyle f} are selected from a hypothesis space of functions called H {\\displaystyle H}. The training set from which an algorithm learns is defined as S = { z 1 = ( x 1, y 1 ),.., z m = ( x m, y m ) } {\\displaystyle S=\\{z_{1}=(x_{1},\\ y_{1})\\,..,\\ z_{m}=(x_{m},\\ y_{m})\\}} and is of size m {\\displaystyle m} in Z = X × Y {\\displaystyle Z=X\\times Y} drawn i.i.d. from an unknown distribution D. 
Thus, the learning map L {\\displaystyle L} is defined as a mapping from Z m {\\displaystyle Z_{m}} into H {\\displaystyle H}, mapping a training set S {\\displaystyle S} onto a function f S {\\displaystyle f_{S}} from X {\\displaystyle X} to Y {\\displaystyle Y}. Here, we consider only deterministic algorithms where L {\\displaystyle L} is symmetric with respect to S {\\displaystyle S}, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable. The loss V {\\displaystyle V} of a hypothesis f {\\displaystyle f} with respect to an example z = ( x, y ) {\\displaystyle z=(x,y)} is then defined as V ( f, z ) = V ( f ( x ), y ) {\\displaystyle V(f,z)=V(f(x),y)}. The empirical error of f {\\displaystyle f} is I S [ f ] = 1 n ∑ V ( f, z i ) {\\displaystyle I_{S}[f]={\\frac {1}{n}}\\sum V", "_{i})]} Here F t − 1 ( x ) {\\displaystyle F_{t-1}(x)} is the boosted classifier that has been built up to the previous stage of training and f t ( x ) = α t h ( x ) {\\displaystyle f_{t}(x)=\\alpha _{t}h(x)} is the weak learner that is being considered for addition to the final classifier. Weighting At each iteration of the training process, a weight w i, t {\\displaystyle w_{i,t}} is assigned to each sample in the training set equal to the current error E ( F t − 1 ( x i ) ) {\\displaystyle E(F_{t-1}(x_{i}))} on that sample. These weights can be used in the training of the weak learner. For instance, decision trees can be grown which favor the splitting of sets of samples with large weights. 
Derivation This derivation follows Rojas (2009): Suppose we have a data set { ( x 1, y 1 ),..., ( x N, y N ) } {\\displaystyle \\{(x_{1},y_{1}),\\ldots,(x_{N},y_{N})\\}} where each item x i {\\displaystyle x_{i}} has an associated class y i ∈ { − 1, 1 } {\\displaystyle y_{i}\\in \\{-1,1\\}}, and a set of weak classifiers { k 1,..., k L } {\\displaystyle \\{k_{1},\\ldots,k_{L}\\}} each of which outputs a classification k j ( x i ) ∈ { − 1, 1 } {\\displaystyle k_{j}(x_{i})\\in \\{-1,1\\}} for each item. After the ( m − 1 ) {\\displaystyle (m-1)} -th iteration our boosted classifier is a linear combination of the weak classifiers of the form: C ( m − 1 ) ( x i ) = α 1 k 1 ( x", "{x}}_{i}\\in {\\mathcal {X}}} Training labels Y = { y 1,..., y l } {\\displaystyle Y=\\{y_{1},\\dots,y_{\\ell }\\}}, y i ∈ { − 1, 1 } {\\displaystyle y_{i}\\in \\{-1,1\\}} Convergence threshold θ ≥ 0 {\\displaystyle \\theta \\geq 0} Output: Classification function f : X → { − 1, 1 } {\\displaystyle f:{\\mathcal {X}}\\to \\{-1,1\\}} Initialization Weights, uniform λ n ← 1 l, n = 1,..., l {\\displaystyle \\lambda _{n}\\leftarrow {\\frac {1}{\\ell }},\\quad n=1,\\dots,\\ell } Edge γ ← 0 {\\displaystyle \\gamma \\leftarrow 0} Hypothesis count J ← 1 {\\displaystyle J\\leftarrow 1} Iterate h ^ ← argmax ω ∈ Ω ∑ n = 1 l y n h ( x n ; ω ) λ n {\\displaystyle {\\hat {h}}\\leftarrow {\\underset {\\omega \\in \\Omega }{\\textrm {argmax}}}\\sum _{n=1}^{\\ell }y_{n}h({\\boldsymbol {x}}_{n};\\omega )\\lambda _{n}} if ∑ n = 1 l y n h ^ ( x n ) λ n + γ ≤ θ {\\displaystyle \\sum _{n=1}^{\\ell }y_{n}{\\hat {h}}({\\boldsymbol {x}}_{n})\\lambda _{n}+\\gamma \\leq \\theta } then break h J ← h ^ {\\displaystyle h_{J}\\leftarrow {\\hat {h}}} J ← J + 1 {\\displaystyle J\\leftarrow J+1} ( λ, γ ) ← {\\displayst", "n, y n ) } {\\displaystyle \\textstyle S=\\{(x_{1},y_{1}),\\dots,(x_{n},y_{n})\\}} and the goal is to find a target function f : X → Y {\\displaystyle \\textstyle f:X\\rightarrow Y} that minimizes some loss 
function, e.g. the square loss function. More formally f = arg ⁡ min g ∫ V ( y, g ( x ) ) d ρ ( x, y ) {\\displaystyle f=\\arg \\min _{g}\\int V(y,g(x))d\\rho (x,y)}, where V ( ⋅, ⋅ ) {\\displaystyle V(\\cdot,\\cdot )} is the loss function, e.g. V ( y, z ) = ( y − z ) 2 {\\displaystyle V(y,z)=(y-z)^{2}} and ρ ( x, y ) {\\displaystyle \\rho (x,y)} the probability distribution according to which the elements of the training set are sampled. If the conditional probability distribution ρ x ( y ) {\\displaystyle \\rho _{x}(y)} is known then the target function has the closed form f ( x ) = ∫ y y d ρ x ( y ) {\\displaystyle f(x)=\\int _{y}yd\\rho _{x}(y)}. So the set S {\\displaystyle S} is a set of samples from the probability distribution ρ ( x, y ) {\\displaystyle \\rho (x,y)}. Now the goal of distributional learning theory if to find ρ {\\displaystyle \\rho } given S {\\displaystyle S} which can be used to find the target function f {\\displaystyle f}. Definition of learnability A class of distributions C {\\displaystyle \\textstyle C} is called efficiently learnable if for every ε > 0 {\\displaystyle \\textstyle \\epsilon >0} and 0 < δ ≤ 1 {\\displaystyle \\textstyle 0" ]
[ "This corresponds to doing logistic regression as seen in class.", "The loss function $L$ is non-convex in $\\ww$.", "If I find a vector $\\ww^\\star$ such that $L(\\ww^\\star) < 1 / N$, then $\\ww^*$ linearly separates my dataset.", "There exists a vector $\\ww^\\star$ such that $L(\\ww^\\star) = 0$.", "None of the statements are true." ]
If I find a vector $\ww^\star$ such that $L(\ww^\star) < 1 / N$, then $\ww^\star$ linearly separates my dataset. Indeed, $L(\ww^\star) < 1 / N$ implies $\exp(- y_i \xx_i^\top \ww^\star) < 1$ $\forall i$: every term of the sum is nonnegative, so each single term is at most $N \cdot L(\ww^\star) < 1$. This means that $y_i \xx_i^\top \ww^\star > 0$ $\forall i$.
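As a numerical illustration of this implication (the toy dataset and the scaled weight vector below are invented for the example, not taken from the question):

```python
import numpy as np

# Hand-made separable toy dataset, labels in {-1, +1}.
X = np.array([[ 1.0,  2.0],
              [ 2.0, -1.0],
              [-1.0,  0.5],
              [ 0.5,  1.0]])
y = np.array([1.0, 1.0, -1.0, 1.0])
N = len(y)

def exp_loss(w):
    """L(w) = (1/N) * sum_i exp(-y_i x_i^T w)."""
    return np.mean(np.exp(-y * (X @ w)))

# A separating direction, scaled up so the loss drops below 1/N.
w = np.array([10.0, 10.0])
margins = y * (X @ w)

# L(w) < 1/N forces every nonnegative term exp(-y_i x_i^T w) below 1,
# hence every margin y_i x_i^T w is strictly positive: w separates the data.
assert exp_loss(w) < 1 / N
assert np.all(margins > 0)
```

Scaling any separating direction upward drives the exponential loss toward 0, which is also why no finite $\ww^\star$ attains $L(\ww^\star) = 0$ exactly.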
1404
Which of the following is correct regarding Louvain algorithm?
[ "In classical mechanics, a Liouville dynamical system is an exactly solvable dynamical system in which the kinetic energy T and potential energy V can be expressed in terms of the s generalized coordinates q as follows: T = 1 2 { u 1 ( q 1 ) + u 2 ( q 2 ) + ⋯ + u s ( q s ) } { v 1 ( q 1 ) q ̇ 1 2 + v 2 ( q 2 ) q ̇ 2 2 + ⋯ + v s ( q s ) q ̇ s 2 } {\\displaystyle T={\\frac {1}{2}}\\left\\{u_{1}(q_{1})+u_{2}(q_{2})+\\cdots +u_{s}(q_{s})\\right\\}\\left\\{v_{1}(q_{1}){\\dot {q}}_{1}^{2}+v_{2}(q_{2}){\\dot {q}}_{2}^{2}+\\cdots +v_{s}(q_{s}){\\dot {q}}_{s}^{2}\\right\\}} V = w 1 ( q 1 ) + w 2 ( q 2 ) + ⋯ + w s ( q s ) u 1 ( q 1 ) + u 2 ( q 2 ) + ⋯ + u s ( q s ) {\\displaystyle V={\\frac {w_{1}(q_{1})+w_{2}(q_{2})+\\cdots +w_{s}(q_{s})}{u_{1}(q_{1})+u_{2}(q_{2})+\\cdots +u_{s}(q_{s})}}} The solution of this system consists of a set of separably integrable equations 2 Y d t = d φ 1 E χ 1 − ω 1 + γ 1 = d φ 2 E χ 2 − ω 2 + γ 2 = ⋯ = d φ s E χ s − ω s + γ s {\\displaystyle {\\frac {\\sqrt {2}}", "Léon-Yves Bottou (French pronunciation: [leɔ̃ bɔtu]; born 1965) is a researcher best known for his work in machine learning and data compression. His work presents stochastic gradient descent as a fundamental learning algorithm. He is also one of the main creators of the DjVu image compression technology (together with Yann LeCun and Patrick Haffner), and the maintainer of DjVuLibre, the open source implementation of DjVu. He is the original developer of the Lush programming language. Life Léon Bottou was born in France in 1965. He obtained the Diplôme d'Ingénieur from École Polytechnique in 1987, a Magistère de Mathématiques Fondamentales et Appliquées et d’Informatique from École Normale Supérieure in 1988, a Diplôme d'Études Approndies in Computer Science in 1988, in 1988, and a PhD from Université Paris-Sud in 1991. His master's thesis concerned using Time Delay Neural Networks for speech recognition. 
He then joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, where he collaborated with Vladimir Vapnik on local learning algorithms. in 1992, he returned to France and founded Neuristique S.A., a company that produced machine learning tools and one of the first data mining software packages. In 1995, he returned to Bell Laboratories, where he developed a number of new machine learning methods, such as Graph Transformer Networks (similar to conditional random field), and applied them to handwriting recognition and OCR. The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s. In 1996, he joined AT&T Labs and worked primarily on the DjVu image compression technology, that is used by some websites, notably the Internet Archive, to distribute scanned documents. Between 2002 and 2010, he was a research scientist at NEC Laboratories in Princeton, New Jersey, where he focused on the theory and practice of machine learning with large-scale datasets, on-line learning, and stochastic optimiza", "that space, (ii) the expectation value of an observable is obtained in the manner as the expectation value in quantum mechanics, (iii) the probabilities of measuring certain values of some observables are calculated by the Born rule, and (iv) the state space of a composite system is the tensor product of the subsystem's spaces. These axioms allow us to recover the formalism of both classical and quantum mechanics. Specifically, under the assumption that the classical position and momentum operators commute, the Liouville equation for the KvN wavefunction is recovered from averaged Newton's laws of motion. However, if the coordinate and momentum obey the canonical commutation relation, the Schrödinger equation of quantum mechanics is obtained. 
Measurements In the Hilbert space and operator formulation of classical mechanics, the Koopman von Neumann–wavefunction takes the form of a superposition of eigenstates, and measurement collapses the KvN wavefunction to the eigenstate which is associated the measurement result, in analogy to the wave function collapse of quantum mechanics. However, it can be shown that for Koopman–von Neumann classical mechanics non-selective measurements leave the KvN wavefunction unchanged. KvN vs Liouville mechanics The KvN dynamical equation (KvN dynamical eq in xp) and Liouville equation (Liouville eq) are first-order linear partial differential equations. One recovers Newton's laws of motion by applying the method of characteristics to either of these equations. Hence, the key difference between KvN and Liouville mechanics lies in weighting individual trajectories: Arbitrary weights, underlying the classical wave function, can be utilized in the KvN mechanics, while only positive weights, representing the probability density, are permitted in the Liouville mechanics (see this scheme). Quantum analogy Being explicitly based on the Hilbert space language, the KvN classical mechanics adopts many techniques from quantum mechanics, for example, perturbation and diagram techniques as well as functional integral methods. The KvN", "In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. A greedy optimisation procedure and thus fast version were subsequently developed. The RVM has an identical functional form to the support vector machine, but provides probabilistic classification. 
It is actually equivalent to a Gaussian process model with covariance function: k ( x, x ′ ) = ∑ j = 1 N 1 α j φ ( x, x j ) φ ( x ′, x j ) {\\displaystyle k(\\mathbf {x},\\mathbf {x'} )=\\sum _{j=1}^{N}{\\frac {1}{\\alpha _{j}}}\\varphi (\\mathbf {x},\\mathbf {x} _{j})\\varphi (\\mathbf {x} ',\\mathbf {x} _{j})} where φ {\\displaystyle \\varphi } is the kernel function (usually Gaussian), α j {\\displaystyle \\alpha _{j}} are the variances of the prior on the weight vector w ∼ N ( 0, α − 1 I ) {\\displaystyle w\\sim N(0,\\alpha ^{-1}I)}, and x 1,..., x N {\\displaystyle \\mathbf {x} _{1},\\ldots,\\mathbf {x} _{N}} are the input vectors of the training set. Compared to that of support vector machines (SVM), the Bayesian formulation of the RVM avoids the set of free parameters of the SVM (that usually require cross-validation-based post-optimizations). However RVMs use an expectation maximization (EM)-like learning method and are therefore at risk of local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem). The relevance vector machine was patented in the United States by Microsoft (patent exp", "analytically: one needs to find a minimum of a one-dimensional quadratic function. k {\\displaystyle k} is the negative of the sum over the rest of terms in the equality constraint, which is fixed in each iteration. The algorithm proceeds as follows: Find a Lagrange multiplier α 1 {\\displaystyle \\alpha _{1}} that violates the Karush–Kuhn–Tucker (KKT) conditions for the optimization problem. Pick a second multiplier α 2 {\\displaystyle \\alpha _{2}} and optimize the pair ( α 1, α 2 ) {\\displaystyle (\\alpha _{1},\\alpha _{2})}. Repeat steps 1 and 2 until convergence. When all the Lagrange multipliers satisfy the KKT conditions (within a user-defined tolerance), the problem has been solved. 
Although this algorithm is guaranteed to converge, heuristics are used to choose the pair of multipliers so as to accelerate the rate of convergence. This is critical for large data sets since there are n ( n − 1 ) / 2 {\\displaystyle n(n-1)/2} possible choices for α i {\\displaystyle \\alpha _{i}} and α j {\\displaystyle \\alpha _{j}}. Related Work The first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by Bernhard Boser, Isabelle Guyon, Vladimir Vapnik. It is known as the \"chunking algorithm\". The algorithm starts with a random subset of the data, solves this problem, and iteratively adds examples which violate the optimality conditions. One disadvantage of this algorithm is that it is necessary to solve QP-problems scaling with the number of SVs. On real world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm. In 1997, E. Osuna, R. Freund, and F. Girosi proved a theorem which suggests a whole new set of QP algorithms for SVMs. By the virtue of this theorem a large QP problem can be broken down into a series of smaller QP sub-problems. A se" ]
[ "It creates a hierarchy of communities with a common root", "Clique is the only topology of nodes where the algorithm detects the same communities, independently of the starting point", "If n cliques of the same order are connected cyclically with n-1 edges, then the algorithm will always detect the same communities, independently of the starting point", "Modularity is always maximal for the communities found at the top level of the community hierarchy" ]
['If n cliques of the same order are connected cyclically with n-1 edges, then the algorithm will always detect the same communities, independently of the starting point']
1410
Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is:
[ "versions). A pair d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}} is concordant if both r a {\\displaystyle r_{a}} and r b {\\displaystyle r_{b}} agree in how they order d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}}. It is discordant if they disagree. Information retrieval quality Information retrieval quality is usually evaluated by the following three measurements: Precision Recall Average precision For a specific query to a database, let P r e l e v a n t {\\displaystyle P_{relevant}} be the set of relevant information elements in the database and P r e t r i e v e d {\\displaystyle P_{retrieved}} be the set of the retrieved information elements. Then the above three measurements can be represented as follows: precision = | P relevant ∩ P retrieved | | P retrieved | ; recall = | P relevant ∩ P retrieved | | P relevant | ; average precision = ∫ 0 1 Prec ( recall ) d recall, {\\displaystyle {\\begin{aligned}&{\\text{precision}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{retrieved}}\\right|}};\\\\[6pt]&{\\text{recall}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{relevant}}\\right|}};\\\\[6pt]&{\\text{average precision}}=\\int _{0}^{1}{\\text{Prec}}({\\text{recall}})\\,d{\\text{recall}},$end{aligned}}} where Prec ( Recall ) {\\displaystyle {\\text{Prec}}({\\text{Recall}})} is the Precision {\\displaystyle {\\text{Precision}}}", ". Examples of ranking quality measures: Mean average precision (MAP); DCG and NDCG; Precision@n, NDCG@n, where \"@n\" denotes that the metrics are evaluated only on top n documents; Mean reciprocal rank; Kendall's tau; Spearman's rho. DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. Other metrics such as MAP, MRR and precision, are defined only for binary judgments. 
Recently, there have been proposed several new evaluation metrics which claim to model user's satisfaction with search results better than the DCG metric: Expected reciprocal rank (ERR); Yandex's pfound. Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document, than after a less relevant document. Approaches Learning to Rank approaches are often categorized using one of three approaches: pointwise (where individual documents are ranked), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents are ordered). Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval. He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approach. In practice, listwise approaches often outperform pairwise approaches and pointwise approaches. This statement was further supported by a large scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets. In this section, without further notice, x {\\displaystyle x} denotes an object to be evaluated, for example, a document or an image, f ( x ) {\\displaystyle f(x)} denotes a single-value hypothesis, h ( ⋅ ) {\\displaystyle h(\\cdot )} denotes a bi-variate or multi-variate function and L ( ⋅ ) {\\displaystyle L(\\cdot )} denotes the loss function", "of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. These metrics take into account the probabilistic nature of the classifier's output and provide a more comprehensive assessment of its effectiveness in assigning accurate probabilities to different classes. 
These evaluation metrics aim to capture the degree of calibration, discrimination, and overall accuracy of the probabilistic classifier's predictions. In information systems Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used, is defined as the fraction of documents correctly classified compared to the documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. 
More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important. See also Popula", "documents total number of relevant documents = true positives true positives + false negatives ▶Estimates (one minus) the probability to miss relevant documents ▶Ignores false positives. Take only false negatives into account ▶Ignores irrelevant documents. Takes only relevant documents into account ▶Can be biased by retrieving all documents: gives a perfect score to the system that retrieves all documents NLP evaluation – 40 / 58 Evaluation protocol Gold standards Quality of the reference Evaluation metrics Keeping the evaluation clean: training, validating, testing Evaluation measures Validity of the results Evaluation Campaigns Conclusion c <unk>EPFL C. Grivaz, J.-C. Chappelier, M. Rajman Precision & Recall: example Spam filtering example: System Reference email0 OK OK email1 OK Spam email2 OK OK email3 Spam OK email4 OK OK email5 OK OK email6 OK OK email7 Spam Spam email8 OK OK email9 OK OK emailA OK Spam emailB Spam Spam emailC OK OK emailD OK OK emailE OK OK emailF Spam Spam Confusion matrix: P = R = Note: ▶accuracy = ▶always-ok system: accuracy=, R =, P NLP evaluation – 41 / 58 Evaluation protocol Gold standards Quality of the reference Evaluation metrics Keeping the evaluation clean: training, validating, testing Evaluation measures Validity of the results Evaluation Campaigns Conclusion c <unk>EPFL C. Grivaz, J.-C. Chappelier, M. 
Rajman Precision vs Recall plots For tasks where recall can be controlled (by controlling the amount of outputs), it could be informative to plot precision versus recall Precision Recall More in the “Information Retrieval” lecture NLP evaluation – 42 / 58 Evaluation protocol Gold standards Quality of the reference Evaluation metrics Keeping the evaluation clean: training, validating, testing Evaluation measures Validity of the results Evaluation Campaigns Conclusion c <unk>EPFL C. Grivaz, J.-C. Chappelier, M. Rajman F-score ▶Harmonic mean of precision and recall ▶The harmonic mean penalizes large divergence between numbers, contrary", "mapping and update the table. Thus, this is typically only used for critical services whose failure may have big repercussions. For instance, to secure the mapping IP‐ MAC for the gateway. For the others, one can prevent spoofing by: ‐ Instead of taking the first IP‐MAC association returned by ARP as truthful, check whether there is an inconsistency: there is more than one IP associated to this MAC in your cache table, there is an IP for which you have seen more than one MAC, you observe packets with more than one MAC associated to the IP. 13 ‐ Instead of taking the first IP‐MAC association returned by ARP as truthful, ask other members of the network if they have the same observation, i.e., cross‐check the discovery. ‐ Send an email to the user, or the system administrator, if one observes a change in an IP‐MAC association. Only validate the change if a person confirms it is correct. Note that the two last methods effectively implement the concept of separation of privilege by requiring that the adversary compromises more than one entity (more than one machine in the network in the first case, and one human in the second case). 13 In order to resolve a domain, a client sends a DNS query to a recursive resolver, a server typically provided by the ISP with resolving and caching capabilities. 
If the domain resolution by a client is not cached by the recursive name server, it contacts a number of authoritative name servers which hold a distributed database of domain names to IP mappings. The recursive resolver traverses the hierarchy of authoritative name servers until it obtains an answer for the query, and sends it back to the client. The client can use the resolved IP address to connect to the destination host. 14 In order to resolve a domain, a client sends a DNS query to a recursive resolver, a server typically provided by the ISP with resolving and caching capabilities. If the domain resolution by a client is not cached by the recursive name server, it contacts a number of authoritative name servers which hold a distributed database of domain names to IP mappings. The recursive resolver traverses the hierarchy of authoritative name servers until it obtains an answer for the" ]
[ "1/2", "5/12", "3/4", "7/24" ]
['5/12']
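The stated answer (5/12) can be checked with a short sketch. `average_precision` is a hypothetical helper name; it assumes the ranked list N N R R is the full retrieval and that exactly those two relevant documents exist:

```python
def average_precision(rels, n_relevant=None):
    """Mean of the precision values at each rank where a relevant doc appears."""
    n_relevant = n_relevant if n_relevant is not None else sum(rels)
    hits, precisions = 0, []
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / n_relevant if n_relevant else 0.0

# N N R R: relevant docs appear at rank 3 (precision 1/3) and rank 4 (precision 2/4)
ap = average_precision([False, False, True, True])
print(ap)  # (1/3 + 1/2) / 2 = 5/12 ≈ 0.4167
```

With a single query, the mean average precision equals this per-query average precision, hence MAP = 5/12.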
1415
Which of the following is true?
[ "(figure residue: the context passages for this question were extracted from scatter plots; only point-cloud characters survived, with no recoverable text)" ]
[ "High precision implies low recall", "High precision hurts recall", "High recall hurts precision", "High recall implies low precision" ]
['High precision hurts recall', 'High recall hurts precision']
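The trade-off behind both correct options can be illustrated by sweeping the retrieval cutoff k over a toy ranked list (toy data, not from the question): a small cutoff favours precision, a large cutoff favours recall.

```python
# 'R' = relevant, 'N' = non-relevant, in ranked order; 4 relevant documents total.
ranking = ['R', 'R', 'N', 'R', 'N', 'N', 'N', 'R']
total_relevant = ranking.count('R')

for k in (2, 4, 8):
    retrieved = ranking[:k]
    tp = retrieved.count('R')                 # true positives among top k
    precision = tp / k
    recall = tp / total_relevant
    print(f"k={k}: precision={precision:.2f}, recall={recall:.2f}")
# k=2: precision=1.00, recall=0.50
# k=4: precision=0.75, recall=0.75
# k=8: precision=0.50, recall=1.00
```

Note the hedge in the answer: pushing one metric up *tends to* hurt the other, but neither strictly implies a low value of the other (a perfect ranking scores high on both), which is why the "implies" options are wrong.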
1416
The inverse document frequency of a term can increase
[ ",d); augmented frequency, to prevent a bias towards longer documents, e.g. raw frequency divided by the raw frequency of the most frequently occurring term in the document: t f ( t, d ) = 0.5 + 0.5 ⋅ f t, d max { f t ′, d : t ′ ∈ d } {\\displaystyle \\mathrm {tf} (t,d)=0.5+0.5\\cdot {\\frac {f_{t,d}}{\\max\\{f_{t',d}:t'\\in d\\}}}} Inverse document frequency The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient): i d f ( t, D ) = log ⁡ N | { d : d ∈ D and t ∈ d } | {\\displaystyle \\mathrm {idf} (t,D)=\\log {\\frac {N}{|\\{d:d\\in D{\\text{ and }}t\\in d\\}|}}} with D {\\displaystyle D} is the set of all documents in the corpus N {\\displaystyle N} total number of documents in the corpus N = | D | {\\displaystyle N={|D|}} n t = | { d ∈ D : t ∈ d } | {\\displaystyle n_{t}=|\\{d\\in D:t\\in d\\}|} : number of documents where the term t {\\displaystyle t} appears (i.e., t f ( t, d ) ≠ 0 {\\displaystyle \\mathrm {tf} (t,d)\\neq 0} ). If the term is not in the corpus, this will lead to a division-by-zero. It is therefore common to adjust the numerator 1 + N {\\displaystyle 1+N} and denomina", "m of \"inverse\" relative document frequency. This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriate event spaces for the required probability distributions: not only documents need to be taken into account, but also queries and terms. 
Link with information theory Both term frequency and inverse document frequency can be formulated in terms of information theory; it helps to understand why their product has a meaning in terms of joint informational content of a document. A characteristic assumption about the distribution p ( d, t ) {\\displaystyle p(d,t)} is that: p ( d | t ) = 1 | { d ∈ D : t ∈ d } | {\\displaystyle p(d|t)={\\frac {1}{|\\{d\\in D:t\\in d\\}|}}} This assumption and its implications, according to Aizawa: \"represent the heuristic that tf–idf employs.\" The conditional entropy of a \"randomly chosen\" document in the corpus D {\\displaystyle D}, conditional to the fact it contains a specific term t {\\displaystyle t} (and assuming that all documents have equal probability to be chosen) is: H ( D | T = t ) = − ∑ d p d | t log ⁡ p d | t = − log ⁡ 1 | { d ∈ D : t ∈ d } | = log ⁡ | { d ∈ D : t ∈ d } | | D | + log ⁡ | D | = − i d f ( t ) + log ⁡ | D | {\\displaystyle H({\\cal {D}}|{\\cal {T}}=t)=-\\sum _{d}p_{d|t}\\log p_{d|t}=-\\log {\\frac {1}{|\\{d\\in D:t\\in d\\}|}}=\\log {\\frac {|\\{d\\in D:t\\in d\\}|}{|D|}}+", "staff\", and \"salad\" appears in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, \"good\" and \"sweet\" appears in every play and are completely uninformative as to which play it is. Definition The tf–idf is the product of two statistics, term frequency and inverse document frequency. There are various ways for determining the exact values of both statistics. A formula that aims to define the importance of a keyword or phrase within a document or a web page. Term frequency Term frequency, tf(t,d), is the relative frequency of term t within document d, t f ( t, d ) = f t, d ∑ t ′ ∈ d f t ′, d {\\displaystyle \\mathrm {tf} (t,d)={\\frac {f_{t,d}}{\\sum _{t'\\in d}{f_{t',d}}}}}, where ft,d is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. 
Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency:: 128 the raw count itself: tf(t,d) = ft,d Boolean \"frequencies\": tf(t,d) = 1 if t occurs in d and 0 otherwise; logarithmically scaled frequency: tf(t,d) = log (1 + ft,d); augmented frequency, to prevent a bias towards longer documents, e.g. raw frequency divided by the raw frequency of the most frequently occurring term in the document: t f ( t, d ) = 0.5 + 0.5 ⋅ f t, d max { f t ′, d : t ′ ∈ d } {\\displaystyle \\mathrm {tf} (t,d)=0.5+0.5\\cdot {\\frac {f_{t,d}}{\\max\\{f_{t'", "} | {\\displaystyle n_{t}=|\\{d\\in D:t\\in d\\}|} : number of documents where the term t {\\displaystyle t} appears (i.e., t f ( t, d ) ≠ 0 {\\displaystyle \\mathrm {tf} (t,d)\\neq 0} ). If the term is not in the corpus, this will lead to a division-by-zero. It is therefore common to adjust the numerator 1 + N {\\displaystyle 1+N} and denominator to 1 + | { d ∈ D : t ∈ d } | {\\displaystyle 1+|\\{d\\in D:t\\in d\\}|}. Term frequency–inverse document frequency Then tf–idf is calculated as t f i d f ( t, d, D ) = t f ( t, d ) ⋅ i d f ( t, D ) {\\displaystyle \\mathrm {tfidf} (t,d,D)=\\mathrm {tf} (t,d)\\cdot \\mathrm {idf} (t,D)} A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0. Justification of idf Idf was introduced as \"term specificity\" by Karen Spärck Jones in a 1972 paper. 
Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information theoretic justifications for it. Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law. Attemp", "quency - Inverse Document Frequency tf-idf(wi,dj) = tf(wi,dj)·idf(wi) with idf(wi) = log |D| nb(dk ⊃wi) |D|: number of documents nb(dk ⊃wi): number of documents which contain term wi Computational Linguistics Course (EPFL-MsCS) – Information Retrieval – 35 / 74 Introduction Toolchain Indexing Vector Space model Queries Evaluation Beyond the vector model Conclusion c <unk>EPFL 2008–2014 Jean-Cédric Chappelier & Emmanuel Eckard Weighting Example ▶ Now so long, Marianne it’s time that we began to laugh and cry and cry and laugh about it all again. ▶RD : V ∗→R representation function: here: Term Frequency -→([aardvark,0] [begin,1] [cry,2] [information,0] [laugh,2] [long,1] [Marianne,1] [retrieval,0] [time,1]) -→(0 1 2 0 2 1 1 0 1.) In practice the vector is very sparse Computational Linguistics Course (EPFL-MsCS) – Information Retrieval – 36 / 74 Introduction Toolchain Indexing Vector Space model Queries Evaluation Beyond the vector model Conclusion c <unk>EPFL 2008–2014 Jean-Cédric Chappelier & Emmanuel Eckard Vector space model 1 t 2 t 3 t 1 d 2 d 3 d ▶indexing terms define axis ▶documents are point in the vector space (representing directions) Computational Linguistics Course (EPFL-MsCS) – Information Retrieval – 37 / 74 Introduction Toolchain Indexing Vector Space model Queries Evaluation Beyond the vector model Conclusion c <unk>EPFL 2008–2014 Jean-Cédric Chappelier & Emmanuel Eckard Proximity measure between documents Cosine similarity cos(d1,d2) = d1 ||d1|| · d2 ||d2|| = N ∑ j=1 d1j d2j rh ∑j d1j 2i h ∑j d2j 2i" ]
[ "by adding the term to a document that contains the term", "by removing a document from the document collection that does not contain the term", "by adding a document to the document collection that contains the term", "by adding a document to the document collection that does not contain the term" ]
by adding a document to the document collection that does not contain the term
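The answer follows directly from idf(t, D) = log(N / n_t): adding a document that does not contain the term increases N while leaving n_t unchanged, so the ratio (and its logarithm) grows. A minimal sketch, using a hypothetical `idf` helper over documents represented as term sets:

```python
import math

def idf(term, docs):
    """Plain idf(t, D) = log(N / n_t); assumes the term occurs in at least one doc."""
    n_t = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_t)

docs = [{"apple", "pear"}, {"apple", "plum"}, {"pear", "plum"}]
before = idf("apple", docs)        # log(3/2): N = 3, n_t = 2

docs.append({"fig", "plum"})       # new document WITHOUT "apple": N grows, n_t does not
after = idf("apple", docs)         # log(4/2)

assert after > before              # idf of "apple" increased
```

Adding a document that *contains* the term increments both N and n_t, which can only decrease (or barely change) the ratio, ruling out the other options.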
1417
Which of the following is wrong regarding Ontologies?
[ "(2008). \"Ontology (Science)\". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). \"What is an Ontology?\". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-92673-3. S2CID 8522608. Uschold, Mike; Gruninger, M. (1996). \"Ontologies: Principles, Methods and Applications\". Knowledge Engineering Review. 11 (2): 93–136. CiteSeerX 10.1.1.111.5903. doi:10.1017/S0269888900007797. S2CID 2618234. Pidcock, W. \"What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?\". Archived from the original on 2009-10-14. Yudelson, M.; Gavrilova, T.; Brusilovsky, P. (2005). \"Towards User Modeling Meta-ontology\". User Modeling 2005. Lecture Notes in Computer Science. Vol. 3538. Springer. pp. 448–452. CiteSeerX 10.1.1.86.7079. doi:10.1007/11527886_62. ISBN 978-3-540-31878-1. Movshovitz-Attias, Dana; Cohen, William W. (2012). \"Bootstrapping Biomedical Ontologies for Scientific Text using NELL\" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874. External links Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. \"Ontolog Forum\") - an Open, International, Virtual Community of Practice on Ontology,", "contributors to the Journal of Consciousness Studies. Ontology Nicolai Hartmann equates ontology with Aristotle's science of being qua being. This science involves studying the most general characteristics of entities, usually referred to as categories, and the relations between them. 
According to Hartmann, the most general categories are: Moments of being (Seinsmomente): existence (Dasein) and essence (Sosein) Modes of being (Seinsweisen): reality and ideality Modalities of being (Seinsmodi): possibility, actuality and necessity Existence and essence The existence of an entity constitutes the fact that this entity is there, that it exists. Essence, on the other hand, constitutes what this entity is like, what its characteristics are. Every entity has both of these modes of being. But, as Hartmann points out, there is no absolute difference between existence and essence. For example, the existence of a leaf belongs to the essence of the tree while the existence of the tree belongs to the essence of the forest. Reality and ideality Reality and ideality are two disjunctive categories: every entity is either real or ideal. Ideal entities are universal, returnable and always existing while real entities are individual, unique and destructible. Among the ideal entities are mathematical objects and values. Reality is made up of a chain of temporal events. Reality is obtrusive, it is often experienced as a form of resistance in contrast to ideality. Modalities of being The modalities of being are divided into the absolute modalities (actuality and non-actuality) and the relative modalities (possibility, impossibility and necessity). The relative modalities are relative in the sense that they depend on the absolute modalities: something is possible, impossible or necessary because something else is actual. Hartmann analyzes modality in the real sphere in terms of necessary conditions. An entity becomes actual if all its necessary conditions obtain. If all these factors obtain, it is necessary that the entity exists. But as long as one of its factors is missing, it can't become actual, it is impossible. 
This has the consequence that all positive and all the negative modalities fall together: whatever", "strapping Biomedical Ontologies for Scientific Text using NELL\" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874. External links Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. \"Ontolog Forum\") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology Use of Ontologies in Natural Language Processing Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit. Standardization of Ontologies", "; McGuinness, Deborah L. (March 2001). \"Ontology Development 101: A Guide to Creating Your First Ontology\". Stanford Knowledge Systems Laboratory Technical Report KSL-01-05, Stanford Medical Informatics Technical Report SMI-2001-0880. Archived from the original on 2010-07-14. Chaminda Abeysiriwardana, Prabath; Kodituwakku, Saluka R (2012). \"Ontology Based Information Extraction for Disease Intelligence\". International Journal of Research in Computer Science. 2 (6): 7–19. arXiv:1211.3497. Bibcode:2012arXiv1211.3497C. doi:10.7815/ijorcs.26.2012.051 (inactive 8 December 2024). S2CID 11297019.: CS1 maint: DOI inactive as of December 2024 (link) Razmerita, L.; Angehrn, A.; Maedche, A. (2003). \"Ontology-Based User Modeling for Knowledge Management Systems\". User Modeling 2003. Lecture Notes in Computer Science. Vol. 2702. Springer. pp. 213–7. CiteSeerX 10.1.1.102.4591. doi:10.1007/3-540-44963-9_29. ISBN 3-540-44963-9. Soylu, A.; De Causmaecker, Patrick (2009). \"Merging model driven and ontology driven system development approaches pervasive computing perspective\". 
Proceedings of the 24th International Symposium on Computer and Information Sciences. pp. 730–5. doi:10.1109/ISCIS.2009.5291915. ISBN 978-1-4244-5021-3. S2CID 2267593. Smith, B. (2008). \"Ontology (Science)\". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). \"What is an Ontology?\". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_", "the Visual Notation for OWL Ontologies (VOWL). Engineering Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them. Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include: Ensuring the ontology is current with domain knowledge and term use Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem Ensuring the ontology can support its use cases Editors Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages. 
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc. Learning Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges. Research Epistemological assumptions, which in research asks \"What do you know? or \"How do you know it?\", creates the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply, epistemological assumptions force researchers to question how they arrive at the knowledge they" ]
[ "We can create more than one ontology that conceptualize the same real-world entities", "Ontologies help in the integration of data expressed in different models", "Ontologies support domain-specific vocabularies", "Ontologies dictate how semi-structured data are serialized" ]
['Ontologies dictate how semi-structured data are serialized']
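The incorrect option above turns on the difference between a conceptual model and its serialization: an ontology fixes the meaning of terms, not the wire format. A minimal stdlib sketch (all names hypothetical) showing the same simple "Person" vocabulary serialized two different ways:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical instance data conforming to a shared "Person" vocabulary.
# The ontology fixes what "name" and "knows" mean, not how bytes are laid out.
person = {"type": "Person", "name": "Alice", "knows": "Bob"}

# Serialization 1: JSON
as_json = json.dumps(person)

# Serialization 2: XML, carrying exactly the same information
root = ET.Element("Person")
for key in ("name", "knows"):
    ET.SubElement(root, key).text = person[key]
as_xml = ET.tostring(root, encoding="unicode")
```

Both strings describe the same entity under the same vocabulary, which is why two systems that agree on an ontology can still interoperate while exchanging data in different serializations.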
1418
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
[ ". Examples of ranking quality measures: Mean average precision (MAP); DCG and NDCG; Precision@n, NDCG@n, where \"@n\" denotes that the metrics are evaluated only on top n documents; Mean reciprocal rank; Kendall's tau; Spearman's rho. DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. Other metrics such as MAP, MRR and precision, are defined only for binary judgments. Recently, there have been proposed several new evaluation metrics which claim to model user's satisfaction with search results better than the DCG metric: Expected reciprocal rank (ERR); Yandex's pfound. Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document, than after a less relevant document. Approaches Learning to Rank approaches are often categorized using one of three approaches: pointwise (where individual documents are ranked), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents are ordered). Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval. He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approach. In practice, listwise approaches often outperform pairwise approaches and pointwise approaches. This statement was further supported by a large scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets. 
In this section, without further notice, x {\\displaystyle x} denotes an object to be evaluated, for example, a document or an image, f ( x ) {\\displaystyle f(x)} denotes a single-value hypothesis, h ( ⋅ ) {\\displaystyle h(\\cdot )} denotes a bi-variate or multi-variate function and L ( ⋅ ) {\\displaystyle L(\\cdot )} denotes the loss function", "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, all patents associated with PageRank have expired. Description PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of \"measuring\" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by P R ( E ). {\\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. 
The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it (\"incoming links\"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking", "versions). A pair d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}} is concordant if both r a {\\displaystyle r_{a}} and r b {\\displaystyle r_{b}} agree in how they order d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}}. It is discordant if they disagree. Information retrieval quality Information retrieval quality is usually evaluated by the following three measurements: Precision Recall Average precision For a specific query to a database, let P r e l e v a n t {\\displaystyle P_{relevant}} be the set of relevant information elements in the database and P r e t r i e v e d {\\displaystyle P_{retrieved}} be the set of the retrieved information elements. 
Then the above three measurements can be represented as follows: precision = | P relevant ∩ P retrieved | | P retrieved | ; recall = | P relevant ∩ P retrieved | | P relevant | ; average precision = ∫ 0 1 Prec ( recall ) d recall, {\\displaystyle {\\begin{aligned}&{\\text{precision}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{retrieved}}\\right|}};\\\\[6pt]&{\\text{recall}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{relevant}}\\right|}};\\\\[6pt]&{\\text{average precision}}=\\int _{0}^{1}{\\text{Prec}}({\\text{recall}})\\,d{\\text{recall}},\\end{aligned}}} where Prec ( Recall ) {\\displaystyle {\\text{Prec}}({\\text{Recall}})} is the Precision {\\displaystyle {\\text{Precision}}}", "in Google Toolbar, though the PageRank continued to be used internally to rank content in search results. SERP rank The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200). Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword. 
The PageRank of the HomePage of a website is the best indication Google offers for website authority. After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results. When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google. Google directory PageRank The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which showed a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011. False or spoofed PageRank It was known that the PageRank shown in the Toolbar could easily be spoofed. Redirection from one page to another, either via a HTTP 302 response or a \"Refresh\"", "Retrievability is a term associated with the ease with which information can be found or retrieved using an information system, specifically a search engine or information retrieval system. A document (or information object) has high retrievability if there are many queries which retrieve the document via the search engine, and the document is ranked sufficiently high that a user would encounter the document. Conversely, if there are few queries that retrieve the document, or when the document is retrieved the documents are not high enough in the ranked list, then the document has low retrievability. Retrievability can be considered as one aspect of findability. Applications of retrievability include detecting search engine bias, measuring algorithmic bias, evaluating the influence of search technology, tuning information retrieval systems and evaluating the quality of documents in a collection. 
See also Information retrieval Knowledge mining Search engine optimization Findability References Azzopardi, L. & Vinay, V. (2008). \"Retrievability: an evaluation measure for higher order information access tasks\". Proceedings of the 17th ACM conference on Information and knowledge management. CIKM '08. Napa Valley, California, USA: ACM. pp. 561–570. doi:10.1145/1458082.1458157. ISBN 9781595939913. S2CID 8705350. Azzopardi, L. & Vinay, V. (2008). \"Accessibility in information retrieval\". Proceedings of the IR research, 30th European conference on Advances in information retrieval. ECIR '08. Glasgow, UK: Springer. pp. 482–489. ISBN 9783540786450. Retrieved 7 Dec 2016." ]
[ "P@k-1 > P@k+1", "P@k-1 = P@k+1", "R@k-1 < R@k+1", "R@k-1 = R@k+1" ]
['R@k-1 < R@k+1']
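The correct option can be checked numerically. A minimal sketch, using a hypothetical binary relevance vector in which the document at rank k is non-relevant and the one at rank k+1 is relevant:

```python
# Hypothetical binary relevance judgments for a ranked result list.
def precision_at(rels, k):
    """Fraction of the top-k results that are relevant."""
    return sum(rels[:k]) / k

def recall_at(rels, k, total_relevant):
    """Fraction of all relevant documents found in the top k."""
    return sum(rels[:k]) / total_relevant

rels = [1, 1, 0, 1, 0]   # rank 3 (= k) non-relevant, rank 4 (= k+1) relevant
k, R = 3, sum(rels)

# Recall is monotone in k, and including the relevant document at rank k+1
# strictly increases it, so R@(k-1) < R@(k+1) must hold.
assert recall_at(rels, k - 1, R) < recall_at(rels, k + 1, R)
```

Precision, by contrast, can move either way: here P@2 = 1.0 > P@4 = 0.75, but a list whose first documents are non-relevant would reverse that inequality, which is why only the recall statement is always true.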
1420
What is true regarding Fagin's algorithm?
[ "This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where the universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale when it will first be somewhat similar (for a reasonable choice of \"similar\") to its current state again. Combinatorial processes give rise to astonishingly large numbers. The factorial function, which quantifies permutations of a fixed set of objects, grows superexponentially as the number of objects increases. Stirling's formula provides a precise asymptotic expression for this rapid growth. In statistical mechanics, combinatorial numbers reach such immense magnitudes that they are often expressed using logarithms. Gödel numbers, along with similar representations of bit-strings in algorithmic information theory, are vast—even for mathematical statements of moderate length. Remarkably, certain pathological numbers surpass even the Gödel numbers associated with typical mathematical propositions. Logician Harvey Friedman has made significant contributions to the study of very large numbers, including work related to Kruskal's tree theorem and the Robertson–Seymour theorem. \"Billions and billions\" To help viewers of Cosmos distinguish between \"millions\" and \"billions\", astronomer Carl Sagan stressed the \"b\". Sagan never did, however, say \"billions and billions\". The public's association of the phrase and Sagan came from a Tonight Show skit. Parodying Sagan's effect, Johnny Carson quipped \"billions and billions\". The phrase has, however, now become a humorous fictitious number—the Sagan. Cf., Sagan Unit. 
Examples googol = 10 100 {\\displaystyle 10^{100}} /10 DTg centillion = 10 303 {\\displaystyle 10^{303}} /1Ce or 10 600 {\\displaystyle 10^{600}}, depending on number naming system millinillion = 10 3003 {\\displaystyle 10^{3003}} /1MI or 10 6000 {\\displaystyle 10^{6000}}, depending on number naming system The largest known Smith", "al that the expected length of Fano's method has expected length bounded above by E L ≤ H ( X ) + 1 − p min {\\displaystyle \\mathbb {E} L\\leq H(X)+1-p_{\\text{min}}}, where p min = min i p i {\\displaystyle p_{\\text{min}}=\\textstyle \\min _{i}p_{i}} is the probability of the least common symbol. Comparison with other coding methods Neither Shannon–Fano algorithm is guaranteed to generate an optimal code. For this reason, Shannon–Fano codes are almost never used; Huffman coding is almost as computationally simple and produces prefix codes that always achieve the lowest possible expected code word length, under the constraints that each symbol is represented by a code formed of an integral number of bits. This is a constraint that is often unneeded, since the codes will be packed end-to-end in long sequences. If we consider groups of codes at a time, symbol-by-symbol Huffman coding is only optimal if the probabilities of the symbols are independent and are some power of a half, i.e., 1 / 2 k {\\displaystyle \\textstyle 1/2^{k}}. In most situations, arithmetic coding can produce greater overall compression than either Huffman or Shannon–Fano, since it can encode in fractional numbers of bits which more closely approximate the actual information content of the symbol. However, arithmetic coding has not superseded Huffman the way that Huffman supersedes Shannon–Fano, both because arithmetic coding is more computationally expensive and because it is covered by multiple patents. Huffman coding A few years later, David A. Huffman (1952) gave a different algorithm that always produces an optimal tree for any given symbol probabilities. 
While Fano's Shannon–Fano tree is created by dividing from the root to the leaves, the Huffman algorithm works in the opposite direction, merging from the leaves to the root. Create a leaf node for each symbol and add it to a priority que", "do n j = n / p j {\\displaystyle n_{j}=n/p_{j}} for i = 1 to k do h := x q n i − x mod f {\\displaystyle h:=x^{q^{n_{i}}}-x{\\bmod {f}}} g := gcd(f, h); if g ≠ 1, then return \"f is reducible\" and STOP; end for; g := x q n − x mod f {\\displaystyle g:=x^{q^{n}}-x{\\bmod {f}}} if g = 0, then return \"f is irreducible\", else return \"f is reducible\" The basic idea of this algorithm is to compute x q n i mod f {\\displaystyle x^{q^{n_{i}}}{\\bmod {f}}} starting from the smallest n 1,..., n k {\\displaystyle n_{1},\\ldots,n_{k}} by repeated squaring or using the Frobenius automorphism, and then to take the correspondent gcd. Using the elementary polynomial arithmetic, the computation of the matrix of the Frobenius automorphism needs O ( n 2 ( n + log ⁡ q ) ) {\\displaystyle O(n^{2}(n+\\log q))} operations in Fq, the computation of x q n i − x ( mod f ) {\\displaystyle x^{q^{n_{i}}}-x{\\pmod {f}}} needs O(n3) further operations, and the algorithm itself needs O(kn2) operations, giving a total of O ( n 2 ( n + log ⁡ q ) ) {\\displaystyle O(n^{2}(n+\\log q))} operations in Fq. Using fast arithmetic (complexity O ( n log ⁡ n ) {\\displaystyle O(n\\log n)} for multiplication and division, and O ( n ( log ⁡ n ) 2 ) {\\displaystyle O(n(\\log n)^{2})} for GCD computation), the computation of the x q", "FGLM is one of the main algorithms in computer algebra, named after its designers, Faugère, Gianni, Lazard and Mora. They introduced their algorithm in 1993. The input of the algorithm is a Gröbner basis of a zero-dimensional ideal in the ring of polynomials over a field with respect to a monomial order and a second monomial order. As its output, it returns a Gröbner basis of the ideal with respect to the second ordering. 
The algorithm is a fundamental tool in computer algebra and has been implemented in most of the computer algebra systems. The complexity of FGLM is O(nD3), where n is the number of variables of the polynomials and D is the degree of the ideal. There are several generalization and various applications for FGLM.", "e Stähelin (1891–1970), Swiss mathematician, editor of Bernoulli family letters, and pacifist Gwyneth Stallard, British expert on complex dynamics and the iteration of meromorphic functions Katherine E. Stange, Canadian-American number theorist Zvezdelina Stankova (born 1969), Bulgarian-American expert on permutation patterns, founder of the Berkeley Math Circle Nancy K. Stanton, American researcher on complex analysis, partial differential equations, and differential geometry Marion Elizabeth Stark (1894—1982), one of the first female American mathematicians to receive a doctorate Anastasia Stavrova, Russian expert in algebraic groups, non-associative algebra, and algebraic K-theory Jackie Stedall (1950–2014), British historian of mathematics Angelika Steger (born 1962), German-Swiss expert on graph theory, randomized algorithms, and approximation algorithms Irene Stegun (1919–2008), American mathematician who edited a classic book of mathematical tables Gabriele Steidl (born 1963), German researcher in computational harmonic analysis, convex optimization, and image processing Mary Kay Stein, American mathematics educator Maya Stein, German-Chilean graph theorist Berit Stensønes (born 1956), Norwegian mathematician specializing in complex analysis and complex dynamics Elizabeth Stephansen (1872–1961), first Norwegian woman to receive a mathematics doctorate Edith Stern (born 1952), child prodigy in mathematics and IBM engineer Chris Stevens, American topological group theorist, historian of mathematics, and mathematics educator Perdita Stevens (born 1966), British algebraist, theoretical computer scientist, and software engineer Lorna Stewart, Canadian 
graph theorist and graph algorithms researcher Alice Christine Stickland (1906–1987), British applied mathematician, expert on radio propagation Angeline Stickney (1830–1892), American suffragist, abolitionist, and mathematician, namesake of the largest crater on Phobos Doris Stockton (1924–2018), American mathematician and textbook author Mechthild Stoer, German applied mathematician and operations researcher, names" ]
[ "It performs a complete scan over the posting files", "It provably returns the k documents with the largest aggregate scores", "Posting files need to be indexed by TF-IDF weights", "It never reads more than (kn)½ entries from a posting list" ]
['It provably returns the k documents with the largest aggregate scores']
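The correct option reflects Fagin's guarantee: once sorted access has produced k documents seen in every posting list, random access fills in the remaining scores, and the top k by aggregate score is exact for any monotone aggregation function. A minimal sketch (hypothetical data layout: posting lists as score-descending `(doc, score)` pairs), not the original formulation:

```python
import heapq

def fagins_algorithm(posting_lists, k, agg=sum):
    """Return the top-k (score, doc) pairs by aggregate score.

    Each posting list must be sorted by score in descending order.
    """
    n = len(posting_lists)
    seen = {}        # doc -> set of list indices where doc appeared so far
    depth = 0
    max_len = max(len(p) for p in posting_lists)
    # Phase 1: round-robin sorted access until k docs are seen in ALL lists.
    while depth < max_len:
        for i, plist in enumerate(posting_lists):
            if depth < len(plist):
                doc, _ = plist[depth]
                seen.setdefault(doc, set()).add(i)
        depth += 1
        if sum(1 for s in seen.values() if len(s) == n) >= k:
            break
    # Phase 2: random access to fetch missing scores for every candidate seen.
    lookup = [dict(p) for p in posting_lists]
    totals = [(agg(lookup[i].get(doc, 0.0) for i in range(n)), doc)
              for doc in seen]
    return heapq.nlargest(k, totals)
```

Because the aggregation is monotone, no document never touched by sorted access can outscore the candidates, which is the crux of the correctness proof.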
1422
Which of the following is WRONG for Ontologies?
[ "(2008). \"Ontology (Science)\". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). \"What is an Ontology?\". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-92673-3. S2CID 8522608. Uschold, Mike; Gruninger, M. (1996). \"Ontologies: Principles, Methods and Applications\". Knowledge Engineering Review. 11 (2): 93–136. CiteSeerX 10.1.1.111.5903. doi:10.1017/S0269888900007797. S2CID 2618234. Pidcock, W. \"What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?\". Archived from the original on 2009-10-14. Yudelson, M.; Gavrilova, T.; Brusilovsky, P. (2005). \"Towards User Modeling Meta-ontology\". User Modeling 2005. Lecture Notes in Computer Science. Vol. 3538. Springer. pp. 448–452. CiteSeerX 10.1.1.86.7079. doi:10.1007/11527886_62. ISBN 978-3-540-31878-1. Movshovitz-Attias, Dana; Cohen, William W. (2012). \"Bootstrapping Biomedical Ontologies for Scientific Text using NELL\" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874. External links Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. \"Ontolog Forum\") - an Open, International, Virtual Community of Practice on Ontology,", "; McGuinness, Deborah L. (March 2001). \"Ontology Development 101: A Guide to Creating Your First Ontology\". Stanford Knowledge Systems Laboratory Technical Report KSL-01-05, Stanford Medical Informatics Technical Report SMI-2001-0880. Archived from the original on 2010-07-14. Chaminda Abeysiriwardana, Prabath; Kodituwakku, Saluka R (2012). \"Ontology Based Information Extraction for Disease Intelligence\". 
International Journal of Research in Computer Science. 2 (6): 7–19. arXiv:1211.3497. Bibcode:2012arXiv1211.3497C. doi:10.7815/ijorcs.26.2012.051 (inactive 8 December 2024). S2CID 11297019.: CS1 maint: DOI inactive as of December 2024 (link) Razmerita, L.; Angehrn, A.; Maedche, A. (2003). \"Ontology-Based User Modeling for Knowledge Management Systems\". User Modeling 2003. Lecture Notes in Computer Science. Vol. 2702. Springer. pp. 213–7. CiteSeerX 10.1.1.102.4591. doi:10.1007/3-540-44963-9_29. ISBN 3-540-44963-9. Soylu, A.; De Causmaecker, Patrick (2009). \"Merging model driven and ontology driven system development approaches pervasive computing perspective\". Proceedings of the 24th International Symposium on Computer and Information Sciences. pp. 730–5. doi:10.1109/ISCIS.2009.5291915. ISBN 978-1-4244-5021-3. S2CID 2267593. Smith, B. (2008). \"Ontology (Science)\". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). \"What is an Ontology?\". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_", "strapping Biomedical Ontologies for Scientific Text using NELL\" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874. External links Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. \"Ontolog Forum\") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology Use of Ontologies in Natural Language Processing Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit. 
Standardization of Ontologies", "The Cell Ontology is an ontology that aims at capturing the diversity of cell types in animals. It is part of the Open Biomedical and Biological Ontologies (OBO) Foundry. The Cell Ontology identifiers and organizational structure are used to annotate data at the level of cell types, for example in single-cell RNA-seq studies. It is one important resource in the construction of the Human Cell Atlas. The Cell Ontology was first described in an academic article in 2005. See also Gene ontology OBO Foundry References External links Cell Ontology GitHub page", "contributors to the Journal of Consciousness Studies. Ontology Nicolai Hartmann equates ontology with Aristotle's science of being qua being. This science involves studying the most general characteristics of entities, usually referred to as categories, and the relations between them. According to Hartmann, the most general categories are: Moments of being (Seinsmomente): existence (Dasein) and essence (Sosein) Modes of being (Seinsweisen): reality and ideality Modalities of being (Seinsmodi): possibility, actuality and necessity Existence and essence The existence of an entity constitutes the fact that this entity is there, that it exists. Essence, on the other hand, constitutes what this entity is like, what its characteristics are. Every entity has both of these modes of being. But, as Hartmann points out, there is no absolute difference between existence and essence. For example, the existence of a leaf belongs to the essence of the tree while the existence of the tree belongs to the essence of the forest. Reality and ideality Reality and ideality are two disjunctive categories: every entity is either real or ideal. Ideal entities are universal, returnable and always existing while real entities are individual, unique and destructible. Among the ideal entities are mathematical objects and values. Reality is made up of a chain of temporal events. 
Reality is obtrusive, it is often experienced as a form of resistance in contrast to ideality. Modalities of being The modalities of being are divided into the absolute modalities (actuality and non-actuality) and the relative modalities (possibility, impossibility and necessity). The relative modalities are relative in the sense that they depend on the absolute modalities: something is possible, impossible or necessary because something else is actual. Hartmann analyzes modality in the real sphere in terms of necessary conditions. An entity becomes actual if all its necessary conditions obtain. If all these factors obtain, it is necessary that the entity exists. But as long as one of its factors is missing, it can't become actual, it is impossible. This has the consequence that all positive and all the negative modalities fall together: whatever" ]
[ "Different information systems need to agree on the same ontology in order to interoperate.", "They help in the integration of data expressed in different models.", "They give the possibility to specify schemas for different domains.", "They dictate how semi-structured data are serialized." ]
['They dictate how semi-structured data are serialized.']
1424
What is the benefit of LDA over LSI?
[ ", supervised Latent Dirichlet Allocation with covariates (SLDAX) has been specifically developed to combine latent topics identified in texts with other manifest variables. This approach allows for the integration of text data as predictors in statistical regression analyses, improving the accuracy of mental health predictions. One of the main advantages of SLDAX over traditional two-stage approaches is its ability to avoid biased estimates and incorrect standard errors, allowing for a more accurate analysis of psychological texts. In the field of social sciences, LDA has proven to be useful for analyzing large datasets, such as social media discussions. For instance, researchers have used LDA to investigate tweets discussing socially relevant topics, like the use of prescription drugs and cultural differences in China. By analyzing these large text corpora, it is possible to uncover patterns and themes that might otherwise go unnoticed, offering valuable insights into public discourse and perception in real time. Musicology In the context of computational musicology, LDA has been used to discover tonal structures in different corpora. Machine learning One application of LDA in machine learning - specifically, topic discovery, a subproblem in natural language processing – is to discover topics in a collection of documents, and then automatically classify any individual document within the collection in terms of how \"relevant\" it is to each of the discovered topics. A topic is considered to be a set of terms (i.e., individual words or phrases) that, taken together, suggest a shared theme. For example, in a document collection related to pet animals, the terms dog, spaniel, beagle, golden retriever, puppy, bark, and woof would suggest a DOG_related theme, while the terms cat, siamese, Maine coon, tabby, manx, meow, purr, and kitten would suggest a CAT_related theme. 
There may be many more topics in the collection – e.g., related to diet, grooming, healthcare, behavior, etc. that we do not discuss for simplicity's sake. (Very common, so called stop words in a language – e.g., \"the\", \"an\", \"that\", \"are\", \"is\", etc., – would", "In natural language processing, latent Dirichlet allocation (LDA) is a Bayesian network (and, therefore, a generative statistical model) for modeling automatically extracted topics in textual corpora. The LDA is an example of a Bayesian topic model. In this, observations (e.g., words) are collected into documents, and each word's presence is attributable to one of the document's topics. Each document will contain a small number of topics. History In the context of population genetics, LDA was proposed by J. K. Pritchard, M. Stephens and P. Donnelly in 2000. LDA was applied in machine learning by David Blei, Andrew Ng and Michael I. Jordan in 2003. Overview Population genetics In population genetics, the model is used to detect the presence of structured genetic variation in a group of individuals. The model assumes that alleles carried by individuals under study have origin in various extant or past populations. The model and various inference algorithms allow scientists to estimate the allele frequencies in those source populations and the origin of alleles carried by individuals under study. The source populations can be interpreted ex-post in terms of various evolutionary scenarios. In association studies, detecting the presence of genetic structure is considered a necessary preliminary step to avoid confounding. Clinical psychology, mental health, and social science In clinical psychology research, LDA has been used to identify common themes of self-images experienced by young people in social situations. Other social scientists have used LDA to examine large sets of topical data from discussions on social media (e.g., tweets about prescription drugs). 
Additionally, supervised Latent Dirichlet Allocation with covariates (SLDAX) has been specifically developed to combine latent topics identified in texts with other manifest variables. This approach allows for the integration of text data as predictors in statistical regression analyses, improving the accuracy of mental health predictions. One of the main advantages of SLDAX over traditional two-stage approaches is its ability to avoid biased estimates and incorrect standard errors, allowing for a more accurate analysis of psychological texts. In the field of social sciences, LDA has", "Semantic Indexing (LSI) • Latent Dirichlet Allocation (LDA) LSA viz • LSA uses a term-document matrix which describes the occurrences of terms in documents • Weight of element proportional to the number of times the terms appear in each document (rare terms are upweighted to reflect their relative importance) • SVD to decompose the term-document matrix A into a term-concept matrix U, a singular value matrix S, and a concept- document matrix V in the form: A = USV'. [Wikipedia] LDA viz • LDA is a generative statistical model • LDA represents documents as mixtures of topics • LDA model might have topics that can be classified as CAT_related and DOG_related. 
More examples\", \"lex\": \"Data TEXT VISUALIZATION K I R E L L B E N Z I P H D kikohs kirell benzi www kirellbenzi com Inspired by Heer Why visualize text Faster understanding get quick insight on what I am reading Comparison compare document collection or inspect evolution of collection over time Clustering grouping classi cation or topic extraction Correlation compare pattern in the current text to other datasets Heer CharacterisOcs of a textual representaOon Abstract representation of the data Powerful representation to describe Linear perception Semi structured grammar punctuation word sentence paragraph typography calligraphy Di erent across population group Raw text visualizaOon Rich text Overview detail with minimap Text datasets Documents Web page article log book email comment source code tag Collections of document Messages chat email letter social pro le publication Challenges of text viz Resuming text High dimensionality if possible resume text with text Choosing the right keywords Providing relevant context to assist the reader Need to understand the semantics Show or provide access to the source text Ontologies extraction Needs to understand the domain in addition of the previous point In pracOce Determine your analysis task Find the best tool and model to match the task Read the litterature look at other data viz software etc Text a data Words are ambiguous Cannot be used a nominal data word Cannot simply do an equality test Some word are correlated Paris New York Madrid Ordered January February March Conjugated tense", "the other, and then LDA applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving C(C − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification. 
Incremental LDA The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features. In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules. Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing the new samples. Practical use In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct. Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector)", "the other, and then LDA applied. 
This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving C(C − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification. Incremental LDA The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features. In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules. Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing the new samples. Practical use In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. 
Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct. Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector)" ]
[ "LSI is sensitive to the ordering of the words in a document, whereas LDA is not", "LDA has better theoretical explanation, and its empirical results are in general better than LSI’s", "LSI is based on a model of how documents are generated, whereas LDA is not", "LDA represents semantic dimensions (topics, concepts) as weighted combinations of terms, whereas LSI does not" ]
['LDA has better theoretical explanation, and its empirical results are in general better than LSI’s']
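The topic-discovery passage in the context above (DOG_related vs. CAT_related themes) can be made concrete with a small sketch. This is not part of the dataset: it is a minimal collapsed Gibbs sampler for LDA on a toy pet-animal corpus, where the corpus, number of topics K, and hyperparameters alpha and beta are purely illustrative choices.

```python
import random
from collections import defaultdict

# Toy "pet animals" corpus; documents, K, alpha, beta are illustrative choices.
docs = [
    "dog puppy bark woof beagle".split(),
    "golden retriever puppy dog bark".split(),
    "cat kitten meow purr tabby".split(),
    "siamese cat purr meow kitten".split(),
]
K, alpha, beta = 2, 0.1, 0.01
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

random.seed(0)
# z[d][i]: topic assigned to word i of document d, plus the count tables
# the collapsed Gibbs sampler maintains.
z = [[random.randrange(K) for _ in doc] for doc in docs]
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                                # words assigned to each topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        ndk[d][t] += 1
        nkw[t][w] += 1
        nk[t] += 1

for _ in range(200):  # Gibbs sweeps: resample each word's topic assignment
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [
                (ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                for k in range(K)
            ]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for k in range(K):  # top words per discovered topic
    top = sorted(nkw[k], key=nkw[k].get, reverse=True)[:3]
    print(f"topic {k}:", top)
```

On a corpus this small the two topics typically separate into a dog-like and a cat-like word cluster, mirroring the example in the context; a production system would use a library implementation rather than this sketch.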
1426
Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
[ "Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done \"manually\" (or \"intellectually\") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification. The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied. Documents may be classified according to their subjects or according to other attributes (such as document type, author, printing year etc.). In the rest of this article only subject classification is considered. There are two main philosophies of subject classification of documents: the content-based approach and the request-based approach. \"Content-based\" versus \"request-based\" classification Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries, that at least 20% of the content of a book should be about the class to which the book is assigned. In automatic classification it could be the number of times given words appears in a document. Request-oriented classification (or -indexing) is classification in which the anticipated request from users is influencing how documents are being classified. The classifier asks themself: “Under which descriptors should this entity be found?” and “think of all the possible queries and decide for which ones the entity at hand is relevant” (Soergel, 1985, p. 230). 
Request-oriented classification may be classification that is targeted towards a particular audience or user group. For example, a library or a database for feminist studies may classify/index documents differently when compared to a historical library. It is probably better, however, to understand request-oriented classification as policy-based classification: The classification is done according to some ideals and reflects the purpose of the", "entity be found?” and “think of all the possible queries and decide for which ones the entity at hand is relevant” (Soergel, 1985, p. 230). Request-oriented classification may be classification that is targeted towards a particular audience or user group. For example, a library or a database for feminist studies may classify/index documents differently when compared to a historical library. It is probably better, however, to understand request-oriented classification as policy-based classification: The classification is done according to some ideals and reflects the purpose of the library or database doing the classification. In this way it is not necessarily a kind of classification or indexing based on user studies. Only if empirical data about use or users are applied should request-oriented classification be regarded as a user-based approach. Classification versus indexing Sometimes a distinction is made between assigning documents to classes (\"classification\") versus assigning subjects to documents (\"subject indexing\") but as Frederick Wilfrid Lancaster has argued, this distinction is not fruitful. \"These terminological distinctions,” he writes, “are quite meaningless and only serve to cause confusion” (Lancaster, 2003, p. 21). The view that this distinction is purely superficial is also supported by the fact that a classification system may be transformed into a thesaurus and vice versa (cf., Aitchison, 1986, 2004; Broughton, 2008; Riesthuis & Bliedung, 1991). 
Therefore, the act of labeling a document (say by assigning a term from a controlled vocabulary to a document) is at the same time to assign that document to the class of documents indexed by that term (all documents indexed or classified as X belong to the same class of documents). In other words, labeling a document is the same as assigning it to the class of documents indexed under that label. Automatic document classification (ADC) Automatic document classification tasks can be divided into three sorts: supervised document classification where some external mechanism (such as human feedback) provides information on the correct classification for documents, unsupervised document classification (also known as document clustering), where the classification must be done entirely without reference to external information, and semi", "of some 350.000 documents. This was facilitated by data generated within the framework of an EU-supported project \"EuropeanaLocal\". For this exploration, three ICC hierarchical levels have been used for some 5000 terms. The result is described in the report of Christoph Mak. Prof. Koch regarded a classification degree of almost 50% as a good result, considering that only a shortened version of ICC had been used. In order to reach a better result one would have needed 1–2 years. Also an index of all terms with their codes could be achieved under these explorations. Data Linkage Motivated by the work of an Italian research group in Trento on Revising the Wordnet Domains Hierarchy: semantics, coverage and balancing, by which the DDC codes were used, Prof. Ernesto William De Luca et al. showed in a study that for such a case the use of ICC could lead to essentially better results. 
This was shown in two contributions: Including knowledge domains from the ICC into the Multilingual Lexical Linked Data Cloud (LLD) and Die Multilingual Lexical Linked Data Cloud: Eine mögliche Zugangsoptimierung?, in which the LLD was used in a meta-model which contains all resources with the possibility of retrieval and navigation of data from different aspects. By this, the existing work about many thousand knowledge fields (of ICC) can be combined with the Multilingual Lexical Linked Data Cloud, based on RDF/OWL representation of EuroWordNet and similar integrated lexical resources (MultiWordNet, MEMODATA and the Hamburg Metapher BD). Semantic Web structuring In October 2013, the computer scientist Hermann Bense, Dortmund, explored the possibilities for structuring the Semantic Web with ICC codes. He developed two approaches for a pictorial presentation of knowledge fields with their possible subdivisions. A graphic representation of those knowledge fields pertaining to the first two levels can be found under Ontology4. The inclusion of the third hierarchical level has been envisaged as the next step. Some potential applications of ICC in its present form Possibility to roughly structure documents, especially bibliographies and reference works. Structuring personal repertories, e.g. a Who's Who in Who's Who in Classification and Index", "aggregate elements, which contain further sub-elements. The semantics of an element are determined by its context: they are affected by the parent or container element in the hierarchy and by other elements in the same container. For example, the various Description elements (1.4, 5.10, 6.3, 7.2.2, 8.3 and 9.3) each derive their context from their parent element. In addition, description element 9.3 also takes its context from the value of element 9.1 Purpose in the same instance of Classification. 
The data model specifies that some elements may be repeated either individually or as a group; for example, although the elements 9.2 (Description) and 9.1 (Purpose) can only occur once within each instance of the Classification container element, the Classification element may be repeated - thus allowing many descriptions for different purposes. The data model also specifies the value space and datatype for each of the simple data elements. The value space defines the restrictions, if any, on the data that can be entered for that element. For many elements, the value space allows any string of Unicode character to be entered, whereas other elements entries must be drawn from a declared list (i.e. a controlled vocabulary) or must be in a specified format (e.g. date and language codes). Some element datatypes simply allow a string of characters to be entered, and others comprise two parts, as described below: LangString items contain Language and String parts, allowing the same information to be recorded in multiple languages Vocabulary items are constrained in such a way that their entries have to be chosen from a controlled list of terms - composed of Source-Value pairs - with the Source containing the name of the list of terms being used and the Value containing the chosen term DateTime and Duration items contain one part that allows the date or duration to be given in a machine readable format, and a second that allows a description of the date or duration (for example \"mid summer, 1968\"). When implementing the LOM as a data or service provider, it is not necessary to support all the elements in the data model, nor need the LOM data model limit the information which may be provided. The creation of an application profile allows a community of users to specify which elements and vocabula", "of extant designations of knowledge fields from whatever available reference works. 
This was funded by the German Documentation Society (DGD) (1971-2) under the title of Order system of knowledge fields. In addition, the syllabuses of German universities and polytechniques were explored for relevant terms and documented (1975). Thereafter, it seemed necessary to add definitions from special dictionaries and encyclopediae; it soon appeared that the 12.500 terms included numerous synonyms, so that the whole collection boiled down to about 6.500 concept designations (Project Logstruktur, supported by the German Science Foundation (DFG) 1976-78). The outcome of this work was the formulation of 30 theses which ended up in 12 principles for the new system, published 40 years later under. These principles refer not only to theoretical foundations but also to structure and other organizational aspects of the whole array of knowledge fields. In 1974, the digital position scheme for field subdivision had already been developed to allow for classifying classification literature in the bibliographical section of the first issue of the Journal International Classification. In 1977, the entire ICC was ready for presentation at a seminar in Bangalore, India. A publication of the first three hierarchical levels appeared however only in 1982. It was applied to the bibliography of classification systems and thesauri in vol.1 of the International Classification and Indexing Bibliography; it has been updated. Governing principles These were published in full length in the book Wissensorganisation. Entwicklung, Aufgabe, Anwendung, Zukunft and the article Information Coding Classification. Geschichtliches, Prinzipien, Inhaltliches, hence it suffices to just mention their topics with some necessary additions. Principle 1: Concept theoretical approaches. Concepts are the contents of ICC, they are understood as being units of knowledge. The „birth“ of a concept. Where do the characteristics, the knowledge elements come from? How do conceptual relations arise? 
Principle 2: The four kinds of concept relations and their applications. Principle 3: Decimal numbers form the ICC codes as its universal language. Principle 4: The nine ontical levels of ICC. They were grouped under three captions: Prolegomena (1-3), life sciences (4-6) and human output (7-9): Structure and form Matter and energy" ]
[ "in the index merging approach for single node machines", "in the map-reduce approach for parallel clusters", "in both", "in neither of the two" ]
in the index merging approach for single node machines
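The answer above turns on why document-identifier order matters for single-node index merging: if partitions are processed in order and document IDs are assigned consecutively, per-term posting lists from consecutive partitions can simply be concatenated and remain sorted, with no re-sort at merge time. A toy sketch (the collection, IDs, and helper names are my own illustration, not from the source):

```python
# Sketch of single-node index merging: partitions processed in document-ID
# order yield posting lists that concatenate into sorted lists directly.
from collections import defaultdict

def build_partial_index(docs, first_doc_id):
    """Index one partition; doc IDs continue from first_doc_id."""
    index = defaultdict(list)
    for offset, text in enumerate(docs):
        doc_id = first_doc_id + offset
        for term in sorted(set(text.split())):
            index[term].append(doc_id)
    return index

def merge(indexes):
    """Merge partial indexes in partition order: plain concatenation."""
    merged = defaultdict(list)
    for idx in indexes:
        for term, postings in idx.items():
            merged[term].extend(postings)  # already sorted if order was kept
    return merged

part1 = build_partial_index(["cat dog", "dog fish"], first_doc_id=0)
part2 = build_partial_index(["cat fish", "dog"], first_doc_id=2)
full = merge([part1, part2])
print(dict(full))  # {'cat': [0, 2], 'dog': [0, 1, 3], 'fish': [1, 2]}
```

If the partitions had been merged out of order, each posting list would need an extra sort (or a k-way merge), which is exactly the cost that preserving document-identifier order avoids.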
1427
Which of the following is correct regarding Crowdsourcing?
[ "s rely on the power of the crowd. For example, suppose we want to label a picture according to whether the person in it is an adult or not. This is a Bernoulli labeling problem, and any of us can do it in one or two seconds; it is an easy task for a human being. However, if we have tens of thousands of pictures like this, it is no longer an easy task. That is why we rely on a crowdsourcing framework to make this fast. The crowdsourcing framework consists of two steps. Step one: we dynamically acquire labels for items from the crowd. This is a dynamic procedure: we do not just send the pictures to everyone and collect every response; instead, we proceed adaptively, deciding which picture to send next and which worker in the crowd to hire next, according to his or her historical labeling results. Each picture can be sent to multiple workers, and every worker can work on different pictures. Then, after we have collected enough labels for the different pictures, we go to the second step, where we infer the true label of each picture based on the collected labels. There are multiple ways to do this inference; the simplest is majority vote. The problem is that there is no free lunch: we have to pay workers for each label they provide, and we only have a limited project budget. So the question is how to spend the limited budget in a smart way. Challenges Before presenting the mathematical model, the paper describes the challenges we face. Challenge 1 First of all, items differ in how difficult they are to label. In the previous example, some pictures are easy to classify; in that case you will usually see very consistent labels from the crowd. However, if a picture is ambiguous, people may disagree with each other, resulting in highly inconsistent labeling. 
So we may allocate more resources to such ambiguous tasks. Challenge 2 Another difficulty we often have is that workers are not perfect: sometimes a worker is not responsible and just provides random labels, and of course we would not want to spend our budget on such unreliable workers. Now the problem is that both the difficulty of", "from the 1980s onward. As these global institutions remain state-like or state-centric it is unsurprising that they perpetuate state-like or state-centric approaches to collective problem solving rather than alternative ones. Crowdsourcing is a process of accumulating ideas, thoughts, or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow for many people to be involved and facilitate managing their suggestions in ways that provide good results. The Internet allows for a new capacity of collective (including planetary-scale) problem solving. See also Actuarial science – Statistics applied to risk in insurance and other financial products Analytical skill – Crucial skill in all different fields of work and life Creative problem-solving – Mental process of problem solving Collective intelligence – Group intelligence that emerges from collective efforts Community of practice Coworking – Practice of independent contractors or scientists sharing office space without supervision Crowdsolving – Sourcing services or funds from a group Divergent thinking – A process of generating creative ideas Grey problem – IT service problem where the causing technology is unknown or unconfirmed, making the problem solving difficult to allocate Innovation – Practical implementation of improvements Instrumentalism – Position in the philosophy of science Problem-posing education – Method of teaching coined by Paulo Freire Problem statement – 
Description of an issue Problem structuring methods Shared intentionality – Ability to engage with others' psychological states Structural fix – solving a problem or resolving a conflict by bringing about structural changes in underlying structures that provoked or sustained these problems Subgoal labeling – Cognitive process Troubleshooting – Form of problem solving, often applied to repair failed products or processes Wicked problem – Problem that is difficult or impossible to solve Notes Further reading Beckmann, Jens F.; Guthke, Jürgen (1995). \"Complex problem solving, intelligence, and learning ability\". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erl", "ry and cell biology. For example, the increase in the strength of interactions between proteins and DNA produced by crowding may be of key importance in processes such as transcription and DNA replication. Crowding has also been suggested to be involved in processes as diverse as the aggregation of hemoglobin in sickle-cell disease, and the responses of cells to changes in their volume. The importance of crowding in protein folding is of particular interest in biophysics. Here, the crowding effect can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation. Crowding may also increase the effectiveness of chaperone proteins such as GroEL in the cell, which could counteract this reduction in folding efficiency. It has also been shown that macromolecular crowding affects protein-folding dynamics as well as overall protein shape where distinct conformational changes are accompanied by secondary structure alterations implying that crowding-induced shape changes may be important for protein function and malfunction in vivo. 
A particularly striking example of the importance of crowding effects involves the crystallins that fill the interior of the lens. These proteins have to remain stable and in solution for the lens to be transparent; precipitation or aggregation of crystallins causes cataracts. Crystallins are present in the lens at extremely high concentrations, over 500 mg/ml, and at these levels crowding effects are very strong. The large crowding effect adds to the thermal stability of the crystallins, increasing their resistance to denaturation. This effect may partly explain the extraordinary resistance shown by the lens to damage caused by high temperatures. Crowding may also play a role in diseases that involve protein aggregation, such as sickle cell anemia where mutant hemoglobin forms aggregates and alzheimer's disease, where tau protein forms neurofibrillary tangles under crowded conditions within neurons. Study Due to macromolecular crowding, enzyme assays and biophysical measurements performed in dilute solution may fail to reflect the actual process and its ki", "model lets us infer the relationship between all four teams, even though not all teams have played each other. Variations Crowd-BT The Crowd-BT model, developed in 2013 by Chen et al, attempts to extend the standard Bradley–Terry model for crowdsourced settings while reducing the number of comparisons needed by taking into account the reliability of each judge. In particular, it identifies and excludes judges presumed to be spammers (selecting choices at random) or malicious (selecting always the wrong choice). In a crowdsourced task of ranking documents by reading difficulty with 624 judges contributing up to 40 pairwise comparisons each, Crowd-BT was shown to outperform both standard Bradley–Terry as well as ranking system TrueSkill. It has been recommended for use when quality results are valued over efficiency and the number of comparisons is high. 
See also Ordinal regression Rasch model Scale (social sciences) Elo rating system Thurstonian model", "and offering performance improvements over OPTICS by using an R-tree index. The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data. Mean-shift is a clustering approach where each object is moved to the densest area in its vicinity, based on kernel density estimation. Eventually, objects converge to local maxima of density. Similar to k-means clustering, these \"density attractors\" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-Means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails. Density-based clustering examples Grid-based clustering The grid-based technique is used for a multi-dimensional data set. In this technique, we create a grid structure, and the comparison is performed on grids (also known as cells). The grid-based technique is fast and has low computational complexity. There are two types of grid-based clustering methods: STING and CLIQUE. Steps involved in the grid-based clustering algorithm are: Divide data space into a finite number of cells. Randomly select a cell ‘c’, where c should not be traversed beforehand. 
Calculate the density of ‘c’. If the density of ‘c’ is greater than the threshold density, mark cell ‘c’ as a new cluster. Calculate the density of all the neighbor" ]
[ "Random Spammers give always the same answer for every question", "It is applicable only for binary classification problems", "Honey Pot discovers all the types of spammers but not the sloppy workers", "The output of Majority Decision can be equal to the one of Expectation-Maximization" ]
The output of Majority Decision can be equal to the one of Expectation-Maximization
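The correct choice above says Majority Decision and Expectation-Maximization *can* produce the same output. A toy sketch makes this plausible: below, a simplified one-coin, Dawid–Skene-style EM is run next to a plain majority vote on a tiny hand-made label set. The data and the per-worker accuracy model are illustrative assumptions, not from the source; on data with spammers or very skewed worker quality the two methods can of course diverge.

```python
from collections import Counter

# labels[item] = list of (worker, label) pairs with labels in {0, 1};
# the data and the one-coin worker-accuracy model are illustrative.
labels = {
    "a": [("w1", 1), ("w2", 1), ("w3", 0)],
    "b": [("w1", 0), ("w2", 0), ("w3", 0)],
    "c": [("w1", 1), ("w2", 0), ("w3", 1)],
}

def majority(labels):
    # Majority Decision: most frequent label per item.
    return {item: Counter(l for _, l in votes).most_common(1)[0][0]
            for item, votes in labels.items()}

def em(labels, iters=20):
    # Simplified one-coin EM: alternate between estimating soft item labels
    # (E-step) and per-worker accuracies (M-step).
    p = {i: sum(l for _, l in v) / len(v) for i, v in labels.items()}
    workers = {w for votes in labels.values() for w, _ in votes}
    for _ in range(iters):
        acc = {}
        for w in workers:
            obs = [p[i] if l == 1 else 1 - p[i]
                   for i, votes in labels.items() for ww, l in votes if ww == w]
            acc[w] = sum(obs) / len(obs)
        for i, votes in labels.items():
            like1 = like0 = 1.0
            for w, l in votes:
                like1 *= acc[w] if l == 1 else 1 - acc[w]
                like0 *= acc[w] if l == 0 else 1 - acc[w]
            p[i] = like1 / (like1 + like0)
    return {i: int(p[i] >= 0.5) for i in labels}

print(majority(labels))  # {'a': 1, 'b': 0, 'c': 1}
print(em(labels))        # same labels on this toy data
```

Here the workers are all reasonably reliable, so EM's accuracy-weighted vote lands on the same labels as the simple majority, which is exactly the situation the correct answer describes.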
1428
When computing PageRank iteratively, the computation ends when...
[ "Computation PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method or the power method. The basic mathematical operations performed are identical. Iterative At t = 0, an initial probability distribution is assumed, usually PR(p_i; 0) = 1/N, where N is the total number of pages and p_i is page i at time 0. At each time step, the computation, as detailed above, yields PR(p_i; t+1) = (1 - d)/N + d * sum_{p_j in M(p_i)} PR(p_j; t) / L(p_j), where d is the damping factor, or in matrix notation R(t+1) = d M R(t) + ((1 - d)/N) 1, where R_i(t) = PR(p_i; t) and 1 is the column vector of length N containing only ones. The matrix M is defined as M_ij = 1/L(p_j) if j links to i, and 0 otherwise, i.e., M := (K^{-1} A)^T
One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). Several strategies have been proposed to accelerate the computation of PageRank. Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets. Computation PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method or the power method. The basic mathematical operations performed are identical. Iterative At t = 0 {\\displaystyle t=0}, an initial probability distribution is assumed, usually P R ( p i ; 0 ) = 1 N {\\displaystyle PR(p_{i};0)={\\frac {1}{N}}}. where N is the total number of pages, and", "When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability, d, is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. 
So, the equation is as follows: P R ( p i ) = 1 − d N + d ∑ p j ∈ M ( p i ) P R ( p j ) L ( p j ) {\\displaystyle PR(p_{i})={\\frac {1-d}{N}}+d\\sum _{p_{j}\\in M(p_{i})}{\\frac {PR(p_{j})}{L(p_{j})}}} where p 1, p 2,..., p N {\\displaystyle p_{1},p_{2},...,p_{N}} are the pages under consideration, M ( p i ) {\\displaystyle M(p_{i})} is the set of pages that link to p i {\\displaystyle p_{i}}, L ( p j ) {\\displaystyle L(p_{j})} is the number of outbound links on page p j {\\displaystyle p_{j}}, and N {\\displaystyle N} is the total number of pages. The PageRank values are the entries of the dominant right eigenvector of the modified adjacency matrix rescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is R = [ P R ( p 1 ) P R ( p 2 ) <unk> P R ( p N ) ] {\\displaystyle \\mathbf {R} ={\\begin{bmatrix}PR(p_{1})\\\\PR(p_{2})$vdots \\\\PR(p_{N})\\end{bmatrix}}} where R is the", "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, all patents associated with PageRank have expired. Description PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of \"measuring\" its relative importance within the set. 
The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by P R ( E ). {\\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it (\"incoming links\"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking", "as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25. The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75. P R ( A ) = P R ( B ) + P R ( C ) + P R ( D ). 
{\\displaystyle PR(A)=PR(B)+PR(C)+PR(D).\\,} Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value (0.125) to page A and the other half (0.125) to page C. Page C would transfer all of its existing value (0.25) to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458. P R ( A ) = P R ( B ) 2 + P R ( C ) 1 + P R ( D ) 3. {\\displaystyle PR(A)={\\frac {PR(B)}{2}}+{\\frac {PR(C)}{1}}+{\\frac {PR(D)}{3}}.\\,} In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L( ). P R ( A ) = P R ( B ) L (" ]
[ "The difference among the eigenvalues of two subsequent iterations falls below a predefined threshold", "The norm of the difference of rank vectors of two subsequent iterations falls below a predefined threshold", "All nodes of the graph have been visited at least once", "The probability of visiting an unseen node falls below a predefined threshold" ]
['The norm of the difference of rank vectors of two subsequent iterations falls below a predefined threshold']
1430
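As a worked illustration of the stopping criterion named in the answer above (the norm of the difference between two successive rank vectors falling below a threshold), here is a minimal power-iteration PageRank sketch. The link matrix, damping factor default, and tolerance are made-up toy values, not from the passage:

```python
import numpy as np

def pagerank(M, d=0.85, tol=1e-8, max_iter=100):
    """Iterative (power-method) PageRank.

    M is the column-stochastic link matrix: M[i, j] = 1/L(p_j) if page j
    links to page i, else 0.  Iteration stops when the L1 norm of the
    difference between two successive rank vectors falls below `tol`.
    """
    n = M.shape[0]
    r = np.full(n, 1.0 / n)                       # PR(p_i; 0) = 1/N
    for _ in range(max_iter):
        r_next = (1 - d) / n + d * (M @ r)        # PR(p_i; t+1)
        if np.linalg.norm(r_next - r, 1) < tol:   # convergence criterion
            return r_next
        r = r_next
    return r

# Toy web: page 0 -> 1, page 1 -> {0, 2}, page 2 -> 0
M = np.array([[0.0, 0.5, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
ranks = pagerank(M)
```

Since the update preserves the probability mass, the returned vector stays a distribution, and page 0 (which receives links from both other pages) ends up ranked highest in this toy graph.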
How does LSI querying work?
[ "ing CBWFQ and LLQ External links Low Latency Queuing (LLQ) Cisco QoS – Low Latency Queuing Bandwidth Sharing within CBWFQ/LLQ", "A query-level feature or QLF is a ranking feature utilized in a machine-learned ranking algorithm. Example QLFs: How many times has this query been run in the last month? How many words are in the query? What is the sum/average/min/max/median of the BM25F values for the query?", "al Information (LPI)\". doi:10.1016/j.ejor.2003.10.049. : Cite journal requires |journal= (help) One-shot decisions applying the Linear Partial Information (LPI)", "nQuery is a clinical trial design platform used for the design and monitoring of adaptive, group sequential, and fixed sample size trials. It is most commonly used by biostatisticians to calculate sample size and statistical power for adaptive clinical trial design. nQuery is proprietary software developed and distributed by Statsols. The software includes calculations for over 1,000 sample sizes and power scenarios. History Janet Dixon Elashoff, creator of nQuery, is a retired American statistician and daughter of the mathematician and statistician Wilfrid Joseph Dixon, creator of BMDP. Elashoff is also the retired Director of the Division of Biostatistics, Cedars-Sinai Medical Center. While at UCLA and Cedars-Sinai during the 1990s, she wrote the program nQuery Sample Size Software (then named nQuery Advisor). This software quickly became widely used to estimate the sample size requirements for pharmaceutical trials. She joined the company Statistical Solutions LLC in order to commercialize it. In June 2020, nQuery was acquired by Insightful Science. Uses nQuery is used for adaptive clinical trial design. Trials with an adaptive design have been reported to be more efficient, informative, and ethical than trials with a traditional fixed design because they conserve resources such as time and money and often require fewer participants. 
nQuery allows researchers to apply both frequentist and Bayesian statistics to calculate the appropriate sample size for their study. References External links Official Statsols Page for nQuery", "A query language, also known as data query language or database query language (DQL), is a computer language used to make queries in databases and information systems. In database systems, query languages rely on strict theory to retrieve information. A well known example is the Structured Query Language (SQL). Types Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry. Other types of query languages include: Full-text. The simplest query language is treating all terms as bag of words that are to be matched with the postings in the inverted index and where subsequently ranking models are applied to retrieve the most relevant documents. Only tokens are defined in the CFG. Web search engines often use this approach. Boolean. A query language that also supports the use of the Boolean operators AND, OR, NOT. Structured. A language that supports searching within (a combination of) fields when a document is structured and has been indexed using its document structure. Natural language. A query language that supports natural language by parsing the natural language query to a form that can be best used to retrieve relevant documents, for example with Question answering systems or conversational search. Examples Attempto Controlled English is a query language that is also a controlled natural language. 
AQL is a query language for the ArangoDB native multi-model database system..QL is a proprietary object-oriented query language for querying relational databases; successor of Datalog; CodeQL is the analysis engine used by developers to automate security checks, and by security researchers to perform variant analysis on GitHub. Contextual Query Language (CQL) a formal language for representing queries to information retrieval systems such as web indexes or bibliographic catalogues. Cypher is a query language for the Neo4j graph database; DMX is a query language for data mining models;" ]
[ "The query vector is treated as an additional term; then cosine similarity is computed", "The query vector is transformed by Matrix S; then cosine similarity is computed", "The query vector is treated as an additional document; then cosine similarity is computed", "The query vector is multiplied with an orthonormal matrix; then cosine similarity is computed" ]
The query vector is treated as an additional document; then cosine similarity is computed
1433
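The answer above (the query is folded into the latent space as if it were an additional document, then compared by cosine similarity) can be sketched with a toy term-document matrix; the matrix contents are made up, and the folding formula q_k = Sigma_k^{-1} U_k^T q is the standard LSI "fold-in":

```python
import numpy as np

# Toy term-document matrix (terms x documents); counts are made up.
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

k = 2                                        # number of latent concepts
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T    # Vk rows = docs in concept space

# Fold the query in as if it were an extra document: q_k = Sigma_k^{-1} U_k^T q
q = np.array([1, 1, 0, 0], dtype=float)      # query containing terms 0 and 1
qk = (Uk.T @ q) / sk

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = [cosine(qk, doc) for doc in Vk]
best = int(np.argmax(sims))
```

Because the query here is identical to document 0, its folded-in representation coincides with document 0's concept-space vector, so their cosine similarity is 1.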
Suppose that an item in a leaf node N exists in every path. Which one is correct?
[ "of the ring where id starts with 1. The routing table therefore contains log2(n) rows populated as follows. Level 0: 1* -> this enables to forward to any node with an id starting with a 1 Level 1: 00* -> this enables to forward to any node with an id starting with a 00 Level 2: 010*-> this enables to forward to any node with an starting with a 010 Etc. Imagine that node 01110101 wants to send a message to key 100000000000. It will forward the message to the node present in the first row of its routing table. That node will then forward it to a node in the second level of its routing table starting with 10 and so on and so forth. objId nodeIds O 2 128- 1 Leaf set In addition, in Pastry, each node maintains a leafset, which are the k closest nodes (typically k is 16). This set is maintained aggressively through a heartbeat-based protocol to avoid partitions. The leafset is also used at the last hops of any routing operations. If the leafset contains the destination node, the routing operation is stopped. This eventually ensures that the system can tolerate up to k/2 node failures with adjacent ids. Basically, the routing state to maintain is O(log(n)), n being the size of the system and the routing from one node to another is achieved in O(log(n)) steps on average. Node departures or failures A node may voluntarily leave the system and in that case will handle explicitly and cleanly its departure. In case of a node failure, the leafset is used to detect the failures (the detection can also happen when a contact fails during the routing) and recover from the failure Node joins When a node X joins the network, we assume that it contacts one node, say A. The algorithm for joining the network and populating its routing table is to route a message from A to its own Id X. The message will be received by Z, the destination node. 
X will get its leafset from Z, and populates the routing table b of the line i of the", "s to a node in this tree at depth l {\\displaystyle \\ell }. The i {\\displaystyle i} th word in the prefix code corresponds to a node v i {\\displaystyle v_{i}} let A i {\\displaystyle A_{i}} be the set of all leaf nodes (i.e. of nodes at depth l n {\\displaystyle \\ell _{n}} ) in the subtree of A {\\displaystyle A} rooted at v i {\\displaystyle v_{i}}. That subtree being of height l n − l i {\\displaystyle \\ell _{n}-\\ell _{i}}, we have | A i | = r l n − l i. {\\displaystyle |A_{i}|=r^{\\ell _{n}-\\ell _{i}}.} Since the code is a prefix code, those subtrees cannot share any leaves, which means that A i ∩ A j = ∅, i ≠ j. {\\displaystyle A_{i}\\cap A_{j}=\\varnothing,\\quad i\\neq j.} Thus, given that the total number of nodes at depth l n {\\displaystyle \\ell _{n}} is r l n {\\displaystyle r^{\\ell _{n}}}, we have | <unk> i = 1 n A i | = ∑ i = 1 n | A i | = ∑ i = 1 n r l n − l i <unk> r l n {\\displaystyle \\left|\\bigcup _{i=1}^{n}A_{i}\\right|=\\sum _{i=1}^{n}|A_{i}|=\\sum _{i=1}^{n}r^{\\ell _{n}-\\ell _{i}}\\leqslant r^{\\ell _{n}}} from which the result follows. Conversely, given any ordered sequence of n {\\displaystyle n} natural numbers, l 1 <unk> l 2 <unk> ⋯ <unk> l n {\\displaystyle \\ell _{1", "assumption or by τ {\\displaystyle \\tau } If N {\\displaystyle N} is not a leaf node, then there is an inference rule l N ← s 1,..., s m {\\displaystyle l_{N}\\leftarrow s_{1},...,s_{m}}, ( m ≥ 0 ) {\\displaystyle (m\\geq 0)}, where l N {\\displaystyle l_{N}} is the label of N {\\displaystyle N} and If m = 0 {\\displaystyle m=0}, then the rule shall be l N ← τ {\\displaystyle l_{N}\\leftarrow \\tau } (i.e. 
child of N {\\displaystyle N} is τ {\\displaystyle \\tau } ) Otherwise, N {\\displaystyle N} has m {\\displaystyle m} children, labelled by s 1,..., s m {\\displaystyle s_{1},...,s_{m}} S {\\displaystyle S} is the set of all assumptions labeling the leave nodes An argument with claim c {\\displaystyle c} supported by a set of assumption S {\\displaystyle S} can also be denoted as S <unk> c {\\displaystyle S\\vdash c} See also Notes", "uses a priority queue where the node with lowest probability is given highest priority: Create a leaf node for each symbol and add it to the priority queue. While there is more than one node in the queue: Remove the two nodes of highest priority (lowest probability) from the queue Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. Add the new node to the queue. The remaining node is the root node and the tree is complete. Since efficient priority queue data structures require O(log n) time per insertion, and a tree with n leaves has 2n−1 nodes, this algorithm operates in O(n log n) time, where n is the number of symbols. If the symbols are sorted by probability, there is a linear-time (O(n)) method to create a Huffman tree using two queues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. This assures that the lowest weight is always kept at the front of one of the two queues: Start with as many leaves as there are symbols. Enqueue all leaf nodes into the first queue (by probability in increasing order so that the least likely item is in the head of the queue). While there is more than one node in the queues: Dequeue the two nodes with the lowest weight by examining the fronts of both queues. 
Create a new internal node, with the two just-removed nodes as children (either node can be either child) and the sum of their weights as the new weight. Enqueue the new node into the rear of the second queue. The remaining node is the root node; the tree has now been generated. Once the Huffman tree has been generated, it is traversed to generate a dictionary which maps the symbols to binary codes as follows: Start with current node set to the root. If node is not a leaf node, label the edge to", "s for performance. Trees as used in computing are similar to but can be different from mathematical constructs of trees in graph theory, trees in set theory, and trees in descriptive set theory. Terminology A node is a structure which may contain data and connections to other nodes, sometimes called edges or links. Each node in a tree has zero or more child nodes, which are below it in the tree (by convention, trees are drawn with descendants going downwards). A node that has a child is called the child's parent node (or superior). All nodes have exactly one parent, except the topmost root node, which has none. A node might have many ancestor nodes, such as the parent's parent. Child nodes with the same parent are sibling nodes. Typically siblings have an order, with the first one conventionally drawn on the left. Some definitions allow a tree to have no nodes at all, in which case it is called empty. An internal node (also known as an inner node, inode for short, or branch node) is any node of a tree that has child nodes. Similarly, an external node (also known as an outer node, leaf node, or terminal node) is any node that does not have child nodes. The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). 
Thus the root node has depth zero, leaf nodes have height zero, and a tree with only a single node (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (tree with no nodes, if such are allowed) has height −1. Each non-root node can be treated as the root node of its own subtree, which includes that node and all its descendants. Other terms used with trees: Neighbor Parent or child. Ancestor A node reachable by repeated proceeding from child to parent. Descendant A node reachable by" ]
[ "N co-occurs with its prefix in every transaction.", "For every node P that is a parent of N in the fp tree, confidence(P->N) = 1", "N’s minimum possible support is equal to the number of paths.", "The item N exists in every candidate set." ]
['N’s minimum possible support is equal to the number of paths.']
1434
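The answer above (an item present in every FP-tree path has minimum possible support equal to the number of paths) can be checked numerically: each path carries a count of at least 1, so summing the item's count over all paths is bounded below by the path count. The path contents and counts here are hypothetical:

```python
from collections import defaultdict

# Hypothetical FP-tree paths, each a (items, count) pair; the leaf item
# "n" appears in every path.  Item names and counts are made up.
paths = [
    (["f", "c", "a", "n"], 3),
    (["f", "b", "n"], 1),
    (["c", "b", "n"], 2),
]

support = defaultdict(int)
for items, count in paths:
    for item in items:
        support[item] += count

num_paths = len(paths)
# Every path count is >= 1, so an item occurring in all paths has
# support >= num_paths; the minimum (all counts exactly 1) equals num_paths.
assert support["n"] >= num_paths
```

Here support("n") = 3 + 1 + 2 = 6, which is at least the 3 paths; it would equal 3 exactly if every path count were 1.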
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
[ ". Examples of ranking quality measures: Mean average precision (MAP); DCG and NDCG; Precision@n, NDCG@n, where \"@n\" denotes that the metrics are evaluated only on top n documents; Mean reciprocal rank; Kendall's tau; Spearman's rho. DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. Other metrics such as MAP, MRR and precision, are defined only for binary judgments. Recently, there have been proposed several new evaluation metrics which claim to model user's satisfaction with search results better than the DCG metric: Expected reciprocal rank (ERR); Yandex's pfound. Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document, than after a less relevant document. Approaches Learning to Rank approaches are often categorized using one of three approaches: pointwise (where individual documents are ranked), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents are ordered). Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval. He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approach. In practice, listwise approaches often outperform pairwise approaches and pointwise approaches. This statement was further supported by a large scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets. 
In this section, without further notice, x {\\displaystyle x} denotes an object to be evaluated, for example, a document or an image, f ( x ) {\\displaystyle f(x)} denotes a single-value hypothesis, h ( ⋅ ) {\\displaystyle h(\\cdot )} denotes a bi-variate or multi-variate function and L ( ⋅ ) {\\displaystyle L(\\cdot )} denotes the loss function", "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, all patents associated with PageRank have expired. Description PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of \"measuring\" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by P R ( E ). {\\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. 
The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it (\"incoming links\"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking", "versions). A pair d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}} is concordant if both r a {\\displaystyle r_{a}} and r b {\\displaystyle r_{b}} agree in how they order d i {\\displaystyle d_{i}} and d j {\\displaystyle d_{j}}. It is discordant if they disagree. Information retrieval quality Information retrieval quality is usually evaluated by the following three measurements: Precision Recall Average precision For a specific query to a database, let P r e l e v a n t {\\displaystyle P_{relevant}} be the set of relevant information elements in the database and P r e t r i e v e d {\\displaystyle P_{retrieved}} be the set of the retrieved information elements. 
Then the above three measurements can be represented as follows: precision = | P relevant ∩ P retrieved | | P retrieved | ; recall = | P relevant ∩ P retrieved | | P relevant | ; average precision = ∫ 0 1 Prec ( recall ) d recall, {\\displaystyle {\\begin{aligned}&{\\text{precision}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{retrieved}}\\right|}};\\\\[6pt]&{\\text{recall}}={\\frac {\\left|P_{\\text{relevant}}\\cap P_{\\text{retrieved}}\\right|}{\\left|P_{\\text{relevant}}\\right|}};\\\\[6pt]&{\\text{average precision}}=\\int _{0}^{1}{\\text{Prec}}({\\text{recall}})\\,d{\\text{recall}},$end{aligned}}} where Prec ( Recall ) {\\displaystyle {\\text{Prec}}({\\text{Recall}})} is the Precision {\\displaystyle {\\text{Precision}}}", "in Google Toolbar, though the PageRank continued to be used internally to rank content in search results. SERP rank The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200). Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword. 
The PageRank of the HomePage of a website is the best indication Google offers for website authority. After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results. When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google. Google directory PageRank The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which showed a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011. False or spoofed PageRank It was known that the PageRank shown in the Toolbar could easily be spoofed. Redirection from one page to another, either via a HTTP 302 response or a \"Refresh\"", "Retrievability is a term associated with the ease with which information can be found or retrieved using an information system, specifically a search engine or information retrieval system. A document (or information object) has high retrievability if there are many queries which retrieve the document via the search engine, and the document is ranked sufficiently high that a user would encounter the document. Conversely, if there are few queries that retrieve the document, or when the document is retrieved the documents are not high enough in the ranked list, then the document has low retrievability. Retrievability can be considered as one aspect of findability. Applications of retrievability include detecting search engine bias, measuring algorithmic bias, evaluating the influence of search technology, tuning information retrieval systems and evaluating the quality of documents in a collection. 
See also Information retrieval Knowledge mining Search engine optimization Findability References Azzopardi, L. & Vinay, V. (2008). \"Retrievability: an evaluation measure for higher order information access tasks\". Proceedings of the 17th ACM conference on Information and knowledge management. CIKM '08. Napa Valley, California, USA: ACM. pp. 561–570. doi:10.1145/1458082.1458157. ISBN 9781595939913. S2CID 8705350. Azzopardi, L. & Vinay, V. (2008). \"Accessibility in information retrieval\". Proceedings of the IR research, 30th European conference on Advances in information retrieval. ECIR '08. Glasgow, UK: Springer. pp. 482–489. ISBN 9783540786450. Retrieved 7 Dec 2016." ]
[ "P@k-1 > P@k+1", "P@k-1 = P@k+1", "R@k-1 < R@k+1", "R@k-1 = R@k+1" ]
R@k-1 < R@k+1
1439
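The answer above follows because recall's numerator cannot decrease as k grows, and strictly increases when the document added at position k+1 is relevant. A minimal numeric check, with a made-up relevance list and total relevant count:

```python
def precision_at(k, rels):
    """P@k: fraction of the top-k results that are relevant."""
    return sum(rels[:k]) / k

def recall_at(k, rels, total_relevant):
    """R@k: fraction of all relevant documents found in the top k."""
    return sum(rels[:k]) / total_relevant

# Relevance of the ranked list (1 = relevant); as in the question,
# position k is non-relevant and position k+1 is relevant.
rels = [1, 1, 0, 1, 0, 1]   # hypothetical ranking
total_relevant = 4          # assume 4 relevant docs exist overall
k = 3                       # rels[k-1] == 0, rels[k] == 1

r_before = recall_at(k - 1, rels, total_relevant)   # R@(k-1)
r_after  = recall_at(k + 1, rels, total_relevant)   # R@(k+1)
assert r_before < r_after   # always holds under the question's setup
```

Position k contributes nothing to the numerator and position k+1 contributes one relevant document, so R@(k-1) < R@(k+1) holds regardless of the rest of the ranking; the precision relations in the other options, by contrast, depend on what precedes position k.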
Regarding the number of times the Apriori algorithm and the FP-growth algorithm for association rule mining scan the transaction database, which of the following is true?
[ "Frequent pattern discovery (or FP discovery, FP mining, or Frequent itemset mining) is part of knowledge discovery in databases, Massive Online Analysis, and data mining; it describes the task of finding the most frequent and relevant patterns in large datasets. The concept was first introduced for mining transaction databases. Frequent patterns are defined as subsets (itemsets, subsequences, or substructures) that appear in a data set with frequency no less than a user-specified or auto-determined threshold. Techniques Techniques for FP mining include: market basket analysis cross-marketing catalog design clustering classification recommendation systems For the most part, FP discovery can be done using association rule learning with particular algorithms Eclat, FP-growth and the Apriori algorithm. Other strategies include: Frequent subtree mining Structure mining Sequential pattern mining and respective specific techniques. Implementations exist for various machine learning systems or modules like MLlib for Apache Spark.", "Relational data mining is the data mining technique for relational databases. Unlike traditional data mining algorithms, which look for patterns in a single table (propositional patterns), relational data mining algorithms look for patterns among multiple tables (relational patterns). For most types of propositional patterns, there are corresponding relational patterns. For example, there are relational classification rules (relational classification), relational regression tree, and relational association rules. 
There are several approaches to relational data mining: Inductive Logic Programming (ILP) Statistical Relational Learning (SRL) Graph Mining Propositionalization Multi-view learning Algorithms Multi-Relation Association Rules: Multi-Relation Association Rules (MRAR) is a new class of association rules which in contrast to primitive, simple and even multi-relational association rules (that are usually extracted from multi-relational databases), each rule item consists of one entity but several relations. These relations indicate indirect relationship between the entities. Consider the following MRAR where the first item consists of three relations live in, nearby and humid: “Those who live in a place which is near by a city with humid climate type and also are younger than 20 -> their health condition is good”. Such association rules are extractable from RDBMS data or semantic web data. Software Safarii: a Data Mining environment for analysing large relational databases based on a multi-relational data mining engine. Dataconda: a software, free for research and teaching purposes, that helps mining relational databases without the use of SQL. Datasets Relational dataset repository: a collection of publicly available relational datasets. See also Data mining Structure mining Database mining References External links Web page for a text book on relational data mining", "In deep learning, pruning is the practice of removing parameters from an existing artificial neural network. The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the computational resources required to run it) whilst maintaining accuracy. This can be compared to the biological process of synaptic pruning which takes place in mammalian brains during development. Node (neuron) pruning A basic algorithm for pruning is as follows: Evaluate the importance of each neuron. 
Rank the neurons according to their importance (assuming there is a clearly defined measure for \"importance\"). Remove the least important neuron. Check a termination condition (to be determined by the user) to see whether to continue pruning. Edge (weight) pruning Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work suggested to also change the values of non-pruned weights. See also Knowledge distillation Neural Darwinism", "background - where foreground processes are given high priority) to understand non pre-emptive and pre-emptive multilevel scheduling in depth with FCFS algorithm for both the queues: See also Fair-share scheduling Lottery scheduling", "ncy between variables that is of interest; and once redundant variables are merged, their relationship to one another can no longer be studied. System granulation (aggregation) In database systems, aggregations (see e.g. OLAP aggregation and Business intelligence systems) result in transforming original data tables (often called information systems) into the tables with different semantics of rows and columns, wherein the rows correspond to the groups (granules) of original tuples and the columns express aggregated information about original values within each of the groups. Such aggregations are usually based on SQL and its extensions. The resulting granules usually correspond to the groups of original tuples with the same values (or ranges) over some pre-selected original columns. There are also other approaches wherein the groups are defined basing on, e.g., physical adjacency of rows. For example, Infobright implemented a database engine wherein data was partitioned onto rough rows, each consisting of 64K of physically consecutive (or almost consecutive) rows. Rough rows were automatically labeled with compact information about their values on data columns, often involving multi-column and multi-table relationships. 
It resulted in a higher layer of granulated information where objects corresponded to rough rows and attributes - to various aspects of rough information. Database operations could be efficiently supported within such a new framework, with an access to the original data pieces still available (Slezak et al. 2013). Concept granulation (component analysis) The origins of the granular computing ideology are to be found in the rough sets and fuzzy sets literatures. One of the key insights of rough set research—although by no means unique to it—is that, in general, the selection of different sets of features or variables will yield different concept granulations. Here, as in elementary rough set theory, by \"concept\" we mean a set of entities that are indistinguishable or indiscernible to the observer (i.e., a simple concept), or a set of entities that is composed from such simple concepts (i.e., a complex concept). To" ]
[ "fpgrowth has always strictly fewer scans than apriori", "fpgrowth and apriori can have the same number of scans", "apriori cannot have fewer scans than fpgrowth", "all three above statements are false" ]
['fpgrowth and apriori can have the same number of scans']
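The scan-count comparison in the choices above can be illustrated with a toy level-wise Apriori: it makes one full pass over the transactions per candidate length, while FP-growth canonically makes exactly two passes (one to count items, one to build the FP-tree). A minimal sketch, with a made-up dataset and minimum-support threshold:

```python
def apriori_scan_count(transactions, min_support):
    """Count the full dataset scans a level-wise Apriori needs:
    one scan per candidate-itemset length, stopping when no
    candidates of the next length can be generated."""
    scans = 0
    k = 1
    items = {i for t in transactions for i in t}
    candidates = {frozenset([i]) for i in items}
    while candidates:
        scans += 1  # one pass over all transactions
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = [c for c, n in counts.items() if n >= min_support]
        k += 1
        # join step: length-k candidates from frequent (k-1)-itemsets
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
    return scans

transactions = [frozenset(t) for t in
                [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]]
print(apriori_scan_count(transactions, min_support=2))
```

Here the longest frequent itemset is {"a", "b"} (length 2) and no length-3 candidates survive, so Apriori stops after 2 scans — the same number FP-growth always needs, matching the marked answer.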
1441
Given the following teleporting matrix (Ε) for nodes A, B and C:
[0   ½   0]
[0   0   0]
[0   ½   1]
and making no assumptions about the link matrix (R), which of the following is correct? (Reminder: columns are the probabilities to leave the respective node.)
[ "The network probability matrix describes the probability structure of a network based on the historical presence or absence of edges in a network. For example, individuals in a social network are not connected to other individuals with uniform random probability. The probability structure is much more complex. Intuitively, there are some people whom a person will communicate with or be connected more closely than others. For this reason, real-world networks tend to have clusters or cliques of nodes that are more closely related than others (Albert and Barabasi, 2002, Carley [year], Newmann 2003). This can be simulated by varying the probabilities that certain nodes will communicate. The network probability matrix was originally proposed by Ian McCulloh. References McCulloh, I., Lospinoso, J. & Carley, K.M. (2007). Probability Mechanics in Communications Networks. In Proceedings of the 12th International Conference on Applied Mathematics of the World Science Engineering Academy and Society, Cairo, Egypt. 30–31 December 2007. \"Understanding Network Science,\" (Archived article) https://wayback-beta.archive.org/web/20080830045705/http://zangani.com/blog/2007-1030-networkingscience Linked: The New Science of Networks, A.-L. Barabási (Perseus Publishing, Cambridge (2002). Network Science, The National Academies Press (2005)ISBN 0-309-10026-7 External links Center for Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University U.S. Military Academy Network Science Center The Center for Interdisciplinary Research on Complex Systems at Northeastern University", "4 5 6 1/2 1/2 1 1/3 1/3 1/3 1/2 1/2 1 1 The communicating classes are {1,2,3}, {4}, and {5,6}. Only {5,6} is a closed communicating class. c) (One-dimensional random walk) A random walk on the set of integers Z is a Markov chain on the state space E = Z of the form Xn = X0 + ∑n i=1 εi, n ∈N. 
Here X0 is an integer-valued random variable and (εi)i≥1 are integer-valued and i.i.d. If the distribution of the εi's is given by P(εi = 1) = p, P(εi = -1) = 1-p for some p ∈ (0,1), we call X a simple random walk. A simple random walk starts at some randomly chosen integer given by X0. Then, in each successive step, it jumps to its nearest neighbor on the right with probability p and to its nearest neighbor on the left with probability 1-p. The direction of each jump (i.e. left or right) is independent of previous jumps. The site to which a jump leads does however depend on previous jumps, as the walker can only jump to a nearest neighbor of its current location. The corresponding transition matrix is the infinite tridiagonal matrix P with rows
(⋯ 1-p 0 p 0 0 ⋯)
(⋯ 0 1-p 0 p 0 ⋯)
(⋯ 0 0 1-p 0 p ⋯),
i.e. p_{i,i+1} = p, p_{i,i-1} = 1-p, and all other entries 0. d) (Birth and death chain on N) For i ∈ N let pi, ri, qi be real numbers in [0,1] such that pi + ri + qi = 1. Assume further that q0 = 0. A birth and death chain on N is a Markov chain on the state space E = N with transition matrix P = (pij) satisfying pij = { pi if j = i+1; ri if j = i; qi if j = i-1 }. (2.2) Here, Xn can be interpreted as the size of a population at time n. From one time step to the next, there is either exactly one birth or exactly one death or the population size stays constant. The probabilities of birth are", "... Dangling nodes (= absorbing states) are not the only classes we can get. [Figure: two small example graphs; in the second, nodes 3 and 4 are not dangling, but {3,4} is an absorbing class, so the stationary distribution is not unique.] Solution: add randomization. At every iteration, coin flip: with prob. β, walk on the graph; with prob. 1-β, jump to a random page: P' = βP + (1-β)E. Theorem: If β < 1, p = P'p has exactly one solution for any network graph (with E the uniform teleportation matrix). In practice: 0.8 ≤ β ≤ 0.9, i.e., 5-10 steps on the web graph between random jumps. The PageRank algorithm computes this solution. With the teleportation matrix the chain is irreducible (every page is directly connected to every other page) and aperiodic (p_ii > 0, self-loops from the teleportation matrix); this is enough to avoid periodic patterns. Irreducible + aperiodic = ergodic: a single stationary distribution, the long-term page frequency of a random surfer. Uniform jumps are crude: we can incorporate more information about the a-priori importance of web pages (length of the URL, words in the domain, language, HTML tags, ...). Model: when randomizing, sample from a distribution v over all nodes: P' = βP + (1-β)v1^T. Approach 1: simulate a random walker; in the stationary regime the walker's visit frequencies give the ranks. Problem: with Θ(100bn) web pages: slow convergence, very costly. Approach 2: linear-system method: compute the solution of the corresponding sparse linear system and normalize the rank; efficient for small graphs. Approach 3: power method: p is the (left) dominant eigenvector (eigenvalue = 1) of P', obtained by iterating p_{t+1} = P'p_t. Theorem: approach 2 produces the PageRank vector. Proof idea: the PageRank vector p satisfies p = P'p and ∑i p_i = 1; one shows the linear system has the same solution.", "... The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T has the properties: det ( T ) = A D − B C = 1 {\\displaystyle \\det(T)=AD-BC=1} A = D {\\displaystyle A=D} The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G. The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line.
In such models, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as c refers to the per-unit-length quantity. Lossless line The lossless line approximation is the least accurate; it is typically used on short lines where the inductance is much greater than the resistance. For this approximation, the voltage and current are identical at the sending and receiving ends. The characteristic impedance is pure real, which means resistive for that impedance, and it is often called surge impedance. When a lossless line is terminated by surge impedance, the voltage does not drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the line. For load > SIL, the voltage drops from sending end and the line consumes VARs. For load < SIL, the voltage increases from the sending end, and the line generates VARs. Short line The short line approximation is normally used for lines shorter than 80 km (50 mi). There, only a series impedance Z is considered, while C and G are ignored. The final result is that A = D = 1 per unit, B = Z Ohms, and C = 0. The associated transition matrix for this approximation is therefore: [ V S I S ] = [ 1", "by 0 the all-zero vector in {0, 1}d, and index the 2d eigenvalues and eigenvectors with elements z ∈{0, 1}d. Lemma 1.6. The eigenvalues and eigenvectors of the transition probability matrix are λz = 1 -2|z| d + 1, where |z| = number of non-zero components in z and φ(z) x (-1)z·x ∀x ∈{0, 1}d, where z · x P 1≤t≤d ztxt Proof. (Pφ(z))x = X y∈S pxyφ(z) y = 1 d + 1 φ(z) x + 1 d + 1 X 1≤t≤d φ(z) x+et where φ(z) x+et = (-1)z·(x+et) = (-1)z·x(-1)z·et = φ(z) x (-1)zt Thus (Pφ(z))x = 1 d + 1φ(z) x <unk> <unk>1 + X 1≤t≤d (-1)zt <unk> <unk>= 1 d + 1φ(z) x (1 + d -2|z|) = 1 -2|z| d + 1 φ(z) x which proves that Pφ(z) = 1 -2|z| d + 1 φ(z) Notice in particular that λ0 = 1, φ(0) = (1, · · ·, 1)T, and |φ(z) x | = 1, ∀x, z ∈{0, 1}d. 
The eigenvalues have high multiplicities: for 1 ≤t ≤d, the eigenvalue λ = 1 - 2t d+1 corresponds to\", \"lex\": \"Markov Chains and Algorithmic Applications WEEK The cut o phenomenon Summary of the two previous lecture Recall that we are considering a Markov chain Xn n with transition matrix P and a nite state space S with S N We assume that the chain is ergodic irreducible aperiodic and positive recurrent thus there is a unique stationary and limiting distribution with P and pij n n j i j S Finally we assume that the detailed balance equation is satis ed i pij j pji i j" ]
[ "A random walker can never reach node A", "A random walker can never leave node A", "A random walker can always leave node C", "A random walker can always leave node B" ]
['A random walker can always leave node B']
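Under the stated convention (columns hold leave-probabilities), a walker is guaranteed to leave a node via teleporting alone — with no assumption on the link matrix R — exactly when that node's column sums to 1 and puts no mass on the node itself. A small sketch using the matrix from the question:

```python
# Teleporting matrix from the question: column j gives the
# probabilities of leaving node j via teleportation.
E = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.5, 1.0]]
nodes = ["A", "B", "C"]

def col(j):
    return [row[j] for row in E]

# Guaranteed to leave via teleport alone: column sums to 1
# (teleportation always fires) and no self-loop mass.
always_leaves = {n: bool(sum(col(j)) == 1.0 and E[j][j] == 0.0)
                 for j, n in enumerate(nodes)}
print(always_leaves)
```

Only column B qualifies: column A sums to 0 (teleportation never moves the walker, so leaving depends on R), and column C teleports C back to itself.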
1444
Which of the following methods does not exploit statistics on the co-occurrence of words in a text?
[ "s focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of computational semantics, which is in turn a subfield of computational linguistics and natural language processing. Many of the applications of statistical semantics (listed above) can also be addressed by lexicon-based algorithms, instead of the corpus-based algorithms of statistical semantics. One advantage of corpus-based algorithms is that they are typically not as labour-intensive as lexicon-based algorithms. Another advantage is that they are usually easier to adapt to new languages or noisier new text types from e.g. social media than lexicon-based algorithms are. However, the best performance on an application is often achieved by combining the two approaches. See also References =", "collocations and associations between words. For instance, countings of occurrences and co-occurrences of words in a text corpus can be used to approximate the probabilities p ( x ) {\\displaystyle p(x)} and p ( x, y ) {\\displaystyle p(x,y)} respectively. The following table shows counts of pairs of words getting the most and the least PMI scores in the first 50 millions of words in Wikipedia (dump of October 2015) filtering by 1,000 or more co-occurrences. The frequency of each count can be obtained by dividing its value by 50,000,952. (Note: natural log is used to calculate the PMI values in this example, instead of log base 2) Good collocation pairs have high PMI because the probability of co-occurrence is only slightly lower than the probabilities of occurrence of each word. Conversely, a pair of words whose probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score. References Fano, R M (1961). \"chapter 2\". 
Transmission of Information: A Statistical Theory of Communications. MIT Press, Cambridge, MA. ISBN 978-0262561693. External links Demo at Rensselaer MSR Server (PMI values normalized to be between 0 and 1)", "Techniques for Use in Statistics, Chapman and Hall, London. ISBN 0-412-31400-2 Barndorff-Nielsen, O.E., Cox, D.R., (1994). Inference and Asymptotics. Chapman & Hall, London. P. McCullagh, \"Tensor Methods in Statistics\", Monographs on Statistics and Applied Probability, Chapman and Hall, 1987. Edwards, A.W.F. (1984) Likelihood. CUP. ISBN 0-521-31871-8", "In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval. History The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation. He argued that word-sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that \"a word is characterized by the company it keeps\" was advocated by J. R. Firth. This assumption is known in linguistics as the distributional hypothesis. Emile Delavenay defined statistical semantics as the \"statistical study of the meanings of words and their frequency and order of recurrence\". \"Furnas et al. 1983\" is frequently cited as a foundational contribution to statistical semantics. An early success in the field was latent semantic analysis.
Applications Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics, by applying statistical techniques to large corpora: Measuring the similarity in word meanings Measuring the similarity in word relations Modeling similarity-based generalization Discovering words with a given relation Classifying relations between words Extracting keywords from documents Measuring the cohesiveness of text Discovering the different senses of words Distinguishing the different senses of words Subcognitive aspects of words Distinguishing praise from criticism Related fields Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of computational semantics, which is in turn a subfield of computational linguistics and natural language processing. Many of the applications of statistical semantics (listed above) can also be addressed by lexicon-based algorithms, instead of the corpus-based algorithms of statistical sem", "scope The invention of the coincidence method enlightened new techniques for measuring high-energy cosmic rays. One such experiment, COS-B, launched in 1975, featured an anti-coincidence veto for charged particles, as well as three scintillation detectors to measure electron cascades caused by incoming gamma radiation. Therefore, gamma ray interactions could be measured with three-fold coincidence, after having passed a charged particle veto (see Anti-Coincidence). Other experiments using coincidence methods: AGS, AMS, CHANDLER, CRESST, XENON. Anti-coincidence The anti-coincidence method, similarly to the coincidence method, helps discriminate background interactions from target signals.
However, anti-coincidence designs are used to actively reject non-signal particles rather than affirm signal particles. For instance, anti-coincidence counters can be used to shield charged particles when an experiment is explicitly searching for neutral particles, as in the SuperKamiokande neutrino experiment. These charged particles are often cosmic rays. Anti-coincidence detectors work by flagging or rejecting any events that trigger one channel of the detector, but not another. For a given rate of coincident particle interactions, R c o i n c i d e n t {\\displaystyle R_{\\rm {coincident}}}, R c o i n c i d e n t = R s u s p e c t e d − R u n c o r r e l a t e d {\\displaystyle R_{\\rm {coincident}}=R_{\\rm {suspected}}-R_{\\rm {uncorrelated}}} where R s u s p e c t e d {\\displaystyle R_{\\rm {suspected}}} is the rate of suspected target interactions and R u n c o r r e l a t e d {\\displaystyle R_{\\rm {uncorrelated}}} is the rate of all detected, but uncorrelated events across multiple channels. This shows that all uncorrelated events, measured using the anti-coincidence technique," ]
[ "Word embeddings\n\n\n", "Transformers\n\n\n", "Vector space retrieval\n\n\n", "Fasttext" ]
['Vector space retrieval\n\n\n']
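The PMI excerpt above scores word pairs from occurrence and co-occurrence counts; vector space retrieval (the marked answer) needs no such statistics. A minimal sketch of the PMI computation, with hypothetical counts and natural log as in the quoted Wikipedia example:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information from raw counts:
    pmi(x, y) = log( p(x, y) / (p(x) p(y)) ), natural log."""
    p_xy = count_xy / total
    p_x, p_y = count_x / total, count_y / total
    return math.log(p_xy / (p_x * p_y))

# A strong collocation: co-occurrence probability close to the
# individual word probabilities gives a large positive PMI.
print(round(pmi(1000, 1200, 1100, 50_000_000), 2))  # ≈ 10.54

# Independent words: p(x, y) = p(x) p(y) gives PMI ≈ 0.
print(round(pmi(4, 200, 200, 10_000), 2))  # ≈ 0.0
```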
1449
Which attribute gives the best split?

A1: value a → P=4, N=4; value b → P=4, N=4
A2: value x → P=5, N=1; value y → P=3, N=3
A3: value t → P=6, N=1; value j → P=2, N=3
[ "al splitting method [7, Algorithm 3.1] can therefore be used to solve (12). EPFL 2021 | Mathematical Foundations of Signal Processing M. Simeoni, B. Bejar & J. Fageot 28 Primal-Dual Splitting Method Algorithm 5 Primal-Dual Splitting (PDS) Method 1: procedure PDS(τ,σ,ρ,x0,z0) 2: for all n ≥ 1 do 3: ̃xn = proxτG(xn-1 - τ∇F(xn-1) - τK∗zn-1) 4: ̃zn = proxσH∗(zn-1 + σK[2 ̃xn - xn-1]) 5: xn = ρ ̃xn + (1-ρ)xn-1 6: zn = ρ ̃zn + (1-ρ)zn-1 7: return {(xn,zn)}n∈N Interpretation of PDS The algorithm performs alternating proximal gradient/ascent steps: • Given an estimate zn-1, Row 3 performs a proximal gradient descent with step size τ > 0 to minimise min x∈RN F(x) + G(x) + zTn-1 Kx w.r.t. the variable x (called the primal variable). • Row 4 uses the result of the proximal gradient descent step 3 and the previous primal estimate xn-1 and performs a proximal gradient ascent with step size σ > 0 to maximise max z∈RM zTK(2 ̃xn - xn-1) - H∗(z) w.r.t. the variable z (called the dual variable). • ρ > 0 is a momentum term, used to combine the output of the gradient/ascent steps with previous estimates of the primal/dual variables. Convergence of PDS Theorem: (Convergence of PDS, β ≠ 0) [7", "four samples at the right child. P(C, t) = 1/4, P(NC, t) = 3/4, H ( t ) {\\displaystyle \\mathrm {H} {(t)}} = -(1/4 × log2(1/4) + 3/4 × log2(3/4)) = 0.811 From this new H ( t ) {\\displaystyle \\mathrm {H} {(t)}}, the candidate splits can be calculated using the same formulae as the root node: Thus, the right child will be split with Mutation 4. All the samples that have the mutation will be passed to the left child and the ones that lack it will be passed to the right child. To split the left node, the process would be the same, except there would only be 3 samples to check.
Sometimes a node may not need to be split at all if it is a pure set, where all samples at the node are just cancerous or non-cancerous. Splitting the node may lead to the tree being more inaccurate and in this case it will not be split. The tree would now achieve 100% accuracy if the samples that were used to build it are tested. This isn't a good idea, however, since the tree would overfit the data. The best course of action is to try testing the tree on other samples, which are not part of the original set. Two outside samples are below: By following the tree, NC10 was classified correctly, but C15 was classified as NC. For other samples, this tree would not be 100% accurate anymore. It could be possible to improve this though, with options such as increasing the depth of the tree or increasing the size of the training set. Advantages Information gain is the basic criterion to decide whether a feature should be used to split a node or not. The feature with the optimal split, i.e., the highest value of information gain at a node of a decision tree is used as the feature for splitting the node. The concept of information gain function falls under the C4.5 algorithm for generating the decision trees and selecting the optimal split for a decision tree node. Some of its advantages include: It can work with both continuous and discrete variables. Due to the factor –[p ∗ log(p)] in the entro
This understanding can be applied when determining better or alternative stabilizers for peanut butter or better grinding manufacturing processes for unstabilized peanut butter to prevent oil separation more effectively.", "S} is split on an attribute A {\\displaystyle A}. In other words, how much uncertainty in S {\\displaystyle S} was reduced after splitting set S {\\displaystyle S} on attribute A {\\displaystyle A}. I G ( S, A ) = H ( S ) − ∑ t ∈ T p ( t ) H ( t ) = H ( S ) − H ( S | A ). {\\displaystyle IG(S,A)=\\mathrm {H} {(S)}-\\sum _{t\\in T}p(t)\\mathrm {H} {(t)}=\\mathrm {H} {(S)}-\\mathrm {H} {(S|A)}.} Where, H ( S ) {\\displaystyle \\mathrm {H} (S)} – Entropy of set S {\\displaystyle S} T {\\displaystyle T} – The subsets created from splitting set S {\\displaystyle S} by attribute A {\\displaystyle A} such that S = <unk> t ∈ T t {\\displaystyle S=\\bigcup _{t\\in T}t} p ( t ) {\\displaystyle p(t)} – The proportion of the number of elements in t {\\displaystyle t} to the number of elements in set S {\\displaystyle S} H ( t ) {\\displaystyle \\mathrm {H} (t)} – Entropy of subset t {\\displaystyle t} In ID3, information gain can be calculated (instead of entropy) for each remaining attribute. The attribute with the largest information gain is used to split the set S {\\displaystyle S} on this iteration. See also Classification and regression tree (CART) C4.5 algorithm Decision tree learning Decision tree model References Further reading Mitchell, Tom Michael (1997). Machine Learning. New York, NY: McGraw-Hill. pp. 55–58. ISBN 0070428077. OCLC 36417892. Grzymala-Busse, Jerzy W. (February 1993). 
\"Selected Algorithms of Machine Learning from Examples\" (PDF).", "{\\frac {|S_{f}|^{2}}{|S|^{2}}}{\\frac {1}{|S_{f}|^{2}}}\\sum _{i\\in S_{f}}\\sum _{j\\in S_{f}}{\\frac {1}{2}}(y_{i}-y_{j})^{2}\\right)} where S {\\displaystyle S}, S t {\\displaystyle S_{t}}, and S f {\\displaystyle S_{f}} are the set of presplit sample indices, set of sample indices for which the split test is true, and set of sample indices for which the split test is false, respectively. Each of the above summands are indeed variance estimates, though, written in a form without directly referring to the mean. By replacing ( y i − y j ) 2 {\\displaystyle (y_{i}-y_{j})^{2}} in the formula above with the dissimilarity d i j {\\displaystyle d_{ij}} between two objects i {\\displaystyle i} and j {\\displaystyle j}, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed. Measure of \"goodness\" Used by CART in 1984, the measure of \"goodness\" is a function that seeks to optimize the balance of a candidate split's capacity to create pure children with its capacity to create equally-sized children. This process is repeated for each impure node until the tree is complete. The function φ ( s ∣ t ) {\\displaystyle \\varphi (s\\mid t)}, where s {\\displaystyle s} is a candidate split at node t {\\displaystyle t}, is defined as below φ ( s ∣ t ) = 2 P L P R ∑ j = 1 class count | P ( j ∣ t L ) − P ( j ∣ t R ) | {\\displaystyle \\varphi (s\\mid t)=2P_{L}P_{R}\\sum _{j=1}^{\\" ]
[ "A1", "A3", "A2", "All the same" ]
['A3']
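The split in the question can be checked numerically with the information-gain formula from the excerpt above: comparing the weighted child entropies (the term subtracted in the gain), the attribute with the smallest one yields the largest gain. A small sketch using the (P, N) counts read off the question's table:

```python
from math import log2

def entropy(p, n):
    """Binary entropy of a node with p positives and n negatives."""
    total = p + n
    h = 0.0
    for c in (p, n):
        if c:
            h -= (c / total) * log2(c / total)
    return h

def split_entropy(groups):
    """Weighted average entropy of the children after a split;
    minimising this maximises the information gain."""
    total = sum(p + n for p, n in groups)
    return sum((p + n) / total * entropy(p, n) for p, n in groups)

splits = {
    "A1": [(4, 4), (4, 4)],   # values a, b
    "A2": [(5, 1), (3, 3)],   # values x, y
    "A3": [(6, 1), (2, 3)],   # values t, j
}
best = min(splits, key=lambda a: split_entropy(splits[a]))
print(best)  # A3: ~0.75 bits, vs ~0.83 for A2 and 1.0 for A1
```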
1450
Suppose that q is density reachable from p. The chain of points that ensures this relationship is {t, u, g, r}. Which one of the following is FALSE?
[ "more than k objects. We denote the set of k nearest neighbors as Nk(A). This distance is used to define what is called reachability distance: reachability-distance k ( A, B ) = max { k -distance ( B ), d ( A, B ) } {\\displaystyle {\\text{reachability-distance}}_{k}(A,B)=\\max\\{k{\\text{-distance}}(B),d(A,B)\\}} In words, the reachability distance of an object A from B is the true distance between the two objects, but at least the k -distance {\\displaystyle k{\\text{-distance}}} of B. Objects that belong to the k nearest neighbors of B (the \"core\" of B, see DBSCAN cluster analysis) are considered to be equally distant. The reason for this is to reduce the statistical fluctuations between all points A close to B, where increasing the value for k increases the smoothing effect. Note that this is not a distance in the mathematical definition, since it is not symmetric. (While it is a common mistake to always use the k -distance ( A ) {\\displaystyle k{\\text{-distance}}(A)}, this yields a slightly different method, referred to as Simplified-LOF) The local reachability density of an object A is defined by lrd k ( A ) := | N k ( A ) | ∑ B ∈ N k ( A ) reachability-distance k ( A, B ) {\\displaystyle {\\text{lrd}}_{k}(A):={\\frac {|N_{k}(A)|}{\\sum _{B\\in N_{k}(A)}{\\text{reachability-distance}}_{k}(A,B)}}} which is the inverse of the average reachability distance of the object A from its neighbors. Note that it is not the average reachability of the neighbors from A (which by definition would be the k -distance ( A ) {\\displaystyle k{\\text{-distance}}(A)} ), but the", "}(A):={\\frac {|N_{k}(A)|}{\\sum _{B\\in N_{k}(A)}{\\text{reachability-distance}}_{k}(A,B)}}} which is the inverse of the average reachability distance of the object A from its neighbors. 
Note that it is not the average reachability of the neighbors from A (which by definition would be the k -distance ( A ) {\\displaystyle k{\\text{-distance}}(A)} ), but the distance at which A can be \"reached\" from its neighbors. With duplicate points, this value can become infinite. The local reachability densities are then compared with those of the neighbors using LOF k ( A ) := 1 | N k ( A ) | ∑ B ∈ N k ( A ) lrd k ( B ) lrd k ( A ) = 1 | N k ( A ) | ⋅ lrd k ( A ) ∑ B ∈ N k ( A ) lrd k ( B ) {\\displaystyle {\\text{LOF}}_{k}(A):={\\frac {1}{|N_{k}(A)|}}\\sum _{B\\in N_{k}(A)}{\\frac {{\\text{lrd}}_{k}(B)}{{\\text{lrd}}_{k}(A)}}={\\frac {1}{|N_{k}(A)|\\cdot {\\text{lrd}}_{k}(A)}}\\sum _{B\\in N_{k}(A)}{\\text{lrd}}_{k}(B)} which is the average local reachability density of the neighbors divided by the object's own local reachability density. A value of approximately 1 indicates that the object is comparable to its neighbors (and thus not an outlier). A value below 1 indicates a denser region (which would be an inlier), while values significantly larger than 1 indicate outliers. LOF(k) ~ 1 means Similar density as neighbors, LOF(k) < 1 means Higher density", "beliefs or information (the prior probability) with observed data. principal component analysis (PCA) probability probability density The probability in a continuous probability distribution. For example, you can't say that the probability of a man being six feet tall is 20%, but you can say he has 20% of chances of being between five and six feet tall. Probability density is given by a probability density function. Contrast probability mass. probability density function The probability distribution for a continuous random variable. probability distribution A function that gives the probability of all elements in a given space; see List of probability distributions. probability measure The probability of events in a probability space. 
probability plot probability space A sample space over which a probability measure has been defined. Q quantile A particular point or value at which the range of a probability distribution is divided into continuous intervals with equal probabilities, or at which the observations in a sample are divided in the same way. The number of groups into which the range is divided is always one greater than the number of quantiles dividing them. Commonly used quantiles include quartiles (which divide a range into four groups), deciles (ten groups), and percentiles (one hundred groups). The groups themselves are termed halves, thirds, quarters, etc., though the terms for the quantiles are sometimes used to refer to the groups, rather than to the cut points. quartile A type of quantile which divides a range of data points into four groups, termed quarters, of equal size. For any quartile-divided dataset, there are exactly three quartiles or cut points that create the four groups. The first quartile ( Q {\\displaystyle Q} 1) is defined as the middle data point or value that is halfway between the smallest value (minimum) and the median of the dataset, such that 25 percent of the data lies below this quartile. The second quartile ( Q {\\displaystyle Q} 2) is the median itself, with 50 percent of the data below this point. The third quartile ( Q {\\displaystyle Q} 3) is defined as the middle value halfway between the median and the largest value (maximum) of the dataset, such that 75 percent of", ", g {\\displaystyle f,g} with respect to P {\\displaystyle P}, they are said to be B-equivalent if there exists a c > 0 {\\displaystyle c>0} s.t f ( x ) = c ⋅ g ( x ) {\\displaystyle f(x)=c\\cdot g(x)}, denoted f = B g {\\displaystyle f=_{B}g} (the convention c ⋅ ∞ = ∞ {\\displaystyle c\\cdot \\infty =\\infty } is used in cases where a measure is infinite). It can be shown that ( = B ) {\\displaystyle (=_{B})} is an equivalence relation. 
The Bayes space B ( P ) {\\displaystyle B(P)} is defined as the quotient space of all measures with the same null-sets in Ω {\\displaystyle \\Omega } as P {\\displaystyle P} under the equivalence relation ( = B ) {\\displaystyle (=_{B})}. The first challenge to analysing density functions is that B ( P ) {\\displaystyle B(P)} is not linear space under ordinary addition and multiplication since the ordinary difference between two densities would not be non-negative everywhere. Like in the Aitchison geometry for finite dimensional data, perturbation and powering is defined for densities: Perturbation ( f ⊕ g ) ( x ) = B f ( x ) ⋅ g ( x ) {\\textstyle (f\\oplus g)(x)=_{B}f(x)\\cdot g(x)} Powering α ⊙ f ( x ) = B f ( x ) α {\\textstyle \\alpha \\odot f(x)=_{B}f(x)^{\\alpha }} where f ( x ), g ( x ) {\\displaystyle f(x),{\\text{ }}g(x)} are densities in B ( P ) {\\displaystyle B(P)} and α {\\displaystyle \\alpha } is some real number. It can be shown using the properties of multiplication and power", "density functions φ 1 {\\displaystyle \\phi _{1}} and φ 2 {\\displaystyle \\phi _{2}} are equimeasurable if ∀ δ > 0, μ { x ∈ R | φ 1 ( x ) ≥ δ } = μ { x ∈ R | φ 2 ( x ) ≥ δ }, {\\displaystyle \\forall \\delta >0,\\,\\mu \\{x\\in \\mathbb {R} |\\phi _{1}(x)\\geq \\delta \\}=\\mu \\{x\\in \\mathbb {R} |\\phi _{2}(x)\\geq \\delta \\},} where μ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable \"rearrangement\" whose variance is less (up to translation) than any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance, (all having the same entropy.) 
" ]
[ "{t,u,g,r} have to be all core points.", "p and q will also be density-connected", "p has to be a core point", "q has to be a border point" ]
['q has to be a border point']
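The choices above use DBSCAN's density-based vocabulary; a minimal Python sketch of the core-point and direct density-reachability definitions (the 1-D points, eps and min_pts values are made-up toy data, not part of the dataset):

```python
# Illustrative sketch of DBSCAN's core-point / reachability definitions,
# assuming toy 1-D points and arbitrary eps / min_pts parameters.

def neighbors(points, p, eps):
    """All points within distance eps of p (p itself included)."""
    return [q for q in points if abs(q - p) <= eps]

def is_core(points, p, eps, min_pts):
    """p is a core point if its eps-neighborhood holds at least min_pts points."""
    return len(neighbors(points, p, eps)) >= min_pts

def directly_reachable(points, p, q, eps, min_pts):
    """q is directly density-reachable from p iff p is core and q is near p."""
    return is_core(points, p, eps, min_pts) and q in neighbors(points, p, eps)

pts = [1.0, 1.5, 2.0, 2.5, 9.0]   # 9.0 is an isolated point
print(is_core(pts, 1.5, eps=1.0, min_pts=3))           # neighborhood {1.0, 1.5, 2.0, 2.5}
print(directly_reachable(pts, 1.5, 2.5, 1.0, 3))       # reachable from the core point 1.5
print(is_core(pts, 9.0, eps=1.0, min_pts=3))           # isolated, so not a core point
```

Note that the endpoint of a reachability chain need not itself be a core point, which is exactly what distinguishes border points.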
1451
In User-Based Collaborative Filtering, which of the following is correct, assuming that all the ratings are positive?
[ "Instead, we want to embrace the collaborative aspect of sharing observations online with a digital community, and celebrate the healing power of nature as people document their local biodiversity to the best of their ability." This change remained in effect for the following years.", "ulation that is being evaluated. The collaborator solution is selected randomly from the solutions that make up the Pareto-optimal front of the subpopulation. The fitness assignment to the collaborator solutions is done in an optimistic fashion (i.e. an "old" fitness value is replaced when the new one is better). Applications The constructive cooperative coevolution algorithm has been applied to different types of problems, e.g. a set of standard benchmark functions, optimisation of sheet metal press lines and interacting production stations. The C3 algorithm has been embedded with, amongst others, the differential evolution algorithm and the particle swarm optimiser for the subproblem optimisations.", "user-generated reviews and ratings. The recommendations and scoring system that the company has developed are meant to assist researchers with the process of developing future medications and finding cures for diseases. They are guided towards products and techniques that were previously used by other researchers when planning and performing experiments. The company's revenue is based on selling SaaS subscriptions to researchers in biopharma companies. They also charge product suppliers for content syndication.", "meta-data in content-based filtering, but the former are more valuable for the recommender system. 
Since these features are broadly mentioned by users in their reviews, they can be seen as the most crucial features that can significantly influence the user's experience on the item, while the meta-data of the item (usually provided by the producers instead of consumers) may ignore features that the users care about. For different items with common features, a user may give different sentiments. Also, a feature of the same item may receive different sentiments from different users. Users' sentiments on the features can be regarded as a multi-dimensional rating score, reflecting their preference for the items. Based on the features/aspects and the sentiments extracted from the user-generated text, a hybrid recommender system can be constructed. There are two types of motivation to recommend a candidate item to a user. The first motivation is that the candidate item has numerous features in common with the user's preferred items, while the second motivation is that the candidate item receives a high sentiment on its features. For a preferred item, it is reasonable to believe that items with the same features will have a similar function or utility. So, these items will also likely be preferred by the user. On the other hand, for a shared feature of two candidate items, other users may give positive sentiment to one of them while giving negative sentiment to the other. Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combined ranking score of similarity and sentiment rating can be constructed for each candidate item. Apart from the difficulty of the sentiment analysis itself, applying sentiment analysis on reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review. A poorly written review or piece of feedback is hardly helpful for the recommender system. 
Besides, a review can be designed to hinder sales of a target product, and thus be harmful to the recommender system even if it is well written. Researchers also found that long and short forms of user-generated text should be treated differently. An interesting result shows that short-form reviews are sometimes more helpful than long-form ones, because it is easier to filter out the noise in a short-form text. For the long-form text,", "theory. Recommendation algorithms that utilize cluster analysis often fall into one of the three main categories: Collaborative filtering, Content-Based filtering, and a hybrid of the collaborative and content-based. Collaborative Filtering Recommendation Algorithm Collaborative filtering works by analyzing large amounts of data on user behavior, preferences, and activities to predict what a user might like based on similarities with others. It detects patterns in how users rate items and groups similar users or items into distinct “neighborhoods.” Recommendations are then generated by leveraging the ratings of content from others within the same neighborhood. The algorithm can focus on either user-based or item-based grouping depending on the context. Content-Based Filtering Recommendation Algorithm Content-based filtering uses item descriptions and a user's preference profile to recommend items with similar characteristics to those the user previously liked. It evaluates the distance between feature vectors of item clusters, or “neighborhoods.” The user's past interactions are represented as a weighted feature vector, which is compared to these clusters. Recommendations are generated by identifying the cluster evaluated to be the closest in distance to the user's preferences. Hybrid Recommendation Algorithms Hybrid recommendation algorithms combine collaborative and content-based filtering to better meet the requirements of specific use cases. In certain cases this approach leads to more effective recommendations. 
Common strategies include: (1) running collaborative and content-based filtering separately and combining the results, (2) adding onto one approach with specific features of the other, and (3) integrating both hybrid methods into one model. Markov chain Monte Carlo methods Clustering is often utilized to locate and characterize extrema in the target distribution. Anomaly detection Anomalies/outliers are typically – be it explicitly or implicitly – defined with respect to clustering structure in data. Natural language processing Clustering can be used to resolve lexical ambiguity. DevOps Clustering has been used to analyse the effectiveness of DevOps teams. Social science Sequence analysis in social sciences Cluster analysis is used to identify patterns of family life trajectories, professional careers, and daily or" ]
[ "Pearson Correlation Coefficient and Cosine Similarity have different value range, but return the same similarity ranking for the users", "If the ratings of two users have both variance equal to 0, then their Cosine Similarity is maximized", "Pearson Correlation Coefficient and Cosine Similarity have the same value range, but can return different similarity ranking for the users", "If the variance of the ratings of one of the users is 0, then their Cosine Similarity is not computable" ]
['If the ratings of two users have both variance equal to 0, then their Cosine Similarity is maximized']
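A small sketch of the zero-variance case behind the stored answer (the ratings are made-up toy data): with all-positive constant rating vectors, cosine similarity is maximal at 1, while the Pearson correlation is undefined (0/0).

```python
# Toy comparison of cosine similarity and Pearson correlation for two users
# whose ratings are constant (variance 0). Ratings are illustrative only.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def pearson(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u)) * math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / den if den else float("nan")  # undefined when a variance is 0

u = [3, 3, 3]          # constant ratings: variance 0
v = [5, 5, 5]
print(cosine(u, v))    # ≈ 1.0, the maximum: the vectors point in the same direction
print(pearson(u, v))   # nan: not computable for zero-variance users
```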
1452
The term frequency of a term is normalized
[ "M F T | M F T ⟩ {\\displaystyle {\\mathcal {N}}={\\frac {\\left\\langle \\mathrm {MFT} \\right|\\left.\\mathrm {RPA} \\right\\rangle }{\\left\\langle \\mathrm {MFT} \\right|\\left.\\mathrm {MFT} \\right\\rangle }}} The normalization can be calculated by ⟨ R P A | R P A ⟩ = N 2 ⟨ M F T | e z i ( q ~ i ) 2 / 2 e z j ( q ~ j † ) 2 / 2 | M F T ⟩ = 1 {\\displaystyle \\langle \\mathrm {RPA} |\\mathrm {RPA} \\rangle ={\\mathcal {N}}^{2}\\langle \\mathrm {MFT} |\\mathbf {e} ^{z_{i}({\\tilde {\\mathbf {q} }}_{i})^{2}/2}\\mathbf {e} ^{z_{j}({\\tilde {\\mathbf {q} }}_{j}^{\\dagger })^{2}/2}|\\mathrm {MFT} \\rangle =1} where Z i j = ( X t ) i k z k X j k {\\displaystyle Z_{ij}=(X^{\\mathrm {t} })_{i}^{k}z_{k}X_{j}^{k}} is the singular value decomposition of Z i j {\\displaystyle Z_{ij}}. q ~ i = ( X † ) j i a j {\\displaystyle {\\tilde {\\mathbf {q} }}^{i}=(X^{\\dagger })_{j}^{i}\\mathbf {a} ^{j}} N − 2 = ∑ m i ∑ n j ( z i / 2 ) m i ( z j / 2 ) n j m! n! ⟨ M F T | ∏ i j ( q ~ i", "- sion, this makes the task under-determined. We say that the model is over-parameterized for the task. Using regularization is a way to avoid the issue described, which we will learn later.", "-max of [0, 1] is given as: x ′ = x − min ( x ) max ( x ) − min ( x ) {\\displaystyle x'={\\frac {x-{\\text{min}}(x)}{{\\text{max}}(x)-{\\text{min}}(x)}}} where x {\\displaystyle x} is an original value, x ′ {\\displaystyle x'} is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights). 
To rescale a range between an arbitrary set of values [a, b], the formula becomes: x' = a + (x − min(x))(b − a) / (max(x) − min(x)), where a, b are the min-max values. Mean normalization: x' = (x − x̄) / (max(x) − min(x)), where x is an original value, x' is the normalized value, and x̄ = average(x) is the mean of that feature vector. There is another form of mean normalization which divides by the standard deviation, which is also called standardization. Standardization (Z-score Normalization) In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero-mean (when subtracting
The remaining probability density functions are p ( x k ∣ x k − 1 ) = N ( F k x k − 1, Q k ) p ( z k ∣ x k ) = N ( H k x k, R k ) p ( x k − 1 ∣ Z k − 1 ) = N ( x ^ k − 1, P k − 1 ) {\\displaystyle {\\begin{aligned}p\\left(\\mathbf {x} _{k}\\mid \\mathbf {x} _{k-1}\\right)&={\\mathcal {N}}\\left(\\mathbf {F} _{k}\\mathbf {x} _{k-1},\\mathbf {Q} _{k}\\right)\\\\p\\left(\\mathbf {z} _{k}\\mid \\mathbf {x} _{k}\\right)&={\\mathcal {N}}\\left(\\mathbf {H} _{k}\\math", "Standardized rates are a statistical measure of any rates in a population. These are adjusted rates that take into account the vital differences between populations that may affect their birthrates or death rates. Examples The most common are birth, death and unemployment rates. For example, in a community made up of primarily young couples, the birthrate might appear to be high when compared to that of other populations. However, by calculating the standardized birthrates that is by comparing the same age group in other populations), a more realistic picture of childbearing capacity will be developed. Formula The formula for standardized rates is as follows: Σ(crude rate for age group × standard population for age group) / Σstandard population See also Mortality ratio References Medical Biostatistics, Third Edition (MedicalBiostatistics.synthasite.com), A. Indrayan (indrayan.weebly.com), Chapman & Hall/ CRC Press, 2012 Introduction to Sociology, Bruce J. Cohen and Terri L. Orbuch" ]
[ "by the maximal frequency of all terms in the document", "by the maximal frequency of the term in the document collection", "by the maximal frequency of any term in the vocabulary", "by the maximal term frequency of any document in the collection" ]
by the maximal frequency of all terms in the document
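A minimal sketch of the normalization named in the answer, dividing each term's raw frequency by the maximal frequency of all terms in the same document (the toy document is made up):

```python
# Term frequency normalized by the maximum term frequency within the document.
from collections import Counter

def normalized_tf(document_tokens):
    counts = Counter(document_tokens)
    max_freq = max(counts.values())          # most frequent term in THIS document
    return {term: freq / max_freq for term, freq in counts.items()}

doc = "the cat sat on the mat the end".split()
tf = normalized_tf(doc)
print(tf["the"])   # 1.0 — "the" occurs 3 times, the document maximum
print(tf["cat"])   # 1/3 ≈ 0.333
```

This makes term frequencies comparable across documents of different lengths without referring to the rest of the collection.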
1454
Which is an appropriate method for fighting skewed distributions of class labels in classification?
[ "model based on the sample-label pair: (xt, yt). Recently, a new learning paradigm called progressive learning technique has been developed. The progressive learning technique is capable of not only learning from new samples but also capable of learning new classes of data and yet retain the knowledge learnt thus far. Evaluation The performance of a multi-class classification system is often assessed by comparing the predictions of the system against reference labels with an evaluation metric. Common evaluation metrics are Accuracy or macro F1. See also Binary classification One-class classification Multi-label classification Multiclass perceptron Multi-task learning Notes", ". A drawback of the basic \"majority voting\" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map (SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data. K-NN can then be applied to the SOM. Parameter selection The best choice of k depends upon the data; generally, larger values of k reduces effect of the noise on the classification, but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. 
when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes. In binary (two class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via bootstrap method. The 1-nearest neighbor classifier The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that", "Classification Research Group – Was a group contributing to classification research and theory in library and information science Controlled vocabulary – Method of organizing knowledge Decimal classification Universal Decimal Classification – Bibliographic and library classification system Findability – the ease with which information can be identified when searching for it Folksonomy – Classification based on users' tags Information architecture – Structural design of shared information Tag (metadata) – Keyword assigned to information", "nised in the context of neural network-based Markov models in the early 1990s. Another source of label bias is that training is always done with respect to known previous tags, so the model struggles at test time when there is uncertainty in the previous tag." ]
[ "Include an over-proportional number of samples from the larger class", "Use leave-one-out cross validation", "Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels", "Generate artificial data points for the most frequent classes" ]
['Use leave-one-out cross validation', 'Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels']
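The k-NN passage in the context above describes distance-weighted voting as one way to counter a skewed class distribution; a minimal sketch (the 1-D training points and the choice of k are made-up toy values):

```python
# Toy distance-weighted k-NN vote: close minority-class points can outvote
# more numerous but distant majority-class points. Data is illustrative only.
from collections import defaultdict

def weighted_knn_predict(train, x, k):
    """train: list of (feature, label) pairs; vote with weight 1/distance."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = defaultdict(float)
    for feat, label in nearest:
        votes[label] += 1.0 / (abs(feat - x) + 1e-9)   # avoid division by zero
    return max(votes, key=votes.get)

# Majority class 'A' dominates the dataset, but the two 'B' points lie
# much closer to the query x = 0, so their inverse-distance weights win.
train = [(0.1, "B"), (0.2, "B"), (2.0, "A"), (2.1, "A"), (2.2, "A"), (2.3, "A")]
print(weighted_knn_predict(train, 0.0, k=5))   # "B" despite 3 'A's among the 5 nearest
```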
1456
Thang, Jeremie and Tugrulcan have built their own search engines. For a query Q, they got precision scores of 0.6, 0.7, and 0.8 respectively. Their F1 scores (calculated with the same parameters) are the same. Whose search engine has a higher recall on Q?
[ "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, all patents associated with PageRank have expired. Description PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of \"measuring\" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by P R ( E ). {\\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it (\"incoming links\"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. 
Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking", "frac {PR(B)}{L(B)}}+{\\frac {PR(C)}{L(C)}}+{\\frac {PR(D)}{L(D)}}+\\,\\cdots \\right).} The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied by N and the sum becomes N. A statement in Page and Brin's paper that \"the sum of all PageRanks is one\" and claims by other Google employees support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper \"The Anatomy of a Large-Scale Hypertextual Web Search Engine\", where they mistakenly claimed that the latter formula formed a probability distribution over web pages. Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of a random surfer who reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are the links between pages – all of which are all equally probable. If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again. When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. 
In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability, d, is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows: P R ( p", "ity ( 1 − ε {\\displaystyle 1-\\epsilon }, which is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes O ( log ⁡ n / ε ) {\\displaystyle O({\\sqrt {\\log n}}/\\epsilon )} rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size. Google Toolbar The Google Toolbar long had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The \"Toolbar Pagerank\" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from its Webmaster Tools section, saying that \"We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most important metric for them to track, which is simply not true.\" The \"Toolbar Pagerank\" was updated very infrequently. It was last updated in November 2013. In October 2014 Matt Cutts announced that another visible pagerank update would not be coming. In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate. On April 15, 2016, Google turned off display of PageRank Data in Google Toolbar, though the PageRank continued to be used internally to rank content in search results. 
SERP rank The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of", "wrote much of the code for the original Google Search engine, but he left before Google was officially founded as a company; Hassan went on to pursue a career in robotics and founded the company Willow Garage in 2006. While conventional search engines ranked results by counting how many times the search terms appeared on the page, they theorized about a better system that analyzed the relationships among websites. They called this algorithm PageRank; it determined a website's relevance by the number of pages, and the importance of those pages that linked back to the original site. Page told his ideas to Hassan, who began writing the code to implement Page's ideas. Page and Brin would also use their friend Susan Wojcicki's garage as their office when the search engine was set up in 1998. Page and Brin originally nicknamed the new search engine \"BackRub\" because the system checked backlinks to estimate the importance of a site. Hassan, as well as Alan Steremberg were cited by Page and Brin as being critical to the development of Google. Rajeev Motwani and Terry Winograd later co-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of the Google search engine, published in 1998. Héctor García-Molina and Jeffrey Ullman were also cited as contributors to the project. 
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996, with Larry Page's PageRank patent including a citation to Li's earlier RankDex patent; Li later went on to create the Chinese search engine Baidu. Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol, a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information. Google was initially funded by an August 1998 investment of $100,000 from Andy Bechtolsheim, co-founder of Sun Microsystems. This initial investment served as a motivation to incorporate the company to be able to use the funds. Page and Brin initially approached David Cheriton for advice because he had a nearby office in Stanford, and they knew he had startup experience, having recently sold", "in Google Toolbar, though the PageRank continued to be used internally to rank content in search results. SERP rank The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200). Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. 
Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword. The PageRank of the HomePage of a website is the best indication Google offers for website authority. After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results. When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google. Google directory PageRank The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which showed a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011. False or spoofed PageRank It was known that the PageRank shown in the Toolbar could easily be spoofed. Redirection from one page to another, either via a HTTP 302 response or a \"Refresh\"" ]
[ "Thang", "Jeremie", "Tugrulcan", "We need more information" ]
['Thang', 'Tugrulcan']
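For this item, recall can be recovered algebraically from precision and F1: from F1 = 2PR/(P + R) it follows that R = F1·P/(2P − F1), so for a fixed F1, recall falls as precision rises. A sketch (the shared F1 value of 0.7 is an assumed illustration, chosen so that every recall stays ≤ 1):

```python
# Recall recovered from precision and a shared F1 score.
# F1 = 2PR/(P+R)  =>  R = F1*P / (2P - F1). F1 = 0.7 is an assumed value.

def recall_from(f1, precision):
    return f1 * precision / (2 * precision - f1)

f1 = 0.7
for name, p in [("Thang", 0.6), ("Jeremie", 0.7), ("Tugrulcan", 0.8)]:
    print(name, round(recall_from(f1, p), 3))
# The printed recalls decrease as precision increases: the recall ordering
# is the reverse of the precision ordering when F1 is held fixed.
```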
1458
When compressing the adjacency list of a given URL, a reference list
[ "R = [(1 − d)/N, …, (1 − d)/N]^T + d [ℓ(p_i, p_j)] R, where the adjacency function ℓ(p_i, p_j) is the ratio between the number of links outbound from page j to page i and the total number of outbound links of page j. The adjacency function is 0 if page p_j does not link to p_i, and normalized such that, for each j, ∑_{i=1}^{N} ℓ(p_i, p_j) = 1, i.e. the elements of each column sum up to 1, so the matrix is a stochastic matrix (for more details see the computation section below). Thus this is a variant of the eigenvector centrality measure used commonly in network analysis. Because of the large eigengap of the modified adjacency matrix above, the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations. Google's founders, in their original paper, reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in log n, where n is the size of the network. As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks", "tree structures, parent-child relationships A very familiar way of representing hierarchical links Sequential structures Ex: wizards, multi-step processes Reflects a logical order of progression Matrix structures Ex.
hypertext navigation allowing the user to navigate in different ways: by date, then by subject, then by related articles, etc. "Hypertext can be defined as an interactive system that makes it possible to build and manage semantic links between identifiable objects in a set of polysemous documents" universalis.fr 2. Labelling systems Different naming logics (taxonomy) Text labels Ex: links within texts, contextual links, navigation menus, lists, calls to action, tags, etc. Requires an editorial charter for a standardised/controlled use of the vocabulary, adapted in terms of jargon Iconic labels Ex. iconography instead of labels Usually need to be accompanied by text labels NB: Create a user-centred nomenclature Analyse the words/expressions used by the target groups (e.g. search logs, analysis of public texts, interview notes, etc.) Card sorting 3. Navigation systems "You are here" = give context (navigation, breadcrumb) Important when users do not arrive at the site through the home page Important in information portals Three main navigation systems 1. Global 2. Local 3. Contextual 3.1 Global navigation system The global navigation system remains present on all pages of the site Often present as a navigation bar at the top of the page Points to the key/top-level areas and functions of the site 3.2 Local navigation system Lets the user navigate within the subdivisions of the page they are currently on Can be merged with the global navigation (appearing on mouse-over) Optional presence 3.3 Contextual navigation system Allows navigating to", "vector of the modified adjacency matrix rescaled so that each column adds up to one. 
This makes PageRank a particularly elegant metric: the eigenvector is R = [ P R ( p 1 ) P R ( p 2 ) ⋮ P R ( p N ) ] {\displaystyle \mathbf {R} ={\begin{bmatrix}PR(p_{1})\\PR(p_{2})\\\vdots \\PR(p_{N})\end{bmatrix}}} where R is the solution of the equation R = [ ( 1 − d ) / N ( 1 − d ) / N ⋮ ( 1 − d ) / N ] + d [ l ( p 1, p 1 ) l ( p 1, p 2 ) ⋯ l ( p 1, p N ) l ( p 2, p 1 ) ⋱ ⋮ ⋮ l ( p i, p j ) l ( p N, p 1 ) ⋯ l ( p N, p N ) ] R {\displaystyle \mathbf {R} ={\begin{bmatrix}{(1-d)/N}\\{(1-d)/N}\\\vdots \\{(1-d)/N}\end{bmatrix}}+d{\begin{bmatrix}\ell (p_{1},p_{1})&\ell (p_{1},p_{2})&\cdots &\ell (p_{1},p_{N})\\\ell (p_{2},p_{1})&\ddots &&\vdots \\\vdots &&\ell (p_{i},p_{j})&\\\ell (p_{N},p_{1})&\cdots &&\ell (p_{N},p_{N})\end{bmatrix}}\mathbf {R} } where the adjacency function l ( p i, p j ) {\displaystyle \ell (p_{i},p_{j})} is the ratio between the number of links outbound from page j to page i and the
[ "Is chosen from neighboring URLs that can be reached in a small number of hops", "May contain URLs not occurring in the adjacency list of the given URL", "Lists all URLs not contained in the adjacency list of given URL", "All of the above" ]
May contain URLs not occurring in the adjacency list of the given URL
1461
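The PageRank passage quoted in the contexts above (column-stochastic link matrix, damping term (1-d)/N, convergence in a few iterations) can be sketched as a power iteration. This is a minimal sketch: the 3-page toy graph and the conventional damping factor d = 0.85 are assumptions for illustration, not taken from the passage.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power iteration on the column-stochastic link matrix.

    adj[i][j] = 1 if page j links to page i (toy convention, assumed here).
    Iterates r <- (1-d)/N + d * A r until the update is below tol.
    """
    A = np.array(adj, dtype=float)
    A = A / A.sum(axis=0)          # normalize: each column sums to 1
    n = A.shape[1]
    r = np.full(n, 1.0 / n)        # uniform starting vector
    while True:
        r_next = (1 - d) / n + d * A @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Toy 3-page graph: page 0 -> 1, page 1 -> 2, page 2 -> 0 and 1.
adj = [[0, 0, 1],
       [1, 0, 1],
       [0, 1, 0]]
r = pagerank(adj)
```

Since the columns of A sum to 1 and the start vector sums to 1, every iterate sums to 1, matching the stochastic-matrix property the passage describes.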
Data being classified as unstructured or structured depends on the:
[ "Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or approximately. This structure can then be used for making comparisons, predictions, manipulations etc. Types of structured data analysis Algebraic data analysis Bayesian analysis Cluster analysis Combinatorial data analysis Formal concept analysis Functional data analysis Geometric data analysis Regression analysis Shape analysis Topological data analysis Tree structured data analysis References Further reading Carlsson, Gunnar (2009). \"Topology and data\". Bulletin of the American Mathematical Society. New Series. 46 (2): 255–308. doi:10.1090/S0273-0979-09-01249-X. James O. Ramsay; B. W. Silverman (2005). Functional data analysis. Springer. ISBN 9780387400808. Leland Wilkinson, (1992) Tree Structured Data Analysis: AID, CHAID and CART", "Structured content is information or content that is organized in a predictable way and is usually classified with metadata. XML is a common storage format, but structured content can also be stored in other standard or proprietary formats. When working in structured content, writers need to build the structure of their content as well as add the text, images, etc. They build the structure by adding elements, and there are elements for different types of content. The structure must be valid according to the standard being used, and it is often enforced by the authoring tool. This helps to ensure consistency, as writers must use the appropriate elements in a consistent way. See also Structure mining", "An important related notion is that of a succinct data structure, which uses space roughly equal to the information-theoretic minimum, which is a worst-case notion of the space needed to represent the data. 
In contrast, the size of a compressed data structure depends upon the particular data being represented. When the data are compressible, as is often the case in practice for natural language text, the compressed data structure can occupy space very close to the information-theoretic minimum, and significantly less space than most compression schemes.", "data), business metadata (or external metadata), and process metadata. NISO distinguishes three types of metadata: descriptive, structural, and administrative. Descriptive metadata is typically used for discovery and identification, as information to search and locate an object, such as title, authors, subjects, keywords, and publisher. Structural metadata describes how the components of an object are organized. An example of structural metadata would be how pages are ordered to form chapters of a book. Finally, administrative metadata gives information to help manage the source. Administrative metadata refers to the technical information, such as file type, or when and how the file was created. Two sub-types of administrative metadata are rights management metadata and preservation metadata. Rights management metadata explains intellectual property rights, while preservation metadata contains information to preserve and save a resource. Statistical data repositories have their own requirements for metadata in order to describe not only the source and quality of the data but also what statistical processes were used to create the data, which is of particular importance to the statistical community in order to both validate and improve the process of statistical data production. An additional type of metadata beginning to be more developed is accessibility metadata. 
Accessibility metadata is not a new concept to libraries; however, advances in universal design have raised its profile.: 213–214 Projects like Cloud4All and GPII identified the lack of common terminologies and models to describe the needs and preferences of users and information that fits those needs as a major gap in providing universal access solutions.: 210–211 Those types of information are accessibility metadata.: 214 Schema.org has incorporated several accessibility properties based on IMS Global Access for All Information Model Data Element Specification.: 214 The Wiki page WebSchemas/Accessibility lists several properties and their values. While the efforts to describe and standardize the varied accessibility needs of information seekers are beginning to become more robust, their adoption into established metadata schemas has not been as developed. For example, while Dublin Core (DC)'s \"audience\" and MARC 21's \"reading level\" could be used to identify resources suitable for users with dyslexia and DC'", "In computer science, uncertain data is data that contains noise that makes it deviate from the correct, intended or original values. In the age of big data, uncertainty or data veracity is one of the defining characteristics of data. Data is constantly growing in volume, variety, velocity and uncertainty (1/veracity). Uncertain data is found in abundance today on the web, in sensor networks, within enterprises both in their structured and unstructured sources. For example, there may be uncertainty regarding the address of a customer in an enterprise dataset, or the temperature readings captured by a sensor due to aging of the sensor. In 2012 IBM called out managing uncertain data at scale in its global technology outlook report that presents a comprehensive analysis looking three to ten years into the future seeking to identify significant, disruptive technologies that will change the world. 
In order to make confident business decisions based on real-world data, analyses must necessarily account for many different kinds of uncertainty present in very large amounts of data. Analyses based on uncertain data will have an effect on the quality of subsequent decisions, so the degree and types of inaccuracies in this uncertain data cannot be ignored. Uncertain data is found in the area of sensor networks; in text, where noisy text is found in abundance on social media, the web and within enterprises, where the structured and unstructured data may be old, outdated, or plain incorrect; and in modeling, where the mathematical model may only be an approximation of the actual process. When representing such data in a database, an appropriate uncertain database model needs to be selected. Example data model for uncertain data One way to represent uncertain data is through probability distributions. Let us take the example of a relational database. There are three main ways to represent uncertainty as probability distributions in such a database model. In attribute uncertainty, each uncertain attribute in a tuple is subject to its own independent probability distribution. For example, if readings are taken of temperature and wind speed, each would be described by its own probability distribution, as knowing the reading for one measurement would not provide any information about the other. In correlated uncertainty, multiple attributes may" ]
[ "Degree of abstraction", "Level of human involvement", "Type of physical storage", "Amount of data " ]
['Degree of abstraction']
1462
With negative sampling a set of negative samples is created for
[ "In statistics, non-sampling error is a catch-all term for the deviations of estimates from their true values that are not a function of the sample chosen, including various systematic errors and random errors that are not due to sampling. Non-sampling errors are much harder to quantify than sampling errors. Non-sampling errors in survey estimates can arise from: Coverage errors, such as failure to accurately represent all population units in the sample, or the inability to obtain information about all sample cases; Response errors by respondents due for example to definitional differences, misunderstandings, or deliberate misreporting; Mistakes in recording the data or coding it to standard classifications; Pseudo-opinions given by respondents when they have no opinion, but do not wish to say so; Other errors of collection, nonresponse, processing, or imputation of values for missing or inconsistent data. An excellent discussion of issues pertaining to non-sampling error can be found in several sources such as Kalton (1983) and Salant and Dillman (1995). See also Errors and residuals in statistics Sampling error", "the use of Additive smoothing in a Naïve Bayes classifier", "Strong and weak sampling are two sampling approaches in statistics, and are popular in computational cognitive science and language learning. In strong sampling, it is assumed that the data are intentionally generated as positive examples of a concept, while in weak sampling, it is assumed that the data are generated without any restrictions. 
Formal Definition In strong sampling, we assume observation is randomly sampled from the true hypothesis: P ( x | h ) = { 1 | h |, if x ∈ h 0, otherwise {\\displaystyle P(x|h)={\\begin{cases}{\\frac {1}{|h|}}&{\\text{, if }}x\\in h\\\\0&{\\text{, otherwise}}\\end{cases}}} In weak sampling, we assume observations randomly sampled and then classified: P ( x | h ) = { 1, if x ∈ h 0, otherwise {\\displaystyle P(x|h)={\\begin{cases}1&{\\text{, if }}x\\in h\\\\0&{\\text{, otherwise}}\\end{cases}}} Consequence: Posterior computation under Weak Sampling P ( h | x ) = P ( x | h ) P ( h ) ∑ h ′ P ( x | h ′ ) P ( h ′ ) = { P ( h ) ∑ h ′ : x ∈ h ′ P ( h ′ ), if x ∈ h 0, otherwise {\\displaystyle P(h|x)={\\frac {P(x|h)P(h)}{\\sum \\limits _{h'}P(x|h')P(h')}}={\\begin{cases}{\\frac {P(h)}{\\sum \\limits _{h':x\\in h'}P(h')}}&{\\text{, if }}x\\in h\\\\0&{\\text{, otherwise}}\\end{cases}}} Therefore the likelihood P ( x | h ′ ) {\\displaystyle P(x|", "therefore pose a difficult problem. However, recently, other researchers have disagreed with this argument, showing that if synthetic data accumulates alongside human-generated data, model collapse is avoided. The researchers argue that data accumulating over time is a more realistic description of reality than deleting all existing data every year, and that the real-world impact of model collapse may not be as catastrophic as feared. An alternative branch of the literature investigates the use of machine learning detectors and watermarking to identify model generated data and filter it out. Mathematical models of the phenomenon 1D Gaussian model In 2024, a first attempt has been made at illustrating collapse for the simplest possible model — a single dimensional normal distribution fit using unbiased estimators of mean and variance, computed on samples from the previous generation. 
To make this more precise, we say that original data follows a normal distribution X 0 ∼ N ( μ, σ 2 ) {\\displaystyle X^{0}\\sim {\\mathcal {N}}(\\mu,\\sigma ^{2})}, and we possess M 0 {\\displaystyle M_{0}} samples X j 0 {\\displaystyle X_{j}^{0}} for j ∈ { 1,..., M 0 } {\\displaystyle j\\in {\\{\\,1,\\dots,M_{0}\\,{}\\}}}. Denoting a general sample X j i {\\displaystyle X_{j}^{i}} as sample j ∈ { 1,..., M i } {\\displaystyle j\\in {\\{\\,1,\\dots,M_{i}\\,{}\\}}} at generation i {\\displaystyle i}, then the next generation model is estimated using the sample mean and variance: μ i + 1 = 1 M i ∑ j X j i ; σ i + 1 2 = 1 M i − 1 ∑ j ( X j i − μ i + 1 ) 2. {\\displaystyle \\mu _{i+1}={\\frac {1}{M_{i}}}\\sum _{j}X_{j}^{i", "sampling: the use of color spaces such as YIQ, used in NTSC, allow one to reduce the resolution on the components to accord with human perception – humans have highest resolution for black-and-white (luma), lower resolution for mid-spectrum colors like yellow and green, and lowest for red and blues – thus NTSC displays approximately 350 pixels of luma per scanline, 150 pixels of yellow vs. green, and 50 pixels of blue vs. red, which are proportional to human sensitivity to each component. Information loss Lossy compression formats suffer from generation loss: repeatedly compressing and decompressing the file will cause it to progressively lose quality. This is in contrast with lossless data compression, where data will not be lost via the use of such a procedure. Information-theoretical foundations for lossy data compression are provided by rate-distortion theory. Much like the use of probability in optimal coding theory, rate-distortion theory heavily draws on Bayesian estimation and decision theory in order to model perceptual distortion and even aesthetic judgment. 
There are two basic lossy compression schemes: In lossy transform codecs, samples of picture or sound are taken, chopped into small segments, transformed into a new basis space, and quantized. The resulting quantized values are then entropy coded. In lossy predictive codecs, previous and/or subsequent decoded data is used to predict the current sound sample or image frame. The error between the predicted data and the real data, together with any extra information needed to reproduce the prediction, is then quantized and coded. In some systems the two techniques are combined, with transform codecs being used to compress the error signals generated by the predictive stage. Comparison The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any lossless method, while still meeting the requirements of the application. Lossy methods are most often used for compressing sound, images or videos. This is because these types of data are intended for human interpretation where the mind can easily \"fill in the blanks\" or see past very minor errors or inconsistencies – ideally lossy compression is transparent (impercept" ]
[ "For each word of the vocabulary", "For each word-context pair", "For each occurrence of a word in the text", "For each occurrence of a word-context pair in the text" ]
For each occurrence of a word-context pair in the text
1463
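The answer above ("for each occurrence of a word-context pair in the text") can be illustrated with a stripped-down skip-gram negative-sampling sketch. Everything here is an assumption for illustration: the window size, the number of negatives k, and the uniform noise distribution (word2vec actually draws negatives from the unigram distribution raised to the 3/4 power, and filters are usually applied).

```python
import random
from collections import Counter

def negative_samples(corpus, window=2, k=2, seed=0):
    """For every (word, context) pair occurrence, draw k negative contexts.

    Simplified sketch: negatives are drawn uniformly from the vocabulary
    (no unigram^0.75 weighting, no filtering of accidental positives).
    """
    rng = random.Random(seed)
    vocab = list(Counter(corpus))          # unique words, in order of appearance
    samples = []
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            pos = (w, corpus[j])           # one positive pair occurrence
            negs = [(w, rng.choice(vocab)) for _ in range(k)]
            samples.append((pos, negs))
    return samples

corpus = "the cat sat on the mat".split()
pairs = negative_samples(corpus)
```

Note that negatives are generated per pair *occurrence*: a word-context pair that appears twice in the corpus yields two separate sets of negatives.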
Suppose you have a search engine that retrieves the top 100 documents and achieves 90% precision and 20% recall. You modify the search engine to retrieve the top 200 and mysteriously, the precision stays the same. Which one is CORRECT?
[ "versions). A pair d i {\displaystyle d_{i}} and d j {\displaystyle d_{j}} is concordant if both r a {\displaystyle r_{a}} and r b {\displaystyle r_{b}} agree in how they order d i {\displaystyle d_{i}} and d j {\displaystyle d_{j}}. It is discordant if they disagree. Information retrieval quality Information retrieval quality is usually evaluated by the following three measurements: Precision Recall Average precision For a specific query to a database, let P r e l e v a n t {\displaystyle P_{relevant}} be the set of relevant information elements in the database and P r e t r i e v e d {\displaystyle P_{retrieved}} be the set of the retrieved information elements. Then the above three measurements can be represented as follows: precision = | P relevant ∩ P retrieved | | P retrieved | ; recall = | P relevant ∩ P retrieved | | P relevant | ; average precision = ∫ 0 1 Prec ( recall ) d recall, {\displaystyle {\begin{aligned}&{\text{precision}}={\frac {\left|P_{\text{relevant}}\cap P_{\text{retrieved}}\right|}{\left|P_{\text{retrieved}}\right|}};\\[6pt]&{\text{recall}}={\frac {\left|P_{\text{relevant}}\cap P_{\text{retrieved}}\right|}{\left|P_{\text{relevant}}\right|}};\\[6pt]&{\text{average precision}}=\int _{0}^{1}{\text{Prec}}({\text{recall}})\,d{\text{recall}},\end{aligned}}} where Prec ( Recall ) {\displaystyle {\text{Prec}}({\text{Recall}})} is the Precision {\displaystyle {\text{Precision}}}
Conversely, if there are few queries that retrieve the document, or when the document is retrieved the documents are not high enough in the ranked list, then the document has low retrievability. Retrievability can be considered as one aspect of findability. Applications of retrievability include detecting search engine bias, measuring algorithmic bias, evaluating the influence of search technology, tuning information retrieval systems and evaluating the quality of documents in a collection. See also Information retrieval Knowledge mining Search engine optimization Findability References Azzopardi, L. & Vinay, V. (2008). \"Retrievability: an evaluation measure for higher order information access tasks\". Proceedings of the 17th ACM conference on Information and knowledge management. CIKM '08. Napa Valley, California, USA: ACM. pp. 561–570. doi:10.1145/1458082.1458157. ISBN 9781595939913. S2CID 8705350. Azzopardi, L. & Vinay, V. (2008). \"Accessibility in information retrieval\". Proceedings of the IR research, 30th European conference on Advances in information retrieval. ECIR '08. Glasgow, UK: Springer. pp. 482–489. ISBN 9783540786450. Retrieved 7 Dec 2016.", "-2000s (decade). Practical usage by search engines Commercial web search engines began using machine-learned ranking systems since the 2000s (decade). One of the first search engines to start using it was AltaVista (later its technology was acquired by Overture, and then Yahoo), which launched a gradient boosting-trained ranking function in April 2003. Bing's search is said to be powered by RankNet algorithm, which was invented at Microsoft Research in 2005. In November 2009 a Russian search engine Yandex announced that it had significantly increased its search quality due to deployment of a new proprietary MatrixNet algorithm, a variant of gradient boosting method which uses oblivious decision trees. 
Recently they have also sponsored a machine-learned ranking competition \"Internet Mathematics 2009\" based on their own search engine's production data. Yahoo has announced a similar competition in 2010. As of 2008, Google's Peter Norvig denied that their search engine exclusively relies on machine-learned ranking. Cuil's CEO, Tom Costello, suggests that they prefer hand-built models because they can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, which is because machine-learned models \"learn what people say they like, not what people actually like\". In January 2017, the technology was included in the open source search engine Apache Solr. It is also available in the open source OpenSearch and Elasticsearch. These implementations make learning to rank widely accessible for enterprise search. Vulnerabilities Similar to recognition applications in computer vision, recent neural network based ranking algorithms are also found to be susceptible to covert adversarial attacks, both on the candidates and the queries. With small perturbations imperceptible to human beings, ranking order could be arbitrarily altered. In addition, model-agnostic transferable adversarial examples are found to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations. Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense. See also Content-based image retrieval Multimedia information retrieval Image retrieval Triplet loss References Extern", "of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. 
These metrics take into account the probabilistic nature of the classifier's output and provide a more comprehensive assessment of its effectiveness in assigning accurate probabilities to different classes. These evaluation metrics aim to capture the degree of calibration, discrimination, and overall accuracy of the probabilistic classifier's predictions. In information systems Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used, is defined as the fraction of documents correctly classified compared to the documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. 
The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important. See also Popula", "Performance measures (lecture slides, Computational Linguistics Course, EPFL MsCS, Information Retrieval; © EPFL, Jean-Cédric Chappelier, Emmanuel Eckard). Recall: Rec(q) = |R(q) ∩ S(q)| / |R(q)|, where R(q) is the set of documents relevant to query q and S(q) the set of retrieved documents. (Figure: Venn diagram of the collection showing the relevant and retrieved sets.) Precision at n documents: Pr_n(q) = |R(q) ∩ S_n(q)| / |S_n(q)|, with S_n(q) the first n documents retrieved. R-Precision: the precision obtained after retrieving as many documents as there are relevant documents, averaged over queries: R-Precision = (1/N) Σ_i Pr_{|R(q_i)|}(q_i). Average Precision: the average of the precision values Pr_{rk(d,q)}(q) at the ranks rk(d, q) at which the relevant documents are retrieved: AvgP(q) = (1/|R(q)|) Σ_{d ∈ R(q)} Pr_{rk(d,q)}(q). Mean Average Precision: the mean over queries of the Average Precisions, MAP = (1/N) Σ_i AvgP(q_i); MAP measures the tendency of the system to retrieve relevant documents first. Plotting average precision against recall for different models (DSIR, hybrid, vector space, for various values of alpha): the aim of the game is to push the precision/recall curve towards the upper right corner. Probabilistic model. Idea: the best possible ranking returns documents sorted by probability of being relevant to a given query; for instance the Spärck Jones model: estimate the probability P(d_i ∈ R(q) | d_i, q) that a given document d_i is relevant to a given query q (here R is a boolean variable standing for d_i ∈ R(q)); invert the probability, and write P(d_i ∈ R(q) | d_i, q) as a function of the probabilities of occurrence of the terms, P(t_i | d, R(q)), assuming that the terms are conditionally independent. Comput
[ "The recall becomes 10%", "The number of relevant documents is 450", "The F-score stays the same", "This is not possible" ]
['The number of relevant documents is 450']
1465
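The arithmetic behind the answer above can be checked directly: 90% precision on the top 100 means 90 relevant documents were retrieved, and 20% recall then implies 90 / 0.2 = 450 relevant documents in the collection. The function and variable names below are just for illustration.

```python
def total_relevant(k, precision, recall):
    """Number of relevant documents in the collection implied by
    precision and recall measured at cutoff k."""
    retrieved_relevant = precision * k    # relevant docs among the top k
    return retrieved_relevant / recall    # recall = retrieved_relevant / total

n_rel = total_relevant(k=100, precision=0.9, recall=0.2)   # 90 / 0.2 = 450

# At k=200 with precision unchanged: 0.9 * 200 = 180 relevant retrieved,
# so recall rises to 180 / 450 = 0.4. The consistent statement is
# therefore "the number of relevant documents is 450".
new_recall = (0.9 * 200) / n_rel
```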
In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed:
[ "P4 metric (also known as FS or Symmetric F ) enables performance evaluation of the binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in similar way to F1 metric, however addressing the criticisms leveled against F1. It may be perceived as its extension. Like the other known metrics, P4 is a function of: TP (true positives), TN (true negatives), FP (false positives), FN (false negatives). Justification The key concept of P4 is to leverage the four key conditional probabilities: P ( + ∣ C + ) {\\displaystyle P(+\\mid C{+})} - the probability that the sample is positive, provided the classifier result was positive. P ( C + ∣ + ) {\\displaystyle P(C{+}\\mid +)} - the probability that the classifier result will be positive, provided the sample is positive. P ( C − ∣ − ) {\\displaystyle P(C{-}\\mid -)} - the probability that the classifier result will be negative, provided the sample is negative. P ( − ∣ C − ) {\\displaystyle P(-\\mid C{-})} - the probability the sample is negative, provided the classifier result was negative. The main assumption behind this metric is, that a properly designed binary classifier should give the results for which all the probabilities mentioned above are close to 1. P4 is designed the way that P 4 = 1 {\\displaystyle \\mathrm {P} _{4}=1} requires all the probabilities being equal 1. It also goes to zero when any of these probabilities go to zero. Definition P4 is defined as a harmonic mean of four key conditional probabilities: P 4 = 4 1 P ( + ∣ C + ) + 1 P ( C + ∣ + ) + 1 P ( C − ∣ − ) + 1 P ( − ∣ C − ) = 4 1 p r e c i s i o n + 1 r e c a l l + 1 s p e c i f i c i t y + 1 N P V {\\displaystyle \\mathrm {P} _{", "also goes to zero when any of these probabilities go to zero. 
Definition P4 is defined as a harmonic mean of four key conditional probabilities: P 4 = 4 1 P ( + ∣ C + ) + 1 P ( C + ∣ + ) + 1 P ( C − ∣ − ) + 1 P ( − ∣ C − ) = 4 1 p r e c i s i o n + 1 r e c a l l + 1 s p e c i f i c i t y + 1 N P V {\\displaystyle \\mathrm {P} _{4}={\\frac {4}{{\\frac {1}{P(+\\mid C{+})}}+{\\frac {1}{P(C{+}\\mid +)}}+{\\frac {1}{P(C{-}\\mid -)}}+{\\frac {1}{P(-\\mid C{-})}}}}={\\frac {4}{{\\frac {1}{\\mathit {precision}}}+{\\frac {1}{\\mathit {recall}}}+{\\frac {1}{\\mathit {specificity}}}+{\\frac {1}{\\mathit {NPV}}}}}} In terms of TP,TN,FP,FN it can be calculated as follows: P 4 = 4 ⋅ T P ⋅ T N 4 ⋅ T P ⋅ T N + ( T P + T N ) ⋅ ( F P + F N ) {\\displaystyle \\mathrm {P} _{4}={\\frac {4\\cdot \\mathrm {TP} \\cdot \\mathrm {TN} }{4\\cdot \\mathrm {TP} \\cdot \\mathrm {TN} +(\\mathrm {TP} +\\mathrm {TN} )\\cdot (\\mathrm {FP} +\\mathrm {FN} )}}} Evaluation of the binary classifier performance Evaluating the performance of binary classifier is a multidisciplinary concept. It spans from the evaluation of medical tests, psychiatric tests to machine learning classifiers from a variety of fields. Thus, many metrics in use", "x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2 {\\displaystyle x=2}, so powers of 2 are repeatedly factored out. Example For example, to find the product of two numbers (0.15625) and m: ( 0.15625 ) m = ( 0.00101 b ) m = ( 2 − 3 + 2 − 5 ) m = ( 2 − 3 ) m + ( 2 − 5 ) m = 2 − 3 ( m + ( 2 − 2 ) m ) = 2 − 3 ( m + 2 − 2 ( m ) ). {\\displaystyle {\\begin{aligned}(0.15625)m&=(0.00101_{b})m=\\left(2^{-3}+2^{-5}\\right)m=\\left(2^{-3})m+(2^{-5}\\right)m\\\\&=2^{-3}\\left(m+\\left(2^{-2}\\right)m\\right)=2^{-3}\\left(m+2^{-2}(m)\\right).\\end{aligned}}} Method To find the product of two binary numbers d and m: A register holding the intermediate result is initialized to d. Begin with the least significant (rightmost) non-zero bit in m. 
If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m. Derivation In general, for a binary number with bit values ( d 3 d 2 d 1 d 0 {\\displaystyle d_{3}d_{2}d_{1}d_{0}} ) the product is ( d 3 2 3 + d 2 2 2 + d 1 2 1 + d 0 2 0 ) m = d 3 2 3 m + d 2 2 2 m + d 1 2 1 m + d 0 2 0 m. {\\displaystyle (d_{3}2^{3}+d_{2}2^{2}+d_{1}2^{1}+d_{0}2^{0})m=d_{3}", "{TN} }{4\\cdot \\mathrm {TP} \\cdot \\mathrm {TN} +(\\mathrm {TP} +\\mathrm {TN} )\\cdot (\\mathrm {FP} +\\mathrm {FN} )}}} Evaluation of the binary classifier performance Evaluating the performance of binary classifier is a multidisciplinary concept. It spans from the evaluation of medical tests, psychiatric tests to machine learning classifiers from a variety of fields. Thus, many metrics in use exist under several names. Some of them being defined independently. Properties of P4 metric Symmetry - contrasting to the F1 metric, P4 is symmetrical. It means - it does not change its value when dataset labeling is changed - positives named negatives and negatives named positives. Range: P 4 ∈ [ 0, 1 ] {\\displaystyle \\mathrm {P} _{4}\\in [0,1]} Achieving P 4 ≈ 1 {\\displaystyle \\mathrm {P} _{4}\\approx 1} requires all the key four conditional probabilities being close to 1. For P 4 ≈ 0 {\\displaystyle \\mathrm {P} _{4}\\approx 0} it is sufficient that one of the key four conditional probabilities is close to 0. Examples, comparing with the other metrics Dependency table for selected metrics (\"true\" means depends, \"false\" - does not depend): Metrics that do not depend on a given probability are prone to misrepresentation when it approaches 0. Example 1: Rare disease detection test Let us consider the medical test aimed to detect kind of rare disease. Population size is 100 000, while 0.05% population is infected. 
Test performance: 95% of all positive individuals are classified correctly (TPR=0.95) and 95% of all negative individuals are classified correctly (TNR=0.95). In such a case, due to high population imbalance, in spite of having high test accuracy (0.95), the probability that an individual who has been classified as positive is in fact positive is very low: P ( + ∣ C +", "≈ ( m M 12 ) 2 ⟨ Δ l 2 ⟩ / G M 12 a ≈ m M 12 G ρ a σ {\\displaystyle \\langle \\Delta \\xi ^{2}\\rangle =\\langle \\Delta l_{\\rm {bin}}^{2}\\rangle /l_{\\rm {bin}}^{2}\\approx \\left({m \\over M_{12}}\\right)^{2}\\langle \\Delta l^{2}\\rangle /GM_{12}a\\approx {m \\over M_{12}}{G\\rho a \\over \\sigma }} where ρ = mn is the mass density of field stars. Let F(θ,t) be the probability that the rotation axis of the binary is oriented at angle θ at time t. The evolution equation for F is ∂ F ∂ t = 1 sin ⁡ θ ∂ ∂ θ ( sin ⁡ θ ⟨ Δ ξ 2 ⟩ 4 ∂ F ∂ θ ). {\\displaystyle {\\partial F \\over \\partial t}={1 \\over \\sin \\theta }{\\partial \\over \\partial \\theta }\\left(\\sin \\theta {\\langle \\Delta \\xi ^{2}\\rangle \\over 4}{\\partial F \\over \\partial \\theta }\\right).} If <Δξ2>, a, ρ and σ are constant in time, this becomes ∂ F ∂ τ = 1 2 ∂ ∂ μ [ ( 1 − μ 2 ) ∂ F ∂ μ ] {\\displaystyle {\\partial F \\over \\partial \\tau }={1 \\over 2}{\\partial \\over \\partial \\mu }\\left[(1-\\mu ^{2}){\\partial F \\over \\partial \\mu }\\right]} where μ = cos θ and τ is the time in units of the relaxation time trel, where t r e l " ]
[ "That the class label depends on the feature", "That the class label is independent of the feature", "That the class label correlates with the feature", "None of the above" ]
That the class label is independent of the feature
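The selected answer describes a null hypothesis of independence between class label and feature, as tested by a χ² independence test. A small self-contained sketch of the χ² statistic on a hypothetical 2×2 contingency table (all counts are made up for illustration; rows are feature present/absent, columns are the two class labels):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table of counts."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row_total in enumerate(rows):
        for j, col_total in enumerate(cols):
            expected = row_total * col_total / total   # under independence
            observed = table[i][j]
            stat += (observed - expected) ** 2 / expected
    return stat

# Label proportions identical in both feature groups -> independent
print(chi_square_statistic([[30, 20], [30, 20]]))  # 0.0
# Label proportions differ across feature groups -> large statistic
print(round(chi_square_statistic([[40, 10], [20, 30]]), 2))  # 16.67
```

Under the null hypothesis the observed counts match the expected ones, so the statistic is 0; the larger the statistic, the stronger the evidence against independence.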
1467
Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents?
[ "genomes of unculturable bacteria) based on a model of already labeled data. Hidden Markov models Hidden Markov models (HMMs) are a class of statistical models for sequential data (often related to systems evolving over time). An HMM is composed of two mathematical objects: an observed state‐dependent process X 1, X 2,..., X M {\\displaystyle X_{1},X_{2},\\ldots,X_{M}}, and an unobserved (hidden) state process S 1, S 2,..., S T {\\displaystyle S_{1},S_{2},\\ldots,S_{T}}. In an HMM, the state process is not directly observed – it is a 'hidden' (or 'latent') variable – but observations are made of a state‐dependent process (or observation process) that is driven by the underlying state process (and which can thus be regarded as a noisy measurement of the system states of interest). HMMs can be formulated in continuous time. HMMs can be used to profile and convert a multiple sequence alignment into a position-specific scoring system suitable for searching databases for homologous sequences remotely. Additionally, ecological phenomena can be described by HMMs. Convolutional neural networks Convolutional neural networks (CNN) are a class of deep neural network whose architecture is based on shared weights of convolution kernels or filters that slide along input features, providing translation-equivariant responses known as feature maps. CNNs take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using smaller and simpler patterns discovered via their filters. Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. 
CNN uses relatively little pre-processing compared to", "Cryptanalysis Speech recognition, including Siri Speech synthesis Part-of-speech tagging Document separation in scanning solutions Machine translation Partial discharge Gene prediction Handwriting recognition Alignment of bio-sequences Time series analysis Activity recognition Protein folding Sequence classification Metamorphic virus detection Sequence motif discovery (DNA and proteins) DNA hybridization kinetics Chromatin state discovery Transportation forecasting Solar irradiance variability History Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammar. In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics. Extensions General state spaces In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. 
In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter. Nowadays, inference in hidden Markov models is performed in nonparametric settings, where the dependency structure enables identifiability of the model and the learnability limits are still under exploration. Bayesian modeling of the transitions probabilities Hidden Markov models are generative models, in which the joint distribution of observations and hidden states", "1.2, they are used in all state-of-the-art continuous speech recognition systems to represent sta- tistical grammars [11], usually referred to as N-grams, and estimating the probability of a sequence of L words P(W L 1 ) ≈ L Y l=N+1 P(wl|wl-1 l-N) which is equivalent to assuming that possible word sequences can be modelled by a Markov model of order N. 1.4.3 Hidden Markov models (HMM) In many sequential pattern processing/classification problems (such as speech recognition and cursive handwriting recognition), one of the greatest difficulties is to simultaneously model the inherent statistical variations in sequential rates and feature characteristics. In this respect, Hidden Markov Models (HMMs) have been one of the most successful approaches used so far. As presented in Table 1, an HMM is a particular form of SFSA where Markov models (modelling the sequential properties of the data) are complemented by a second stochastic process modelling the local properties of the data. The HMM is called“hidden”because there is an underlying stochastic process (i.e., the sequence of states) that is not observable, but affects the observed sequence of events. 
Although sequential signals, such as speech and handwriting, are non-stationary processes, HMMs assume that the sequence of observation vectors is a piecewise stationary process. That is, a sequence X = x_1^N is modelled as a sequence of discrete stationary states Q = {q_1, ..., q_k, ..., q_K}, with instantaneous transitions between these states. In this case, an HMM is defined as a stochastic finite state automaton with a particular (generally strictly left-to-right for speech data) topology. An example of a simple HMM is given in Figure 1.2. In speech recognition, this could be the model of a word or phoneme which is assumed to be composed of three stationary parts. In cursive handwriting recognition, this could be the model of a letter. Once the topology of the HMM has been defined (usually “ar
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics. Definition Let X n {\\displaystyle X_{n}} and Y n {\\displaystyle Y_{n}} be discrete-time stochastic processes and n ≥ 1 {\\displaystyle n\\geq 1}. The pair ( X n, Y n ) {\\displaystyle (X_{n},Y_{n})} is a hidden Markov", "4105T. doi:10.1088/0953-8984/22/41/414105. PMID 21386588. S2CID 103345. A Revealing Introduction to Hidden Markov Models by Mark Stamp, San Jose State University. Fitting HMM's with expectation-maximization – complete derivation A step-by-step tutorial on HMMs Archived 2017-08-13 at the Wayback Machine (University of Leeds) Hidden Markov Models (an exposition using basic mathematics) Hidden Markov Models (by Narada Warakagoda) Hidden Markov Models: Fundamentals and Applications Part 1, Part 2 (by V. Petrushin) Lecture on a Spreadsheet by Jason Eisner, Video and interactive spreadsheet" ]
[ "The cost of learning the model is quadratic in the length of the text.", "The cost of predicting a word is linear in the length of the text preceding the word.", "An HMM model can be built using words enhanced with morphological features as input.", "The label of one word is predicted based on all the previous labels" ]
An HMM model can be built using words enhanced with morphological features as input.
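Consistent with the correct option, the observations of an HMM tagger are whatever symbols we choose, so words can be enhanced with morphological features (capitalization, suffix, etc.), while the Markov assumption makes each label depend only on the previous label. A toy Viterbi decoding sketch; the label set and all probabilities are invented for illustration:

```python
import math

# Hypothetical entity tagger: states are labels, observations are words.
# Observations could equally be tuples (word, capitalized?, suffix, ...),
# i.e. words enhanced with morphological features.
STATES = ["O", "ENT"]
START = {"O": 0.8, "ENT": 0.2}
TRANS = {"O": {"O": 0.8, "ENT": 0.2}, "ENT": {"O": 0.6, "ENT": 0.4}}
EMIT = {
    "O":   {"google": 0.05, "acquired": 0.30, "youtube": 0.05, "the": 0.40, "deal": 0.20},
    "ENT": {"google": 0.50, "acquired": 0.05, "youtube": 0.35, "the": 0.05, "deal": 0.05},
}

def viterbi(words):
    """Most likely label sequence; cost is linear in len(words)."""
    col = {s: math.log(START[s] * EMIT[s][words[0]]) for s in STATES}
    back = []
    for w in words[1:]:
        nxt, ptr = {}, {}
        for s in STATES:
            # Markov assumption: each label depends only on the previous one.
            prev = max(STATES, key=lambda p: col[p] + math.log(TRANS[p][s]))
            nxt[s] = col[prev] + math.log(TRANS[prev][s] * EMIT[s][w])
            ptr[s] = prev
        back.append(ptr)
        col = nxt
    best = max(STATES, key=col.get)
    path = [best]
    for p in reversed(back):
        path.append(p[path[-1]])
    return path[::-1]

print(viterbi(["google", "acquired", "youtube"]))  # ['ENT', 'O', 'ENT']
```

Each word adds one dynamic-programming column of fixed size, so decoding a text of length N costs O(N · |states|²), linear in N.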
1468
10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has:
[ "|A ∪ B| = |A| + |B| as long as A and B are disjoint sets, or more generally |A1 ∪ A2 ∪ ⋯ ∪ Am| = |A1| + |A2| + ⋯ + |Am| when Ai ∩ Aj = ∅ for all i ≠ j. The case where the sets have elements in common is different. Combining the Sum and Product Rule Example: Suppose a variable name in a programming language can be either a single letter or a letter followed by a digit. Find the number of possible names. Use the product rule. Counting Passwords: Each user on a computer system has a password, which is six to eight characters long, where each character is an uppercase letter or a digit. Each password must contain at least one digit. How many possible passwords are there? Let P be the total number of passwords, and let P6, P7, and P8 be the numbers of passwords of length 6, 7, and 8. By the sum rule, P = P6 + P7 + P8. To find each of P6, P7, and P8, we find the number of passwords of the specified length composed of letters and digits and subtract the number composed only of letters: P6 = 36^6 - 26^6, P7 = 36^7 - 26^7, P8 = 36^8 - 26^8. Consequently, P = P6 + P7 + P8. Basic Counting Principles: Subtraction Rule If a task can be done either in one of n1 ways or in one of n2 ways, then the total number of ways to do the task is n1 + n2 minus the number of ways to do the task that are common to the two different ways. Also known as the principle of inclusion-exclusion. Counting Bit Strings: How many bit strings of length eight either start with a 1 bit or end with the two bits 00? Use the principle of inclusion-exclusion: 2^7 = 128 bit strings of length eight start with a 1 bit, 2^6 = 64 end with the bits 00, and 2^5 = 32 start with a 1 bit and end with the bits 00. Hence the number is 128 + 64 - 32 = 160. Summary: Sum Rule, Subtraction Rule, applications to counting strings. The Pigeonhole Principle Section Video: The Pigeonhole Principle, The Generalized Pigeonhole Principle. If a flock of pigeons lives in a set of pigeonholes with more pigeons than pigeonholes, one of the pigeonholes must have more than one pigeon. Pigeonhole Principle: If k is a positive integer and k + 1 objects are placed into k boxes, then at least one 
box contains two or more objects. Proof: We use a proof by contraposition. Suppose none of the k boxes has more than one object. Then the total number of objects would be at most", "⌊1000/7⌋ + ⌊1000/11⌋ - ⌊1000/77⌋ = 142 + 90 - 12 = 220 integers less than or equal to 1000 are divisible by 7 or 11. Three Finite Sets Example: How many positive integers less than or equal to 1000 are divisible by 5, 7, or 11? ⌊1000/5⌋ + ⌊1000/7⌋ + ⌊1000/11⌋ - ⌊1000/35⌋ - ⌊1000/55⌋ - ⌊1000/77⌋ + ⌊1000/385⌋ = 200 + 142 + 90 - 28 - 18 - 12 + 2 = 376 integers less than or equal to 1000 are divisible by 5, 7, or 11. The Principle of Inclusion-Exclusion Theorem 1. The Principle of Inclusion-Exclusion: Let A1, A2, ..., An be finite sets. Then |A1 ∪ A2 ∪ ⋯ ∪ An| = Σ|Ai| - Σ|Ai ∩ Aj| + Σ|Ai ∩ Aj ∩ Ak| - ⋯ + (-1)^{n+1} |A1 ∩ A2 ∩ ⋯ ∩ An|. Proof: Consider an element a that is a member of r of the sets A1, ..., An, where 1 ≤ r ≤ n. • It is counted C(r, 1) times by Σ|Ai| • It is counted C(r, 2) times by Σ|Ai ∩ Aj| • In general, it is counted C(r, m) times by the summation over intersections of m of the sets Ai. Thus the element is counted exactly C(r, 1) - C(r, 2) + C(r, 3) - ⋯ + (-1)^{r+1} C(r, r) times by the right-hand side of the equation. Using the binomial theorem, Σ_{k=0}^{r} C(r, k)(-1)^k = (1 + (-1))^r = 0, we obtain C(r, 0) - C(r, 1) + C(r, 2) - ⋯ + (-1)^r C(r, r) = 0. Hence, 1 = C(r, 0) = C(r, 1) - C(r, 2) + ⋯ + (-1)^{r+1} C(r, r). 
Summary • Principle of Inclusion-Exclusion for 2 sets • Principle of Inclusion-Exclusion for 3 sets • Principle of Inclusion-Exclusion for n sets Applications of Inclusion-Exclusion Section 8.6 Video 72: Applications of Inclusion-Exclusion • Onto", "value of f(a1), n ways to choose the value of f(a2), etc. The product rule tells us that there are n · n · ⋯ · n = n^m such functions. Counting Subsets of a Finite Set: Use the product rule to show that the number of different subsets of a finite set S is 2^|S|. Proof: When the elements of S are listed in an arbitrary order, there is a one-to-one correspondence between subsets of S and bit strings of length |S|. When the ith element is in the subset, the bit string has a 1 in the ith position and a 0 otherwise. By the product rule there are 2^|S| such bit strings, and therefore 2^|S| subsets. Counting Cartesian Products: If A1, A2, ..., Am are finite sets, then the number of elements in the Cartesian product of these sets is the product of the numbers of elements of each set. Proof: The task of choosing an element in the Cartesian product A1 × A2 × ⋯ × Am is done by choosing an element in A1, then an element in A2, and finally an element in Am. By the product rule it follows that |A1 × A2 × ⋯ × Am| = |A1| · |A2| · ⋯ · |Am|. Summary: Product Rule, applications of the Product Rule, counting functions, counting subsets, counting tuples. Video: The Sum Rule, Subtraction Rule. Basic Counting Principles: The Sum Rule Assume there are two tasks A and B. There are n1 ways to do A and n2 ways to do B, and none of the set of n1 ways is the same as any of the set of n2 ways. Then there are n1 + n2 ways to do task A or B. Example: A student can choose a semester project from one of three laboratories. The three laboratories each offer some number of possible projects, and no project is offered by several laboratories. How many possible projects are there to choose from? By the sum rule, the total is the sum of the three counts. The Sum Rule in Terms of Sets: The sum rule can be phrased as |A ∪ B| = |A| + |B| as long as A and B are disjoint sets, or more generally |A1 ∪ A2 ∪ ⋯ ∪ Am| = |A1| + |A2| + ⋯ + |Am| when Ai ∩ Aj = ∅ for all i ≠ j. The case 
where the sets have elements in common is different. Combining the Sum and Product Rule Example: Suppose a variable name in a programming language can be either a single letter or a letter followed by a digit. Find the number of possible names. Use the product rule. Counting Passwords: Each user on a computer system has a password, which is six to eight characters long, where each character is an uppercase letter or a digit. Each password must contain at least one digit. How", ", out of which two are sparse, and the other is small. We call these terms a, b {\\displaystyle a,b} and c {\\displaystyle c} respectively. Now, if we normalize each term by summing over all the topics, we get: A = ∑ k = 1 K α β C k ¬ n + V β {\\displaystyle A=\\sum _{k=1}^{K}{\\frac {\\alpha \\beta }{C_{k}^{\\neg n}+V\\beta }}} B = ∑ k = 1 K C k d β C k ¬ n + V β {\\displaystyle B=\\sum _{k=1}^{K}{\\frac {C_{k}^{d}\\beta }{C_{k}^{\\neg n}+V\\beta }}} C = ∑ k = 1 K C k w ( α + C k d ) C k ¬ n + V β {\\displaystyle C=\\sum _{k=1}^{K}{\\frac {C_{k}^{w}(\\alpha +C_{k}^{d})}{C_{k}^{\\neg n}+V\\beta }}} Here, we can see that B {\\displaystyle B} is a summation of the topics that appear in document d {\\displaystyle d}, and C {\\displaystyle C} is also a sparse summation of the topics that a word w {\\displaystyle w} is assigned to across the whole corpus. A {\\displaystyle A}, on the other hand, is dense, but because of the small values of α {\\displaystyle \\alpha } and β {\\displaystyle \\beta }, the value is very small compared to the two other terms. Now, while sampling a topic, if we sample a random variable uniformly from s ∼ U ( s ∣ A + B + C ) {\\displaystyle s\\sim U(s\\mid A+B+C)}, we can check which bucket our sample lands in. Since A {\\displaystyle A} is small, we are very unlikely to fall into this bucket; however, if", "ple How many positive integers less than or equal to 1000 are divisible by 5, 7, or 11? 
⌊1000/5⌋ + ⌊1000/7⌋ + ⌊1000/11⌋ - ⌊1000/35⌋ - ⌊1000/55⌋ - ⌊1000/77⌋ + ⌊1000/385⌋ = 200 + 142 + 90 - 28 - 18 - 12 + 2 = 376 integers less than or equal to 1000 are divisible by 5, 7, or 11. The Principle of Inclusion-Exclusion Theorem 1. The Principle of Inclusion-Exclusion: Let A1, A2, ..., An be finite sets. Then |A1 ∪ A2 ∪ ⋯ ∪ An| = Σ|Ai| - Σ|Ai ∩ Aj| + Σ|Ai ∩ Aj ∩ Ak| - ⋯ + (-1)^{n+1} |A1 ∩ A2 ∩ ⋯ ∩ An|. Proof: Consider an element a that is a member of r of the sets A1, ..., An, where 1 ≤ r ≤ n. • It is counted C(r, 1) times by Σ|Ai| • It is counted C(r, 2) times by Σ|Ai ∩ Aj| • In general, it is counted C(r, m) times by the summation over intersections of m of the sets Ai. Thus the element is counted exactly C(r, 1) - C(r, 2) + C(r, 3) - ⋯ + (-1)^{r+1} C(r, r) times by the right-hand side of the equation. Using the binomial theorem, Σ_{k=0}^{r} C(r, k)(-1)^k = (1 + (-1))^r = 0, we obtain C(r, 0) - C(r, 1) + C(r, 2) - ⋯ + (-1)^r C(r, r) = 0. Hence 1 = C(r, 0) = C(r, 1) - C(r, 2) + ⋯ + (-1)^{r+1} C(r, r). Summary: Principle of Inclusion-Exclusion for 2 sets, for 3 sets, for n sets. Applications of Inclusion-Exclusion Section Video: Applications of Inclusion-Exclusion. Onto Functions, Derangements. Example: The Number of Onto Functions How many onto (surjective) functions are there from a set with m elements to a set with three elements? Suppose that the elements in the codomain are b1, b2, and b3. Let P1, P2, and P3 be the properties that b1, b2, and b3 are not in the range of the function, respectively. The function is onto if none of the properties P1, P2, and P3 hold. Let N be the total number of functions from a set with m elements to one with three elements. By the inclusion-exclu" ]
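The divisibility counts in the surrounding context can be verified against the inclusion-exclusion formula by brute force. A minimal sketch:

```python
def count_divisible(n, divisors):
    """Brute-force count of integers in 1..n divisible by any of the divisors."""
    return sum(1 for k in range(1, n + 1) if any(k % d == 0 for d in divisors))

# Two sets: divisible by 7 or 11 (inclusion-exclusion: 142 + 90 - 12 = 220)
print(count_divisible(1000, [7, 11]))      # 220

# Three sets: divisible by 5, 7, or 11
# (200 + 142 + 90 - 28 - 18 - 12 + 2 = 376)
print(count_divisible(1000, [5, 7, 11]))   # 376
```

The brute-force counts agree with the alternating floor-division sums, confirming that each multiple counted twice by the pairwise terms is subtracted back exactly once.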
[ "5% support and 10% confidence", "10% support and 50% confidence", "5% support and 50% confidence", "10% support and 10% confidence" ]
5% support and 50% confidence
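The arithmetic behind the chosen answer: support(A → B) is the fraction of all itemsets containing both A and B, while confidence(A → B) is the fraction of A-containing itemsets that also contain B. A minimal sketch (function name is illustrative):

```python
def rule_metrics(total, count_a, count_ab):
    """Support and confidence for an association rule A -> B."""
    support = count_ab / total        # fraction of all itemsets with both A and B
    confidence = count_ab / count_a   # fraction of A-itemsets that also contain B
    return support, confidence

# 100 itemsets, 10 contain A, 5 of those also contain B
support, confidence = rule_metrics(100, 10, 5)
print(f"support={support:.0%}, confidence={confidence:.0%}")  # support=5%, confidence=50%
```

So 5/100 = 5% support and 5/10 = 50% confidence, matching the answer.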