In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more
"no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory, rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
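The two-pass behavior can be illustrated with a toy assembler in Python for a hypothetical machine in which every statement occupies one word; pass 1 records label addresses so that pass 2 can resolve both the backward reference BKWD and the forward reference FWD:

```python
# Toy two-pass assembler: pass 1 collects label addresses, pass 2
# generates (address, opcode, resolved target) tuples. A one-pass
# assembler would know BKWD at statement S2 but not FWD at S1.

def assemble(lines):
    symbols, address = {}, 0
    for line in lines:                      # pass 1: collect label addresses
        if ":" in line:
            symbols[line.partition(":")[0].strip()] = address
        address += 1                        # every statement occupies one word
    code = []
    for addr, line in enumerate(lines):     # pass 2: generate code
        stmt = line.partition(":")[2] if ":" in line else line
        op, _, operand = stmt.strip().partition(" ")
        code.append((addr, op, symbols.get(operand.strip())))
    return symbols, code

program = [
    "BKWD: NOP",
    "S2:   B BKWD",     # backward reference: resolvable even in one pass
    "S1:   B FWD",      # forward reference: needs the address from pass 1
    "FWD:  NOP",
]
symbols, code = assemble(program)
```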
More sophisticated high-level assemblers provide language abstractions such as:
See Language design below for more details.
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements, comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
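The encoding described above can be reproduced with a short Python sketch. The 3-bit register numbers are the standard x86 codes; the function name is ours:

```python
# "Load immediate into 8-bit register": opcode bits 10110 plus a 3-bit
# register number, followed by the immediate byte.

REG8 = {"AL": 0b000, "CL": 0b001, "DL": 0b010, "BL": 0b011,
        "AH": 0b100, "CH": 0b101, "DH": 0b110, "BH": 0b111}

def mov_reg8_imm8(reg, value):
    return bytes([0b10110_000 | REG8[reg], value])

code = mov_reg8_imm8("AL", 0x61)   # MOV AL, 61h  ->  B0 61
```

The same table shows why 10110001 targets CL and 10110010 targets DL, as discussed further below.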
In some assembly languages the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data, depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded to specify that both operands are registers, the source is AH, and the destination is AL.
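That byte layout can be checked with a small Python sketch; the field names mod, reg and r/m are the standard x86 terms, and the helper is hypothetical:

```python
# Decoding the ModRM byte E0h of "88 E0" (MOV AL, AH): mod = 11 means
# both operands are registers; for opcode 88h, reg is the source and
# r/m is the destination.

REG8_NAMES = ["AL", "CL", "DL", "BL", "AH", "CH", "DH", "BH"]

def decode_modrm(byte):
    mod = byte >> 6            # top two bits: addressing mode
    reg = (byte >> 3) & 0b111  # middle three bits: source register
    rm = byte & 0b111          # low three bits: destination register
    return mod, REG8_NAMES[reg], REG8_NAMES[rm]

mod, src, dst = decode_modrm(0xE0)   # -> mod 11, source AH, destination AL
```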
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant, so only the 88 instruction can be applicable.
Assembly languages are always designed so that such ambiguity is universally prevented by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH.
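A minimal Python sketch of this disambiguation rule; the token grammar is deliberately simplified (real assemblers accept more operand forms):

```python
# Classify an operand token: a hexadecimal constant must start with a
# digit, so "AH" can only be the register while "0AH" can only be the
# number 10.

import re

REGISTERS = {"AL", "CL", "DL", "BL", "AH", "CH", "DH", "BH"}

def classify(token):
    if re.fullmatch(r"[0-9][0-9A-F]*H", token, re.IGNORECASE):
        return ("immediate", int(token[:-1], 16))
    if token.upper() in REGISTERS:
        return ("register", token.upper())
    raise ValueError(f"unrecognized operand: {token}")
```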
Returning to the original example, while the x86 opcode 10110000 copies an 8-bit value into the AL register, 10110001 moves it into CL and 10110010 does so into DL. Assembly language examples for these follow.
The syntax of MOV can also be more complex as the following examples show.
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero ". Most full-featured assemblers also provide a rich macro language which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
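A sketch of such a pseudoinstruction expansion, using MIPS-style mnemonics to match the "set if less than" plus "branch if zero" case above; the slt/beq expansion follows the standard MIPS convention, and the parser here is deliberately naive:

```python
# Expand the pseudoinstruction "bge rs, rt, label" on a machine that
# has "slt" (set if less than) and "beq ... $zero" (branch if equal to
# zero) but no "branch if greater or equal".

def expand(instr):
    op, *args = instr.replace(",", " ").split()
    if op == "bge":                        # bge rs, rt, label
        rs, rt, label = args
        return [f"slt $at, {rs}, {rt}",    # $at = 1 if rs < rt else 0
                f"beq $at, $zero, {label}"]  # branch when NOT less than
    return [instr]

lines = expand("bge $t0, $t1, done")
```

A disassembler looking at the object code would see only the slt and beq instructions, as the paragraph above explains.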
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics, some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic, there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation. A typical assembly language consists of three types of instruction statements that are used to define program operations:
Instructions in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction, and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate, registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP for BC with a mask of 0.
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. On 8086 CPUs, nop is a pseudo-opcode that encodes the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
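A tiny Python sketch of the 8086 case: nop and xchg ax,ax are the same single byte, 90h, so an assembler can accept either spelling and a disassembler can print either. The helper names here are ours:

```python
# Both spellings assemble to the same encoding; a disassembler that
# recognizes the alias may print "nop" instead of "xchg ax, ax".

ENCODINGS = {"nop": b"\x90", "xchg ax, ax": b"\x90"}

def disassemble(byte, prefer_alias=True):
    if byte == 0x90:
        return "nop" if prefer_alias else "xchg ax, ax"
    raise NotImplementedError("only the 90h example is modeled here")
```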
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
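A Python sketch of such a built-in macro-instruction; the expansion shown is the one described above, and the whitespace normalization is simplistic:

```python
# "ld hl,bc" is not a real Z80 instruction; an assembler that accepts
# it emits the two byte-sized loads instead.

def expand_z80(instr):
    if instr.replace(" ", "") == "ldhl,bc":
        return ["ld l,c", "ld h,b"]
    return [instr]
```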
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols.
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Many assemblers support predefined macros, and others support programmer-defined macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file. Macros in this sense date to IBM autocoders of the 1950s.
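A minimal Python model of this substitution process, assuming &-prefixed parameters in the style of IBM-type macro assemblers; the macro shown, SAVE, is invented for illustration:

```python
# A macro body is a list of template lines; invoking the macro replaces
# the statement with those lines, with parameters substituted by name.

macros = {}

def define(name, params, body):
    macros[name] = (params, body)

def process(line):
    op, _, rest = line.partition(" ")
    if op in macros:
        params, body = macros[op]
        args = [a.strip() for a in rest.split(",")]
        mapping = dict(zip(params, args))
        out = []
        for template in body:
            for p, a in mapping.items():
                template = template.replace("&" + p, a)  # &-style parameters
            out.append(template)
        return out
    return [line]          # not a macro: pass the statement through

define("SAVE", ["reg"], ["push &reg"])
```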
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code, e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive is typically used to create short single-line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4, which was written in the SNOBOL Implementation Language, an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
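For example, a macro that generates an "unrolled" loop can be modeled in Python; the generated mnemonics and the src/dst labels are hypothetical:

```python
# A parameterized macro that emits n copy steps at assembly time, with
# the index baked into each address instead of run-time loop control.

def unroll_copy(n):
    """Generate n word-copy steps with offsets fixed at assembly time."""
    lines = []
    for i in range(n):
        lines.append(f"mov ax, [src+{i*2}]")   # load word i
        lines.append(f"mov [dst+{i*2}], ax")   # store word i
    return lines

code = unroll_copy(3)
```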
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being, in modern terms, closer to text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Despite the power of macro processing, it fell into disuse in many high level languages while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of resulting bugs was the use of a parameter that was itself an expression rather than a simple name, when the macro writer expected a name. In the macro:
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
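The pitfall and its fix can be demonstrated with literal text substitution in Python; the macro body "load x*b" is a stand-in for the foo macro discussed above:

```python
# By-name substitution: the parameter text is pasted into the body as-is,
# so passing the expression a-c yields "load a-c*b", which mis-parses as
# a-(c*b). Parenthesizing the formal parameter in the body avoids this.

def expand_foo(x, body="load x*b"):
    return body.replace("x", x)

buggy = expand_foo("a-c")                       # "load a-c*b"
safe = expand_foo("a-c", body="load (x)*b")     # "load (a-c)*b"
```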
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills, and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s. IBM's High Level Assembler Toolkit includes such a macro package.
A curious design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-Natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package:
Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator had an assembler integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the late 1950s, their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Numerous programs have been written entirely in assembly language. The Burroughs MCP was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language, an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Assembly language has long been the primary development language for 8-bit home computers such as the Atari 8-bit family, Apple II, MSX, ZX Spectrum, and Commodore 64. Interpreted BASIC dialects on these systems offer insufficient execution speed and insufficient facilities to take full advantage of the available hardware. These systems have severe resource constraints, idiosyncratic memory and display architectures, and provide limited system services. There are also few high-level language compilers suitable for microcomputer use. Similarly, assembly language is the default choice for 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System.
Key software for IBM PC compatibles was written in assembly language, such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to get performance out of systems such as the Sega Saturn and as the primary language for arcade hardware based on the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam.
There has been debate over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important, there are other tools for optimization.
As of July 2017, the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
A mnemonic makes use of elaborative encoding, retrieval cues and imagery as specific tools to encode information in a way that allows for efficient storage and retrieval. It aids original information in becoming associated with something more accessible or meaningful, which in turn provides better retention of the information.
Commonly encountered mnemonics are often used for lists and in auditory form such as short poems, acronyms, initialisms or memorable phrases. They can also be used for other types of information and in visual or kinesthetic forms. Their use is based on the observation that the human mind more easily remembers spatial, personal, surprising, physical, sexual, humorous and otherwise "relatable" information rather than more abstract or impersonal forms of information.
Ancient Greeks and Romans distinguished between two types of memory: the "natural" memory and the "artificial" memory. The former is inborn and is the one that everyone uses instinctively. The latter in contrast has to be trained and developed through the learning and practice of a variety of mnemonic techniques.
Mnemonic systems are techniques or strategies consciously used to improve memory. They help use information already stored in long-term memory to make memorization an easier task.
Mnemonic is derived from the Ancient Greek word μνημονικός which means 'of memory' or 'relating to memory'. It is related to Mnemosyne, the name of the goddess of memory in Greek mythology. Both of these words are derived from μνήμη, 'remembrance, memory'. Mnemonics in antiquity were most often considered in the context of what is today known as the art of memory.
Mnemonics, or memoria technica, was the general name applied to devices for aiding the memory, enabling the mind to reproduce a relatively unfamiliar idea, and especially a series of dissociated ideas, by connecting it, or them, in some artificial whole, the parts of which are mutually suggestive. Mnemonic devices were much cultivated by Greek sophists and philosophers and are frequently referred to by Plato and Aristotle.
Philosopher Charmadas was famous for his outstanding memory and for his ability to memorize whole books and then recite them.
In later times, the poet Simonides was credited for development of these techniques, perhaps for no reason other than that the power of his memory was famous. Cicero, who attaches considerable importance to the art, but more to the principle of order as the best help to memory, speaks of Carneades of Athens and Metrodorus of Scepsis as distinguished examples of people who used well-ordered images to aid the memory. The Romans valued such helps in order to support facility in public speaking.
The Greek and the Roman system of mnemonics was founded on the use of mental places and signs or pictures, known as "topical" mnemonics. The most usual method was to choose a large house, of which the apartments, walls, windows, statues, furniture, etc., were each associated with certain names, phrases, events or ideas, by means of symbolic pictures. To recall these, an individual had only to search over the apartments of the house until discovering the places where images had been placed by the imagination.
In accordance with this system, if it were desired to fix a historic date in memory, it was localised in an imaginary town divided into a certain number of districts, each with ten houses, each house with ten rooms, and each room with a hundred quadrates or memory-places, partly on the floor, partly on the four walls, partly on the ceiling. Therefore, if it were desired to fix in the memory the date of the invention of printing, an imaginary book, or some other symbol of printing, would be placed in the thirty-sixth quadrate or memory-place of the fourth room of the first house of the historic district of the town. Except that the rules of mnemonics are referred to by Martianus Capella, nothing further is known regarding the practice until the 13th century.
|
2,069
|
Among the voluminous writings of Roger Bacon is a tractate De arte memorativa. Ramon Llull devoted special attention to mnemonics in connection with his ars generalis. The first important modification of the method of the Romans was that invented by the German poet Conrad Celtes, who, in his Epitoma in utramque Ciceronis rhetoricam cum arte memorativa nova, used letters of the alphabet for associations, rather than places. About the end of the 15th century, Peter of Ravenna provoked such astonishment in Italy by his mnemonic feats that he was believed by many to be a necromancer. His Phoenix artis memoriae went through as many as nine editions, the seventh being published at Cologne in 1608.
|
2,070
|
About the end of the 16th century, Lambert Schenkel, who taught mnemonics in France, Italy and Germany, similarly surprised people with his memory. He was denounced as a sorcerer by the University of Louvain, but in 1593 he published his tractate De memoria at Douai with the sanction of that celebrated theological faculty. The most complete account of his system is given in two works by his pupil Martin Sommer, published in Venice in 1619. In 1618 John Willis published Mnemonica; sive ars reminiscendi, containing a clear statement of the principles of topical or local mnemonics. Giordano Bruno included a memoria technica in his treatise De umbris idearum, as part of his study of the ars generalis of Llull. Other writers of this period are the Florentine Publicius; Johannes Romberch; Hieronimo Morafiot, Ars memoriae; and B. Porta, Ars reminiscendi.
|
2,071
|
In 1648 Stanislaus Mink von Wennsshein revealed what he called the "most fertile secret" in mnemonics—using consonants for figures, thus expressing numbers by words, in order to create associations more readily remembered. The philosopher Gottfried Wilhelm Leibniz adopted an alphabet very similar to that of Wennsshein for his scheme of a form of writing common to all languages.
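The consonant-for-digit idea can be sketched in code. Note that the mapping below is the modern "major system" descended from this tradition, used here purely as an illustration; it is not Wennsshein's original table.

```python
# Illustrative consonant-to-digit mapping (modern "major system", used here as
# a stand-in for Wennsshein's idea; vowels carry no value and are skipped).
MAJOR = {
    "s": 0, "z": 0, "t": 1, "d": 1, "n": 2, "m": 3, "r": 4,
    "l": 5, "j": 6, "k": 7, "g": 7, "f": 8, "v": 8, "p": 9, "b": 9,
}

def word_to_number(word: str) -> str:
    """Read off the digits encoded by a word's consonants."""
    return "".join(str(MAJOR[c]) for c in word.lower() if c in MAJOR)

print(word_to_number("ratio"))  # "41": r -> 4, t -> 1, vowels skipped
```

Because vowels are free, many memorable words can encode the same number, which is what makes the scheme practical for dates and figures.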
|
2,072
|
Wennsshein's method was adopted with slight changes afterward by the majority of subsequent "original" systems. It was modified and supplemented by Richard Grey, a priest who published a Memoria technica in 1730. The principal part of Grey's method is briefly this:
|
2,073
|
Wennsshein's method is comparable to a Hebrew system by which letters also stand for numerals, and therefore words for dates.
|
2,074
|
To assist in retaining the mnemonical words in the memory, they were formed into memorial lines. Such strange words, in difficult hexameter scansion, are by no means easy to memorise. The vowel or consonant which Grey connected with a particular figure was chosen arbitrarily.
|
2,075
|
A later modification was made in 1806 by Gregor von Feinaigle, a German monk from Salem near Constance. While living and working in Paris, he expounded a system of mnemonics in which the numerical figures are represented by letters chosen due to some similarity to the figure or an accidental connection with it. This alphabet was supplemented by a complicated system of localities and signs. Feinaigle, who apparently did not publish any written documentation of this method, travelled to England in 1811. The following year one of his pupils published The New Art of Memory, giving Feinaigle's system. In addition, it contains valuable historical material about previous systems.
|
2,076
|
Other mnemonists later published simplified forms, as the more complicated mnemonics were generally abandoned. Methods founded chiefly on the so-called laws of association were taught with some success in Germany.
|
2,077
|
A wide range of mnemonics are used for several purposes. The most commonly used mnemonics are those for lists, numerical sequences, foreign-language acquisition, and medical treatment for patients with memory deficits.
|
2,078
|
A common mnemonic technique for remembering a list is to create an easily remembered acronym. Another is to create a memorable phrase with words which share the same first letter as the list members. Mnemonic techniques can be applied to most memorization of novel materials.
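The acronym technique above is mechanical enough to sketch in a few lines of Python. The helper name and the example list are illustrative choices, not from the original text.

```python
def acronym(items):
    """Build a first-letter acronym from a list of terms."""
    return "".join(item[0].upper() for item in items)

# The colours of the rainbow yield the classic ROYGBIV acronym.
colors = ["Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet"]
print(acronym(colors))  # ROYGBIV
```

The complementary phrase method runs the same mapping in reverse: each list member supplies the first letter of a word in a memorable sentence.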
|
2,079
|
Some common examples for first-letter mnemonics:
|
2,080
|
Mnemonic phrases or poems can be used to encode numeric sequences by various methods; one common method is to create a new phrase in which the number of letters in each word represents the corresponding digit of pi. For example, the first 15 digits of the mathematical constant pi can be encoded as "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics"; "Now", having 3 letters, represents the first digit, 3. Piphilology is the practice dedicated to creating mnemonics for pi.
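The word-length encoding can be checked mechanically: count the letters of each word and read the counts as digits. A minimal sketch (assuming the common convention that a 10-letter word would stand for the digit 0, which this phrase does not need):

```python
import re

def decode_word_lengths(phrase: str) -> str:
    """Map each word's letter count to a digit (10 letters -> 0)."""
    words = re.findall(r"[A-Za-z]+", phrase)
    return "".join(str(len(w) % 10) for w in words)

phrase = ("Now I need a drink, alcoholic of course, after the heavy "
          "lectures involving quantum mechanics")
print(decode_word_lengths(phrase))  # 314159265358979
```

Punctuation is ignored by the word pattern, so commas in the phrase do not affect the decoded digits.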
|
2,081
|
Another is used for "calculating" the multiples of 9 up to 9 × 10 using one's fingers. Begin by holding out both hands with all fingers stretched out. Now count left to right the number of fingers that indicates the multiple. For example, to figure 9 × 4, count four fingers from the left, ending at your left-hand index finger. Bend this finger down and count the remaining fingers. Fingers to the left of the bent finger represent tens, fingers to the right are ones. There are three fingers to the left and six to the right, which indicates 9 × 4 = 36. This works for 9 × 1 up through 9 × 10.
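The finger trick reduces to simple arithmetic: for 9 × n, the bent finger leaves n − 1 fingers on the left (tens) and 10 − n on the right (ones). A small sketch verifying this against ordinary multiplication:

```python
def nines_by_fingers(n: int) -> int:
    """Compute 9 * n (1 <= n <= 10) the way the finger trick does."""
    assert 1 <= n <= 10
    tens = n - 1   # fingers to the left of the bent finger
    ones = 10 - n  # fingers to the right of the bent finger
    return 10 * tens + ones

# The trick agrees with multiplication for every multiple from 9 x 1 to 9 x 10.
for n in range(1, 11):
    assert nines_by_fingers(n) == 9 * n
print(nines_by_fingers(4))  # 36
```

The trick works because 9n = 10(n − 1) + (10 − n), which is exactly the tens/ones split the fingers display.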
|
2,082
|
For remembering the rules in adding and multiplying two signed numbers, Balbuena and Buayan devised the letter strategies LAUS and LPUN, respectively.
|
2,083
|
PUIMURI is a Finnish mnemonic regarding electricity: the first and last three letters can be arranged into the equations P = U × I and U = R × I.
|
2,084
|
Mnemonics may be helpful in learning foreign languages, for example by associating difficult foreign words with words in a language the learner already knows; "cognates", words with a shared origin, are very common across Romance and Germanic languages. A useful such technique is to find linkwords, words that have the same pronunciation in a known language as the target word, and associate them visually or aurally with the target word.
|
2,085
|
For example, in trying to assist the learner to remember ohel, the Hebrew word for tent, the linguist Ghil'ad Zuckermann proposes the memorable sentence "Oh hell, there's a raccoon in my tent". The memorable sentence "There's a fork in Ma's leg" helps the learner remember that the Hebrew word for fork is mazleg. Similarly, to remember the Hebrew word bayit, meaning house, one can use the sentence "that's a lovely house, I'd like to buy it." The linguist Michel Thomas taught students to remember that estar is the Spanish word for to be by using the phrase "to be a star".
|
2,086
|
Another Spanish example is the mnemonic "Vin Diesel Has Ten Weapons", used to teach irregular command verbs in the informal "you" form. Spanish verb forms and tenses are regularly seen as the hardest part of learning the language. With a high number of verb tenses, and many verb forms not found in English, Spanish verbs can be hard to remember and then conjugate. The use of mnemonics has been shown to help students better learn foreign languages, and this holds true for Spanish verbs. A particularly hard set of forms to remember is the command verbs, which in Spanish are conjugated differently depending on who the command is being given to. The phrase, when pronounced with a Spanish accent, is used to remember "Ven Di Sal Haz Ten Ve Pon Sé", all of the irregular Spanish command verbs in the informal "you" form. This mnemonic helps students attempting to memorize different verb tenses.
Another technique is for learners of gendered languages to associate their mental images of words with a colour that matches the gender in the target language. An example here is to remember the Spanish word for "foot", pie, with the image of a foot stepping on a pie, which then spills blue filling.
|
2,087
|
For French verbs which use être as an auxiliary verb for compound tenses: DR and MRS VANDERTRAMPP: descendre, rester, monter, revenir, sortir, venir, arriver, naître, devenir, entrer, rentrer, tomber, retourner, aller, mourir, partir, passer.
|
2,088
|
Masculine countries in French: "Neither can a breeze make a sane Japanese chilly in the USA." Netherlands, Canada, Brazil, Mexico, Senegal, Japan, Chile, & USA.
|
2,089
|
Mnemonics can be used in aiding patients with memory deficits that could be caused by head injuries, strokes, epilepsy, multiple sclerosis and other neurological conditions.
|
2,090
|
In a study conducted by Doornhein and De Haan, the patients were treated with six different memory strategies, including the mnemonics technique. The results showed significant improvements on the immediate and delayed subtests of the RBMT, on delayed recall in the Appointments test, and on relatives' ratings on the MAC for the patients who received mnemonics treatment. However, in the case of stroke patients, the results did not reach statistical significance.
|
2,091
|
Academic study of the use of mnemonics has shown their effectiveness. In one such experiment, subjects of different ages who applied mnemonic techniques to learn novel vocabulary outperformed control groups that applied contextual learning and free-learning styles.
|
2,092
|
Mnemonics were seen to be more effective for groups of people who struggled with or had weak long-term memory, like the elderly. Five years after a mnemonic training study, a research team followed up 112 community-dwelling older adults, 60 years of age and over. Delayed recall of a word list was assessed prior to, and immediately following, mnemonic training, and again at the 5-year follow-up. Overall, there was no significant difference between word recall prior to training and that exhibited at follow-up. However, pre-training performance, gains in performance immediately post-training, and use of the mnemonic predicted performance at follow-up. Individuals who self-reported using the mnemonic exhibited the highest performance overall, with scores significantly higher than at pre-training. The findings suggest that mnemonic training has long-term benefits for some older adults, particularly those who continue to employ the mnemonic.
|
2,093
|
This contrasts with surveys of medical students indicating that only approximately 20% frequently used mnemonic acronyms.
|
2,094
|
In humans, the process of aging particularly affects the medial temporal lobe and hippocampus, in which episodic memory is synthesized. Episodic memory stores information about items, objects, or features with spatiotemporal contexts. Since mnemonics aid better in remembering spatial or physical information rather than more abstract forms, their effect may vary according to a subject's age and how well the subject's medial temporal lobe and hippocampus function.
|
2,095
|
This could be further explained by one recent study which indicates a general deficit in the memory for spatial locations in aged adults compared to young adults. At first, the difference in target recognition was not significant.
|
2,096
|
The researchers then divided the aged adults into two groups, aged unimpaired and aged impaired, according to neuropsychological testing. With the aged groups split, there was an apparent deficit in target recognition in aged impaired adults compared to both young adults and aged unimpaired adults. This further supports the varying effectiveness of mnemonics in different age groups.
|
2,097
|
Moreover, earlier research based on the same notion produced results similar to those of Reagh et al. in a verbal mnemonics discrimination task.
|
2,098
|
Studies have suggested that the short-term memory of adult humans can hold only a limited number of items; grouping items into larger chunks such as in a mnemonic might be part of what permits the retention of a larger total amount of information in short-term memory, which in turn can aid in the creation of long-term memories.
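The chunking idea described above, i.e. grouping items into larger units so that fewer slots of short-term memory are occupied, can be illustrated with a short sketch (the function name and example string are illustrative, not from the original text):

```python
def chunk(s: str, size: int):
    """Split a sequence into fixed-size groups, as one might chunk
    a long digit string to hold it in short-term memory."""
    return [s[i:i + size] for i in range(0, len(s), size)]

# Ten raw digits become four chunks, a load closer to short-term capacity.
print(chunk("8675309142", 3))  # ['867', '530', '914', '2']
```

The benefit is that each chunk, once familiar, behaves as a single item, so the same limited store covers more raw information.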
|
2,099
|
The dictionary definition of mnemonic at Wiktionary
|
2,100
|
The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator , was one of the earliest general-purpose electromechanical computers used in the war effort during the last part of World War II.
|