[SOURCE: https://www.fast.ai/posts/2023-05-31-extinction.html] | [TOKENS: 1097] |
Is Avoiding Extinction from AI Really an Urgent Priority?
Seth Lazar, Jeremy Howard, & Arvind Narayanan
May 30, 2023

This article is the result of a collaboration between philosopher Seth Lazar, AI impacts researcher Arvind Narayanan, and fast.ai’s Jeremy Howard. At fast.ai we believe that planning for our future with AI is a complex topic and requires bringing together cross-disciplinary expertise.

This is the year extinction risk from AI went mainstream. It has featured in leading publications, been invoked by 10 Downing Street, and mentioned in a White House AI Strategy document. But a powerful group of AI technologists thinks it still isn’t being taken seriously enough. They have signed a statement that claims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

“Global priorities” should be the most important, and urgent, problems that humanity faces. 2023 has seen a leap forward in AI capabilities, which undoubtedly brings new risks, including perhaps increasing the probability that some future AI system will go rogue and wipe out humanity. But we are not convinced that mitigating this risk is a global priority. Other AI risks are as important, and are much more urgent.

Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a “rogue human” with AI’s assistance. Indeed, focusing on this particular threat might exacerbate the more likely risks.

The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing?

Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

And we should focus on building institutions that both reduce existing AI risks and put us in a robust position to address new ones as we learn more about them. This definitely means applying the precautionary principle, and taking concrete steps where we can to anticipate as yet unrealised risks. But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention. Building on their work, let’s focus on the things we can study, understand and control—the design and real-world use of existing AI systems, their immediate successors, and the social and political systems of which they are part.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Old_Norse] | [TOKENS: 10932] |
Old Norse

Old Norse was a North Germanic language spoken in Scandinavia and in Norse settlements during the Viking Age and the early Middle Ages (approximately the 8th–14th centuries). It is the conventional term for the medieval West and East Scandinavian dialects (often labelled Old West Norse and Old East Norse) that developed from Proto-Norse and later evolved into the modern North Germanic languages, including Icelandic, Faroese, Norwegian, Danish, and Swedish. Old Norse is attested in runic inscriptions (written in the Younger Futhark) and in numerous medieval manuscripts written with the Latin alphabet; its literary corpus includes the Poetic Edda, the Prose Edda, the Icelandic sagas, skaldic verse, law codes, and religious texts. Contact between Old Norse speakers and other languages — particularly Old English and the Celtic languages — left a substantial legacy of loanwords and toponyms; many common English words such as egg, knife, sky, and window derive from Old Norse. Scholarly usage of the term Old Norse typically covers texts from the 11th to the 14th centuries, though periodization varies within academia based on the theoretical focus and tradition of the particular source.

Geographical distribution

Old Icelandic was close to Old Norwegian, and together they formed Old West Norse, which was also spoken in Norse settlements in Greenland, the Faroes, Ireland, Scotland, the Isle of Man, northwest England (particularly Cumbria), and in Normandy. Old East Norse was spoken in Denmark, Sweden, Kievan Rus', eastern England, and Danish settlements in Normandy. The Old Gutnish dialect was spoken in Gotland and in various settlements in the East. In the 11th century, Old Norse was the most widely spoken European language, ranging from Vinland in the West to the Volga River in the East. In Kievan Rus', it survived the longest in Veliky Novgorod, probably lasting into the 13th century there. The age of the Swedish-speaking population of Finland is strongly contested, but Swedish settlement had spread the language into the region by the time of the Second Swedish Crusade in the 13th century at the latest.

Modern descendants

The modern descendants of the Old West Norse dialect are the West Scandinavian languages of Icelandic, Faroese, Norwegian, and the extinct Norn language of Orkney and Shetland, although Norwegian was heavily influenced by the East dialect, and is today more similar to East Scandinavian (Danish and Swedish) than to Icelandic and Faroese. The descendants of the Old East Norse dialect are the East Scandinavian languages of Danish, Swedish and Övdalian, although Övdalian was heavily influenced by the West dialect, and is sometimes considered to form its own group. Among these, the grammars of Icelandic, Faroese and Övdalian have changed the least from Old Norse in the last thousand years, though the pronunciations of Icelandic and Faroese both have changed considerably from Old Norse. With Danish rule of the Faroe Islands, Faroese has also been influenced by Danish. Both Middle English (especially northern English dialects within the area of the Danelaw) and Early Scots (including Lowland Scots) were strongly influenced by Norse and contained many Old Norse loanwords. Consequently, Modern English (including Scottish English) inherited a significant proportion of its vocabulary directly from Norse. The development of Norman French was also influenced by Norse. Through Norman, to a smaller extent, so was modern French.
Written modern Icelandic derives from the Old Norse phonemic writing system. Contemporary Icelandic-speakers can read Old Norse, which varies slightly in spelling as well as semantics and word order. However, pronunciation, particularly of the vowel phonemes, has changed at least as much in Icelandic as in the other North Germanic languages. Faroese retains many similarities but is influenced by Danish, Norwegian, and Gaelic (Scottish and/or Irish). Although Swedish, Danish and Norwegian have diverged the most, they still retain considerable mutual intelligibility. Speakers of modern Swedish, Norwegian and Danish can mostly understand each other without studying their neighboring languages, particularly if speaking slowly. The languages are also sufficiently similar in writing that they can mostly be understood across borders. This could be because these languages have mutually influenced each other, as well as having a similar development influenced by Middle Low German. Various languages unrelated to Old Norse, and others only distantly related, have been heavily influenced by Norse, particularly the Norman language; to a lesser extent, Finnish and Estonian. Russian, Ukrainian, Belarusian, Lithuanian and Latvian also have a few Norse loanwords. The words Rus and Russia, according to one theory, may be named after the Rus' people, a Norse tribe, probably from present-day east-central Sweden. The current Finnish and Estonian words for Sweden are Ruotsi and Rootsi, respectively. A number of loanwords have been introduced into Irish, many associated with fishing and sailing. A similar influence is found in Scottish Gaelic, with over one hundred loanwords estimated to be in the language, many of which are related to fishing and sailing.

Phonology

Old Norse vowel phonemes mostly come in pairs of long and short. The standardized orthography marks the long vowels with an acute accent. In medieval manuscripts, length is often unmarked but sometimes marked with an accent or through gemination. Old Norse had nasalized versions of all ten vowel qualities.[cv 1] These occurred as allophones of the vowels before nasal consonants and in places where a nasal had followed the vowel in an older form of the word, before it was absorbed into a neighboring sound. If the nasal was absorbed by a stressed vowel, it would also lengthen the vowel. This nasalization also occurred in the other Germanic languages, but was not retained long. The nasal vowels were noted in the First Grammatical Treatise, and otherwise might have remained unknown. The First Grammarian marked these with a dot above the letter.[cv 1] This notation did not catch on, and would soon be obsolete. Nasal and oral vowels probably merged around the 11th century in most of Old East Norse. However, the distinction still holds in Dalecarlian dialects. In standard vowel tables, dots separate the oral from the nasal phonemes, and the open or open-mid vowels may be transcribed differently. Sometime around the 13th century, /ɔ/ (spelled ⟨ǫ⟩) merged with /ø/ or /o/ in most dialects except Old Danish, and Icelandic where /ɔ/ (ǫ) merged with /ø/. This can be determined by their distinction within the 12th-century First Grammatical Treatise but not within the early 13th-century Prose Edda. The nasal vowels, also noted in the First Grammatical Treatise, are assumed to have been lost in most dialects by this time (but notably they are retained in Elfdalian and other dialects of Ovansiljan).
See Old Icelandic for the mergers of /øː/ (spelled ⟨œ⟩) with /ɛː/ (spelled ⟨æ⟩) and /ɛ/ (spelled ⟨ę⟩) with /e/ (spelled ⟨e⟩). Old Norse had three diphthong phonemes: /ɛi/, /ɔu/, /øy ~ ɛy/ (spelled ⟨ei⟩, ⟨au⟩, ⟨ey⟩ respectively). In East Norse these would monophthongize and merge with /eː/ and /øː/, whereas in West Norse and its descendants the diphthongs remained. Old Norse had six plosive phonemes, /p/ being rare word-initially and /d/ and /b/ pronounced as voiced fricative allophones between vowels except in compound words (e.g. veðrabati), already in the Proto-Germanic language (e.g. *b [β] > [v] between vowels). The /ɡ/ phoneme was pronounced as [ɡ] after an /n/ or another /ɡ/ and as [k] before /s/ and /t/. Some accounts have it as a voiced velar fricative [ɣ] in all cases, and others have that realisation only in the middle of words and between vowels (with it otherwise being realised [ɡ]). The Old East Norse /ʀ/ was an apical consonant, with its precise position unknown; it is reconstructed as a palatal sibilant. It descended from Proto-Germanic *z and eventually developed into /r/, as had already occurred in Old West Norse. The pronunciation of ⟨hv⟩ is unclear, but it may have been /xʷ/ (the assumed Proto-Germanic pronunciation), /hʷ/ or the similar phoneme /ʍ/. Unlike the three other digraphs, it was retained much longer in all dialects. Without ever developing into a voiceless sonorant in Icelandic, it instead underwent fortition to a plosive /kv/, which suggests that instead of being a voiceless sonorant, it retained a stronger frication. In some Icelandic dialects it is still preserved as /xʷ/ or /xv/. Old Norse had a stress accent with primary stress generally on the first syllable of the word, a feature reflected particularly in the conservative modern descendants Icelandic and Faroese. Evidence from poetic metre, orthographic marks, and later phonological developments suggests that a contrastive prosodic feature—often reconstructed as a pitch or tonal distinction—was already forming in late Proto-Norse or early Old Norse. This system later contributed to the development of the modern Swedish and Norwegian tonal accents (traditionally called Accent 1 and Accent 2). The emergence of these tonal contrasts is commonly linked to the interaction of syllable weight, vowel length changes, and the reduction of unstressed syllables during the Viking Age and early Middle Ages. The precise dating and mechanism of these prosodic changes remain debated among scholars. Some recent studies argue for a “peak delay” model of tonal development rather than an inherited double-peaked melody, while others reinterpret the process as a later phonologization of earlier rhythmic contrasts. In Danish the modern output of this contrast is, in many varieties, a creaky-voice register called stød. Meanwhile, in Icelandic there is no evidence to support a development of a lexicalised two-way tonal distinction like the one found across much of mainland Norway and Sweden, according to Árnason. In Faroese, the accentual distinction once existed but has been eliminated from most varieties; older accounts like Hægstad (1916) did report such a distinction, although a study eight years later, Selmer (1924), didn't, ascribing the loss of tones to Danish influence. A more modern report, Hagström (1967:44), described a typical Faroese sentence as having "a smoothly falling melodic curve [...]
and clear pronunciation of the ending on the same two-tone level as the preceding stem syllable" standing in stark contrast to "[the] constant rising and falling language melody" found on Suðuroy. Primary stress in Old Norse falls on the word stem, so that hyrjar would be pronounced /ˈhyrjar/. In compound words, secondary stress falls on the second stem (e.g. lærisveinn, /ˈlɛːɾiˌswɛinː/).

Orthography

Unlike Proto-Norse, which was written with the Elder Futhark, runic Old Norse was originally written with the Younger Futhark, which had only 16 letters. Because of the limited number of runes, several runes were used for different sounds, and long and short vowels were not distinguished in writing. Medieval runes came into use some time later. As for the Latin alphabet, there was no standardized orthography in use in the Middle Ages. A modified version of the letter wynn called vend was used briefly for the sounds /u/, /v/, and /w/. Long vowels were sometimes marked with acutes but also sometimes left unmarked or geminated. The standardized Old Norse spelling was created in the 19th century and is, for the most part, phonemic. The most notable deviation is that the nonphonemic difference between the voiced and the voiceless dental fricative is marked. The oldest texts and runic inscriptions use þ exclusively. Long vowels are denoted with acutes. Most other letters are written with the same glyph as the IPA phoneme's grapheme, except as shown in the above tables.

Phonological processes

Ablaut patterns are groups of vowels which are swapped, or ablauted, in the nucleus of a word. Strong verbs ablaut the lemma's nucleus to derive the past forms of the verb. This parallels English conjugation, where, e.g., the nucleus of sing becomes sang in the past tense and sung in the past participle. Some verbs are derived by ablaut, as the present-in-past verbs are, as a consequence of being derived from the past-tense forms of strong verbs. Umlaut or mutation is an assimilatory process acting on vowels preceding a vowel or semivowel of a different vowel backness. In the case of i-umlaut and ʀ-umlaut, this entails a fronting of back vowels, with retention of lip rounding. In the case of u-umlaut, this entails labialization of unrounded vowels. Umlaut is phonemic and in many situations grammatically significant as a side effect of losing the Proto-Germanic morphological suffixes whose vowels created the umlaut allophones. Some /y/, /yː/, /ø/, /øː/, /ɛ/, /ɛː/, /øy/, and all /ɛi/ were obtained by i-umlaut from /u/, /uː/, /o/, /oː/, /a/, /aː/, /au/, and /ai/ respectively. Others were formed via ʀ-umlaut from /u/, /uː/, /a/, /aː/, and /au/. Some /y/, /yː/, /ø/, /øː/, and all /ɔ/, /ɔː/ were obtained by u-umlaut from /i/, /iː/, /e/, /eː/, and /a/, /aː/ respectively. See Old Icelandic for information on /ɔː/. /œ/ was obtained through a simultaneous u- and i-umlaut of /a/. It appears in words like gera (gøra, gjǫra, geyra), from Proto-Germanic *garwijaną, and commonly in verbs with a velar consonant before the suffix like søkkva < *sankwijaną.[cv 2] OEN often preserves the original value of the vowel directly preceding runic (ᛉ, ʀ) while OWN receives ʀ-umlaut. Compare runic OEN glaʀ, haʀi, hrauʀ with OWN gler, heri (later héri), hrøyrr/hreyrr 'glass', 'hare', 'pile of rocks'. U-umlaut is more common in Old West Norse in both phonemic and allophonic positions, while it only occurs sparsely in post-runic Old East Norse and even in runic Old East Norse.
This is still a major difference between Swedish and Faroese and Icelandic today. Plurals of neuters do not have u-umlaut at all in Swedish, but in Faroese and Icelandic they do, for example the Faroese and Icelandic plurals of the word land, lond and lönd respectively, in contrast to the Swedish plural land and numerous other examples. That also applies to almost all feminine nouns, for example the largest feminine noun group, the o-stem nouns (except the Swedish noun jord mentioned above), and even i-stem nouns and root nouns, such as Old West Norse mǫrk (mörk in Icelandic) in comparison with Modern and Old Swedish mark. Vowel breaking, or fracture, caused a front vowel to be split into a semivowel-vowel sequence before a back vowel in the following syllable. While West Norse only broke /e/, East Norse also broke /i/. The change was blocked by a /w/, /l/, or /ʀ/ preceding the potentially-broken vowel. When a noun, pronoun, adjective, or verb has a long vowel or diphthong in the accented syllable and its stem ends in a single l, n, or s, the r (or the elder r- or z-variant ʀ) in an ending is assimilated.[cv 3] When the accented vowel is short, the ending is dropped. The nominative of the strong masculine declension and some i-stem feminine nouns uses one such -r (ʀ). Óðin-r (Óðin-ʀ) becomes Óðinn instead of *Óðinr (*Óðinʀ). The verb blása 'to blow' has third person present tense blæss '[he] blows' rather than *blæsr (*blæsʀ). Similarly, the verb skína 'to shine' had present tense third person skínn (rather than *skínr, *skínʀ); while kala 'to cool down' had present tense third person kell (rather than *kelr, *kelʀ). The rule is not absolute, with certain counter-examples such as vinr 'friend', which has the synonym vin, yet retains the unabsorbed version, and jǫtunn 'giant', where assimilation takes place even though the root vowel, ǫ, is short. The clusters */Clʀ, Csʀ, Cnʀ, Crʀ/ cannot yield */Clː, Csː, Cnː, Crː/ respectively, but instead /Cl, Cs, Cn, Cr/. The effect of this shortening can result in the lack of distinction between some forms of the noun. In the case of vetr 'winter', the nominative and accusative singular and plural forms are identical. The nominative singular and nominative and accusative plural would otherwise have been OWN *vetrr, OEN *wintrʀ. These forms are impossible because the cluster */Crʀ/ cannot be realized as /Crː/, nor as */Crʀ/, nor as */Cʀː/. The same shortening as in vetr also occurs in lax = laks 'salmon' (as opposed to *lakss, *laksʀ), botn 'bottom' (as opposed to *botnn, *botnʀ), and jarl (as opposed to *jarll, *jarlʀ). Furthermore, wherever the cluster */rʀ/ is expected to exist, such as in the male names Ragnarr, Steinarr (supposedly *Ragnarʀ, *Steinarʀ), the result is apparently always /rː/ rather than */rʀ/ or */ʀː/. This is observable in the Runic corpus.

Phonotactics

In Old Norse, i/j adjacent to i, e, their u-umlauts, and æ was not possible, nor u/v adjacent to u, o, their i-umlauts, and ǫ. At the beginning of words, this manifested as a dropping of the initial /j/ (which was general, independent of the following vowel) or /v/. Compare ON orð, úlfr, ár with English word, wolf, year. In inflections, this manifested as the dropping of the inflectional vowels. Thus, klæði + dat -i remains klæði, and sjáum in Icelandic progressed to sjǫ́um > sjǫ́m > sjám. The *jj and *ww of Proto-Germanic became ggj and ggv respectively in Old Norse, a change known as Holtzmann's law.
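The i- and u-umlaut correspondences described in the Phonological processes section above amount to two small vowel mappings. As a reading aid, here is an illustrative Python sketch of them (not part of the original article), using the IPA values given in the text:

```python
# Illustrative sketch: the umlaut correspondences described above,
# collected as plain old-vowel -> new-vowel mappings (IPA, per the text).

# i-umlaut fronts back vowels (lip rounding retained); triggered by a following i/j.
I_UMLAUT = {
    "u": "y", "uː": "yː",
    "o": "ø", "oː": "øː",
    "a": "ɛ", "aː": "ɛː",
    "au": "øy", "ai": "ɛi",
}

# u-umlaut rounds unrounded vowels; triggered by a following u/w.
# (ʀ-umlaut similarly fronted /u uː a aː au/; its outputs are not listed per vowel.)
U_UMLAUT = {
    "i": "y", "iː": "yː",
    "e": "ø", "eː": "øː",
    "a": "ɔ", "aː": "ɔː",
}

# Example from the text: Proto-Norse *tanþu 'tooth' > tǫnn,
# where u-umlaut turns a into the vowel spelled ǫ.
assert U_UMLAUT["a"] == "ɔ"
```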
An epenthetic vowel came into common use by 1200 in Old Danish, 1250 in Old Swedish and Old Norwegian, and 1300 in Old Icelandic. An unstressed vowel was used which varied by dialect. Old Norwegian exhibited all three: /u/ was used in West Norwegian south of Bergen, as in aftur, aftor (older aptr); North of Bergen, /i/ appeared in aftir, after; and East Norwegian used /a/, after, aftær.

Grammar

Old Norse was a moderately inflected language with high levels of nominal and verbal inflection. Most of the fused morphemes are retained in modern Icelandic, especially in regard to noun case declensions, whereas modern Norwegian in comparison has moved towards more analytical word structures. Old Norse had three grammatical genders – masculine, feminine, and neuter. Adjectives or pronouns referring to a noun must mirror the gender of that noun, so that one says "heill maðr!" but "heilt barn!". As in other languages, the grammatical gender of an impersonal noun is generally unrelated to an expected natural gender of that noun. While indeed karl, 'man' is masculine, kona, 'woman', is feminine, and hús, 'house', is neuter, so also are hrafn and kráka, for 'raven' and 'crow', masculine and feminine respectively, even in reference to a female raven or a male crow. All neuter words have identical nominative and accusative forms, and all feminine words have identical nominative and accusative plurals. The gender of some words' plurals does not agree with that of their singulars, such as lim and mund.[cv 4] Some words, such as hungr, have multiple genders, evidenced by their determiners being declined in different genders within a given sentence. Nouns, adjectives, and pronouns were declined in four grammatical cases – nominative, accusative, genitive, and dative – in singular and plural numbers. Adjectives and pronouns were additionally declined in three grammatical genders. Some pronouns (first and second person) could have dual number in addition to singular and plural. The genitive was used partitively and in compounds and kennings (e.g., Urðarbrunnr, 'the well of Urðr'; Lokasenna, 'the flyting of Loki'). There were several classes of nouns within each gender. The numerous "weak" noun paradigms had a much higher degree of syncretism between the different cases than the "strong" paradigms: i.e. they had fewer forms than the "strong" nouns. A definite article was appended as a suffix that retained an independent declension: e.g., troll 'a troll' – trollit 'the troll', hǫll 'a hall' – hǫllin 'the hall', armr 'an arm' – armrinn 'the arm'. This definite article, however, was a separate word and did not become attached to the noun before later stages of the Old Norse period.

Texts

The earliest inscriptions in Old Norse are runic, from the 8th century. Runes continued to be commonly used until the 15th century and have been recorded to be in use in some form as late as the 19th century in some parts of Sweden. With the conversion to Christianity in the 11th century came the Latin alphabet. The oldest preserved texts in Old Norse in the Latin alphabet date from the middle of the 12th century. Subsequently, Old Norse became the vehicle of a large and varied body of vernacular literature. Most of the surviving literature was written in Iceland.
Best known are the Norse sagas, the Icelanders' sagas and the mythological literature, but there also survives a large body of religious literature, translations into Old Norse of courtly romances, classical mythology, and the Old Testament, as well as instructional material, grammatical treatises and a large body of letters and official documents.

Dialects

Most of the innovations that appeared in Old Norse spread evenly through the Old Norse area. As a result, the dialects were similar and considered to be the same language, a language that they sometimes called the Danish tongue (Dǫnsk tunga), sometimes Norse language (Norrœnt mál), as evidenced in the following two quotes from Heimskringla by Snorri Sturluson:

Móðir Dyggva var Drótt, dóttir Danps konungs, sonar Rígs er fyrstr var konungr kallaðr á danska tungu. ('Dyggvi's mother was Drott, the daughter of king Danp, Ríg's son, who was the first to be called king in the Danish tongue.')

...stirt var honum norrœnt mál, ok kylfdi mᴊǫk til orðanna, ok hǫfðu margir menn þat mᴊǫk at spotti. ('...the Norse language was hard for him, and he often fumbled for words, which amused people greatly.')

However, some changes were geographically limited and so created a dialectal difference between Old West Norse and Old East Norse. As Proto-Norse evolved into Old Norse, in the 8th century, the effects of the umlauts seem to have been very much the same over the whole Old Norse area. But in later dialects of the language a split occurred mainly between west and east as the use of umlauts began to vary. The typical umlauts (for example fylla < *fullijan) were better preserved in the West due to later generalizations in the east where many instances of umlaut were removed (many archaic Eastern texts as well as eastern runic inscriptions however portray the same extent of umlauts as in later Western Old Norse). All the while, the changes resulting in breaking (for example hiarta < *hertō) were more influential in the East, probably once again due to generalizations within the inflectional system. This difference was one of the greatest reasons behind the dialectalization that took place in the 9th and 10th centuries, shaping an Old West Norse dialect in Norway and the Atlantic settlements and an Old East Norse dialect in Denmark and Sweden. Old West Norse and Old Gutnish did not take part in the monophthongization which changed æi (ei) into ē, øy (ey) and au into ø̄, nor did certain peripheral dialects of Swedish, as seen in modern Ostrobothnian dialects. Another difference was that Old West Norse lost certain combinations of consonants. The combinations -mp-, -nt-, and -nk- were assimilated into -pp-, -tt- and -kk- in Old West Norse, but this phenomenon was limited in Old East Norse.

Here is a comparison between the two dialects as well as Old Gutnish. It is a transcription from one of the Funbo Runestones in Sweden (U 990) from the eleventh century:

Old West Norse: Veðr ok Þegn ok Gunnarr reistu stein þenna at Haursa, fǫður sinn. Guð hjalpi ǫnd hans.
Old East Norse: Weðr ok Þegn ok Gunnarr ræistu stæin þenna at Haursa, faður sinn. Guð hialpi and hans.
Old Gutnish: Weðr ok Þegn ok Gunnarr raistu stain þenna at Haursa, faður sinn. Guð hialpi and hans.
Translation: 'Veðr and Thegn and Gunnar raised this stone after Haursi, their father. God help his spirit.'

The OEN original text above is transliterated according to traditional scholarly methods, wherein u-umlaut is not regarded in runic Old East Norse. Modern studies have shown that the positions where it applies are the same as for runic Old West Norse. An alternative and probably more accurate transliteration would therefore apply u-umlaut in the OEN text as well (giving, for example, fǫður for faður and ǫnd for and).

Some past participles and other words underwent i-umlaut in Old West Norse but not in Old East Norse dialects. Examples of that are Icelandic slegið/sleginn and tekið/tekinn, which in Swedish are slagit/slagen and tagit/tagen. This can also be seen in the Icelandic and Norwegian words sterkur and sterk 'strong', which in Swedish is stark as in Old Swedish. These differences can also be seen in comparison between Norwegian and Swedish.

Old West Norse is by far the best attested variety of Old Norse. The term Old Norse is often used to refer to Old West Norse specifically, in which case the broader subject receives another name, such as Old Scandinavian. Another designation is Old West Nordic. The combinations -mp-, -nt-, and -nk- mostly merged to -pp-, -tt- and -kk- in Old West Norse around the 7th century, marking the first distinction between the Eastern and Western dialects. An early difference between Old West Norse and the other dialects was that Old West Norse had the forms bú 'dwelling', kú 'cow (acc.)' and trú 'faith', whereas Old East Norse had bó, kó and tró. Old West Norse was also characterized by the preservation of u-umlaut, which meant that, for example, Proto-Norse *tanþu 'tooth' became tǫnn and not tann as in post-runic Old East Norse; OWN gǫ́s matches runic OEN gǫ́s, while post-runic OEN has gás 'goose'. The earliest body of text appears in runic inscriptions and in poems composed c. 900 by Þjóðólfr of Hvinir (although the poems are not preserved in contemporary sources, but only in much later manuscripts). The earliest manuscripts are from the period 1150–1200 and concern legal, religious and historical matters. During the 12th and 13th centuries, Trøndelag and Western Norway were the most important areas of the Norwegian kingdom and they shaped Old West Norse as an archaic language with a rich set of declensions. In the body of text that has survived into the modern day from before c. 1300, Old West Norse had little dialect variation, and Old Icelandic does not diverge much more than the Old Norwegian dialects do from each other. Old Norwegian differentiated early from Old Icelandic by the loss of the consonant h in initial position before l, n and r; thus whereas Old Icelandic manuscripts might use the form hnefi 'fist', Old Norwegian manuscripts might use nefi. From the late 13th century, Old Icelandic and Old Norwegian started to diverge more. After c. 1350, the Black Death and following social upheavals seem to have accelerated language changes in Norway. From the late 14th century, the language used in Norway is generally referred to as Middle Norwegian. Old West Norse underwent a lengthening of initial vowels at some point, especially in Norwegian, so that OWN eta became éta, OWN akr > ákr, OIC ek > ék. In Iceland, initial /w/ before /ɾ/ was lost:[cv 5] compare Icelandic rangur with Danish vrang, OEN wrangʀ. The change is shared with Old Gutnish.
A specifically Icelandic sound, the long, u-umlauted A, spelled ⟨Ǫ́⟩ and pronounced /ɔː/, developed around the early 11th century.[cv 1] It was short-lived, being marked in the Grammatical Treatises and remaining until the end of the 12th century.[cv 1] It then merged back into /aː/ ; as a result, long A is not affected by u-umlaut in Modern Icelandic. /w/ merged with /v/ during the 12th century, which caused /v/ to become an independent phoneme from /f/ and the written distinction of ⟨v⟩ for /v/ from medial and final ⟨f⟩ to become merely etymological. Around the 13th century, Œ/Ǿ (/øː/, which had probably already lowered to /œː/) merged to Æ (/ɛː/).[cv 6] Thus, pre-13th-century grœnn (with ⟨œ⟩) 'green' became spelled as in modern Icelandic grænn (with ⟨æ⟩). The 12th-century Gray Goose Laws manuscripts distinguish the vowels, and so does the Codex Regius copy.[cv 6] However, the 13th-century Codex Regius copy of the Poetic Edda probably relied on newer or poorer quality sources, or both. Demonstrating either difficulty with or total lack of natural distinction, the manuscripts show separation of the two phonemes in some places, but they frequently confuse the letters chosen to distinguish them in others.[cv 6] Towards the end of the 13th century, Ę (/ɛ/) merged to E (/e/).[cv 7] Around the 11th century, Old Norwegian ⟨hl⟩, ⟨hn⟩, and ⟨hr⟩ became ⟨l⟩, ⟨n⟩ and ⟨r⟩. It is debatable whether the ⟨hC⟩ sequences represented a consonant cluster (/hC/) or devoicing (/C̥/). Orthographic evidence suggests that in a confined dialect of Old Norwegian, /ɔ/ may have been unrounded before /u/ and that u-umlaut was reversed unless the u had been eliminated: ǫll, ǫllum > ǫll, allum. This dialect of Old West Norse was spoken by Icelandic colonies in Greenland. When the colonies died out around the 15th century, the dialect went with it. The phoneme /θ/ and some instances of /ð/ merged to /t/ and so Old Icelandic Þórðr became Tortr. The following text is from Alexanders saga, an Alexander Romance. The manuscript, AM 519 a 4to, is dated c. 1280. The facsimile demonstrates the sigla used by scribes to write Old Norse. Many of them were borrowed from Latin. Without familiarity with these abbreviations, the facsimile will be unreadable to many. In addition, reading the manuscript itself requires familiarity with the letterforms of the native script. The abbreviations are expanded in a version with normalized spelling like that of the standard normalization system. Compared to the spelling of the same text in Modern Icelandic, pronunciation has changed greatly, but spelling has changed little since Icelandic orthography was intentionally modelled after Old Norse in the 19th century. [...] ſem oꝩın͛ h̅ſ brıgzloðo h̅o̅ epꞇ͛ þͥ ſe̅ ſıðaʀ mon ſagꞇ verða. Þeſſı ſveın̅ aͬ.* ꝩar ıſcola ſeꞇꞇr ſem ſıðꝩenıa e͛ ꞇıl rıkra man̅a vꞇan-lanꝺz aꞇ laꞇa g͛a vıð boꝛn̅ ſíıƞ́ Meıſꞇarı ꝩar h̅o̅ ꝼengın̅ ſa e͛ arıſꞇoꞇıleſ heꞇ. h̅ ꝩar harðla goðꝛ clercr ⁊ en̅ meſꞇı ſpekıngr aꞇ ꝩıꞇı. ⁊ er h̅ ꝩͬ.xíí. veꞇᷓ gamall aꞇ allꝺrı nalıga alroſcın̅ aꞇ ꝩıꞇı. en ſꞇoꝛhvgaðꝛ u̅ ꝼᷓm alla ſına ıaꝼnallꝺꝛa. [...] sem óvinir hans brigzluðu honum eftir því, sem síðarr man sagt verða. þessi sveinn Alexander var í skóla settr, sem siðvenja er til ríkra manna útanlands at láta gera við bǫrn sín. meistari var honum fenginn sá, er Aristoteles hét. hann var harðla góðr klerkr ok inn mesti spekingr at viti. ok er hann var tólv vetra gamall at aldri, náliga alroskinn at viti, en stórhugaðr umfram alla sína jafnaldra, [...] [...] 
sem óvinir hans brigsluðu honum eftir því, sem síðar mun sagt verða. Þessi sveinn Alexander var í skóla settur, sem siðvenja er til ríkra manna utanlands að láta gera við börn sín. Meistari var honum fenginn sá, er Aristóteles hét. Hann var harla góður klerkur og hinn mesti spekingur að viti og er hann var tólf vetra gamall að aldri, nálega alroskinn að viti, en stórhugaður umfram alla sína jafnaldra, [...]

* a printed in uncial. Uncials are not encoded separately in Unicode as of this writing.

Old East Norse or Old East Nordic between 800 and 1100 is called Runic Swedish in Sweden and Runic Danish in Denmark, but for geographical rather than linguistic reasons. Any differences between the two were minute at best during the more ancient stages of this dialect group. Changes had a tendency to occur earlier in the Danish region. Even today, many changes that took place in Old Danish have still not taken place in modern Swedish. Swedish is therefore the more conservative of the two in both the ancient and the modern languages, sometimes by a profound margin. The language is called "runic" because the body of text appears in runes. Runic Old East Norse is characteristically conservative in form, especially Swedish (which is still true for modern Swedish compared to Danish). In essence it matches or surpasses the conservatism of post-runic Old West Norse, which in turn is generally more conservative than post-runic Old East Norse. While typically "Eastern" in structure, many later post-runic changes and trademarks of OEN had yet to happen.

The phoneme ʀ, which evolved during the Proto-Norse period from z, was still clearly separated from r in most positions, even when being geminated, while in OWN it had already merged with r. The Proto-Germanic phoneme */w/ was preserved in initial sounds in Old East Norse (w-), unlike in West Norse where it developed into /v/. It survived in rural Swedish dialects in the provinces of Westro- and North Bothnia, Skåne, Blekinge, Småland, Halland, Västergötland and south of Bohuslän into the 18th, 19th and 20th century. It is still preserved in the Dalecarlian dialects in the province of Dalarna, Sweden, and in Jutlandic dialects in Denmark. The /w/-phoneme also occurred after consonants (kw-, tw-, sw- etc.) in Old East Norse and did so into modern times in said Swedish dialects and in a number of others. Generally, the initial w-sound developed into [v] in dialects earlier than it did after consonants, where it survived much longer. In summation, the /w/-sound survived in the East Nordic tongues almost a millennium longer than in the West Norse counterparts, and still subsists at present.

Monophthongization of æi > ē and øy, au > ø̄ started in mid-10th-century Denmark. Compare runic OEN fæigʀ, gæiʀʀ, haugʀ, møydōmʀ, diūʀ; with post-runic OEN fēgher, gēr, hø̄gher, mø̄dōmber, diūr; OWN feigr, geirr, haugr, meydómr, dýr; from PN *faigijaz, *gaizaz, *haugaz, *mawi + -dōmaz 'maidendom/virginity', *diuza. Feminine o-stems often preserve the plural ending -aʀ, while in OWN they more often merge with the feminine i-stems: (runic OEN) *sōlaʀ, *hafnaʀ, *hamnaʀ, *wāgaʀ versus OWN sólir, hafnir and vágir (Danish has mainly lost the distinction between the two stems, with both endings now being rendered as -er or -e alternatively for the o-stems; modern Swedish solar, hamnar, vågar).
Vice versa, masculine i-stems with the root ending in either g or k tended to shift the plural ending to that of the ja-stems while OEN kept the original: drængiaʀ, *ælgiaʀ and *bænkiaʀ versus OWN drengir, elgir and bekkir (modern Danish drenge, elge, bænke; modern Swedish drängar, älgar, bänkar). The plural endings of ja-stems were mostly preserved while those of OWN often acquired those of the i-stems: *bæðiaʀ, *bækkiaʀ, *wæfiaʀ versus OWN beðir, bekkir, vefir (modern Swedish bäddar, bäckar, vävar).

Until the early 12th century, Old East Norse was very much a uniform dialect. It was in Denmark that the first innovations appeared that would differentiate Old Danish from Old Swedish (Bandle 2005, Old East Nordic, pp. 1856, 1859), as these innovations spread north unevenly (unlike the earlier changes that spread more evenly over the East Norse area), creating a series of isoglosses going from Zealand to Svealand. In Old Danish, /hɾ/ merged with /ɾ/ during the 9th century. From the 11th to 14th centuries, the unstressed vowels -a, -o and -e (standard normalization -a, -u and -i) started to merge into a single central vowel, represented with the letter ⟨e⟩, which also came from widespread epenthesis, occurring particularly before -ʀ endings. At the same time, the voiceless stop consonants p, t and k became voiced plosives and even fricative consonants. Resulting from these innovations, Danish has kage (cake), tunger (tongues) and gæster (guests) whereas (Standard) Swedish has retained older forms, kaka, tungor and gäster (OEN kaka, tungur, gæstir). Moreover, the Danish pitch accent shared with Norwegian and Swedish changed into stød around this time.

At the end of the 10th and early 11th century, initial h- before l, n and r was still preserved in the middle and northern parts of Sweden, and is sporadically still preserved in some northern dialects as g-, e.g. gly 'lukewarm', from hlýʀ. The Dalecarlian dialects developed independently from Old Swedish and as such can be considered separate languages from Swedish.

This is an extract from Västgötalagen, the Westrogothic law. It is the oldest text preserved as a manuscript in Sweden, and dates from the 13th century. It is contemporaneous with most of the Icelandic literature. The text marks the beginning of Old Swedish as a distinct dialect.

Dræpær maþar svænskan man eller smalenskæn, innan konongsrikis man, eigh væstgøskan, bøte firi atta ørtogher ok þrettan markær ok ænga ætar bot. [...] Dræpar maþær danskan man allæ noræn man, bøte niv markum. Dræpær maþær vtlænskan man, eigh ma frid flyia or landi sinu oc j æth hans. Dræpær maþær vtlænskæn prest, bøte sva mykit firi sum hærlænskan man. Præstær skal i bondalaghum væræ. Varþær suþærman dræpin ællær ænskær maþær, ta skal bøta firi marchum fiurum þem sakinæ søkir, ok tvar marchar konongi.

('If someone slays a Swede or a Smålander, a man from the kingdom, but not a West Geat, he will pay eight örtugar and thirteen marks, but no weregild. [...] If someone slays a Dane or a Norwegian, he will pay nine marks. If someone slays a foreigner, he shall not be banished and have to flee to his clan. If someone slays a foreign priest, he will pay as much as for a fellow countryman. A priest counts as a freeman. If a Southerner is slain or an Englishman, he shall pay four marks to the plaintiff and two marks to the king.')
Due to Gotland's early isolation from the mainland, many features of Old Norse did not spread from or to the island, and Old Gutnish developed as an entirely separate branch from Old East and West Norse. For example, the diphthong ai in aigu, þair and waita was not subject to anticipatory assimilation to ei as in e.g. Old Icelandic eigu, þeir and veita. Gutnish also shows dropping of /w/ in initial /wɾ/, which it shares with the Old West Norse dialects (except Old East Norwegian), but which is otherwise abnormal. Breaking was also particularly active in Old Gutnish, leading to e.g. biera versus mainland bera. The Guta lag 'law of the Gutes' is the longest text surviving from Old Gutnish. Appended to it is a short text dealing with the history of the Gotlanders. This part relates to the agreement that the Gotlanders had with the Swedish king sometime before the 9th century:

So gingu gutar sielfs wiliandi vndir suia kunung þy at þair mattin frir Oc frelsir sykia suiariki j huerium staþ. vtan tull oc allar utgiftir. So aigu oc suiar sykia gutland firir vtan cornband ellar annur forbuþ. hegnan oc hielp sculdi kunungur gutum at waita. En þair wiþr þorftin. oc kallaþin. sendimen al oc kunungr oc ierl samulaiþ a gutnal þing senda. Oc latta þar taka scatt sinn. þair sendibuþar aighu friþ lysa gutum alla steþi til sykia yfir haf sum upsala kunungi til hoyrir. Oc so þair sum þan wegin aigu hinget sykia.

('So, by their own will, the Gotlanders became the subjects of the Swedish king, so that they could travel freely and without risk to any location in the Swedish kingdom without toll and other fees. Likewise, the Swedes had the right to go to Gotland without corn restrictions or other prohibitions. The king was to provide protection and help, when they needed it and asked for it. The king and the jarl shall in return send emissaries to the Gutnish All-thing to receive the taxes. These emissaries shall declare free passage for the Gotlanders to all ports across the sea which belong to the king at Uppsala, and likewise for everyone who wants to travel to Gotland.')

Relationship to other languages

Old English and Old Norse were related languages. It is therefore not surprising that many words in Old Norse look familiar to English speakers: e.g., armr 'arm', fótr 'foot', land 'land', fullr 'full', hanga 'to hang', standa 'to stand'. This is because both English and Old Norse stem from a Proto-Germanic mother language. In addition, numerous common, everyday Old Norse words were adopted into the Old English language during the Viking Age. In a simple sentence like 'They are both weak', the extent of the Old Norse loanwords becomes quite clear; compare Old East Norse with archaic pronunciation: "Þæiʀ eʀu báðiʀ wæikiʀ" with Old English: "híe syndon bégen (þá) wáce". The words "they" and "weak" are both borrowed from Old Norse, and the word "both" might also be a borrowing, though this is disputed (cf. German beide).
While the loanwords adopted from Norse were not as numerous as those from Norman French or Latin, their depth and everyday nature make them a substantial and very important part of everyday English speech, as they are part of the very core of the modern English vocabulary. Tracing the origins of words like "bull" and "Thursday" is more difficult. "Bull" may derive from either Old English bula or Old Norse buli, while "Thursday" may be a borrowing or simply derive from Old English Þunresdæg, which could have been influenced by the Old Norse cognate. The word "are" is from Old English earun/aron, which stems back to Proto-Germanic, as do the Old Norse cognates.
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-05-03-mojo-launch.html] | [TOKENS: 4390] |
Mojo may be the biggest programming language advance in decades
Jeremy Howard
May 4, 2023

I remember the first time I used v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again.

Writing code in Mojo, a new programming language from Modular, is the second time in my life I’ve had that feeling. (The original post demonstrates some Mojo code at this point.)

Why not just use Python?

Before I explain why I’m so excited about Mojo, I first need to say a few things about Python.

Python is the language that I have used for nearly all my work over the last few years. It is a beautiful language. It has an elegant core, on which everything else is built. This approach means that Python can (and does) do absolutely anything. But it comes with a downside: performance. A few percent here or there doesn’t matter. But Python is many thousands of times slower than languages like C++. This makes it impractical to use Python for the performance-sensitive parts of code – the inner loops where performance is critical.

However, Python has a trick up its sleeve: it can call out to code written in fast languages. So Python programmers learn to avoid using Python for the implementation of performance-critical sections, instead using Python wrappers over C, FORTRAN, Rust, etc. code. Libraries like Numpy and PyTorch provide “pythonic” interfaces to high performance code, allowing Python programmers to feel right at home, even as they’re using highly optimised numeric libraries.

Nearly all AI models today are developed in Python, thanks to the flexible and elegant programming language, fantastic tools and ecosystem, and high performance compiled libraries.

But this “two-language” approach has serious downsides. For instance, AI models often have to be converted from Python into a faster implementation, such as ONNX or torchscript. But these deployment approaches can’t support all of Python’s features, so Python programmers have to learn to use a subset of the language that matches their deployment target. It’s very hard to profile or debug the deployment version of the code, and there’s no guarantee it will even run identically to the Python version.

The two-language problem gets in the way of learning. Instead of being able to step into the implementation of an algorithm while your code runs, or jump to the definition of a method of interest, you find yourself deep in the weeds of C libraries and binary blobs. All coders are learners (or at least, they should be) because the field constantly develops, and no-one can understand it all. So difficulties with learning are a problem for experienced devs just as much as for students starting out. The same problem occurs when trying to debug code or find and resolve performance problems. The two-language problem means that the tools that Python programmers are familiar with no longer apply as soon as we find ourselves jumping into the backend implementation language.

There are also unavoidable performance problems, even when a faster compiled implementation language is used for a library.
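To make the scale of that speed gap concrete, here is a toy timing sketch (an illustration added here, not code from the post): the same reduction written as an interpreted Python loop and as a single NumPy call into compiled code. Exact numbers vary by machine, but the loop is typically hundreds of times slower.

```python
# Toy illustration of the "two-language" split: an interpreted Python loop
# versus one NumPy call that dispatches into optimised, compiled code.
import time
import numpy as np

n = 10_000_000
xs = list(range(n))
arr = np.arange(n)

t0 = time.perf_counter()
total = 0
for x in xs:               # pure Python: every iteration pays interpreter overhead
    total += x
t1 = time.perf_counter()

t2 = time.perf_counter()
total_np = int(arr.sum())  # a single call into compiled code
t3 = time.perf_counter()

assert total == total_np
print(f"pure Python loop: {t1 - t0:.3f}s; NumPy sum: {t3 - t2:.3f}s")
```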
One major issue is the lack of "fusion" – that is, calling a bunch of compiled functions in a row leads to a lot of overhead, as data is converted to and from Python formats, and the cost of switching from Python to C and back must be paid repeatedly. So instead we have to write special "fused" versions of common combinations of functions (such as a linear layer followed by a rectified linear layer in a neural net), and call these fused versions from Python. This means there are a lot more library functions to implement and remember, and you're out of luck if you're doing anything even slightly non-standard, because there won't be a fused version for you.

We also have to deal with the lack of effective parallel processing in Python. Nowadays we all have computers with lots of cores, but Python generally will use just one at a time. There are some clunky ways to write parallel code which uses more than one core, but they either have to work on totally separate memory (and have a lot of overhead to start up), or they have to take it in turns to access memory (the dreaded "global interpreter lock", which often makes parallel code actually slower than single-threaded code!)

Libraries like PyTorch have been developing increasingly ingenious ways to deal with these performance problems, with the newly released PyTorch 2 even including a compile() function that uses a sophisticated compilation backend to create high-performance implementations of Python code. However, functionality like this can't work magic: there are fundamental limitations on what's possible with Python, based on how the language itself is designed.

You might imagine that in practice there's just a small number of building blocks for AI models, and so it doesn't really matter if we have to implement each of these in C. Besides, they're pretty basic algorithms on the whole anyway, right? For instance, transformer models are nearly entirely built from multiple layers of just two components, multilayer perceptrons (MLPs) and attention, each of which can be implemented in just a few lines of Python with PyTorch (both are sketched in the code below). But this hides the fact that real-world implementations of these operations are far more complex. For instance, check out this memory-optimised "flash attention" implementation in CUDA C. It also hides the fact that huge amounts of performance are being left on the table by these generic approaches to building models. For instance, "block sparse" approaches can dramatically improve speed and memory use. Researchers are working on tweaks to nearly every part of common architectures, and coming up with new architectures (and SGD optimisers, and data augmentation methods, etc.) – we're not even close to having some neatly wrapped-up system that everyone will use forever more.

In practice, much of the fastest code used for language models today is written in C and C++. For instance, Fabrice Bellard's TextSynth and Georgi Gerganov's ggml both use C, and as a result are able to take full advantage of the performance benefits of fully compiled languages.
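As a sketch of the fusion point above: in eager PyTorch, a linear layer followed by a ReLU runs as separate operations with an intermediate result materialised between them, while PyTorch 2's compile() can, depending on the backend, fuse such chains into fewer kernels. This is an illustrative use of the public API under assumed shapes, not code from the post.

```python
import torch
import torch.nn.functional as F

w = torch.randn(1024, 1024)
b = torch.randn(1024)

def linear_relu(x):
    # Eager mode: linear and relu execute as two separate operations,
    # writing the intermediate activation out to memory in between.
    return F.relu(F.linear(x, w, b))

# PyTorch 2+: ask the compiler to generate a fused implementation.
compiled_linear_relu = torch.compile(linear_relu)

x = torch.randn(64, 1024)
assert torch.allclose(linear_relu(x), compiled_linear_relu(x), atol=1e-4)
```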
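The post's original MLP and self-attention listings aren't preserved in this text. A standard PyTorch formulation of the two blocks – a sketch of the usual pattern, not necessarily the exact code from the post – looks like this:

```python
import torch
from torch import nn

class MLP(nn.Module):
    """The feed-forward block of a transformer layer."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.net(x)

class SelfAttention(nn.Module):
    """Multi-head self-attention, written out directly."""
    def __init__(self, dim, n_heads):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, t, d = x.shape
        hd = d // self.n_heads
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim)
        q, k, v = (z.view(b, t, self.n_heads, hd).transpose(1, 2) for z in (q, k, v))
        att = (q @ k.transpose(-2, -1)) / hd**0.5
        att = att.softmax(dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, d)
        return self.proj(out)
```

A handful of lines each – which is exactly why the gap between these reference versions and hand-tuned kernels like flash attention is so striking.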
Enter Mojo

Chris Lattner is responsible for creating many of the projects that we all rely on today – even though we might not have heard of everything he built! As part of his PhD thesis he started the development of LLVM, which fundamentally changed how compilers are created, and today forms the foundation of many of the most widely used language ecosystems in the world.

He then went on to launch Clang, a C and C++ compiler that sits on top of LLVM, and is used by most of the world's most significant software developers (including providing the backbone for Google's performance-critical code). LLVM includes an "intermediate representation" (IR), a special language designed for machines to read and write (instead of for people), which has enabled a huge community of software projects to work together to provide better programming language functionality across a wider range of hardware.

Chris saw, however, that C and C++ didn't really take full advantage of the power of LLVM, so while he was working at Apple he designed a new language, called "Swift", which he describes as "syntax sugar for LLVM". Swift has gone on to become one of the world's most widely used programming languages, in particular because it is today the main way to create apps for iPhone, iPad, macOS, and Apple TV.

Unfortunately, Apple's control of Swift has meant it hasn't really had its time to shine outside of the cloistered Apple world. Chris led a team for a while at Google to try to move Swift out of its Apple comfort zone, to become a replacement for Python in AI model development. I was very excited about this project, but sadly it did not receive the support it needed from either Apple or Google, and it was not ultimately successful.

Having said that, whilst at Google Chris did develop another project which became hugely successful: MLIR. MLIR is a replacement for LLVM's IR for the modern age of many-core computing and AI workloads. It's critical for fully leveraging the power of hardware like GPUs, TPUs, and the vector units increasingly being added to server-class CPUs.

So, if Swift was "syntax sugar for LLVM", what's "syntax sugar for MLIR"? The answer is: Mojo! Mojo is a brand new language that's designed to take full advantage of MLIR. And also Mojo is Python. Wait, what? OK, let me explain. Maybe it's better to say Mojo is Python++. It will be (when complete) a strict superset of the Python language. But it also has additional functionality, so we can write high-performance code that takes advantage of modern accelerators.

Mojo seems to me like a more pragmatic approach than Swift. Whereas Swift was a brand new language packing all kinds of cool features based on the latest research in programming language design, Mojo is, at its heart, just Python. This seems wise, not just because Python is already well understood by millions of coders, but also because after decades of use its capabilities and limitations are now well understood. Relying on the latest programming language research is pretty cool, but it's potentially dangerous speculation, because you never really know how things will turn out. (I will admit that, personally, I often got confused by Swift's powerful but quirky type system, and sometimes even managed to confuse the Swift compiler so badly that it blew up entirely!)

A key trick in Mojo is that, as a developer, you can opt in to a faster "mode" at any time by using "fn" instead of "def" to create your function. In this mode, you have to declare exactly what the type of every variable is, and as a result Mojo can create optimised machine code to implement your function. Furthermore, if you use "struct" instead of "class", your attributes will be tightly packed into memory, such that they can even be used in data structures without chasing pointers around.
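Mojo itself isn't shown here, but a rough Python analogy conveys what "tightly packed" buys you: a ctypes.Structure lays its fields out contiguously like a C struct, whereas a normal Python object holds pointers to boxed values scattered on the heap. (This is an analogy for the memory layout only, not a claim about how Mojo is implemented.)

```python
import ctypes
import sys

class PointStruct(ctypes.Structure):
    # Two doubles stored contiguously, C-style: 16 bytes total
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

class PointObject:
    def __init__(self, x, y):
        self.x = x  # attributes are pointers to heap-allocated float objects
        self.y = y

print(ctypes.sizeof(PointStruct))            # 16
print(sys.getsizeof(PointObject(1.0, 2.0)))  # the bare instance is already
                                             # larger, and its two floats
                                             # live elsewhere on the heap
```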
These are the kinds of features that allow languages like C to be so fast, and now they're accessible to Python programmers too – just by learning a tiny bit of new syntax.

How is this possible?

There have, at this point, been hundreds of attempts over decades to create programming languages which are concise, flexible, fast, practical, and easy to use – without much success. But somehow, Modular seems to have done it. How could this be? There are a couple of hypotheses we might come up with: perhaps the demo was faked or carefully cherry-picked, or perhaps a huge team has been quietly working on Mojo for many years.

Neither of these things is true. The demo, in fact, was created just a few days before I recorded the video. The two examples we gave (matmul and mandelbrot) were not carefully chosen as the only things that happened to work after trying dozens of approaches; rather, they were the only things we tried for the demo, and they worked first time! Whilst there are plenty of missing features at this early stage (Mojo isn't even released to the public yet, other than an online "playground"), the demo you see really does work the way you see it. And indeed you can run it yourself now in the playground. Nor is there a huge, long-running effort behind it: Modular is a fairly small startup that's only a year old, only one part of the company is working on the Mojo language, and Mojo development was only started recently.

It's a small team, working for a short time, so how have they done so much? The key is that Mojo builds on some really powerful foundations. Very few software projects I've seen spend enough time building the right foundations, and as a result they accrue mounds of technical debt. Over time, it becomes harder and harder to add features and fix bugs. In a well designed system, however, every feature is easier to add than the last one, is faster, and has fewer bugs, because the foundations each feature builds upon are getting better and better. Mojo is a well designed system.

At its core is MLIR, which has already been developed for many years, initially kicked off by Chris Lattner at Google. He had recognised what the core foundations for an "AI era programming language" would need to be, and focused on building them. MLIR was a key piece. Just as LLVM made it dramatically easier for powerful new programming languages to be developed over the last decade (such as Rust, Julia, and Swift, which are all based on LLVM), MLIR provides an even more powerful core to languages that are built on it.

Another key enabler of Mojo's rapid development is the decision to use Python as the syntax. Developing and iterating on syntax is one of the most error-prone, complex, and controversial parts of the development of a language. By simply outsourcing that to an existing language (which also happens to be the most widely used language today), that whole piece disappears! The relatively small number of new bits of syntax needed on top of Python then largely fit quite naturally, since the base is already in place.

The next step was to create a minimal Pythonic way to call MLIR directly. That wasn't a big job at all, but it was all that was needed to then build all of Mojo on top of it – and work directly in Mojo for everything else. That meant that the Mojo devs were able to "dog-food" Mojo when writing Mojo, nearly from the very start. Any time they found something that didn't work well as they developed Mojo, they could add a needed feature to Mojo itself to make it easier for them to develop the next bit of Mojo!
This is very similar to Julia, which was developed on a minimal LISP-like core that provides the Julia language elements, which are then bound to basic LLVM operations. Nearly everything in Julia is built on top of that, using Julia itself.

I can't begin to describe all the little (and big!) ideas throughout Mojo's design and implementation – it's the result of Chris and his team's decades of work on compiler and language design, and includes all the tricks and hard-won experience from that time – but what I can describe is an amazing result that I saw with my own eyes.

The Modular team internally announced that they'd decided to launch Mojo with a video, including a demo – and they set a date just a few weeks in the future. But at that time Mojo was just the most bare-bones language. There was no usable notebook kernel, hardly any of the Python syntax was implemented, and nothing was optimised. I couldn't understand how they hoped to implement all this in a matter of weeks – let alone make it any good! What I saw over this time was astonishing. Every day or two whole new language features were implemented, and as soon as there was enough in place to try running algorithms, generally they'd be at or near state-of-the-art performance right away! I realised that what was happening was that all the foundations were already in place, and that they'd been explicitly designed to build the things that were now under development. So it shouldn't have been a surprise that everything worked, and worked well – after all, that was the plan all along!

This is a reason to be optimistic about the future of Mojo. Although it's still early days for this project, my guess, based on what I've observed in the last few weeks, is that it's going to develop faster and further than most of us expect…

Deployment

I've left one of the bits I'm most excited about to last: deployment. Currently, if you want to give your cool Python program to a friend, you're going to have to tell them to first install Python! Or you could give them an enormous file that includes the entirety of Python and the libraries you use, all packaged up together, to be extracted and loaded when they run your program.

Because Python is an interpreted language, how your program behaves will depend on the exact version of Python that's installed, which versions of which libraries are present, and how it's all been configured. To avoid this maintenance nightmare, the Python community has instead settled on a couple of options for installing Python applications: environments, which have a separate Python installation for each program; or containers, which package up much of an entire operating system for each application. Both approaches lead to a lot of confusion and overhead in developing and deploying Python applications.

Compare this to deploying a statically compiled C application: you can literally just make the compiled program available for direct download. It can be just 100k or so in size, and will launch and run quickly. There is also the approach taken by Go, which isn't able to generate small applications like C can, but instead incorporates a "runtime" into each packaged application. This approach is a compromise between Python and C, still requiring tens of megabytes for a binary, but providing for easier deployment than Python.

Because Mojo is a compiled language, its deployment story is basically the same as C's. For instance, a program that includes a version of matmul written from scratch is around 100k.
This means that Mojo is far more than a language for AI/ML applications. It's actually a version of Python that allows us to write fast, small, easily-deployed applications that take advantage of all available cores and accelerators!

Alternatives to Mojo

Mojo is not the only attempt at solving the Python performance and deployment problem. In terms of languages, Julia is perhaps the strongest current alternative. It has many of the benefits of Mojo, and a lot of great projects are already built with it. The Julia folks were kind enough to invite me to give a keynote at their recent conference, and I used that opportunity to describe what I felt were the current shortcomings (and opportunities) for Julia. As discussed in this video, Julia's biggest challenge stems from its large runtime, which in turn stems from the decision to use garbage collection in the language. Also, the multiple dispatch approach used in Julia is a fairly unusual choice, which opens a lot of doors to do cool stuff in the language, but can also make things pretty complicated for devs. (I'm so enthused by this approach that I built a Python version of it – but as a result I'm also particularly aware of its limitations!)

In Python, the most prominent current solution is probably Jax, which effectively creates a domain-specific language (DSL) using Python. The output of this language is XLA, which is a machine learning compiler that predates MLIR (and is gradually being ported over to MLIR, I believe). Jax inherits the limitations of both Python (e.g. the language has no way of representing structs, or allocating memory directly, or creating fast loops) and XLA (which is largely limited to machine-learning-specific concepts and is primarily targeted at TPUs), but it has the huge upside that it doesn't require a new language or a new compiler. As previously discussed, there's also the new PyTorch compiler, and TensorFlow is also able to generate XLA code.

Personally, I find using Python in this way ultimately unsatisfying. I don't actually get to use all the power of Python, but have to use a subset that's compatible with the backend I'm targeting. I can't easily debug and profile the compiled code, and there's so much "magic" going on that it's hard to even know what actually ends up getting executed. I don't even end up with a standalone binary, but instead have to use special runtimes and deal with complex APIs. (I'm not alone here – everyone I know who has used PyTorch or TensorFlow for targeting edge devices or optimised serving infrastructure has described it as one of the most complex and frustrating tasks they've attempted! And I'm not sure I even know anyone who has actually completed either of these things using Jax.)

Numba and Cython represent another interesting direction for Python. I'm a big fan of both projects and have used them in my teaching and software development. Numba uses a special decorator to cause a Python function to be compiled into optimised machine code using LLVM. Cython is similar, but also provides a Python-like language which has some of the features of Mojo, and converts this Python dialect into C, which is then compiled. Neither solves the deployment challenge, but they can help a lot with the performance problem. Neither is able to target a range of accelerators with generic cross-platform code, although Numba does provide a very useful way to write CUDA code (and so allows NVIDIA GPUs to be targeted).
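Here's a minimal example of the Numba pattern just described – the njit decorator compiles the function body with LLVM on first call. The function itself is an arbitrary illustration of mine, not from the post:

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code via LLVM the first time it's called
def total(arr):
    s = 0.0
    for x in arr:  # this loop runs at compiled speed, not interpreter speed
        s += x
    return s

a = np.random.rand(1_000_000)
print(total(a))  # first call triggers compilation; later calls are fast
```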
I'm really grateful Numba and Cython exist, and have personally gotten a lot out of them. However, they're not at all the same as using a complete language and compiler that generates standalone binaries. They're band-aid solutions for Python's performance problems, and are fine for situations where that's all you need. But I'd much prefer to use a language that's as elegant as Python and as fast as expert-written C; that lets me write everything – from the application server, to the model architecture, to the installer – in one language; and that lets me debug and profile my code directly in the language in which I wrote it.

How would you like a language like that?

Footnotes

[1] I'm an advisor to Modular.
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-04-04-part2-2023.html] | [TOKENS: 1324] |
From Deep Learning Foundations to Stable Diffusion

Jeremy Howard
April 4, 2023

Today we're releasing our new course, From Deep Learning Foundations to Stable Diffusion, which is part 2 of Practical Deep Learning for Coders. Get started now!

In this course, containing over 30 hours of video content, we implement the astounding Stable Diffusion algorithm from scratch! That's the killer app that made the internet freak out, and caused the media to say "you may never believe what you see online again". We've worked closely with experts from Stability.ai and Hugging Face (creators of the Diffusers library) to ensure we have rigorous coverage of the latest techniques. The course includes coverage of papers that were released after Stable Diffusion came out – so it actually goes well beyond even what Stable Diffusion includes! We also explain how to read research papers, and practice this skill by studying and implementing many papers throughout the course.

Thank you to all the amazing people who helped put this course together. I'd particularly like to thank Tanishq Mathew Abraham (Stability.ai) and Jonathan Whitaker (co-author of the upcoming O'Reilly Diffusion book) for helping me present a number of the lessons, and also the great behind-the-scenes contributions by Pedro Cuenca (Hugging Face). Thanks also to Kat Crowson for her k-diffusion library, which we use heavily throughout the course, and for answering all our questions, and to Francisco Mussari for creating transcripts for most of the lessons.

Stable Diffusion, and diffusion methods in general, are a great learning goal for many reasons. For one thing, of course, you can create amazing stuff with these algorithms! To really take the technique to the next level, and create things that no-one has seen before, you need to deeply understand what's happening under the hood. With this understanding, you can craft your own loss functions, initialization methods, multi-model mixups, and more, to create totally new applications that have never been seen before. Just as important: it's a great learning goal because nearly every key technique in modern deep learning comes together in these methods. Contrastive learning, transformer models, auto-encoders, CLIP embeddings, latent variables, u-nets, resnets, and much more are involved in creating a single image.

To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you've finished fast.ai's Practical Deep Learning course then you'll be ready! If you haven't done that course, but are comfortable with building an SGD training loop from scratch in Python, being competitive in Kaggle competitions, using modern NLP and computer vision algorithms for practical problems, and working with PyTorch and fastai, then you will be ready to start the course. (If you're not sure, then we strongly recommend getting started with Practical Deep Learning.) Get started now!
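For a sense of that prerequisite level, "an SGD training loop from scratch" means something like the following minimal sketch – a toy linear-regression fit with hand-written updates; the data and hyperparameters are arbitrary illustrations, not course material:

```python
import torch

# Toy data for y = 2x + 1, with a little noise
x = torch.randn(100, 1)
y = 2 * x + 1 + 0.1 * torch.randn(100, 1)

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1

for step in range(200):
    loss = ((x * w + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # populate w.grad and b.grad
    with torch.no_grad():
        w -= lr * w.grad                  # vanilla SGD parameter update
        b -= lr * b.grad
        w.grad.zero_()                    # clear gradients for the next step
        b.grad.zero_()

print(w.item(), b.item())  # should approach 2 and 1
```

If writing that from a blank file feels comfortable, you're in good shape for the course.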
Content summary

In this course we'll explore diffusion methods such as Denoising Diffusion Probabilistic Models (DDPM) and Denoising Diffusion Implicit Models (DDIM). We'll get our hands dirty implementing unconditional and conditional diffusion models from scratch, building and experimenting with different samplers, and diving into recent tricks like textual inversion and Dreambooth.

We will also study and implement the 2022 paper by Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, which uses pre-conditioning to ensure that inputs and targets to the model are scaled to unit variance. The Karras model predicts an interpolated version of the clean image and the noise, depending on the amount of noise present in the input.

Along the way, we'll cover essential deep learning topics, including a variety of neural network architectures, data augmentation approaches (including the amazingly effective and criminally under-appreciated TrivialAugment strategy), and various loss functions, including perceptual loss and style loss. We'll build our own models from scratch, such as Multi-Layer Perceptrons (MLPs), ResNets, and Unets, while experimenting with generative architectures like autoencoders and transformers. Throughout the course, we'll use PyTorch to implement our models (but only after we've implemented everything needed in pure Python first!), and will create our own deep learning framework, called miniai. We'll master Python concepts like iterators, generators, and decorators to keep our code clean and efficient. We'll also explore deep learning optimizers like AdamW and RMSProp and learning-rate annealing, and learn how to experiment with the impact of different initialisers, batch sizes, and learning rates. And of course, we'll make use of handy tools like the Python debugger (pdb) and nbdev for building Python modules from Jupyter notebooks.

Lastly, we'll touch on fundamental concepts like tensors, calculus, and pseudo-random number generation to provide a solid foundation for our exploration. We'll apply these concepts to machine learning techniques like mean shift clustering and convolutional neural networks (CNNs), and will see how to track experiments with Weights and Biases (W&B). We'll also tackle mixed-precision training using both NVIDIA's apex library and the Accelerate library from Hugging Face, and will investigate various types of normalization, such as Layer Normalization and Batch Normalization. By the end of the course, you'll have a deep understanding of diffusion models and the skills to implement cutting-edge deep learning techniques. Get started now!

Tanishq's thoughts

Here's what Tanishq Mathew Abraham, from Stability.ai, who helped teach a number of the lessons, thinks of the course:

"The fast.ai Part 2 course is a one-of-its-kind course. I think this course is unique in that it teaches you how to build deep learning models from scratch while also exploring cutting-edge research in diffusion models. No other course guides you through state-of-the-art papers in the diffusion space (sometimes a mere few weeks after they first appear) while building clear, accessible implementations. We've even explored some new research directions in the course, and hopefully the course enables others to explore their own ideas further. If you are interested in a more advanced course building state-of-the-art deep learning models from scratch, and/or you're interested in how state-of-the-art diffusion models work and how to build them, this is the course for you! Even as someone helping with the development of this course, I found this to be an amazing learning experience, and I hope it is for you too!"
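As a footnote on the Karras et al. preconditioning mentioned in the content summary above, the idea can be written down compactly. This is a sketch from my recollection of the paper's formulation; treat the exact coefficient forms as something to verify against the paper rather than as course material:

```latex
% Denoiser with preconditioning (Karras et al. 2022):
% F_theta sees a unit-variance input, and its output is rescaled and mixed
% with a skip connection, so the effective target interpolates between the
% clean image and the noise depending on the noise level sigma.
D_\theta(x;\sigma) = c_\mathrm{skip}(\sigma)\,x
                   + c_\mathrm{out}(\sigma)\,
                     F_\theta\!\big(c_\mathrm{in}(\sigma)\,x;\ c_\mathrm{noise}(\sigma)\big)
\qquad
c_\mathrm{in}(\sigma) = \frac{1}{\sqrt{\sigma^2 + \sigma_\mathrm{data}^2}},\quad
c_\mathrm{skip}(\sigma) = \frac{\sigma_\mathrm{data}^2}{\sigma^2 + \sigma_\mathrm{data}^2},\quad
c_\mathrm{out}(\sigma) = \frac{\sigma\,\sigma_\mathrm{data}}{\sqrt{\sigma^2 + \sigma_\mathrm{data}^2}}
```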
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-05-31-extinction.html] | [TOKENS: 1097] |
Even if existing AI systems and their plausible extensions won't wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don't have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won't do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn't just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

And we should focus on building institutions that both reduce existing AI risks and put us in a robust position to address new ones as we learn more about them. This definitely means applying the precautionary principle, and taking concrete steps where we can to anticipate as yet unrealised risks. But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention. Building on their work, let's focus on the things we can study, understand and control—the design and real-world use of existing AI systems, their immediate successors, and the social and political systems of which they are part.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#Minecon_and_related_events] | [TOKENS: 12858] |
Minecraft

Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities.

Originally created by Markus "Notch" Persson using the Java programming language, the game was handed over to Jens "Jeb" Bergensten for continued development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios holds the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences.

Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.

Gameplay

Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position even in the air.

Players can craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins.

Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively.

The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.

Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough which takes about nine minutes to scroll past; it is the game's only narrative text, and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.

In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so; players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn, five minutes after being dropped.

Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor, and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.

The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update; it prevents the player from directly modifying the game's world, and was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended.

In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu, and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup.

Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players.

In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time; Bedrock Edition Realms server owners can invite up to 3,000 people, with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018.

The modding community consists of fans, users, and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs, and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements, including textures and sounds.
Players can also create their own "maps" (custom world save files), which often contain specific rules, challenges, puzzles, and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation.

The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music, and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017.

In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself", and would run only when the image containing the skin was opened.

In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs, and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements.

The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and the Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014, and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued, and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build, named Minecraft Classic, was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues.

On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition, and is planned to release on Java Edition at a later date.

Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009, and the Cave Game phase ended on 13 May of that year, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle.

Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but he later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh.

On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial and its legitimacy was questioned, due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements, and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.

The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but it received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players.

Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, the PS4 edition was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.

Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems, and does not work with the original 3DS or 2DS systems.

On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025.

On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update, and these would later become known as the "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.

An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking about them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's minigames from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease with which it enables emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, the gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised, spreading instead through word of mouth and various unpaid references in popular media, such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first Mob Vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. YouTube announced on 14 December 2021 that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset references building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and was initially in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual devices such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for their similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or merely superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the end, fans' fears were unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob/biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-05-03-mojo-launch.html] | [TOKENS: 4390] |
Mojo may be the biggest programming language advance in decades Jeremy Howard May 4, 2023 I remember the first time I used the v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again. Writing code in Mojo, a new programming language from Modular1, is the second time in my life I’ve had that feeling. Here’s what it looks like: a sketch of the syntax appears later in this piece, after the discussion of fn and struct. Why not just use Python? Before I explain why I’m so excited about Mojo, I first need to say a few things about Python. Python is the language that I have used for nearly all my work over the last few years. It is a beautiful language. It has an elegant core, on which everything else is built. This approach means that Python can (and does) do absolutely anything. But it comes with a downside: performance. A few percent here or there doesn’t matter. But Python is many thousands of times slower than languages like C++. This makes it impractical to use Python for the performance-sensitive parts of code – the inner loops where performance is critical. However, Python has a trick up its sleeve: it can call out to code written in fast languages. So Python programmers learn to avoid using Python for the implementation of performance-critical sections, instead using Python wrappers over code written in C, FORTRAN, Rust, etc. Libraries like Numpy and PyTorch provide “pythonic” interfaces to high performance code, allowing Python programmers to feel right at home, even as they’re using highly optimised numeric libraries. Nearly all AI models today are developed in Python, thanks to the flexible and elegant programming language, fantastic tools and ecosystem, and high performance compiled libraries. But this “two-language” approach has serious downsides. For instance, AI models often have to be converted from Python into a faster implementation, such as ONNX or torchscript. But these deployment approaches can’t support all of Python’s features, so Python programmers have to learn to use a subset of the language that matches their deployment target. It’s very hard to profile or debug the deployment version of the code, and there’s no guarantee it will even run identically to the Python version. The two-language problem gets in the way of learning. Instead of being able to step into the implementation of an algorithm while your code runs, or jump to the definition of a method of interest, you find yourself deep in the weeds of C libraries and binary blobs. All coders are learners (or at least, they should be) because the field constantly develops, and no-one can understand it all. So these difficulties in learning are problems for experienced devs just as much as for students starting out. The same problem occurs when trying to debug code or find and resolve performance problems. The two-language problem means that the tools that Python programmers are familiar with no longer apply as soon as we find ourselves jumping into the backend implementation language. There are also unavoidable performance problems, even when a faster compiled implementation language is used for a library.
One major issue is the lack of “fusion” – that is, calling a bunch of compiled functions in a row leads to a lot of overhead, as data is converted to and from Python formats, and the cost of switching from Python to C and back repeatedly must be paid. So instead we have to write special “fused” versions of common combinations of functions (such as a linear layer followed by a rectified linear layer in a neural net), and call these fused versions from Python. This means there are a lot more library functions to implement and remember, and you’re out of luck if you’re doing anything even slightly non-standard, because there won’t be a fused version for you. We also have to deal with the lack of effective parallel processing in Python. Nowadays we all have computers with lots of cores, but Python generally will just use one at a time. There are some clunky ways to write parallel code which uses more than one core, but they either have to work on totally separate memory (and have a lot of overhead to start up) or they have to take it in turns to access memory (the dreaded “global interpreter lock”, which often makes parallel code actually slower than single-threaded code!) Libraries like PyTorch have been developing increasingly ingenious ways to deal with these performance problems, with the newly released PyTorch 2 even including a compile() function that uses a sophisticated compilation backend to create high performance implementations of Python code. However, functionality like this can’t work magic: there are fundamental limitations on what’s possible with Python based on how the language itself is designed. You might imagine that in practice there’s just a small number of building blocks for AI models, and so it doesn’t really matter if we have to implement each of these in C. Besides which, they’re pretty basic algorithms on the whole anyway, right? For instance, transformers models are nearly entirely implemented by multiple layers of two components, multilayer perceptrons (MLP) and attention, each of which can be implemented with just a few lines of Python with PyTorch. But this hides the fact that real-world implementations of these operations are far more complex. For instance, check out this memory optimised “flash attention” implementation in CUDA C. It also hides the fact that there are huge amounts of performance being left on the table by these generic approaches to building models. For instance, “block sparse” approaches can dramatically improve speed and memory use. Researchers are working on tweaks to nearly every part of common architectures, and coming up with new architectures (and SGD optimisers, and data augmentation methods, etc) – we’re not even close to having some neatly wrapped-up system that everyone will use forever more. In practice, much of the fastest code today used for language models is being written in C and C++. For instance, Fabrice Bellard’s TextSynth and Georgi Gerganov’s ggml both use C, and as a result are able to take full advantage of the performance benefits of fully compiled languages. Enter Mojo Chris Lattner is responsible for creating many of the projects that we all rely on today – even though we might not have heard of all the stuff he built! As part of his PhD thesis he started the development of LLVM, which fundamentally changed how compilers are created, and today forms the foundation of many of the most widely used language ecosystems in the world.
He then went on to launch Clang, a C and C++ compiler that sits on top of LLVM, and is used by most of the world’s most significant software developers (including providing the backbone for Google’s performance critical code). LLVM includes an “intermediate representation” (IR), a special language designed for machines to read and write (instead of for people), which has enabled a huge community of software to work together to provide better programming language functionality across a wider range of hardware. Chris saw, however, that C and C++ didn’t really fully leverage the power of LLVM, so while he was working at Apple he designed a new language, called “Swift”, which he describes as “syntax sugar for LLVM”. Swift has gone on to become one of the world’s most widely used programming languages, in particular because it is today the main way to create iOS apps for iPhone, iPad, MacOS, and Apple TV. Unfortunately, Apple’s control of Swift has meant it hasn’t really had its time to shine outside of the cloistered Apple world. Chris led a team for a while at Google to try to move Swift out of its Apple comfort zone, to become a replacement for Python in AI model development. I was very excited about this project, but sadly it did not receive the support it needed from either Apple or from Google, and it was not ultimately successful. Having said that, whilst at Google Chris did develop another project which became hugely successful: MLIR. MLIR is a replacement for LLVM’s IR for the modern age of many-core computing and AI workloads. It’s critical for fully leveraging the power of hardware like GPUs, TPUs, and the vector units increasingly being added to server-class CPUs. So, if Swift was “syntax sugar for LLVM”, what’s “syntax sugar for MLIR”? The answer is: Mojo! Mojo is a brand new language that’s designed to take full advantage of MLIR. And also Mojo is Python. Wait what? OK let me explain. Maybe it’s better to say Mojo is Python++. It will be (when complete) a strict superset of the Python language. But it also has additional functionality so we can write high performance code that takes advantage of modern accelerators. Mojo seems to me like a more pragmatic approach than Swift. Whereas Swift was a brand new language packing all kinds of cool features based on latest research in programming language design, Mojo is, at its heart, just Python. This seems wise, not just because Python is already well understood by millions of coders, but also because after decades of use its capabilities and limitations are now well understood. Relying on the latest programming language research is pretty cool, but it’s potentially dangerous speculation, because you never really know how things will turn out. (I will admit that personally, for instance, I often got confused by Swift’s powerful but quirky type system, and sometimes even managed to confuse the Swift compiler and blew it up entirely!) A key trick in Mojo is that you can opt in at any time to a faster “mode” as a developer, by using “fn” instead of “def” to create your function. In this mode, you have to declare exactly what the type of every variable is, and as a result Mojo can create optimised machine code to implement your function. Furthermore, if you use “struct” instead of “class”, your attributes will be tightly packed into memory, such that they can even be used in data structures without chasing pointers around.
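To make that concrete, here’s a rough sketch of those two features, written against the launch-era Mojo syntax described in Modular’s early documentation. The names add and Pair are purely illustrative, and the language was still evolving at the time, so treat this as a sketch rather than a definitive reference:

```mojo
# `fn` opts in to Mojo's strict, fast mode: every argument and the
# return value must have a declared type, so the compiler can emit
# optimised machine code for the whole function.
fn add(x: Int, y: Int) -> Int:
    return x + y

# A `struct` is like a class, but its fields are fixed at compile time
# and packed tightly in memory, with no pointer-chasing at runtime.
struct Pair:
    var first: Int
    var second: Int

    fn __init__(inout self, first: Int, second: Int):
        self.first = first
        self.second = second

fn main():
    let p = Pair(3, 4)             # `let` declares an immutable binding
    print(add(p.first, p.second))  # prints 7
```

The important design point is that this is opt-in: code can stay as ordinary Python-style def functions, and you move a hot spot to fn only when it matters for performance.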
These are the kinds of features that allow languages like C to be so fast, and now they’re accessible to Python programmers too – just by learning a tiny bit of new syntax. How is this possible? There have, at this point, been hundreds of attempts over decades to create programming languages which are concise, flexible, fast, practical, and easy to use – without much success. But somehow, Modular seems to have done it. How could this be? There are a couple of hypotheses we might come up with: perhaps the demo was staged or cherry-picked, showing only the handful of examples that happened to work after many failed attempts; or perhaps Modular is a huge team that has been working on the language for many years. Neither of these things is true. The demo, in fact, was created in just a few days before I recorded the video. The two examples we gave (matmul and mandelbrot) were not carefully chosen as being the only things that happened to work after trying dozens of approaches; rather, they were the only things we tried for the demo and they worked first time! Whilst there are plenty of missing features at this early stage (Mojo isn’t even released to the public yet, other than an online “playground”), the demo you see really does work the way you see it. And indeed you can run it yourself now in the playground. Modular is a fairly small startup that’s only a year old, and only one part of the company is working on the Mojo language. Mojo development was only started recently. It’s a small team, working for a short time, so how have they done so much? The key is that Mojo builds on some really powerful foundations. Very few software projects I’ve seen spend enough time building the right foundations, and as a result they tend to accrue mounds of technical debt. Over time, it becomes harder and harder to add features and fix bugs. In a well designed system, however, every feature is easier to add than the last one, is faster, and has fewer bugs, because the foundations each feature builds upon are getting better and better. Mojo is a well designed system. At its core is MLIR, which has already been developed for many years, initially kicked off by Chris Lattner at Google. He had recognised what the core foundations for an “AI era programming language” would need to be, and focused on building them. MLIR was a key piece. Just as LLVM made it dramatically easier for powerful new programming languages to be developed over the last decade (such as Rust, Julia, and Swift, which are all based on LLVM), MLIR provides an even more powerful core to languages that are built on it. Another key enabler of Mojo’s rapid development is the decision to use Python as the syntax. Developing and iterating on syntax is one of the most error-prone, complex, and controversial parts of the development of a language. By simply outsourcing that to an existing language (which also happens to be the most widely used language today) that whole piece disappears! The relatively small number of new bits of syntax needed on top of Python then largely fit quite naturally, since the base is already in place. The next step was to create a minimal Pythonic way to call MLIR directly. That wasn’t a big job at all, but it was all that was needed to then create all of Mojo on top of that – and work directly in Mojo for everything else. That meant that the Mojo devs were able to “dog-food” Mojo when writing Mojo, nearly from the very start. Any time they found something didn’t quite work great as they developed Mojo, they could add a needed feature to Mojo itself to make it easier for them to develop the next bit of Mojo!
This is very similar to Julia, which was developed on a minimal LISP-like core that provides the Julia language elements, which are then bound to basic LLVM operations. I can’t begin to describe all the little (and big!) ideas throughout Mojo’s design and implementation – it’s the result of Chris and his team’s decades of work on compiler and language design and includes all the tricks and hard-won experience from that time – but what I can describe is an amazing result that I saw with my own eyes. The Modular team internally announced that they’d decided to launch Mojo with a video, including a demo – and they set a date just a few weeks in the future. But at that time Mojo was just the most bare-bones language. There was no usable notebook kernel, hardly any of the Python syntax was implemented, and nothing was optimised. I couldn’t understand how they hoped to implement all this in a matter of weeks – let alone to make it any good! What I saw over this time was astonishing. Every day or two whole new language features were implemented, and as soon as there was enough in place to try running algorithms, generally they’d be at or near state-of-the-art performance right away! I realised that what was happening was that all the foundations were already in place, and that they’d been explicitly designed to build the things that were now under development. So it shouldn’t have been a surprise that everything worked, and worked well – after all, that was the plan all along! This is a reason to be optimistic about the future of Mojo. Although it’s still early days for this project, my guess, based on what I’ve observed in the last few weeks, is that it’s going to develop faster and further than most of us expect… Deployment I’ve left one of the bits I’m most excited about to last: deployment. Currently, if you want to give your cool Python program to a friend, then you’re going to have to tell them to first install Python! Or, you could give them an enormous file that includes the entirety of Python and the libraries you use all packaged up together, which will be extracted and loaded when they run your program. Because Python is an interpreted language, how your program will behave will depend on the exact version of Python that’s installed, what versions of what libraries are present, and how it’s all been configured. In order to avoid this maintenance nightmare, the Python community has settled on a couple of options for installing Python applications: environments, which have a separate Python installation for each program; or containers, which have much of an entire operating system set up for each application. Both approaches lead to a lot of confusion and overhead in developing and deploying Python applications. Compare this to deploying a statically compiled C application: you can literally just make the compiled program available for direct download. It can be just 100k or so in size, and will launch and run quickly. There is also the approach taken by Go, which isn’t able to generate small applications like C, but instead incorporates a “runtime” into each packaged application. This approach is a compromise between Python and C, still requiring tens of megabytes for a binary, but providing for easier deployment than Python. As a compiled language, Mojo’s deployment story is basically the same as C. For instance, a program that includes a version of matmul written from scratch is around 100k.
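As an illustration of that deployment story: a complete Mojo program is just a source file with a main entry point. The file name and build invocation below are assumptions on my part, based on the toolchain Modular has described (at the time of writing only the online playground is publicly available):

```mojo
# hello.mojo -- a complete, self-contained Mojo program.
fn main():
    print("hello from a small standalone binary")
```

Something like mojo build hello.mojo should then produce a native executable you can hand directly to a friend – no interpreter, environment, or container required.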
This means that Mojo is far more than a language for AI/ML applications. It’s actually a version of Python that allows us to write fast, small, easily-deployed applications that take advantage of all available cores and accelerators! Alternatives to Mojo Mojo is not the only attempt at solving the Python performance and deployment problem. In terms of languages, Julia is perhaps the strongest current alternative. It has many of the benefits of Mojo, and a lot of great projects are already built with it. The Julia folks were kind enough to invite me to give a keynote at their recent conference, and I used that opportunity to describe what I felt were the current shortcomings (and opportunities) for Julia. As discussed in that video, Julia’s biggest challenge stems from its large runtime, which in turn stems from the decision to use garbage collection in the language. Also, the multi-dispatch approach used in Julia is a fairly unusual choice, which opens a lot of doors to do cool stuff in the language, but can also make things pretty complicated for devs. (I’m so enthused by this approach that I built a Python version of it – but I’m also, as a result, particularly aware of its limitations!) In Python, the most prominent current solution is probably Jax, which effectively creates a domain specific language (DSL) using Python. The output of this language is XLA, which is a machine learning compiler that predates MLIR (and is gradually being ported over to MLIR, I believe). Jax inherits the limitations of both Python (e.g. the language has no way of representing structs, or allocating memory directly, or creating fast loops) and XLA (which is largely limited to machine learning specific concepts and is primarily targeted to TPUs), but has the huge upside that it doesn’t require a new language or new compiler. As previously discussed, there’s also the new PyTorch compiler, and also Tensorflow is able to generate XLA code. Personally, I find using Python in this way ultimately unsatisfying. I don’t actually get to use all the power of Python, but have to use a subset that’s compatible with the backend I’m targeting. I can’t easily debug and profile the compiled code, and there’s so much “magic” going on that it’s hard to even know what actually ends up getting executed. I don’t even end up with a standalone binary, but instead have to use special runtimes and deal with complex APIs. (I’m not alone here – everyone I know that has used PyTorch or Tensorflow for targeting edge devices or optimised serving infrastructure has described it as being one of the most complex and frustrating tasks they’ve attempted! And I’m not sure I even know anyone that’s actually completed either of these things using Jax.) Another interesting direction for Python is Numba and Cython. I’m a big fan of these projects and have used both in my teaching and software development. Numba uses a special decorator to cause a Python function to be compiled into optimised machine code using LLVM. Cython is similar, but also provides a Python-like language which has some of the features of Mojo, and converts this Python dialect into C, which is then compiled. Neither language solves the deployment challenge, but they can help a lot with the performance problem. Neither is able to target a range of accelerators with generic cross-platform code, although Numba does provide a very useful way to write CUDA code (and so allows NVIDIA GPUs to be targeted).
I’m really grateful Numba and Cython exist, and have personally gotten a lot out of them. However, they’re not at all the same as using a complete language and compiler that generates standalone binaries. They’re bandaid solutions for Python’s performance problems, and are fine for situations where that’s all you need. But I’d much prefer to use a language that’s as elegant as Python and as fast as expert-written C, allows me to use one language to write everything from the application server to the model architecture and the installer too, and lets me debug and profile my code directly in the language in which I wrote it. How would you like a language like that? Footnotes 1. I’m an advisor to Modular.
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-05-03-mojo-launch.html] | [TOKENS: 4390] |
Mojo may be the biggest programming language advance in decades Jeremy Howard May 4, 2023 On this page I remember the first time I used the v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again. Writing code in Mojo, a new programming language from Modular1 is the second time in my life I’ve had that feeling. Here’s what it looks like: Why not just use Python? Before I explain why I’m so excited about Mojo, I first need to say a few things about Python. Python is the language that I have used for nearly all my work over the last few years. It is a beautiful language. It has an elegant core, on which everything else is built. This approach means that Python can (and does) do absolutely anything. But it comes with a downside: performance. A few percent here or there doesn’t matter. But Python is many thousands of times slower than languages like C++. This makes it impractical to use Python for the performance-sensitive parts of code – the inner loops where performance is critical. However, Python has a trick up its sleeve: it can call out to code written in fast languages. So Python programmers learn to avoid using Python for the implementation of performance-critical sections, instead using Python wrappers over C, FORTRAN, Rust, etc code. Libraries like Numpy and PyTorch provide “pythonic” interfaces to high performance code, allowing Python programmers to feel right at home, even as they’re using highly optimised numeric libraries. Nearly all AI models today are developed in Python, thanks to the flexible and elegant programming language, fantastic tools and ecosystem, and high performance compiled libraries. But this “two-language” approach has serious downsides. For instance, AI models often have to be converted from Python into a faster implementation, such as ONNX or torchscript. But these deployment approaches can’t support all of Python’s features, so Python programmers have to learn to use a subset of the language that matches their deployment target. It’s very hard to profile or debug the deployment version of the code, and there’s no guarantee it will even run identically to the python version. The two-language problem gets in the way of learning. Instead of being able to step into the implementation of an algorithm while your code runs, or jump to the definition of a method of interest, instead you find yourself deep in the weeds of C libraries and binary blobs. All coders are learners (or at least, they should be) because the field constantly develops, and no-one can understand it all. So difficulties learning and problems for experienced devs just as much as it is for students starting out. The same problem occurs when trying to debug code or find and resolve performance problems. The two-language problem means that the tools that Python programmers are familiar with no longer apply as soon as we find ourselves jumping into the backend implementation language. There are also unavoidable performance problems, even when a faster compiled implementation language is used for a library. 
Nearly all AI models today are developed in Python, thanks to the flexible and elegant programming language, fantastic tools and ecosystem, and high performance compiled libraries. But this “two-language” approach has serious downsides. For instance, AI models often have to be converted from Python into a faster implementation, such as ONNX or torchscript. But these deployment approaches can’t support all of Python’s features, so Python programmers have to learn to use a subset of the language that matches their deployment target. It’s very hard to profile or debug the deployment version of the code, and there’s no guarantee it will even run identically to the Python version. The two-language problem gets in the way of learning. Instead of being able to step into the implementation of an algorithm while your code runs, or jump to the definition of a method of interest, you find yourself deep in the weeds of C libraries and binary blobs. All coders are learners (or at least, they should be) because the field constantly develops, and no-one can understand it all. So difficulties in learning are a problem for experienced devs just as much as for students starting out. The same problem occurs when trying to debug code or find and resolve performance problems. The two-language problem means that the tools that Python programmers are familiar with no longer apply as soon as we find ourselves jumping into the backend implementation language. There are also unavoidable performance problems, even when a faster compiled implementation language is used for a library. One major issue is the lack of “fusion” – that is, calling a bunch of compiled functions in a row leads to a lot of overhead, as data is converted to and from Python formats, and the cost of switching from Python to C and back repeatedly must be paid. So instead we have to write special “fused” versions of common combinations of functions (such as a linear layer followed by a rectified linear layer in a neural net), and call these fused versions from Python. This means there are a lot more library functions to implement and remember, and you’re out of luck if you’re doing anything even slightly non-standard, because there won’t be a fused version for you. We also have to deal with the lack of effective parallel processing in Python. Nowadays we all have computers with lots of cores, but Python generally will just use one at a time. There are some clunky ways to write parallel code which uses more than one core, but they either have to work on totally separate memory (and have a lot of overhead to start up) or they have to take it in turns to access memory (the dreaded “global interpreter lock”, which often makes parallel code actually slower than single-threaded code!) Libraries like PyTorch have been developing increasingly ingenious ways to deal with these performance problems, with the newly released PyTorch 2 even including a compile() function that uses a sophisticated compilation backend to create high performance implementations of Python code. However, functionality like this can’t work magic: there are fundamental limitations on what’s possible with Python, based on how the language itself is designed. You might imagine that in practice there’s just a small number of building blocks for AI models, and so it doesn’t really matter if we have to implement each of these in C. Besides which, they’re pretty basic algorithms on the whole anyway, right? For instance, transformer models are implemented almost entirely as multiple layers of two components, multilayer perceptrons (MLP) and attention, each of which can be written in just a few lines of Python with PyTorch. Here’s what an MLP and a self-attention layer look like:
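(A minimal PyTorch sketch; single-head, unmasked attention, with names of my own choosing rather than the post’s original listings.)

    import torch
    from torch import nn

    class MLP(nn.Module):
        # Transformer feed-forward block: expand, non-linearity, project back.
        def __init__(self, d_model, d_hidden):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )

        def forward(self, x):
            return self.net(x)

    class SelfAttention(nn.Module):
        # Single-head scaled dot-product self-attention (no masking or dropout).
        def __init__(self, d_model):
            super().__init__()
            self.qkv = nn.Linear(d_model, 3 * d_model)
            self.proj = nn.Linear(d_model, d_model)
            self.scale = d_model ** -0.5

        def forward(self, x):  # x: (batch, seq, d_model)
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
            return self.proj(attn @ v)

    x = torch.randn(2, 16, 64)              # (batch, seq, d_model)
    y = SelfAttention(64)(MLP(64, 256)(x))  # shapes are preserved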
But this hides the fact that real-world implementations of these operations are far more complex. For instance, check out this memory-optimised “flash attention” implementation in CUDA C. It also hides the fact that a huge amount of performance is being left on the table by these generic approaches to building models. For instance, “block sparse” approaches can dramatically improve speed and memory use. Researchers are working on tweaks to nearly every part of common architectures, and coming up with new architectures (and SGD optimisers, and data augmentation methods, etc) – we’re not even close to having some neatly wrapped-up system that everyone will use forever more. In practice, much of the fastest code used for language models today is being written in C and C++. For instance, Fabrice Bellard’s TextSynth and Georgi Gerganov’s ggml both use C, and as a result are able to take full advantage of the performance benefits of fully compiled languages. Enter Mojo Chris Lattner is responsible for creating many of the projects that we all rely on today – even though we might not have heard of all the stuff he built! As part of his PhD thesis he started the development of LLVM, which fundamentally changed how compilers are created, and today forms the foundation of many of the most widely used language ecosystems in the world. He then went on to launch Clang, a C and C++ compiler that sits on top of LLVM, and is used by most of the world’s most significant software developers (including providing the backbone for Google’s performance critical code). LLVM includes an “intermediate representation” (IR), a special language designed for machines to read and write (instead of for people), which has enabled a huge community of software to work together to provide better programming language functionality across a wider range of hardware. Chris saw, however, that C and C++ didn’t really take full advantage of the power of LLVM, so while he was working at Apple he designed a new language, called “Swift”, which he describes as “syntax sugar for LLVM”. Swift has gone on to become one of the world’s most widely used programming languages, in particular because it is today the main way to create apps for iPhone, iPad, Mac, and Apple TV. Unfortunately, Apple’s control of Swift has meant it hasn’t really had its time to shine outside of the cloistered Apple world. Chris led a team for a while at Google to try to move Swift out of its Apple comfort zone, to become a replacement for Python in AI model development. I was very excited about this project, but sadly it did not receive the support it needed from either Apple or from Google, and it was not ultimately successful. Having said that, whilst at Google Chris did develop another project which became hugely successful: MLIR. MLIR is a replacement for LLVM’s IR for the modern age of many-core computing and AI workloads. It’s critical for fully leveraging the power of hardware like GPUs, TPUs, and the vector units increasingly being added to server-class CPUs. So, if Swift was “syntax sugar for LLVM”, what’s “syntax sugar for MLIR”? The answer is: Mojo! Mojo is a brand new language that’s designed to take full advantage of MLIR. And also Mojo is Python. Wait, what? OK, let me explain. Maybe it’s better to say Mojo is Python++. It will be (when complete) a strict superset of the Python language. But it also has additional functionality so we can write high performance code that takes advantage of modern accelerators. Mojo seems to me like a more pragmatic approach than Swift. Whereas Swift was a brand new language packing all kinds of cool features based on the latest research in programming language design, Mojo is, at its heart, just Python. This seems wise, not just because Python is already well understood by millions of coders, but also because after decades of use its capabilities and limitations are now well understood. Relying on the latest programming language research is pretty cool, but it’s potentially dangerous speculation, because you never really know how things will turn out. (I will admit that personally, for instance, I often got confused by Swift’s powerful but quirky type system, and sometimes even managed to confuse the Swift compiler and blow it up entirely!) A key trick in Mojo is that you can opt in at any time to a faster “mode” as a developer, by using “fn” instead of “def” to create your function. In this mode, you have to declare exactly what the type of every variable is, and as a result Mojo can create optimised machine code to implement your function. Furthermore, if you use “struct” instead of “class”, your attributes will be tightly packed into memory, such that they can even be used in data structures without chasing pointers around.
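Here is a minimal sketch of what that opt-in looks like, based on examples in the early Mojo programming manual (the language is pre-release, so treat the details as illustrative):

    # 'def' behaves like Python; 'fn' opts in to strict typing, and the
    # declared types let Mojo compile the body to optimised machine code.
    fn add(x: Int, y: Int) -> Int:
        return x + y

    # Unlike 'class', a 'struct' packs its fields tightly in memory.
    struct MyPair:
        var first: Int
        var second: Int

        fn __init__(inout self, first: Int, second: Int):
            self.first = first
            self.second = second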
These are the kinds of features that allow languages like C to be so fast, and now they’re accessible to Python programmers too – just by learning a tiny bit of new syntax. How is this possible? There have, at this point, been hundreds of attempts over decades to create programming languages which are concise, flexible, fast, practical, and easy to use – without much success. But somehow, Modular seems to have done it. How could this be? There are a couple of hypotheses we might come up with: perhaps the demo isn’t really what it seems, or perhaps Modular is a huge, well-resourced company whose team has been working on this for many years. Neither of these things is true. The demo, in fact, was created in just a few days before I recorded the video. The two examples we gave (matmul and mandelbrot) were not carefully chosen as being the only things that happened to work after trying dozens of approaches; rather, they were the only things we tried for the demo, and they worked first time! Whilst there are plenty of missing features at this early stage (Mojo isn’t even released to the public yet, other than an online “playground”), the demo you see really does work the way you see it. And indeed you can run it yourself now in the playground. Modular is a fairly small startup that’s only a year old, and only one part of the company is working on the Mojo language. Mojo development was only started recently. It’s a small team, working for a short time, so how have they done so much? The key is that Mojo builds on some really powerful foundations. Very few software projects I’ve seen spend enough time building the right foundations, and as a result they accrue mounds of technical debt. Over time, it becomes harder and harder to add features and fix bugs. In a well designed system, however, every feature is easier to add than the last one, is faster, and has fewer bugs, because the foundations each feature builds upon are getting better and better. Mojo is a well designed system. At its core is MLIR, which has already been developed for many years, initially kicked off by Chris Lattner at Google. He had recognised what core foundations an “AI era programming language” would need, and focused on building them. MLIR was a key piece. Just as LLVM made it dramatically easier for powerful new programming languages to be developed over the last decade (such as Rust, Julia, and Swift, which are all based on LLVM), MLIR provides an even more powerful core to languages that are built on it. Another key enabler of Mojo’s rapid development is the decision to use Python as the syntax. Developing and iterating on syntax is one of the most error-prone, complex, and controversial parts of the development of a language. By simply outsourcing that to an existing language (which also happens to be the most widely used language today) that whole piece disappears! The relatively small number of new bits of syntax needed on top of Python then largely fit quite naturally, since the base is already in place. The next step was to create a minimal Pythonic way to call MLIR directly. That wasn’t a big job at all, but it was all that was needed to then create all of Mojo on top of it – and work directly in Mojo for everything else. That meant that the Mojo devs were able to “dog-food” Mojo when writing Mojo, nearly from the very start. Any time they found something that didn’t quite work well as they developed Mojo, they could add a needed feature to Mojo itself to make it easier for them to develop the next bit of Mojo!
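As one example of that minimal Pythonic MLIR layer, the early Mojo playground included a notebook building a Bool-like type directly on a raw MLIR type. A stripped-down sketch in that spirit (illustrative only; the exact names may differ):

    # A Bool-like type whose storage is a raw MLIR type, used directly
    # as a struct field: no wrapper layers between Mojo and MLIR.
    struct OurBool:
        var value: __mlir_type.i1

        fn __init__(inout self, value: __mlir_type.i1):
            self.value = value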
This is very similar to Julia, which was developed on a minimal LISP-like core that provides the Julia language elements, which are then bound to basic LLVM operations. Nearly everything in Julia is built on top of that, using Julia itself. I can’t begin to describe all the little (and big!) ideas throughout Mojo’s design and implementation – it’s the result of Chris and his team’s decades of work on compiler and language design and includes all the tricks and hard-won experience from that time – but what I can describe is an amazing result that I saw with my own eyes. The Modular team internally announced that they’d decided to launch Mojo with a video, including a demo – and they set a date just a few weeks in the future. But at that time Mojo was just the most bare-bones language. There was no usable notebook kernel, hardly any of the Python syntax was implemented, and nothing was optimised. I couldn’t understand how they hoped to implement all this in a matter of weeks – let alone to make it any good! What I saw over this time was astonishing. Every day or two whole new language features were implemented, and as soon as there was enough in place to try running algorithms, generally they’d be at or near state-of-the-art performance right away! I realised that what was happening was that all the foundations were already in place, and that they’d been explicitly designed to build the things that were now under development. So it shouldn’t have been a surprise that everything worked, and worked well – after all, that was the plan all along! This is a reason to be optimistic about the future of Mojo. Although it’s still early days for this project, my guess, based on what I’ve observed in the last few weeks, is that it’s going to develop faster and further than most of us expect… Deployment I’ve left one of the bits I’m most excited about to last: deployment. Currently, if you want to give your cool Python program to a friend, then you’re going to have to tell them to first install Python! Or, you could give them an enormous file that includes the entirety of Python and the libraries you use all packaged up together, which will be extracted and loaded when they run your program. Because Python is an interpreted language, how your program will behave will depend on the exact version of Python that’s installed, what versions of what libraries are present, and how it’s all been configured. In order to avoid this maintenance nightmare, the Python community has settled on a couple of options for installing Python applications: environments, which have a separate Python installation for each program; or containers, which package up much of an entire operating system for each application. Both approaches lead to a lot of confusion and overhead in developing and deploying Python applications. Compare this to deploying a statically compiled C application: you can literally just make the compiled program available for direct download. It can be just 100k or so in size, and will launch and run quickly. There is also the approach taken by Go, which isn’t able to generate small applications like C, but instead incorporates a “runtime” into each packaged application. This approach is a compromise between Python and C, still requiring tens of megabytes for a binary, but providing for easier deployment than Python. As a compiled language, Mojo’s deployment story is basically the same as C. For instance, a program that includes a version of matmul written from scratch is around 100k.
This means that Mojo is far more than a language for AI/ML applications. It’s actually a version of Python that allows us to write fast, small, easily-deployed applications that take advantage of all available cores and accelerators! Alternatives to Mojo Mojo is not the only attempt at solving the Python performance and deployment problem. In terms of languages, Julia is perhaps the strongest current alternative. It has many of the benefits of Mojo, and a lot of great projects are already built with it. The Julia folks were kind enough to invite me to give a keynote at their recent conference, and I used that opportunity to describe what I felt were the current shortcomings (and opportunities) for Julia: As discussed in this video, Julia’s biggest challenge stems from its large runtime, which in turn stems from the decision to use garbage collection in the language. Also, the multiple-dispatch approach used in Julia is a fairly unusual choice, which opens a lot of doors to do cool stuff in the language, but can also make things pretty complicated for devs. (I’m so enthused by this approach that I built a Python version of it – but as a result I’m also particularly aware of its limitations!) In Python, the most prominent current solution is probably Jax, which effectively creates a domain specific language (DSL) using Python. The output of this language is XLA, which is a machine learning compiler that predates MLIR (and is gradually being ported over to MLIR, I believe). Jax inherits the limitations of both Python (e.g. the language has no way of representing structs, or allocating memory directly, or creating fast loops) and XLA (which is largely limited to machine learning specific concepts and is primarily targeted at TPUs), but has the huge upside that it doesn’t require a new language or new compiler. As previously discussed, there’s also the new PyTorch compiler, and Tensorflow is also able to generate XLA code. Personally, I find using Python in this way ultimately unsatisfying. I don’t actually get to use all the power of Python, but have to use a subset that’s compatible with the backend I’m targeting. I can’t easily debug and profile the compiled code, and there’s so much “magic” going on that it’s hard to even know what actually ends up getting executed. I don’t even end up with a standalone binary, but instead have to use special runtimes and deal with complex APIs. (I’m not alone here – everyone I know that has used PyTorch or Tensorflow for targeting edge devices or optimised serving infrastructure has described it as being one of the most complex and frustrating tasks they’ve attempted! And I’m not sure I even know anyone that’s actually completed either of these things using Jax.) Another interesting direction for Python is Numba and Cython. I’m a big fan of these projects and have used both in my teaching and software development. Numba uses a special decorator to cause a Python function to be compiled into optimised machine code using LLVM (see the sketch below). Cython is similar, but also provides a Python-like language which has some of the features of Mojo, and converts this Python dialect into C, which is then compiled. Neither project solves the deployment challenge, but they can help a lot with the performance problem. Neither is able to target a range of accelerators with generic cross-platform code, although Numba does provide a very useful way to write CUDA code (and so allows NVIDIA GPUs to be targeted).
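As an illustration, here is a minimal Numba sketch (a toy example of my own, not from the post): the @njit decorator is all it takes for the loop to be compiled to machine code via LLVM on first call.

    import numpy as np
    from numba import njit

    @njit
    def sum_of_squares(a):
        # Compiled to optimised machine code the first time it's called;
        # the same loop in plain CPython runs vastly slower.
        total = 0.0
        for i in range(a.shape[0]):
            total += a[i] * a[i]
        return total

    print(sum_of_squares(np.arange(1_000_000, dtype=np.float64)))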
I’m really grateful Numba and Cython exist, and have personally gotten a lot out of them. However, they’re not at all the same as using a complete language and compiler that generates standalone binaries. They’re band-aid solutions for Python’s performance problems, and are fine for situations where that’s all you need. But I’d much prefer to use a language that’s as elegant as Python and as fast as expert-written C, that allows me to write everything in one language – from the application server, to the model architecture, to the installer – and that lets me debug and profile my code directly in the language in which I wrote it. How would you like a language like that? Footnotes [1] I’m an advisor to Modular. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#External_links] | [TOKENS: 12858] |
Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their position in the grid even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, and regenerates continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience their maps as intended. In Creative mode, players have access to an infinite supply of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms through Realms was announced for June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and that it would run only when the image containing the skin was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's style, including bringing back the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson released the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this acquisition later became controversial and its legitimacy was questioned, due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements, and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and was renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native Bedrock Edition version for PlayStation 5 was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A dedicated version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Of learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not since been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft boosted Microsoft's total first-party revenue by $63 million in the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories of Best Debut Game, Best Downloadable Game, and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to the full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. YouTube announced on 14 December 2021 that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries in the world, with its highest point at 171 meters (giving it the 30th-smallest elevation span of any country), whereas the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, an in-game repository of journalism by authors who have been censored and arrested, such as Jamal Khashoggi, from countries including Egypt, Mexico, Russia, Saudi Arabia, and Vietnam. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", often due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game, considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automated AI copyright-claiming service. The DMCA notice was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob/biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-04-04-part2-2023.html] | [TOKENS: 1324] |
From Deep Learning Foundations to Stable Diffusion Jeremy Howard April 4, 2023 On this page Today we’re releasing our new course, From Deep Learning Foundations to Stable Diffusion, which is part 2 of Practical Deep Learning for Coders. Get started now! In this course, containing over 30 hours of video content, we implement the astounding Stable Diffusion algorithm from scratch! That’s the killer app that made the internet freak out, and caused the media to say “you may never believe what you see online again”. We’ve worked closely with experts from Stability.ai and Hugging Face (creators of the Diffusers library) to ensure we have rigorous coverage of the latest techniques. The course includes coverage of papers that were released after Stable Diffusion came out – so it actually goes well beyond even what Stable Diffusion includes! We also explain how to read research papers, and practice this skill by studying and implementing many papers throughout the course. Thank you to all the amazing people who helped put this course together. I’d particularly like to thank Tanishq Mathew Abraham (Stability.ai) and Jonathan Whitaker (co-author of the upcoming O’Reilly Diffusion book) for helping me present a number of the lessons, and also the great behind-the-scenes contributions by Pedro Cuenca (Hugging Face). Thanks also to Kat Crowson for her k-diffusion library which we use heavily throughout the course, and for answering all our questions, and to Francisco Mussari for creating transcripts for most of the lessons. Stable Diffusion, and diffusion methods in general, are a great learning goal for many reasons. For one thing, of course, you can create amazing stuff with these algorithms! To really take the technique to the next level, and create things that no-one has seen before, you need to really deeply understand what’s happening under the hood. With this understanding, you can craft your own loss functions, initialization methods, multi-model mixups, and more, to create totally new applications that have never been seen before. Just as important: it’s a great learning goal because nearly every key technique in modern deep learning comes together in these methods. Contrastive learning, transformer models, auto-encoders, CLIP embeddings, latent variables, u-nets, resnets, and much more are involved in creating a single image. To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you’ve finished fast.ai’s Practical Deep Learning course then you’ll be ready! If you haven’t done that course, but are comfortable with building an SGD training loop from scratch in Python, being competitive in Kaggle competitions, using modern NLP and computer vision algorithms for practical problems, and working with PyTorch and fastai, then you will be ready to start the course. (If you’re not sure, then we strongly recommend getting started with Practical Deep Learning.) Get started now! Content summary In this course we’ll explore diffusion methods such as Denoising Diffusion Probabilistic Models (DDPM) and Denoising Diffusion Implicit Models (DDIM). We’ll get our hands dirty implementing unconditional and conditional diffusion models from scratch, building and experimenting with different samplers, and diving into recent tricks like textual inversion and Dreambooth.
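To make this concrete, here is a minimal sketch in PyTorch of the DDPM forward-noising step used when training such models, together with the Karras et al. pre-conditioning coefficients discussed in the next paragraph. This is a hedged illustration, not the course’s actual code; the function names and the sigma_data value are assumptions.

```python
import torch

# Hedged sketch of the DDPM forward (noising) process: given clean images x0
# and integer timesteps t, produce the noised input x_t the model learns to denoise.
def q_sample(x0, t, alpha_bar):
    # alpha_bar: 1-D tensor of cumulative products of (1 - beta_t), one per timestep
    ab = alpha_bar[t].reshape(-1, 1, 1, 1)          # broadcast over image dims
    noise = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    return xt, noise  # the network is trained to predict `noise` from (xt, t)

# Karras et al. (2022) pre-conditioning coefficients: these scale the network's
# input, output, and skip connection so inputs and targets have ~unit variance.
# sigma_data=0.5 follows the paper's assumption for image data.
def karras_scalings(sigma, sigma_data=0.5):
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
    c_in = 1 / (sigma**2 + sigma_data**2) ** 0.5
    return c_skip, c_out, c_in
```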
We will also study and implement the 2022 paper by Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, which uses pre-conditioning to ensure that inputs and targets to the model are scaled to unit variance. The Karras model predicts an interpolated version of the clean image and the noise, depending on the amount of noise present in the input. Along the way, we’ll cover essential deep learning topics including a variety of neural network architectures, data augmentation approaches (including the amazingly effective and criminally under-appreciated TrivialAugment strategy), and various loss functions, including perceptual loss and style loss. We’ll build our own models from scratch, such as Multi-Layer Perceptrons (MLPs), ResNets, and Unets, while experimenting with generative architectures like autoencoders and transformers. Throughout the course, we’ll use PyTorch to implement our models (but only after we’ve implemented everything needed in pure Python first!), and will create our own deep learning framework called miniai. We’ll master Python concepts like iterators, generators, and decorators to keep our code clean and efficient. We’ll also explore deep learning optimizers like AdamW and RMSProp, learning rate annealing, and learn how to experiment with the impact of different initialisers, batch sizes and learning rates. And of course, we’ll make use of handy tools like the Python debugger (pdb) and nbdev for building Python modules from Jupyter notebooks. Lastly, we’ll touch on fundamental concepts like tensors, calculus, and pseudo-random number generation to provide a solid foundation for our exploration. We’ll apply these concepts to machine learning techniques like mean shift clustering and convolutional neural networks (CNNs), and will see how to use tracking with Weights and Biases (W&B). We’ll also tackle mixed precision training using both NVIDIA’s apex library and the Accelerate library from Hugging Face. We’ll investigate various types of normalization like Layer Normalization and Batch Normalization. By the end of the course, you’ll have a deep understanding of diffusion models and the skills to implement cutting-edge deep learning techniques. Get started now! Tanishq’s thoughts Here’s what Tanishq Mathew Abraham, from Stability.ai, who helped teach a number of the lessons, thinks of the course: The fast.ai Part 2 course is a one-of-a-kind course. I think this course is unique in that it teaches you how to build deep learning models from scratch while also exploring cutting-edge research in diffusion models. No other course guides you through state-of-the-art papers in the diffusion space (sometimes a mere few weeks after they first appear) and builds clear, accessible implementations. We’ve even explored some new research directions in the course, and hopefully the course enables others to explore their own ideas further. If you are interested in a more advanced course building state-of-the-art deep learning models from scratch, and/or you’re interested in how state-of-the-art diffusion models work and how to build them, this is the course for you! Even as someone helping with the development of this course, I found this to be an amazing learning experience, and I hope it is for you too! |
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-05-03-mojo-launch.html] | [TOKENS: 4390] |
Mojo may be the biggest programming language advance in decades Jeremy Howard May 4, 2023 On this page I remember the first time I used the v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again. Writing code in Mojo, a new programming language from Modular1, is the second time in my life I’ve had that feeling. Here’s what it looks like: Why not just use Python? Before I explain why I’m so excited about Mojo, I first need to say a few things about Python. Python is the language that I have used for nearly all my work over the last few years. It is a beautiful language. It has an elegant core, on which everything else is built. This approach means that Python can (and does) do absolutely anything. But it comes with a downside: performance. A few percent here or there doesn’t matter. But Python is many thousands of times slower than languages like C++. This makes it impractical to use Python for the performance-sensitive parts of code – the inner loops where performance is critical. However, Python has a trick up its sleeve: it can call out to code written in fast languages. So Python programmers learn to avoid using Python for the implementation of performance-critical sections, instead using Python wrappers over C, FORTRAN, Rust, etc. code. Libraries like Numpy and PyTorch provide “pythonic” interfaces to high-performance code, allowing Python programmers to feel right at home, even as they’re using highly optimised numeric libraries. Nearly all AI models today are developed in Python, thanks to the flexible and elegant language, fantastic tools and ecosystem, and high-performance compiled libraries. But this “two-language” approach has serious downsides. For instance, AI models often have to be converted from Python into a faster implementation, such as ONNX or torchscript. But these deployment approaches can’t support all of Python’s features, so Python programmers have to learn to use a subset of the language that matches their deployment target. It’s very hard to profile or debug the deployment version of the code, and there’s no guarantee it will even run identically to the Python version. The two-language problem gets in the way of learning. Instead of being able to step into the implementation of an algorithm while your code runs, or jump to the definition of a method of interest, you find yourself deep in the weeds of C libraries and binary blobs. All coders are learners (or at least, they should be) because the field constantly develops, and no-one can understand it all. So these difficulties in learning are a problem for experienced devs just as much as for students starting out. The same problem occurs when trying to debug code or find and resolve performance problems. The two-language problem means that the tools that Python programmers are familiar with no longer apply as soon as we find ourselves jumping into the backend implementation language. There are also unavoidable performance problems, even when a faster compiled implementation language is used for a library.
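To make the underlying speed gap concrete – the reason the two-language workflow exists at all – here’s a minimal illustration (my own hedged sketch, not code from the post): the same sum of squares written as a pure-Python loop versus a call into NumPy’s compiled C routines.

```python
import time
import numpy as np

n = 1_000_000

# Pure Python: the interpreter processes one boxed integer object at a time.
t0 = time.perf_counter()
total = 0
for x in range(n):
    total += x * x
t1 = time.perf_counter()

# NumPy: the same computation runs in compiled C inside the library. Note that
# `arr * arr` allocates a temporary array before `.sum()` runs - a small
# instance of the missing "fusion" discussed next.
arr = np.arange(n, dtype=np.int64)
t2 = time.perf_counter()
total_np = int((arr * arr).sum())
t3 = time.perf_counter()

assert total == total_np
print(f"pure Python: {t1 - t0:.3f}s  NumPy: {t3 - t2:.4f}s")
```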
One major issue is the lack of “fusion” – that is, calling a bunch of compiled functions in a row leads to a lot of overhead, as data is converted to and from Python formats, and the cost of switching from Python to C and back repeatedly must be paid. So instead we have to write special “fused” versions of common combinations of functions (such as a linear layer followed by a rectified linear layer in a neural net), and call these fused versions from Python. This means there are a lot more library functions to implement and remember, and you’re out of luck if you’re doing anything even slightly non-standard because there won’t be a fused version for you. We also have to deal with the lack of effective parallel processing in Python. Nowadays we all have computers with lots of cores, but Python generally will just use one at a time. There are some clunky ways to write parallel code which uses more than one core, but they either have to work on totally separate memory (and have a lot of overhead to start up) or they have to take it in turns to access memory (the dreaded “global interpreter lock” which often makes parallel code actually slower than single-threaded code!) Libraries like PyTorch have been developing increasingly ingenious ways to deal with these performance problems, with the newly released PyTorch 2 even including a compile() function that uses a sophisticated compilation backend to create high performance implementations of Python code. However, functionality like this can’t work magic: there are fundamental limitations on what’s possible with Python based on how the language itself is designed. You might imagine that in practice there’s just a small number of building blocks for AI models, and so it doesn’t really matter if we have to implement each of these in C. Besides which, they’re pretty basic algorithms on the whole anyway, right? For instance, transformer models are built almost entirely from multiple layers of two components – multilayer perceptrons (MLPs) and attention – which can be implemented with just a few lines of Python with PyTorch. Here’s roughly what an MLP and a self-attention layer look like (both are sketched below). But this hides the fact that real-world implementations of these operations are far more complex. For instance, check out this memory-optimised “flash attention” implementation in CUDA C. It also hides the fact that there are huge amounts of performance being left on the table by these generic approaches to building models. For instance, “block sparse” approaches can dramatically improve speed and memory use. Researchers are working on tweaks to nearly every part of common architectures, and coming up with new architectures (and SGD optimisers, and data augmentation methods, etc) – we’re not even close to having some neatly wrapped-up system that everyone will use forever more. In practice, much of the fastest code today used for language models is being written in C and C++. For instance, Fabrice Bellard’s TextSynth and Georgi Gerganov’s ggml both use C, and as a result are able to take full advantage of the performance benefits of fully compiled languages. Enter Mojo Chris Lattner is responsible for creating many of the projects that we all rely on today – even though we might not have heard of all the stuff he built! As part of his PhD thesis he started the development of LLVM, which fundamentally changed how compilers are created, and today forms the foundation of many of the most widely used language ecosystems in the world.
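Here are the MLP and self-attention sketches referenced above – minimal, standard PyTorch renderings offered as hedged stand-ins, not necessarily the post’s exact code:

```python
import torch
from torch import nn

class MLP(nn.Module):
    """Transformer feed-forward block: expand, nonlinearity, project back."""
    def __init__(self, dim, hidden_mult=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * hidden_mult),
            nn.GELU(),
            nn.Linear(dim * hidden_mult, dim),
        )

    def forward(self, x):
        return self.net(x)

class SelfAttention(nn.Module):
    """Single-head self-attention: softmax(QK^T / sqrt(d)) V."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)   # project to queries, keys, values
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, seq, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v
        return self.proj(out)
```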
He then went on to launch Clang, a C and C++ compiler that sits on top of LLVM, and is used by most of the world’s most significant software developers (including providing the backbone for Google’s performance-critical code). LLVM includes an “intermediate representation” (IR), a special language designed for machines to read and write (instead of for people), which has enabled a huge community of software projects to work together to provide better programming language functionality across a wider range of hardware. Chris saw, however, that C and C++ didn’t really fully leverage the power of LLVM, so while he was working at Apple he designed a new language, called “Swift”, which he describes as “syntax sugar for LLVM”. Swift has gone on to become one of the world’s most widely used programming languages, in particular because it is today the main way to create iOS apps for iPhone, iPad, MacOS, and Apple TV. Unfortunately, Apple’s control of Swift has meant it hasn’t really had its time to shine outside of the cloistered Apple world. Chris led a team for a while at Google to try to move Swift out of its Apple comfort zone, to become a replacement for Python in AI model development. I was very excited about this project, but sadly it did not receive the support it needed from either Apple or from Google, and it was not ultimately successful. Having said that, whilst at Google Chris did develop another project which became hugely successful: MLIR. MLIR is a replacement for LLVM’s IR for the modern age of many-core computing and AI workloads. It’s critical for fully leveraging the power of hardware like GPUs, TPUs, and the vector units increasingly being added to server-class CPUs. So, if Swift was “syntax sugar for LLVM”, what’s “syntax sugar for MLIR”? The answer is: Mojo! Mojo is a brand new language that’s designed to take full advantage of MLIR. And also Mojo is Python. Wait what? OK let me explain. Maybe it’s better to say Mojo is Python++. It will be (when complete) a strict superset of the Python language. But it also has additional functionality so we can write high-performance code that takes advantage of modern accelerators. Mojo seems to me like a more pragmatic approach than Swift. Whereas Swift was a brand new language packing all kinds of cool features based on the latest research in programming language design, Mojo is, at its heart, just Python. This seems wise, not just because Python is already well understood by millions of coders, but also because after decades of use its capabilities and limitations are now well understood. Relying on the latest programming language research is pretty cool, but it’s potentially dangerous speculation because you never really know how things will turn out. (I will admit that personally, for instance, I often got confused by Swift’s powerful but quirky type system, and sometimes even managed to confuse the Swift compiler and blow it up entirely!) A key trick in Mojo is that you can opt in at any time to a faster “mode” as a developer, by using “fn” instead of “def” to create your function. In this mode, you have to declare exactly what the type of every variable is, and as a result Mojo can create optimised machine code to implement your function. Furthermore, if you use “struct” instead of “class”, your attributes will be tightly packed into memory, such that they can even be used in data structures without chasing pointers around.
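A rough illustration of this opt-in contrast follows. Python can only demonstrate the unchecked half; the Mojo syntax shown in the comments is paraphrased from this post’s description and should be treated as a sketch, not verified against a Mojo release.

```python
# Python type annotations are advisory: the interpreter neither enforces them
# nor uses them to generate faster code.
def add(x: int, y: int) -> int:
    return x + y

print(add(1, 2))      # 3
print(add("a", "b"))  # "ab" - no error; annotations are ignored at runtime

# In Mojo, per the post, swapping `def` for `fn` opts into a strict mode where
# every type must be declared, letting the compiler emit optimised machine code:
#
#   fn add(x: Int, y: Int) -> Int:
#       return x + y
#
# And using `struct` instead of `class` packs fields contiguously in memory:
#
#   struct Point:
#       var x: Float64
#       var y: Float64
```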
These are the kinds of features that allow languages like C to be so fast, and now they’re accessible to Python programmers too – just by learning a tiny bit of new syntax. How is this possible? There have, at this point, been hundreds of attempts over decades to create programming languages which are concise, flexible, fast, practical, and easy to use – without much success. But somehow, Modular seems to have done it. How could this be? There are a couple of hypotheses we might come up with: perhaps the demo was painstakingly crafted over a long period from carefully cherry-picked examples, or perhaps a large team had been quietly working on the language for years. Neither of these things is true. The demo, in fact, was created in just a few days before I recorded the video. The two examples we gave (matmul and mandelbrot) were not carefully chosen as being the only things that happened to work after trying dozens of approaches; rather, they were the only things we tried for the demo and they worked first time! Whilst there are plenty of missing features at this early stage (Mojo isn’t even released to the public yet, other than an online “playground”), the demo you see really does work the way you see it. And indeed you can run it yourself now in the playground. Modular is a fairly small startup that’s only a year old, and only one part of the company is working on the Mojo language. Mojo development was only started recently. It’s a small team, working for a short time, so how have they done so much? The key is that Mojo builds on some really powerful foundations. Very few software projects I’ve seen spend enough time building the right foundations, and tend, as a result, to accrue mounds of technical debt. Over time, it becomes harder and harder to add features and fix bugs. In a well designed system, however, every feature is easier to add than the last one, is faster, and has fewer bugs, because the foundations each feature builds upon are getting better and better. Mojo is a well designed system. At its core is MLIR, which has already been developed for many years, initially kicked off by Chris Lattner at Google. He had recognised what core foundations an “AI era programming language” would need, and focused on building them. MLIR was a key piece. Just as LLVM made it dramatically easier for powerful new programming languages to be developed over the last decade (such as Rust, Julia, and Swift, which are all based on LLVM), MLIR provides an even more powerful core to languages that are built on it. Another key enabler of Mojo’s rapid development is the decision to use Python as the syntax. Developing and iterating on syntax is one of the most error-prone, complex, and controversial parts of the development of a language. By simply outsourcing that to an existing language (which also happens to be the most widely used language today) that whole piece disappears! The relatively small number of new bits of syntax needed on top of Python then largely fit quite naturally, since the base is already in place. The next step was to create a minimal Pythonic way to call MLIR directly. That wasn’t a big job at all, but it was all that was needed to then create all of Mojo on top of that – and work directly in Mojo for everything else. That meant that the Mojo devs were able to “dog-food” Mojo when writing Mojo, nearly from the very start. Any time they found something didn’t quite work great as they developed Mojo, they could add a needed feature to Mojo itself to make it easier for them to develop the next bit of Mojo!
This is very similar to Julia, which was developed on a minimal LISP-like core that provides the Julia language elements, which are then bound to basic LLVM operations. Nearly everything in Julia is built on top of that, using Julia itself. I can’t begin to describe all the little (and big!) ideas throughout Mojo’s design and implementation – it’s the result of Chris and his team’s decades of work on compiler and language design and includes all the tricks and hard-won experience from that time – but what I can describe is an amazing result that I saw with my own eyes. The Modular team internally announced that they’d decided to launch Mojo with a video, including a demo – and they set a date just a few weeks in the future. But at that time Mojo was just the most bare-bones language. There was no usable notebook kernel, hardly any of the Python syntax was implemented, and nothing was optimised. I couldn’t understand how they hoped to implement all this in a matter of weeks – let alone to make it any good! What I saw over this time was astonishing. Every day or two whole new language features were implemented, and as soon as there was enough in place to try running algorithms, generally they’d be at or near state-of-the-art performance right away! I realised that what was happening was that all the foundations were already in place, and that they’d been explicitly designed to build the things that were now under development. So it shouldn’t have been a surprise that everything worked, and worked well – after all, that was the plan all along! This is a reason to be optimistic about the future of Mojo. Although it’s still early days for this project, my guess, based on what I’ve observed in the last few weeks, is that it’s going to develop faster and further than most of us expect… Deployment I’ve left one of the bits I’m most excited about to last: deployment. Currently, if you want to give your cool Python program to a friend, then you’re going to have to tell them to first install Python! Or, you could give them an enormous file that includes the entirety of Python and the libraries you use all packaged up together, which will be extracted and loaded when they run your program. Because Python is an interpreted language, how your program will behave will depend on the exact version of Python that’s installed, what versions of what libraries are present, and how it’s all been configured. To avoid this maintenance nightmare, the Python community has settled on a couple of options for installing Python applications: environments, which have a separate Python installation for each program; or containers, which have much of an entire operating system set up for each application. Both approaches lead to a lot of confusion and overhead in developing and deploying Python applications. Compare this to deploying a statically compiled C application: you can literally just make the compiled program available for direct download. It can be just 100k or so in size, and will launch and run quickly. There is also the approach taken by Go, which isn’t able to generate small applications like C, but instead incorporates a “runtime” into each packaged application. This approach is a compromise between Python and C, still requiring tens of megabytes for a binary, but providing for easier deployment than Python. As a compiled language, Mojo’s deployment story is basically the same as C. For instance, a program that includes a version of matmul written from scratch is around 100k.
This means that Mojo is far more than a language for AI/ML applications. It’s actually a version of Python that allows us to write fast, small, easily-deployed applications that take advantage of all available cores and accelerators! Alternatives to Mojo Mojo is not the only attempt at solving the Python performance and deployment problem. In terms of languages, Julia is perhaps the strongest current alternative. It has many of the benefits of Mojo, and a lot of great projects are already built with it. The Julia folks were kind enough to invite me to give a keynote at their recent conference, and I used that opportunity to describe what I felt were the current shortcomings (and opportunities) for Julia: As discussed in this video, Julia’s biggest challenge stems from its large runtime, which in turn stems from the decision to use garbage collection in the language. Also, the multi-dispatch approach used in Julia is a fairly unusual choice, which opens a lot of doors to do cool stuff in the language, but can also make things pretty complicated for devs. (I’m so enthused by this approach that I built a Python version of it – but I’m also, as a result, particularly aware of its limitations!) In Python, the most prominent current solution is probably Jax, which effectively creates a domain specific language (DSL) using Python. The output of this language is XLA, which is a machine learning compiler that predates MLIR (and is gradually being ported over to MLIR, I believe). Jax inherits the limitations of both Python (e.g. the language has no way of representing structs, or allocating memory directly, or creating fast loops) and XLA (which is largely limited to machine-learning-specific concepts and is primarily targeted at TPUs), but has the huge upside that it doesn’t require a new language or new compiler. As previously discussed, there’s also the new PyTorch compiler, and Tensorflow is also able to generate XLA code. Personally, I find using Python in this way ultimately unsatisfying. I don’t actually get to use all the power of Python, but have to use a subset that’s compatible with the backend I’m targeting. I can’t easily debug and profile the compiled code, and there’s so much “magic” going on that it’s hard to even know what actually ends up getting executed. I don’t even end up with a standalone binary, but instead have to use special runtimes and deal with complex APIs. (I’m not alone here – everyone I know that has used PyTorch or Tensorflow for targeting edge devices or optimised serving infrastructure has described it as being one of the most complex and frustrating tasks they’ve attempted! And I’m not sure I even know anyone that’s actually completed either of these things using Jax.) Another interesting direction for Python is offered by Numba and Cython. I’m a big fan of these projects and have used both in my teaching and software development. Numba uses a special decorator to cause a Python function to be compiled into optimised machine code using LLVM. Cython is similar, but also provides a Python-like language which has some of the features of Mojo, and converts this Python dialect into C, which is then compiled. Neither language solves the deployment challenge, but they can help a lot with the performance problem. Neither is able to target a range of accelerators with generic cross-platform code, although Numba does provide a very useful way to write CUDA code (and so allows NVIDIA GPUs to be targeted).
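For instance, the Numba decorator approach described above looks roughly like this – a hedged sketch with illustrative data; the @njit decorator and its compile-on-first-call behaviour are real, but the example itself isn’t from the post:

```python
import numpy as np
from numba import njit

# @njit compiles this function to machine code via LLVM the first time it is
# called; the explicit loop then runs compiled rather than interpreted.
@njit
def dot(a, b):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
print(dot(a, b))  # first call includes compilation time; later calls are fast
```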
I’m really grateful Numba and Cython exist, and have personally gotten a lot out of them. However, they’re not at all the same as using a complete language and compiler that generates standalone binaries. They’re band-aid solutions for Python’s performance problems, and are fine for situations where that’s all you need. But I’d much prefer to use a language that’s as elegant as Python and as fast as expert-written C, allows me to use one language to write everything from the application server to the model architecture to the installer, and lets me debug and profile my code directly in the language in which I wrote it. How would you like a language like that? Footnotes I’m an advisor to Modular.↩︎ |
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-03-20-wittgenstein.html] | [TOKENS: 1205] |
GPT 4 and the Uncharted Territories of Language Jeremy Howard March 20, 2023 Beyond Wittgenstein’s Walls “The limits of my language mean the limits of my world.” — Ludwig Wittgenstein Language is like a map that we use to navigate the world, but it’s also like a prison that keeps us from seeing what’s beyond the walls. But what if there was a way to break out of this prison, to expand our map, to explore new worlds with new words? This is the possibility and the challenge offered by instruction tuned language models like GPT 4, a cutting-edge technology that uses artificial neural networks to generate natural language texts based on user inputs. GPT 4 can write anything from essays to novels to poems to tweets to code to recipes to jokes to lyrics to whatever you want. It can even write things that don’t exist yet, things that no human has ever thought of or said before. As Wittgenstein’s quote suggests, language is a source of limitation and liberation. GPT 4 pushes this idea to the extreme by giving us access to unlimited language. This could be the most significant new technology in modern history because it has the potential to change many domains and industries. From education to entertainment, from journalism to justice, from science to art, these models could enable new forms of learning, storytelling, reporting, reasoning, discovery, and creation. They could also create new ethical, social, and cultural challenges that require careful reflection and regulation. How we use this technology will depend on how we recognize its implications for ourselves and others. This technology is a form of “Artificial Intelligence”. The word “intelligence” derives from inter- (“between”) and legere (“to choose, pick out, read”). To be intelligent, then, is to be able to choose between things, to pick out what matters, to read what is written. Intelligence is not just a quantity or a quality; it is an activity, a process, a practice. It is something that we do with our minds and our words. But when we let GPT 4 do this for us, are we not abdicating our intelligence? Are we not letting go of our ability to choose, to pick out, to read? Are we not becoming passive consumers of language instead of active producers? I didn’t write that Ah, but here’s the rub: the very words that have set the stage for this linguistic exploration were penned not by my own hand (or, rather, keyboard), but by the digital tendrils of Bing (which uses GPT 4). And so, we find ourselves in the throes of a meta-linguistic conundrum, as the very subject of our discussion has surreptitiously insinuated itself into the conversation. Now, bear with me, for I have a purpose in revealing this sly subterfuge. My aim is to shatter the very expectations you hold about what language model-generated text looks like, feels like, and, indeed, reads like. For if you’ve made it this far, you’ve unwittingly been drawn into the web of AI-generated prose. Perhaps it is only in grappling with the uncanny valley of language that we may find the strength to redefine our own linguistic boundaries and catch a fleeting glimpse of the world beyond the walls. Conclusion In the end, GPT 4 presents us with a paradox. On one hand, it has the potential to expand our linguistic horizons and shatter the walls that confine us. On the other hand, it raises questions about the very nature of intelligence, creativity, and what it means to be human. 
As we delve deeper into the uncharted territories of language, we must be mindful of the delicate balance between embracing innovation and preserving our own intellectual autonomy. The challenge, then, is to use GPT 4 as a catalyst for our own growth and exploration, rather than as a substitute for our own minds. By approaching this technology with curiosity, humility, and a commitment to ethical considerations, we can venture beyond Wittgenstein’s walls and chart new territories in language, thought, and human potential.

I didn’t write that either

Actually, the entire article above was written by GPT 4. I created this article because I haven’t seen good examples of what model-generated prose can look like, and because I wanted to experiment with the idea of creating an entire article using GPT 4. To create it, I used the following four prompts:

I then deleted some clauses and sentences, but I didn’t otherwise modify any of them in any way, except to add the text “(which uses GPT 4)” after “Bing”. I also manually created the image which Bing suggested I use, and pasted it into the article.

The warning that GPT wrote at the end, “the challenge, then, is to use GPT 4 as a catalyst for our own growth and exploration, rather than as a substitute for our own minds”, feels like a warning to me about this article. It ended up heading in a direction that I didn’t really plan, and in hindsight I’m now not at all sure whether it helped my exploration, or substituted for my own mind. In the end, this experiment has only left me feeling more confused and uneasy.

Footnotes

There’s this formulaic thing used in a certain class of articles where they’ll open with a quote from an institutionally-approved writer and signal their status in other ways, such as by weaving in Latin etymology. It’s a form of gate-keeping, and therefore I don’t like it, and I wanted to break it down. That’s why I picked the first two prompts. Now you can (at least roughly) mimic any writing style, as long as you can describe it, even if you’re not immersed in the culture that it’s associated with. This will make certain people really mad.↩︎
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-04-04-part2-2023.html] | [TOKENS: 1324] |
From Deep Learning Foundations to Stable Diffusion
Jeremy Howard
April 4, 2023

Today we’re releasing our new course, From Deep Learning Foundations to Stable Diffusion, which is part 2 of Practical Deep Learning for Coders. Get started now!

In this course, containing over 30 hours of video content, we implement the astounding Stable Diffusion algorithm from scratch! That’s the killer app that made the internet freak out, and caused the media to say “you may never believe what you see online again”. We’ve worked closely with experts from Stability.ai and Hugging Face (creators of the Diffusers library) to ensure we have rigorous coverage of the latest techniques. The course includes coverage of papers that were released after Stable Diffusion came out, so it actually goes well beyond even what Stable Diffusion includes! We also explain how to read research papers, and practice this skill by studying and implementing many papers throughout the course.

Thank you to all the amazing people who helped put this course together. I’d particularly like to thank Tanishq Mathew Abraham (Stability.ai) and Jonathan Whitaker (co-author of the upcoming O’Reilly Diffusion book) for helping me present a number of the lessons, and also the great behind-the-scenes contributions by Pedro Cuenca (Hugging Face). Thanks also to Kat Crowson for her k-diffusion library, which we use heavily throughout the course, and for answering all our questions, and to Francisco Mussari for creating transcripts for most of the lessons.

Stable Diffusion, and diffusion methods in general, are a great learning goal for many reasons. For one thing, of course, you can create amazing stuff with these algorithms! To really take the technique to the next level, and create things that no-one has seen before, you need to deeply understand what’s happening under the hood. With this understanding, you can craft your own loss functions, initialization methods, multi-model mixups, and more, to create totally new applications that have never been seen before. Just as important: it’s a great learning goal because nearly every key technique in modern deep learning comes together in these methods. Contrastive learning, transformer models, auto-encoders, CLIP embeddings, latent variables, u-nets, resnets, and much more are involved in creating a single image.

To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you’ve finished fast.ai’s Practical Deep Learning course then you’ll be ready! If you haven’t done that course, but are comfortable with building an SGD training loop from scratch in Python, being competitive in Kaggle competitions, using modern NLP and computer vision algorithms for practical problems, and working with PyTorch and fastai, then you will be ready to start the course. (If you’re not sure, then we strongly recommend getting started with Practical Deep Learning.) Get started now!

Content summary

In this course we’ll explore diffusion methods such as Denoising Diffusion Probabilistic Models (DDPM) and Denoising Diffusion Implicit Models (DDIM). We’ll get our hands dirty implementing unconditional and conditional diffusion models from scratch, building and experimenting with different samplers, and diving into recent tricks like textual inversion and Dreambooth.
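To give a flavour of what “from scratch” means here, below is a minimal sketch of the DDPM noise-prediction training objective. This is not course code: `model` stands in for whatever network you build, `x0` is a batch of images, and the linear beta schedule uses the values from the original DDPM paper.

```python
import torch
import torch.nn.functional as F

T = 1000
beta = torch.linspace(1e-4, 0.02, T)        # linear noise schedule (DDPM paper values)
alpha_bar = torch.cumprod(1 - beta, dim=0)  # cumulative product: how much signal survives

def ddpm_loss(model, x0):
    "One DDPM training step: noise a clean batch x0, ask the model to predict the noise."
    t = torch.randint(0, T, (x0.shape[0],))       # a random timestep for each sample
    eps = torch.randn_like(x0)                    # the noise we will try to recover
    ab = alpha_bar[t].view(-1, 1, 1, 1)           # broadcast over (batch, channel, h, w)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # jump straight to timestep t in one step
    return F.mse_loss(model(x_t, t), eps)         # simple noise-prediction loss
```

Sampling then reverses this process one step at a time, which is where the different samplers covered in the course come in.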
We will also study and implement the 2022 paper by Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, which uses preconditioning to ensure that the inputs and targets of the model are scaled to unit variance. Depending on how much noise is present in the input, the Karras model predicts an interpolation between the clean image and the noise.

Along the way, we’ll cover essential deep learning topics including a variety of neural network architectures, data augmentation approaches (including the amazingly effective and criminally under-appreciated TrivialAugment strategy), and various loss functions, including perceptual loss and style loss. We’ll build our own models from scratch, such as Multi-Layer Perceptrons (MLPs), ResNets, and Unets, while experimenting with generative architectures like autoencoders and transformers.

Throughout the course, we’ll use PyTorch to implement our models (but only after we’ve implemented everything needed in pure Python first!), and will create our own deep learning framework, called miniai. We’ll master Python concepts like iterators, generators, and decorators to keep our code clean and efficient. We’ll also explore deep learning optimizers like AdamW and RMSProp, along with learning rate annealing, and will experiment with the impact of different initialisers, batch sizes, and learning rates. And of course, we’ll make use of handy tools like the Python debugger (pdb) and nbdev for building Python modules from Jupyter notebooks.

Lastly, we’ll touch on fundamental concepts like tensors, calculus, and pseudo-random number generation to provide a solid foundation for our exploration. We’ll apply these concepts to machine learning techniques like mean shift clustering and convolutional neural networks (CNNs), and will see how to track experiments with Weights and Biases (W&B). We’ll also tackle mixed-precision training using both NVIDIA’s apex library and the Accelerate library from Hugging Face, and will investigate various types of normalization, like Layer Normalization and Batch Normalization.

By the end of the course, you’ll have a deep understanding of diffusion models and the skills to implement cutting-edge deep learning techniques. Get started now!

Tanishq’s thoughts

Here’s what Tanishq Mathew Abraham, from Stability.ai, who helped teach a number of the lessons, thinks of the course:

The fast.ai Part 2 course is a one-of-its-kind course. I think this course is unique in that it teaches you how to build deep learning models from scratch while also exploring cutting-edge research in diffusion models. No other course is guiding you through state-of-the-art papers in the diffusion space (sometimes a mere few weeks after they first appear) and building clear, accessible implementations. We’ve even explored some new research directions in the course, and hopefully the course enables others to explore their own ideas further. If you are interested in a more advanced course building state-of-the-art deep learning models from scratch, and/or you’re interested in how state-of-the-art diffusion models work and how to build them, this is the course for you! Even as someone helping with the development of this course, I found this to be an amazing learning experience, and I hope it is for you too!
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-02-07-school-immunology/index.html] | [TOKENS: 1204] |
I was an AI researcher. Now, I am an immunology student.
Rachel Thomas
February 7, 2023

This post is cross-posted from rachel.fast.ai. Going forward, I will be blogging at rachel.fast.ai. I still believe deeply in the mission of fast.ai, but I’m currently focused on studying immunology.

I like complexity, and I like challenges. When a new topic fascinates me, I want to devote all of my time to it. In 2012, I was working as a quant in energy trading when I became so captivated by the topic of machine learning that I abruptly moved cross-country to San Francisco and spent a decade learning as much as I could about machine learning, AI, data ethics, and algorithmic harms. In 2022, I became fascinated by a new topic: immunology. I completed 7 online courses last year, am currently taking 4 more, and have created over 2,000 immunology-related flashcards for myself, which I review daily. (I will write more about how I use Anki flashcards in a future post.)

I found immunology to be both overwhelming and fascinating. The field is full of jargon, and there is a steep curve just to learn the language: IL-2, IL-4, IL-5, IL-12, IL-13, IL-18, CD-3, CD-22, CD-34, CD-47, CD-155, C3, C5, C8, and so on. (Lots of letter/number combinations, abbreviations, and acronyms! But underneath them, the mechanisms are fascinating.) The more I studied, the more I wanted to learn. It became clear that I needed a formal program to go deeper and to provide appropriate context, so I started applying to graduate school.

Back to School

I was delighted to be accepted to a Master’s in Immunology graduate program, and after eager anticipation, last month I officially began my formal degree. While my ultimate goal is to apply my machine learning and data ethics expertise to the field, I want to make sure I fully understand the relevant immunology first. Too often, machine learning practitioners unthinkingly grasp for a nail to use their hammer on, without first having the necessary in-depth knowledge of the underlying domain, its data, its context, and its actual challenges.

The more I learn about immunology, the more I realise how complex and vast the field is, and how full of open questions and not-yet-understood phenomena. It was only in 2021 that researchers proved that Epstein-Barr virus causes multiple sclerosis. Researchers are making new discoveries about links between viral infections and neurodegenerative diseases, such as Alzheimer’s. A study in late 2022 found a possible mechanism to explain the fact that varicella zoster virus significantly increases the risk of stroke. Unusually severe outbreaks of RSV and Group A Strep (a bacterium that often follows as a secondary infection after a virus) made headlines in the past few months. A variety of viruses have long been known to sometimes trigger autoimmune diseases or cancer, yet there is still much to discover about these relationships.

Living in the Pandemicene

Even as the ongoing covid pandemic continues to cause death and disability, science journalist Ed Yong has warned that we are now living in the pandemicene, a period of increasingly likely pandemics. Climate change is crowding species into new habitats, raising the risk of viral spillover from the estimated 40,000 viruses that inhabit mammals. Immunology, virology, and microbiology will become even more important in the coming decades.

Mathematical Biology and AI in Medicine

For over 20 years, my focus has been on mathematics, computer science, and data ethics.
I studied mathematics, computer science, and linguistics as an undergraduate; earned a PhD in mathematics; and then spent 12 years working in a mix of industry and academia as a data scientist, teacher, and researcher. I am best known for my work as cofounder of fast.ai, creator of the most popular deep learning courses in the world, and previously the founding director of the University of San Francisco Center for Applied Data Ethics. Over the years, I have had a recurring interest in medicine: doing mathematical modelling of cell processes as part of a Howard Hughes Medical Institute fellowship while I was in graduate school, publishing about machine learning in medicine for The Boston Review, and giving an invited keynote at Stanford’s AI in Medicine symposium.

The value of domain expertise

Core to the mission of fast.ai is the idea that domain expertise is crucial. In our very first post announcing the launch of fast.ai in 2016, my cofounder Jeremy Howard wrote, “Only domain experts: fully understand and appreciate what are the most important problems in their field; have access to the data necessary to solve those problems; and understand the opportunities and constraints to implementing data driven solutions.” It is dangerous for machine learning practitioners to apply machine learning to fields in which they have only superficial knowledge, unless they are working closely with domain experts from end to end.

Collaborative, interdisciplinary, and career-changing work has always fascinated me. I have written about how necessary qualitative humanities research is to the field of AI. I previously taught software engineering to adult women changing careers, and have long believed that career changers have something special to offer. I am now taking my own advice and delving into immunology, with the long-term goal of integrating this new knowledge with my data ethics and machine learning skills. After having been in a more “established” place in my career for a while, it is intimidating to publicly start off on a new branch like this. However, it’s also exciting, and I hope to share some of my journey as an immunology student through blog posts and essays, just as I’ve always encouraged fast.ai students to do.
======================================== |
[SOURCE: https://www.fast.ai/posts/2022-10-19-part2-2022-preview.html] | [TOKENS: 758] |
1st Two Lessons of From Deep Learning Foundations to Stable Diffusion
Jeremy Howard
October 19, 2022

The University of Queensland has opened the course to late registrations; if you want to join the rest of the course live, register here.

We started teaching our new course, From Deep Learning Foundations to Stable Diffusion, a couple of weeks ago. The experience of developing and teaching these first lessons has been amazing. Course contributors have included some brilliant folks from Hugging Face, Stability.ai, and fast.ai, and we’ve already had some inspirational contributions from amazing people from DeOldify, Lambda Labs, and more. Some important new papers have come out in the last two weeks, and we’ve covered them already in the course.

Because this field is moving so quickly, and there’s so much interest, we’ve decided to release our first two lessons early. In fact, we’re releasing them right now! In total, we’re releasing four videos, with around 5.5 hours of content, covering the topics below. (The lesson numbers start at “9”, since this is a continuation of Practical Deep Learning for Coders part 1, which had 8 lessons.) These videos will make the most sense if you’ve already completed part 1 of the course, or already have some experience with training and deploying deep learning models (preferably in PyTorch).

Lesson 9—Pipelines and concepts

This lesson starts with a tutorial on how to use pipelines in the Diffusers library to generate images. Diffusers is (in our opinion!) the best library available at the moment for image generation: it is flexible and packed with features, and we explain how to use them. We also discuss options for accessing the GPU resources needed to use the library. We talk about some of the nifty tweaks available when using Stable Diffusion in Diffusers, and show how to use them:

- guidance scale (for varying how strongly the prompt is used)
- negative prompts (for removing concepts from an image)
- image initialisation (for starting with an existing image)
- textual inversion (for adding your own concepts to generated images)
- Dreambooth (an alternative approach to textual inversion)

The second half of the lesson covers the key concepts involved in Stable Diffusion. You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic.

Lesson 9A—Deep dive

In this video, Jonathan Whitaker shows us what is happening behind the scenes when we create an image with Stable Diffusion, looking at the different components and processes and how each can be modified for further control over the generation process. He shows how to replicate the sampling loop from scratch, and explains each of the steps involved in more detail.

Lesson 9B—Math of diffusion

Wasim Lorgat and Tanishq Abraham walk through the math of diffusion models from the ground up, assuming no prerequisite knowledge beyond high school math.

Lesson 10—Custom pipeline

This lesson creates a complete Diffusers pipeline from the underlying components: the VAE, unet, scheduler, and tokeniser. Putting them together manually gives you the flexibility to fully customise every aspect of the inference process. We also discuss three important new papers that have been released in the last week, which improve inference performance by over 10x, and allow any photo to be “edited” by just describing what the new picture should show.
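As a rough illustration of what assembling such a pipeline by hand involves, here is a sketch using component classes from the Diffusers library. This is not the course notebook: the choice of weights, sampler, and step count are illustrative assumptions, and the Diffusers API has evolved since this post.

```python
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler
from transformers import CLIPTextModel, CLIPTokenizer

repo = "runwayml/stable-diffusion-v1-5"  # assumed weights; any SD 1.x repo works
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
tok = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
enc = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
sched = LMSDiscreteScheduler.from_pretrained(repo, subfolder="scheduler")

def embed(prompts):
    "Turn a list of prompts into CLIP text embeddings for conditioning the unet."
    ids = tok(prompts, padding="max_length", max_length=tok.model_max_length,
              truncation=True, return_tensors="pt").input_ids
    with torch.no_grad(): return enc(ids)[0]

guidance_scale = 7.5
cond = embed(["a watercolour painting of an otter"])
uncond = embed([""])              # empty prompt, for classifier-free guidance
emb = torch.cat([uncond, cond])

sched.set_timesteps(30)           # number of denoising steps (an arbitrary choice here)
latents = torch.randn(1, 4, 64, 64) * sched.init_noise_sigma
for t in sched.timesteps:         # the denoising loop, run by hand
    inp = sched.scale_model_input(torch.cat([latents] * 2), t)
    with torch.no_grad():
        pred = unet(inp, t, encoder_hidden_states=emb).sample
    pred_uncond, pred_text = pred.chunk(2)
    # The guidance scale pushes the unconditional prediction towards the prompt
    pred = pred_uncond + guidance_scale * (pred_text - pred_uncond)
    latents = sched.step(pred, t, latents).prev_sample

with torch.no_grad():
    image = vae.decode(latents / 0.18215).sample  # 0.18215: SD v1 latent scale factor
```

(Move the models and tensors to a GPU for realistic speed.) Note how the guidance scale appears directly in the loop: owning each step is exactly what makes customising the inference process possible.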
The second half of the lesson begins the “from the foundations” stage of the course, developing a basic matrix class and random number generator from scratch, as well as discussing the use of iterators in Python. You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic.
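For a taste of that “from the foundations” style, here is a minimal sketch of the two exercises just mentioned. It is illustrative only: the course builds its own versions step by step, and the generator constants here are the classic Numerical Recipes LCG values, not necessarily the ones used in the lesson.

```python
class LCG:
    "Pseudo-random floats in [0, 1) from a linear congruential generator."
    def __init__(self, seed=42):
        self.state = seed
    def __call__(self):
        # Classic 32-bit LCG constants (Numerical Recipes)
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

class Matrix:
    "A minimal matrix wrapping nested lists, with 2-D tuple indexing."
    def __init__(self, xs):
        self.xs = xs
    def __getitem__(self, idx):
        row, col = idx
        return self.xs[row][col]

rng = LCG()
m = Matrix([[rng() for _ in range(3)] for _ in range(2)])
print(m[1, 2])  # element at row 1, column 2
```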
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-02-07-school-immunology/index.html] | [TOKENS: 1204] |
I was an AI researcher. Now, I am an immunology student. Rachel Thomas February 7, 2023 On this page This post is cross-posted from rachel.fast.ai. Going forward, I will be blogging at rachel.fast.ai. I still believe deeply in the mission of fast.ai, but I’m currently focused on studying immunology. I like complexity, and I like challenges. When a new topic fascinates me, I want to devote all of my time to it. In 2012, I was working as a quant in energy trading when I became so captivated by the topic of machine learning that I abruptly moved cross-country to San Francisco and spent a decade learning as much as I could about machine learning, AI, data ethics, and algorithmic harms. In 2022, I became fascinated by a new topic: immunology. I completed 7 online courses last year, am currently taking 4 more courses, and have created over 2,000 immunology-related flashcards for myself, which I spend time on daily. (I will write more about how I use Anki flashcards in a future post.) I found immunology to be both overwhelming and fascinating. The field is full of jargon, and there is a steep curve just to learn the language: IL-2, IL-4, IL-5, IL-12, IL-13, IL-18, CD-3, CD-22, CD-34, CD-47, CD-155, C3, C5, C8, and so on (lots of letter/number combinations, abbreviations, and acronyms! But underneath them, the mechanisms are fascinating). The more I studied, the more I wanted to learn. It became clear that I needed a formal program to go deeper and to provide appropriate context, so I started applying to graduate school. Back to School I was delighted to be accepted to a Masters in Immunology graduate program, and after eager anticipation, last month I officially began my formal degree. While my ultimate goal is to apply my machine learning and data ethics expertise to the field, I want to make sure I fully understand the relevant immunology first. Too often machine learning practitioners unthinkingly grasp for a nail to use their hammer on, without first having the necessary in-depth knowledge of the underlying domain, its data, its context, and its actual challenges. The more I learn about immunology, the more I realise how complex, vast, and full of open questions and not-yet-understood phenomena the field is. It was only in 2021 that researchers proved Epstein-Barr virus causes multiple sclerosis. Researchers are making new discoveries about links between viral infections and neurodegenerative diseases, such as Alzheimer’s. A study in late 2022 found a possible mechanism to explain the fact that varicella zoster virus significantly increases risk of stroke. Unusually severe outbreaks of RSV and Group A Strep (a bacteria that can often follow as a secondary infection after a virus) made headlines in the past few months. A variety of viruses have long been known to sometimes trigger autoimmune diseases or cancer, yet there is still much to discover about these relationships. Living in the Pandemicene Even as the ongoing covid pandemic continues to cause death and disability, science journalist Ed Yong warned that we are now living in the pandemicene, a period with increasingly likely pandemics. Climate change is crowding species into new habitats, raising the risks of viral spillover from the estimated 40,000 viruses that inhabit mammals. Immunology, virology, and microbiology will become even more important in the coming decades. Mathematical Biology and AI in Medicine For over 20 years, my focus has been on mathematics, computer science, and data ethics. 
I studied mathematics, computer science, and linguistics as an undergraduate; earned a PhD in mathematics; and then spent 12 years working in a mix of industry and academia as a data scientist, teacher, and researcher. I am best known for my work as cofounder of fast.ai, creator of the most popular deep learning courses in the world, and for previously serving as the founding director of the University of San Francisco Center for Applied Data Ethics. Over the years, I have had a recurring interest in medicine, doing mathematical modelling of cell processes as part of a Howard Hughes Medical Institute fellowship while I was in graduate school, publishing about machine learning in medicine for The Boston Review, and being an invited keynote speaker for Stanford’s AI in Medicine symposium. The value of domain expertise Core to the mission of fast.ai is the idea that domain expertise is crucial. In our very first post announcing the launch of fast.ai in 2016, my cofounder Jeremy Howard wrote, “Only domain experts: fully understand and appreciate what are the most important problems in their field; have access to the data necessary to solve those problems; and understand the opportunities and constraints to implementing data driven solutions.” It is dangerous for machine learning practitioners to apply machine learning to fields in which they have only superficial knowledge (unless working closely with domain experts from end-to-end). Collaborative, interdisciplinary, and career changing work has always fascinated me. I have written about how necessary qualitative humanities research is to the field of AI. I previously taught software engineering to adult women changing careers, and long-believed that career changers have something special to offer. I am now taking my own advice, and delving into immunology, with the long-term goal of integrating this new knowledge with my data ethics and machine learning skills. After having been in a more “established” place in my career for a while, it is intimidating to publicly start off on a new branch like this. However, it’s also exciting, and I hope to share some of my journey as an immunology student along the way through blog posts and essays, just as I’ve always encouraged fast.ai students to do. |
======================================== |
[SOURCE: https://www.fast.ai/posts/2022-10-19-part2-2022-preview.html] | [TOKENS: 758] |
1st Two Lessons of From Deep Learning Foundations to Stable Diffusion Jeremy Howard October 19, 2022 On this page The University of Queensland has opened the course to late registrations, if you want to join the rest of the course live: register here. We started teaching our new course, From Deep Learning Foundations to Stable Diffusion, a couple of weeks ago. The experience of developing and teaching these first lessons has been amazing. Course contributors have included some brilliant folks from Hugging Face, Stability.ai, and fast.ai, and we’ve had some inspirational contributions already from amazing people from Deoldify, Lambda Labs, and more. Some important new papers have come out in the last two weeks, and we’ve covered them already in the course. Because this field is moving so quickly, and there’s so much interest, we’ve decided to release our first two lessons early. In fact, we’re releasing them right now! In total, we’re releasing four videos, with around 5.5 hours of content, covering the following topics (the lesson numbers start at “9”, since this is a continuation of Practical Deep Learning for Coders part 1, which had 8 lessons): These videos will make the most sense if you’ve already completed part 1 of the course, or already have some experience with training and deploying deep learning models (preferably in PyTorch). Lesson 9—Pipelines and concepts This lesson starts with a tutorial on how to use pipelines in the Diffusers library to generate images. Diffusers is (in our opinion!) the best library available at the moment for image generation. It has many features and is very flexible. We explain how to use its many features, and discuss options for accessing the GPU resources needed to use the library. We talk about some of the nifty tweaks available when using Stable Diffusion in Diffusers, and show how to use them: guidance scale (for varying the amount the prompt is used), negative prompts (for removing concepts from an image), image initialisation (for starting with an existing image), textual inversion (for adding your own concepts to generated images), Dreambooth (an alternative approach to textual inversion). The second half of the lesson covers the key concepts involved in Stable Diffusion: You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic. Lesson 9A—Deep dive In this video Jonathan Whitaker shows us what is happening behind the scenes when we create an image with Stable Diffusion, looking at the different components and processes and how each can be modified for further control over the generation process. He shows how to replicate the sampling loop from scratch, and explains each of the steps involved in more detail: Lesson 9B—Math of diffusion Wasim Lorgat and Tanishq Abraham walk through the math of diffusion models from the ground up. They assume no prerequisite knowledge beyond what you covered in high school. Lesson 10—Custom pipeline This lesson creates a complete Diffusers pipeline from the underlying components: the VAE, unet, scheduler, and tokeniser. By putting them together manually, this gives you the flexibility to fully customise every aspect of the inference process. We also discuss three important new papers that have been released in the last week, which improve inference performance by over 10x, and allow any photo to be “edited” by just describing what the new picture should show. 
Lesson 9A—Deep dive

In this video Jonathan Whitaker shows us what is happening behind the scenes when we create an image with Stable Diffusion, looking at the different components and processes, and how each can be modified for further control over the generation process. He shows how to replicate the sampling loop from scratch, and explains each of the steps involved in more detail.

Lesson 9B—Math of diffusion

Wasim Lorgat and Tanishq Abraham walk through the math of diffusion models from the ground up. They assume no prerequisite knowledge beyond what you covered in high school.

Lesson 10—Custom pipeline

This lesson creates a complete Diffusers pipeline from the underlying components: the VAE, unet, scheduler, and tokeniser. Putting them together manually gives you the flexibility to fully customise every aspect of the inference process (a rough sketch of this assembly appears at the end of this post). We also discuss three important new papers that have been released in the last week, which improve inference performance by over 10x, and allow any photo to be "edited" by just describing what the new picture should show.

The second half of the lesson begins the "from the foundations" stage of the course, developing a basic matrix class and random number generator from scratch, as well as discussing the use of iterators in Python (see the second sketch below). You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic.
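The post itself doesn't include code, but to give a flavour of what Lesson 10 builds, here is a condensed sketch of assembling those four components by hand using current `diffusers` and `transformers` APIs. The checkpoint id, default arguments, and function name are our assumptions; the lesson's notebooks are the authoritative version.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler

repo = "CompVis/stable-diffusion-v1-4"  # checkpoint id is an assumption
device = "cuda"

# The four components the lesson assembles by hand.
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet").to(device)
scheduler = LMSDiscreteScheduler.from_pretrained(repo, subfolder="scheduler")

def generate(prompt, steps=50, guidance_scale=7.5, height=512, width=512, seed=42):
    # Encode the prompt, plus an empty prompt for classifier-free guidance.
    def embed(text):
        toks = tokenizer(text, padding="max_length",
                         max_length=tokenizer.model_max_length,
                         truncation=True, return_tensors="pt")
        with torch.no_grad():
            return text_encoder(toks.input_ids.to(device))[0]
    text_emb = torch.cat([embed(""), embed(prompt)])

    # Start from random latents, scaled to the scheduler's initial noise level.
    g = torch.Generator(device).manual_seed(seed)
    latents = torch.randn((1, unet.config.in_channels, height // 8, width // 8),
                          generator=g, device=device)
    scheduler.set_timesteps(steps)
    latents = latents * scheduler.init_noise_sigma

    for t in scheduler.timesteps:
        # Two copies of the latents: one unconditional, one for the prompt.
        inp = scheduler.scale_model_input(torch.cat([latents] * 2), t)
        with torch.no_grad():
            noise = unet(inp, t, encoder_hidden_states=text_emb).sample
        uncond, cond = noise.chunk(2)
        noise = uncond + guidance_scale * (cond - uncond)  # classifier-free guidance
        latents = scheduler.step(noise, t, latents).prev_sample

    # Decode latents to pixel space (0.18215 is SD's latent scaling factor).
    with torch.no_grad():
        image = vae.decode(latents / 0.18215).sample
    return image  # tensor in [-1, 1]; rescale before viewing or saving
```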
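And a toy illustration of the "from the foundations" style: a minimal matrix class and a simple linear congruential RNG, using only the Python standard library. This is our sketch of the idea; the course's own implementations differ in detail.

```python
class Matrix:
    """A minimal matrix backed by nested lists -- just enough to
    support indexing and matrix multiplication."""
    def __init__(self, rows):
        self.rows = rows
        self.shape = (len(rows), len(rows[0]))

    def __getitem__(self, idx):
        i, j = idx
        return self.rows[i][j]

    def __matmul__(self, other):
        n, k = self.shape
        k2, m = other.shape
        assert k == k2, "inner dimensions must match"
        return Matrix([[sum(self[i, x] * other[x, j] for x in range(k))
                        for j in range(m)] for i in range(n)])

class LCG:
    """A linear congruential generator (constants from Numerical Recipes);
    the course's own RNG is built differently."""
    def __init__(self, seed=42):
        self.state = seed

    def next(self):
        # x_{n+1} = (a*x_n + c) mod 2**32, then scale into [0, 1).
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

rng = LCG()
a = Matrix([[rng.next() for _ in range(2)] for _ in range(3)])
b = Matrix([[rng.next() for _ in range(4)] for _ in range(2)])
print((a @ b).shape)  # (3, 4)
```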
======================================== |